E-learning User Interface Acceptance Based on Analysis of User's Style, Usability and User Benefits

E-learning does not function properly if the system is not in accordance with user needs. This study aims to establish an evaluation model for the e-learning user interface according to user acceptance. The model is designed around three categories: user learning style, usability and user benefits. Measurements of the three categories determine the level of user acceptance of the e-learning interface. The data were collected using a questionnaire distributed to 125 ELS students from various countries and then processed using structural equation modelling (SEM) in LISREL v8.80. This paper presents the experimental setup for the general research and some results for technology acceptance theories.

Introduction

E-learning is a method of learning offered by many universities and educational institutions to support their learning process. Basically, the concept of e-learning is to provide educational facilities equal to learning in a conventional school. E-learning is expected to complement the role of educational institutions and conventional training. The e-learning process has different characteristics compared to conventional education. According to [1], e-learning is personalized and student-centred, is directly controlled by the learner, occurs only when required and lasts only as long as strictly necessary, and is delivered by technology on the basis of the knowledge the student has already acquired, requiring a proactive role from the learner. E-learning is a distance learning system that offers training courses custom-tailored to the needs of learners. An integrated environment that combines the advantages of e-learning and the traditional classroom is called blended education [2]. However, unusable user interfaces are probably the single largest reason why interactive systems, computers and e-learning fail in actual use. Designing applications that are easy to use is not an easy task [3]. E-learning becomes less than optimal if the system is not used effectively in accordance with user needs [1].

User Interface Evaluation

The system interface is used to communicate with a user in an interactive system. The system interface can be divided into two sections: a front-end interface and a back-end interface [1]. E-learning interface design is especially critical, as learning effectiveness and interface design are substantially intertwined. The design of an e-learning interface should be determined by how people learn and the tasks they need to perform in the program. Some features in current user interfaces are still inefficient [3]. Many theories discuss interface evaluation and design, but in practice they remain weak and do not work as expected for e-learning user interfaces [4]. Table I shows related work on e-learning user interface acceptance. "The problem is often that it is impossible to determine which user interface design variant is better" [5]. Empirical evaluation based on subjective selection criteria cannot identify the best interface; quantitative evaluation methods for the user interface are therefore needed. Different interface designs can be evaluated with quantitative methods using prioritized criteria, while [3] argues that e-learning interface design should be a goal-directed, integrated component of the overall e-learning product.
The user interface is the major channel for conveying information in the e-learning context: a well-designed and friendly interface is thus the key element in helping users to get the best results quickly [6]. Interface settings affect the quality of students' learning by accommodating their needs in terms of personalizing the content, structure, and presentation.

User's Learning Style

User's style refers to student factors in learning, such as learning style, motivation, and knowledge ability. User learning style should be considered in adaptive e-learning development in order to optimize the learning process [7]. Learning style refers to how a learner perceives, interacts with, and responds to the learning environment; it is a measure of individual differences [8]. According to [9], user learning style develops from the individual's physiological characteristics and is influenced by: 1) psychological development, social environment and educational experience; 2) learning time, study habits, learning approach, gender, ethnicity, learning resources and the learning process; 3) the recorded learning information for each student: individual learning style, preferred study habits, learning approach, dynamic learning situation and other detailed information.

Learning motivation is an individual's characteristic and consistent approach to organizing and processing information. Students' learning motivation is divided into five categories: effort, confidence, satisfaction, sensory interest and cognitive interest [7]. Of these categories, effort is a fundamental indicator of a student's motivation, and the exertion of effort in learning can be treated as a positive parameter. The student's effort is the amount of time the learner spends on learning and participation.

The student's ability is another factor that should be considered. It can be seen from the level of knowledge in the student's learning performance. Learning performance is measured by assessing knowledge objectively through evaluations such as quizzes, class exercises, and exams [7].

Usability Evaluation

Usability is a quality attribute that assesses how easy user interfaces are to use. The word "usability" also refers to methods for improving ease of use during the design process [10][11][12]. Three standardization organizations define usability as follows: a set of attributes that bear on the effort needed for use and on the individual assessment of such use, by a stated or implied set of users (ISO/IEC 9126, 1991); the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use (ISO 9241-11, 1998); and the ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component (IEEE Std 610.12-1990) [13].

Usability is important in determining whether something is useful: it matters little that something is easy to use if it is not what you want [5][14][15]. Although there are many individual methods for evaluating usability, they are not well integrated into a single conceptual framework that facilitates their use by developers. There are several standards and conceptual models for usability, and not all of them describe the same operational definitions and measures [16]. A measurement model and a structural model are therefore needed for evaluating the e-learning user interface acceptance model [8][17].
The Technology Acceptance Model (TAM)

Several models have been built to analyze and understand the variables that affect user acceptance of information technology [18], among them the Theory of Reasoned Action (TRA), the Theory of Planned Behaviour (TPB), and the Technology Acceptance Model (TAM). TAM models are developed from psychological theory and describe the behavior of computer users on the basis of beliefs, attitudes, desires and usage behavior. These models aim to explain the main factors of user behavior regarding user acceptance of technology, as shown in Figure 1. The model relates the attitudinal factors of individual user behavior through the variables perceived ease of use, perceived usefulness, attitude toward using, behavioral intention to use, and actual system usage. In statistical hypothesis testing, the null hypothesis H0 states that the corresponding parameter is zero, while the alternative hypothesis Ha states that it is not; thus, if H0 is rejected, the corresponding research hypothesis is accepted. The hypotheses for this research are shown in Figure 3.

Results and Analysis

The questionnaire was distributed to 125 ELS Language Center students in Malaysia who come from 13 countries (Figure 4). The minimum sample size recommended in [19] depends on the number of variables to be studied and is given by k(k+1)/2, where k is the number of variables; with k = 12 variables, this study therefore needed at least 12(12+1)/2 = 78 samples. The data collected in this study are ordinal, and the estimation method used is maximum likelihood (ML). The data were processed using SEM in LISREL v8.80. The measurement model shows very significant correlations between variables: the variable User's Style consists of Y1, Y2, Y3 and also correlates with Y11 and Y12; Usability of the e-learning system consists of Y4, Y5, Y6, Y7, Y8, Y9, Y10 plus a correlation with Y2; and User's Benefit consists of Y10, Y11, Y12 plus Y5, Y6, Y9, as shown in Figure 5. Based on the statistical data, the e-learning user interface model has highly significant correlation values and strong construction between variables, evidenced by construct reliability values above 0.70 and variance extracted values above 0.50. The t-values exceed the critical value of 1.96 at the 0.05 significance level, which means that the relevant variables are significantly related to their design concepts. The high loading factor (≥ 0.70) of each variable also demonstrates the strength of the relationship between the variables and their constructs (Tables III and IV). After the model was estimated, we tested the goodness of fit of the user interface acceptance model using LISREL v8.80. The goodness-of-fit (GOF) measurement in this study also provides guidelines and admissibility limits for GOF levels, as shown in Table V.
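To make the sample-size rule and the reliability thresholds above concrete, the following is a minimal sketch (our illustration, not the authors' code; the loadings are hypothetical placeholders) of k(k+1)/2 and the two standard reliability measures, composite reliability CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and variance extracted AVE = Σλ² / k for standardized loadings λ:

```python
def min_sample_size(k):
    # Minimum recommended sample size: k(k+1)/2 [19]
    return k * (k + 1) // 2

def composite_reliability(loadings):
    s = sum(loadings)
    err = sum(1 - l ** 2 for l in loadings)  # standardized error variances
    return s ** 2 / (s ** 2 + err)

def variance_extracted(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

print(min_sample_size(12))  # 78, matching the paper's calculation
lam = [0.72, 0.81, 0.75]    # hypothetical standardized loadings for Y1-Y3
print(composite_reliability(lam) > 0.70,  # construct reliability threshold
      variance_extracted(lam) > 0.50)     # variance extracted threshold
```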
Table V shows the goodness-of-fit statistics used in this study: column 1 lists the goodness-of-fit measures, column 2 the target values, column 3 the Model I measurement and column 4 the Model II measurement. The chi-square value shows the deviation between the sample covariance matrix and the model (fitted) covariance matrix; chi-square is a measure of the poorness of fit of a model, and a chi-square value of 0 indicates a perfect fit. The goodness-of-fit index (GFI) is a measure of the accuracy of the model in reproducing the observed covariance matrix. The GFI value should be between 0 and 1; although in theory the GFI may be negative, this should not happen, because a model with a negative GFI would be the worst of all possible models [10]. A GFI value > 0.90 indicates a good model fit. The expected cross-validation index (ECVI) is used to assess the likelihood that the model, in a single sample, can be cross-validated on samples of the same size from the same population. The ECVI measures the deviation between the fitted (model) covariance matrix of the analyzed sample and the covariance matrix that would be obtained in other samples of the same size. An ECVI value lower than those of the saturated and independence models indicates a good fit. The AIC and CAIC address parsimony in the assessment of model fit and are used to compare two or more models; AIC and CAIC values smaller than those of the saturated and independence models indicate a better fit.

The Normed Fit Index (NFI) and the Comparative Fit Index (CFI) range between 0 and 1 and are derived from the comparison between the hypothesized model and the independence model; a model is said to fit if its NFI and CFI values are > 0.90. The Non-normed Fit Index (NNFI) is used to overcome the problems arising from model complexity. Similarly, the Incremental Fit Index (IFI) addresses the problems of parsimony and sample size associated with the NFI, and the Relative Fit Index (RFI) measures fit with values between 0 and 1.

Conclusion

This paper presents how to develop a construct model using user's style, usability and user benefit as indicator variables to measure the latent variable of e-learning user interface acceptance. According to the questionnaire analysis and goodness-of-fit measurement, the high reliability in this study indicates that the indicator variables are consistent in measuring their latent constructs. Reliability was tested using two types of measures: composite (construct) reliability and variance extracted. According to the t-values, loading factors, and the relative fit of each structural equation model, we conclude that the user interface acceptance model for e-learning in this study can be accepted.

Recommendations

This study provides an alternative model for assessing user acceptance of an e-learning interface. We hope this model can be considered in developing e-learning applications in the future.

Fig 2. Research Hypotheses. The model of user interface acceptance in this study is a second-order CFA model. Each research hypothesis is formalized as a statistical hypothesis for testing; the statistical hypotheses are tested through estimation of the parameters (the γ and β coefficients) contained in the research and LISREL models.

Fig 3.
Research Hypotheses.

We can see the user interface acceptance model attributes in Table II.

Fig. 5. Model I Initial Measurement.
Deep Graph Convolutional Encoders for Structured Data to Text Generation

Most previous work on neural text generation from graph-structured data relies on standard sequence-to-sequence methods. These approaches linearise the input graph to be fed to a recurrent neural network. In this paper, we propose an alternative encoder based on graph convolutional networks that directly exploits the input structure. We report results on two graph-to-sequence datasets that empirically show the benefits of explicitly encoding the input graph structure.

The source data, differently from the machine translation task, is a structured representation of the content to be conveyed. Generally, it describes attributes and events about entities and relations among them. In this work we focus on two generation scenarios where the source data is graph structured. One is the generation of multi-sentence descriptions of Knowledge Base (KB) entities from RDF graphs (Perez-Beltrachini et al., 2016; Gardent et al., 2017a,b), namely the WebNLG task. 2 The number of KB relations modelled in this scenario is potentially large, and generation involves solving various subtasks (e.g. lexicalisation and aggregation). Figure (1a) shows an example of a source RDF graph and its target natural language description. The other is the linguistic realisation of the meaning expressed by a source dependency graph (Belz et al., 2011), namely the SR11Deep generation task. In this task, the semantic relations are linguistically motivated and their number is smaller. Figure (1b) illustrates a source dependency graph and the corresponding target text.

1 Code and data available at github.com/diegma/graph-2-text. 2 Resource Description Framework: https://www.w3.org/RDF/

Most previous work casts the graph-structured data to text generation task as a sequence-to-sequence problem (Gardent et al., 2017b; Ferreira et al., 2017; Konstas et al., 2017). They rely on recurrent data encoders with memory and gating mechanisms (LSTM; (Hochreiter and Schmidhuber, 1997)). Models based on these sequential encoders have shown good results, although they do not directly exploit the input structure but rather rely on a separate linearisation step. In this work, we compare with a model that explicitly encodes structure and is trained end-to-end. Concretely, we use a Graph Convolutional Network (GCN; (Kipf and Welling, 2016; Marcheggiani and Titov, 2017)) as our encoder. GCNs are a flexible architecture that allows explicit encoding of graph data into neural networks. Given their simplicity and expressiveness, they have been used to encode dependency syntax and predicate-argument structures in neural machine translation (Bastings et al., 2017; Marcheggiani et al., 2018). In contrast to previous work, we do not exploit the sequential information of the input (i.e., with an LSTM), but solely rely on a GCN for encoding the source graph structure. 3

Figure 1: Source RDF graph - target description (a). Source dependency graph - target sentence (b).

The main contribution of this work is showing that explicitly encoding structured data with GCNs is more effective than encoding a linearized version of the structure with LSTMs. We evaluate the GCN-based generator on two graph-to-sequence tasks with different levels of source content specification. In both cases, the results we obtain show that GCN encoders outperform standard LSTM encoders.
Graph Convolutional-based Generator

Formally, we address the task of text generation from graph-structured data considering as input a directed labelled graph X = (V, E), where V is a set of nodes and E is a set of edges between nodes in V. The specific semantics of X depends on the task at hand. The output Y is a natural language text verbalising the content expressed by X. Our generation model follows the standard attention-based encoder-decoder architecture (Bahdanau et al., 2015; Luong et al., 2015) and predicts Y conditioned on X as P(Y|X) = ∏_{t=1}^{|Y|} P(y_t | y_{1:t−1}, X).

Graph Convolutional Encoder

In order to explicitly encode structural information we adopt graph convolutional networks (GCNs). GCNs are a variant of graph neural networks (Scarselli et al., 2009) that has been recently proposed by Kipf and Welling (2016). The goal of GCNs is to calculate the representation of each node in a graph considering the graph structure. In this paper we adopt the parametrization proposed by Marcheggiani and Titov (2017), where edge labels and directions are explicitly modelled. We represent each node v ∈ V with a feature vector x_v ∈ R^d. The GCN calculates the representation h′_v of each node in the graph using the following update rule:

h′_v = ρ( Σ_{u ∈ N(v)} g_{u,v} ( W_{dir(u,v)} h_u + b_{lab(u,v)} ) ),

where N(v) is the set of neighbours of v (including v itself), W_{dir(u,v)} is a direction-specific weight matrix, b_{lab(u,v)} is an edge-label-specific bias, and ρ is a non-linearity (ReLU). g_{u,v} are learned scalar gates which weight the importance of each edge. Although the main aim of gates is to down-weight erroneous edges in predicted graphs, they also add flexibility when several GCN layers are stacked. As with standard convolutional neural networks (CNNs; (LeCun et al., 2001)), GCN layers can be stacked to consider non-immediate neighbours. 4

Skip Connections

Between GCN layers we add skip connections. Skip connections let the gradient flow more efficiently through stacked hidden layers, thus making it possible to create deeper GCN encoders. We use two kinds of skip connections: residual connections (He et al., 2016) and dense connections (Huang et al., 2017). Residual connections sum the input and output representations of a GCN layer, h^r_v = h′_v + h_v, whilst dense connections concatenate them, h^d_v = [h′_v ; h_v]. In this way, each GCN layer is directly fed with the output of every layer before itself.

Decoder

The decoder uses an LSTM and a soft attention mechanism (Luong et al., 2015) over the representation induced by the GCN encoder to generate one word y at a time. The prediction of word y_{t+1} is conditioned on the previously predicted words y_{1:t}, encoded in the vector w_t, and a context vector c_t dynamically created by attending to the graph representation induced by the GCN encoder: P(y_{t+1} | y_{1:t}, X) = softmax(g(w_t, c_t)), where g(·) is a neural network with one hidden layer. The model is trained to optimize the negative log-likelihood L = −Σ_{t=1}^{|Y|} log P(y_t | y_{1:t−1}, X).

Generation Tasks

In this section, we describe the instantiation of the input graph X for the generation tasks we address.

WebNLG Task

The WebNLG task (Gardent et al., 2017a,b) aims at the generation of entity descriptions from a set of RDF triples related to an entity of a given category (Perez-Beltrachini et al., 2016). RDF triples are of the form (subject relation object), e.g., (Aenir precededBy Castle), and form a graph in which edges are labelled with relations and vertices with subject and object entities. For instance, Figure (1a) shows a set of RDF triples related to the book Above the Veil and its verbalisation.
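As a concrete illustration of the gated, direction- and label-aware GCN update rule introduced above, here is a minimal PyTorch sketch (our illustration under stated assumptions, not the authors' released code; the edge-list layout and the exact gate parametrization are our own simplifying choices):

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One gated, direction-aware GCN layer in the style of
    Marcheggiani and Titov (2017). A sketch: the tensor layout and
    label handling are assumptions, not the paper's implementation."""

    def __init__(self, dim, num_labels):
        super().__init__()
        # Direction-specific weight matrices W_dir for incoming,
        # outgoing and self-loop edges.
        self.W = nn.ModuleDict({d: nn.Linear(dim, dim, bias=False)
                                for d in ("in", "out", "self")})
        # Edge-label-specific bias vectors b_lab.
        self.b = nn.Embedding(num_labels, dim)
        # Parameters of the scalar edge gates g_{u,v}.
        self.gate = nn.ModuleDict({d: nn.Linear(dim, 1)
                                   for d in ("in", "out", "self")})

    def forward(self, h, edges):
        # h: (num_nodes, dim) node features; edges: iterable of
        # (u, v, label_id, direction) tuples, one self-loop per node.
        out = torch.zeros_like(h)
        for u, v, lab, d in edges:
            msg = self.W[d](h[u]) + self.b(torch.tensor(lab))
            g = torch.sigmoid(self.gate[d](h[u]))  # scalar gate g_{u,v}
            out[v] = out[v] + g * msg
        return torch.relu(out)  # rho = ReLU

# Toy usage: 3 nodes, one labelled edge seen in both directions,
# plus a self-loop per node so each node retains its own features.
h = torch.randn(3, 16)
edges = [(0, 1, 1, "out"), (1, 0, 1, "in")] + \
        [(i, i, 0, "self") for i in range(3)]
layer = GCNLayer(dim=16, num_labels=4)
print(layer(h, edges).shape)  # torch.Size([3, 16])
```

Stacking k such layers lets information flow between nodes up to k hops apart, which is what makes deeper encoders (and hence skip connections) useful.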
The generation task involves several micro-planning decisions, such as lexicalisation (followedBy is verbalised as sequel to), aggregation (sequel to Aenir and Castle), referring expressions (the subject of the second sentence verbalised as a pronoun) and segmentation (content organised in two sentences).

Reification

We formulate this task as the generation of a target description Y from a source graph X = (V, E), where X is built from a set of RDF triples as follows. We reify the relations (Baader, 2003) in the RDF set of triples. That is, we treat each relation as a concept in the KB and introduce a new relation node for each relation of each RDF triple. The new relation node is connected to the subject and object entities by two new binary relations, A0 and A1 respectively; for instance, (precededBy A0 Aenir) and (precededBy A1 Castle). Thus, V is the set of entities, including reified relations, and E a set of labelled edges with labels {A0, A1}. The reification of relations is useful in two ways: the encoder is able to produce a hidden state for each relation in the input, and it permits modelling an arbitrary number of KB relations efficiently.

SR11Deep Task

The surface realisation shared task (Belz et al., 2011) proposed two generation tasks, namely shallow and deep realisation. Here we focus on the deep task, where the input is a semantic dependency graph that represents a target sentence using predicate-argument structures (NomBank; (Meyers et al., 2004), PropBank; (Palmer et al., 2005)). This task covers a more complex semantic representation of language meaning; on the other hand, the representation is closer to surface form. Nodes in the graph are lemmas of the target sentence; only complementizers that, commas, and to-infinitive nodes are removed. Edges are labelled with NomBank and PropBank labels. 5 Each node is also associated with morphological (e.g. num=sg) and punctuation features (e.g. bracket=r). The source graph X = (V, E) is a semantic dependency graph. We extend this representation to model morphological information, i.e. each node in V is of the form (lemma, features). For this task we modify the encoder of Section 2 to represent each input node as h_v = [h_l ; h_f], the concatenation of the lemma vector and the sum of the feature vectors.

Experiments

We tested our models on the WebNLG and SR11Deep datasets. The WebNLG dataset contains 18102 training and 871 development data-text pairs. The test dataset is split in two sets, test Seen (971 pairs) and a test set with new unseen categories of KB entities. As we are interested here only in the modelling aspects of the structured input data, we focus our evaluation on the test partition with seen categories.

Sequential Encoders

For both the WebNLG and SR11Deep tasks we used a standard sequence-to-sequence model (Bahdanau et al., 2015; Luong et al., 2015) with an LSTM encoder as baseline. Both take as input a linearised version of the source graph. For the WebNLG baseline, we use the linearisation scripts provided by (Gardent et al., 2017b). For the SR11Deep baseline we follow a linearisation procedure similar to the one proposed for AMR graphs (Konstas et al., 2017): we built a linearisation based on a depth-first traversal of the input graph, as sketched below. Siblings are traversed in random order (they are anyway shuffled in the given dataset). We repeat a child node when a node is revisited through a cycle or has more than one parent.
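A minimal sketch of such a depth-first linearisation (our illustration with an assumed adjacency-list layout; the actual baseline scripts may differ):

```python
import random

def linearise(graph, root):
    """Depth-first linearisation of a labelled graph for the LSTM
    baseline: siblings are traversed in random order, and a child
    node is repeated (without recursing again) when it is revisited
    through a cycle or has more than one parent.
    `graph` maps node -> list of (edge_label, child)."""
    tokens, visited = [], set()

    def visit(node):
        tokens.append(node)
        if node in visited:       # revisited via a cycle / second parent:
            return                # repeat the node but do not recurse
        visited.add(node)
        children = list(graph.get(node, []))
        random.shuffle(children)  # siblings in random order
        for label, child in children:
            tokens.append(label)
            visit(child)

    visit(root)
    return " ".join(tokens)

# Tiny fragment of an SR11Deep-style graph.
g = {"take": [("A1", "temperature"), ("A2", "from")],
     "temperature": [("A1", "economy")]}
print(linearise(g, "take"))
```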
The baseline model for the WebNLG task uses a one-layer bidirectional LSTM encoder and a one-layer LSTM decoder with embeddings and hidden units set to 256 dimensions. For the SR11Deep task we used the same architecture with 500-dimensional hidden states and embeddings. All hyperparameters were tuned on the development set.

GCN Encoders

The GCN models consist of a GCN encoder and an LSTM decoder. For the WebNLG task, all encoder and decoder embeddings and hidden units use 256 dimensions. We obtained the best results with an encoder with four GCN layers with residual connections. For the SR11Deep task, we set the encoder and decoder to use 500-dimensional embeddings and hidden units of size 500; in this task, we obtained the best development performance by stacking seven GCN layers with dense connections.

We use delexicalisation for the WebNLG dataset and apply the procedure provided for the baseline in (Gardent et al., 2017b). For the SR11Deep dataset, we performed entity anonymisation. First, we compacted nodes in the tree corresponding to a single named entity (see (Belz et al., 2011) for details). Next, we used a named entity recogniser (Stanford CoreNLP) to tag entities in the input with type information (e.g. person, location, date). Two entities of the same type in a given input are given numerical suffixes, e.g. PER 0 and PER 1.

A GCN-based Generator

For the WebNLG task, we extended the GCN-based model to use pre-trained word embeddings (GloVe (Pennington et al., 2014)) and a copy mechanism (See et al., 2017); we name this variant GCN_EC. To this end, we did not use delexicalisation but rather represent multi-word subject (object) entities with each word as a separate node connected by special Named Entity (NE) labelled edges. For instance, the book entity Into Battle is represented as (Into NE Battle). Encoder (decoder) embeddings and hidden dimensions were set to 300. The model stacks six GCN layers and uses a single-layer LSTM decoder.

Evaluation Metrics

As in previous work on these tasks, we evaluated our models using the BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014) and TER (Snover et al., 2006) automatic metrics. During preliminary experiments we noticed considerable variance across model initialisations; we thus ran 3 experiments for each model and report the average and standard deviation for each metric.

Results

WebNLG Task

In Table 1 we report results on the WebNLG test data. In this setting, the model with the GCN encoder outperforms a strong baseline that employs the LSTM encoder by .009 BLEU points. The GCN model is also more stable than the baseline, with a standard deviation of .004 vs. .010. We also compared the GCN_EC model with the neural models submitted to the WebNLG shared task. The GCN_EC model outperforms PKUWRITER, which uses an ensemble of 7 models and a further reinforcement learning step, by .047 BLEU points, and MELBOURNE by .014 BLEU points. GCN_EC is behind ADAPT, which relies on sub-word encoding.

Table 3. Example outputs (WebNLG). Source: (William Anders dateOfRetirement 1969-09-01), (Apollo 8 commander Frank Borman), (William Anders was a crew member of Apollo 8), (Apollo 8 backup pilot Buzz Aldrin). LSTM: "William Anders was a crew member of the OPERATOR operated Apollo 8 and retired on September 1st 1969." GCN: "William Anders was a crew member of OPERATOR's Apollo 8 alongside backup pilot Buzz Aldrin and backup pilot Buzz Aldrin."
GCN_EC: "william anders, who retired on the 1st of september 1969, was a crew member on apollo 8 along with commander frank borman and backup pilot buzz aldrin." (SR11Deep) Source: (SROOT SROOT will) (will P .) (will SBJ temperature) (temperature A1 economy) (economy AINV the) (economy SUFFIX 's) (will VC be) (be VC take) (take A1 temperature) (take A2 from) (from A1 point) (point A1 vantage) (point AINV several) (take AM-ADV with) (with A1 reading) (reading A1 on) (on A1 trade) (trade COORD output) (output COORD housing) (housing COORD and) (and CONJ inflation) (take AM-MOD will) (take AM-TMP week) (week AINV this). Gold: "The economy's temperature will be taken from several vantage points this week, with readings on trade, output, housing and inflation." Baseline: "the economy's accords will be taken from several phases this week, housing and inflation readings on trade, housing and inflation." GCN: "the economy's temperatures will be taken from several vantage points this week, with reading on trades output, housing and inflation."

SR11Deep Task

In this more challenging task, the GCN encoder is able to better capture the structure of the input graph than the LSTM encoder, resulting in .647 BLEU for the GCN vs. .377 BLEU for the LSTM encoder, as reported in Table 2. When we add linguistic features to the GCN encoding we get .666 BLEU points. We also compare the neural models with upper-bound results on the same dataset by the pipeline model of Bohnet et al. (2011) (STUMBA-D) and the transition-based joint model of Zhang et al. (2017) (TBDIL). The STUMBA-D and TBDIL models obtain .794 and .805 BLEU respectively, outperforming the GCN-based model. It is worth noting that these models rely on separate modules for syntax prediction, tree linearisation and morphology generation. In a multi-lingual setting (Mille et al., 2017), our model would not need to re-train such modules for different languages, but could rather exploit them for multi-task training. Moreover, our model could also exploit other supervision signals at training time, such as gold POS tags and gold syntactic trees as used in Bohnet et al. (2011).

Qualitative Analysis of Generated Text

We manually inspected the outputs of the LSTM and GCN models. Table 3 shows examples of source graphs and generated texts (we include more examples in Section A). Both models suffer from repeated and missing source content (i.e. source units that are not verbalised in the output text; under-generation), although these phenomena are less evident with the GCN-based models. We also observed that the LSTM output sometimes presents hallucination (over-generation) cases. Our intuition is that the strong relational inductive bias of GCNs (Battaglia et al., 2018) helps the GCN encoder to produce a more informative representation of the input, while the LSTM-based encoder has to learn to produce useful representations by going through multiple different sequences over the source data.

Ablation Study

In Table 4 (BLEU) we report an ablation study on the impact of the number of layers and the type of skip connections on the WebNLG dataset (Table 4: layers (L) and skip connections: none, residual (res) and dense (den); average and standard deviation of BLEU scores over three runs on the WebNLG dev set; number of parameters in millions, including embeddings). The first thing we notice is the importance of skip connections between GCN layers. Residual and dense connections lead to similar results; dense connections (Table 4 (SIZE)) produce bigger, but slightly less accurate, models than residual connections.
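The two kinds of skip connection compared in the ablation can be sketched as follows (a simplification reusing the GCNLayer sketch given earlier; note that with dense connections the input width grows by dim at every layer, so each successive layer must be sized to accept the wider input, which is also why dense models have more parameters):

```python
import torch

def stack_residual(layers, h, edges):
    # Residual: h' = GCN(h) + h (He et al., 2016); widths stay fixed,
    # so all layers can share one hidden size.
    for layer in layers:
        h = layer(h, edges) + h
    return h

def stack_dense(layers, h, edges):
    # Dense: h' = [GCN(h); h] (Huang et al., 2017); every layer sees
    # the concatenation of the outputs of all preceding layers.
    for layer in layers:
        h = torch.cat([layer(h, edges), h], dim=-1)
    return h
```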
The best GCN model has slightly more parameters than the baseline model (4.9M vs. 4.3M).

Conclusion

We compared LSTM sequential encoders with a structured data encoder based on GCNs on the task of structured-data-to-text generation. On two different tasks, WebNLG and SR11Deep, we show that explicitly encoding structural information with GCNs is beneficial compared to sequential encoding. In future work, we plan to apply the approach to other input graph representations such as Abstract Meaning Representations (AMR; (Banarescu et al., 2013)) and scoped semantic representations (Van Noord et al., 2018).
6-Nitro-1,3-benzothiazole-2(3H)-thione

In the title molecule, C7H4N2O2S2, the nitro group is twisted by 5.5 (1)° from the plane of the attached benzene ring. In the crystal, N—H⋯S hydrogen bonds link pairs of molecules into inversion dimers, which are linked by weak C—H⋯O interactions into sheets parallel to (101). The crystal packing exhibits short intermolecular S⋯O contacts of 3.054 (4) Å and π–π interactions of 3.588 (5) Å between the centroids of the five- and six-membered rings of neighbouring molecules.

In (I) (Fig. 1), the nitro group is twisted by 5.5 (1)° from the plane of the attached benzene ring. Intermolecular N—H⋯S hydrogen bonds (Table 1) link two molecules into a centrosymmetric dimer, and weak C—H⋯O interactions (Table 1) further link these dimers into sheets parallel to (101). The N—H⋯S hydrogen bond is similar to that reported for 2-mercaptobenzothiazole (Chesick & Donohue, 1971). The crystal packing (Fig. 2) exhibits short intermolecular S⋯O contacts of 3.054 (4) Å and π–π interactions, evidenced by the short distance of 3.588 (5) Å between the centroids of the five- and six-membered rings of neighbouring molecules.

Experimental

A mixture of AgCl (0.2 mmol) and bis(diphenylphosphino)methane (0.2 mmol) in MeOH and CH2Cl2 (10 mL, v/v = 1:1) was stirred for 3 h. The insoluble residues were removed by filtration. The filtrate was then evaporated slowly at room temperature for a week to yield a colourless crystalline product. The title compound was prepared by dissolving 0.0587 g of the colourless product mentioned above in MeOH and CH2Cl2 (10 mL, v/v = 3:7), adding 2-mercapto-6-nitrobenzothiazole (0.2 mmol) to the solution, and stirring for 4 h. Subsequent slow evaporation of the yellow filtrate resulted in the formation of yellow crystals.

Refinement

All H atoms were geometrically positioned [C—H 0.93 Å; N—H 0.86 Å] and included in the final refinement in the riding-model approximation, with Uiso(H) = 1.2 Ueq of the parent atom.

Figure 2. A portion of the crystal packing viewed approximately along the a axis. Dashed lines indicate short N⋯S and O⋯S contacts. H atoms omitted for clarity.

Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression of F² > σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
ValLAI_Crop, a validation dataset for coarse-resolution satellite LAI products over Chinese cropland

Numerous validation efforts have been conducted over the last decade to assess the accuracy of global leaf area index (LAI) products. However, such efforts continue to face obstacles due to the lack of sufficient high-quality field measurements. In this study, a fine-resolution LAI dataset consisting of 80 reference maps was generated for 2003–2017. The direct destructive method was used to measure the field LAI, and fine-resolution LAI images were derived from Landsat images using semiempirical inversion models. Eighty reference LAI maps, each with an area of 3 km × 3 km and a percentage of cropland larger than 75%, were selected as the fine-resolution validation dataset. The uncertainty associated with the spatial scale effect was also provided. Ultimately, the fine-resolution reference LAI dataset was used to validate the Moderate Resolution Imaging Spectroradiometer (MODIS) LAI product. The results indicate that the fine-resolution reference LAI dataset builds a bridge linking small sampling plots and coarse-resolution pixels, which is extremely important in validating coarse-resolution LAI products.

Background & Summary

The leaf area index (LAI), defined as one-half of the total leaf area per unit ground surface area 1, is a critical parameter used to characterize the structure and function of vegetation 2. Since the LAI directly relates to the acquisition and utilization of sunlight by leaves, it is a key parameter in terrestrial ecosystem models and is closely related to the carbon cycle as well as to photosynthesis, respiration and transpiration in leaves 3. Many global and regional LAI products with different temporal and spatial resolutions exist; they are derived using various retrieval algorithms and can be applied in studies addressing ecophysiology, atmosphere-ecosystem interactions and global change 4,5. However, due to the limitations resulting from radiometric calibration, the atmospheric correction of raw data, the scale effect, and retrieval algorithms, errors inevitably exist in satellite products. Thus, to make appropriate use of satellite products, it is essential to investigate and quantify the uncertainties associated with these products 6,7.

Field measurements serve as 'reference' values and constitute an important part of the validation of remote sensing products 8,9. LAI measurement methods are generally categorized into direct and indirect methods 10. Indirect methods include optical methods based on Beer's law and inclined-point quadrat methods, in which the LAI is calculated by measuring other variables, such as the gap fraction, light transmission, and the contact number. In these methods, the influences of the clumping effect, woody components and the leaf angle distribution (LAD) also need to be considered 11-13. Correcting for these variables is challenging, however, because of difficulties in their accurate measurement 14. Several methods have been developed to correct for the clumping index, including the finite-length averaging method 15, the gap-size distribution method 16,17, a combination of the gap-size distribution and finite-length averaging methods 18, and the path length distribution method 11. These methods have been applied for decades and should continue to improve in accuracy and find new applications. Many comparisons of direct and indirect methods of LAI measurement for crops and forests have also been made 19,20.
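To illustrate the indirect optical approach mentioned above, the Beer's-law relationship can be inverted from a measured gap fraction (a schematic sketch, our illustration only; the extinction coefficient is an assumed value, and clumping and woody-component corrections are ignored):

```python
import math

def beer_law_lai(gap_fraction, k=0.5):
    """Effective LAI from a measured canopy gap fraction P via
    Beer's law, LAI = -ln(P)/k. k is an assumed extinction
    coefficient; real applications correct for clumping and the
    leaf angle distribution, as discussed above."""
    return -math.log(gap_fraction) / k

print(beer_law_lai(0.25))  # gap fraction of 25% -> effective LAI ~ 2.77
```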
Methods

Study area. Field LAI measurements were collected in four areas: Beijing, Henan Province, Heilongjiang Province, and Anhui Province, as illustrated in Fig. 1. Online-only Table 1 shows detailed information about the field measurements and the selected Landsat surface reflectance images in the four study areas. A total of 1010 samples corresponding to 43 growth stages were collected during the experiments. The collected samples included wheat, barley, paddy rice and soybean. The specific sampling dates, numbers of samples, and types of crops are listed in Online-only Table 1.

The experiments in Beijing were carried out during the winter wheat growing seasons from 2004 to 2007. Beijing is located in the north of the North China Plain, which is a warm temperate zone with a semihumid and semiarid monsoon climate. The study sites in Henan Province were located in Jiaozuo and Zhoukou, which have temperate monsoon climates with abundant sunshine and a clear difference between the summer and winter temperatures. The average annual temperature in these areas is between 12.8 °C and 14.8 °C. The annual average precipitation is 644.3 mm, with 45%-60% of the precipitation falling from June to August. The crop grown at these study sites is winter wheat.

The study area in Heilongjiang Province was located at Youyi Farm, which is situated on the Sanjiang Plain. The total cultivated area of this study area is 1104.29 km², and the main crops are wheat, barley, paddy rice and soybean. The region has a temperate continental monsoon climate with a mean annual temperature of 3.4 °C. The annual average precipitation is approximately 540 mm, and the precipitation is concentrated in the summer. The Sanjiang Plain is one of the most well-known black soil plains worldwide and is characterized by a low soil albedo.

The fourth field experiment was conducted at Longkang Farm (33°06′45.2″N, 116°51′44.8″E), Anhui Province, in 2017. This study area is located in the southern part of the Huaibei Plain. The study area has an elevation of approximately 22.7-25.9 m above sea level and covers a cultivated area of approximately 20 km². It is located in a transition zone between the subtropics to the south and the warm temperate zone to the north. The site itself lies in the warm temperate semihumid monsoon agricultural zone and receives moderate rainfall and sufficient sunshine. The annual average amount of sunshine is approximately 2000 hours, which is approximately 54% of the possible maximum. The annual average temperature is 14.84 °C, and the average annual precipitation is approximately 789 mm.

LAI measurements. All of the field LAI measurements were collected using a destructive sampling method. The locations of the sampling points and the vegetation types of the study areas are illustrated in Figure S1 in the Supplementary Information. Plant samples were taken from areas of 1 m × 1 m; after being cut, they were quickly taken to the laboratory. All of the fresh leaves were quickly weighed, and 10 typical leaves were scanned to determine the leaf area. These 10 typical leaves and the remaining leaves were then dried in an oven until a constant weight (the dry weight, DW) was reached so that the leaf DW could be obtained.
The specific leaf weight (SLW) and LAI were determined as follows:

SLW = (DW)_0 / A_0    (1)

LAI = DW / (SLW × A_s)    (2)

where DW is the total dry weight of the leaves; A_0 and (DW)_0 are the area and dry weight of the typical leaves, respectively, which were used to calculate the SLW; and A_s is the sampling area (1 m × 1 m). Here, the elementary sampling unit (ESU) method 29,31 was not employed to collect LAI measurements because of the large amount of effort required to implement the destructive method. The crops were relatively uniform in comparison with natural vegetation. According to investigations by Song et al. 47, the spatial heterogeneity of winter wheat is relatively small, with a variation coefficient of less than 6% for the optimized soil-adjusted vegetation index (OSAVI). Thus, only one uniform plot with a size of 1 m × 1 m was sampled to represent a Landsat TM pixel. In addition, more than 20 samples were collected in each growth stage to build a semiempirical model to retrieve the LAI, with which a fine-resolution LAI map could be generated.

Landsat surface reflectance data and normalization. The Landsat-5 TM and Landsat-8 OLI surface reflectance (SR) products, for which a sufficient number of satellite images acquired at the same time as the field measurements were available, were used as a 'bridge' for upscaling the field LAI measurements to match the coarse-resolution LAI products. All of the Landsat TM and OLI SR images were downloaded from the United States Geological Survey (USGS) EarthExplorer website (https://earthexplorer.usgs.gov). All of these data consisted of SR products that had been derived from Level-1 data by atmospheric correction. Landsat TM/ETM SR data are generated with specialized software called the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) 48. Landsat-8 OLI SR data are generated from the Land Surface Reflectance Code (LaSRC), which makes use of the coastal aerosol band to perform aerosol inversion tests and uses MODIS auxiliary climate data and a unique radiative transfer model 49. The criteria for the selection of the Landsat SR images were that the imagery should be cloud free and acquired within seven days of the field measurements 50. The Landsat images listed in Online-only Table 1 were used to generate the fine-resolution reference LAI maps.

The satellite-based NDVI is a crucial variable in the semiempirical model used in the upscaling procedure. To reduce the uncertainty related to data quantification and to determine the parameters in the semiempirical models more accurately, the Landsat-5 TM SR imagery was normalized using the MODIS (MCD43A4) version 6 Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) product 51, which provides 500 m reflectance data adjusted using a bidirectional reflectance distribution function to model the reflectance values as if they were taken at nadir view. Relative radiometric normalization is widely used to eliminate the radiation differences among images acquired at different epochs or collected by different space-borne instruments. A clear SR image is generally selected as a reference to normalize the target image using a linear regression model band by band 52. Here, this approach was employed to normalize the Landsat TM SR image using the MODIS SR data as a reference.
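Stepping back briefly to the field measurements, Eqs. (1)-(2) amount to the following (a minimal sketch, our code with hypothetical sample values):

```python
def leaf_area_index(total_dw, typical_area_m2, typical_dw, sample_area_m2=1.0):
    """Destructive LAI from dry weights, per Eqs. (1)-(2); the
    variable names are ours. SLW = (DW)_0 / A_0 is the specific leaf
    weight of the scanned typical leaves; LAI = DW / (SLW * A_s)."""
    slw = typical_dw / typical_area_m2          # g per m^2 of leaf
    return total_dw / (slw * sample_area_m2)    # m^2 leaf per m^2 ground

# Hypothetical sample: 120 g total leaf DW; 10 typical leaves with
# 0.030 m^2 total area and 1.5 g DW, from a 1 m x 1 m plot.
print(leaf_area_index(120.0, 0.030, 1.5))  # LAI = 2.4
```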
To obtain the linear regression model for the normalization processing, the 30 m TM images were aggregated to a resolution of 500 m and converted to the same sinusoidal projection as the MODIS product; then, linear regression models linking the Landsat TM data to the MODIS SR data were built band by band. If the coefficient of determination (R²) was greater than 0.75, the TM SR data were normalized using the linear regression model; otherwise, the ratio of the mean values of the TM and MODIS SR data was used to normalize the Landsat TM SR data. A comparison of the MODIS and Landsat TM SR products (including the reflectance in the red and near-infrared bands and the NDVI) was therefore performed to normalize the Landsat SR products. Figure 2 shows the scatterplot of the normalized Landsat SR product data against the MCD43A4 data on April 1, 2004, in Beijing. The results show that the regression lines deviate from the 1:1 line, indicating that the TM red-band reflectance was higher than that of the MODIS data and that the Landsat NDVI values were smaller than the corresponding MODIS values. The normalization functions for the Landsat TM red and near-infrared bands in the Beijing, Henan, and Heilongjiang study areas are given in Tables S1-S3 in the Supplementary Information. The corresponding scatterplots are also provided in Figures S2-S7 in the Supplementary Information.

MODIS LAI product (MCD15A2H). In this study, we applied the fine-resolution validation dataset to assess the coarse-resolution MODIS LAI product, one of the most commonly used global LAI products. The MODIS LAI product version 6 (MCD15A2H) was devised by Myneni et al. 53 in 2015. This product is widely known as a mainstream global LAI product and has been applied to the modelling of atmospheric carbon assimilation, crop growth, and evapotranspiration. It is produced using a combination of Terra and Aqua data acquired every 8 days at a 500 m spatial resolution. The algorithm used to produce this product is based on three-dimensional radiative transfer theory, which is ultimately optimized using a look-up table (LUT) to solve the radiative transfer equation 54. In addition to the main LUT method, a back-up algorithm based on directional vegetation indices can be employed to retrieve the LAI for different biomes 55.

Semiempirical NDVI-based model for generating fine-resolution LAI validation maps. A semiempirical model was employed to model the relationship between the NDVI and LAI. This model is based on the Beer-Lambert law 56:

NDVI = NDVI_∞ + (NDVI_bs − NDVI_∞) exp(−K_ndvi × LAI)    (3)

where NDVI_bs is the NDVI value of bare soil, NDVI_∞ is the NDVI value corresponding to saturation of the LAI, and K_ndvi is the extinction coefficient, which is related to the structure of the scattering community (in particular, the leaf inclination distribution) and the leaf optical properties. The parameters in Eq. (3) were optimized to produce the best accuracy for the Landsat scenes covering the different study areas using the local experimental data at different growth stages and a curve-fitting algorithm that minimizes the fitting error 57. For instance, NDVI_∞ = 0.93, NDVI_bs = 0.15 and K_ndvi = 1.58 were derived from the experimental data obtained on April 1, 2004, in Beijing, as illustrated in Fig. 3. Once the parameters in Eq. (3) had been determined using the field data,
the NDVI-based regression model could be used to generate the fine-resolution LAI maps by inversion:

LAI = −(1/K_ndvi) ln[ (NDVI − NDVI_∞) / (NDVI_bs − NDVI_∞) ]    (4)

The fine-resolution 30 m LAI maps were first generated from the Landsat SR images for the different growth stages and areas using the appropriate NDVI-based model. Cloud-free reference LAI maps with a size of 3 km × 3 km centred on the field sampling points were then acquired for use as potential validation maps. Finally, the proportion of cropland in each 3 km × 3 km reference map was calculated using the GLOBELAND30-2010 land cover product 58, as shown in Fig. 1. Only the potential LAI validation maps with a proportion of cropland larger than 75% were selected for use as validation maps.

LOOCV validation method. Due to the limited field measurements in each growth stage, the leave-one-out cross-validation (LOOCV) approach 59 and a curve-fitting algorithm were employed to generate the NDVI-based LAI model. The LOOCV method splits a dataset into a training set and a testing set, using all but one observation as the training set. For example, there were 22 samples in the Beijing field experiment performed on April 1, 2004. The LOOCV approach chose 21 observations as training samples and one observation as a validation sample, and this procedure was repeated 22 times. For each repeat, 21 field measurements were used to determine the parameters in Eq. (4) based on the curve-fitting algorithm. This algorithm is in the Python scipy.optimize module, which uses nonlinear least squares to fit a function 57. Due to the limitation of the sample size, we were required to set bounds for the parameters; the algorithm derives the optimal parameter values through iteration so that the sum of the squared residuals of the function is minimized. The value range of NDVI_∞ is 0.91-0.97, NDVI_bs ranges between 0.01 and 0.18, and K_ndvi is in the range of 1.3-1.8. Thus, 22 statistical equations were obtained during the procedure. All the field measurements were then separately brought into the 22 equations to
identify the equation with the lowest RMSE, which was selected as the equation used to generate the fine-resolution LAI map. The equations used to generate the fine-resolution LAI map for each growth stage in the different study areas are shown in Table 1.

Tables 2-5 report the statistical metrics of the fine-resolution LAI maps (Table 2: Beijing; Table 3: Jiaozuo and Zhoukou, Henan Province; Table 4: Youyi Farm, Heilongjiang Province; Table 5: Longkang Farm, Anhui Province). In these tables, the mean LAI is the average LAI within each 3 km × 3 km reference map; the uncertainty is the product of the mean LAI and the RRMSE obtained using the NDVI-based inversion model; the standard deviation represents the spatial heterogeneity of the fine-resolution LAI maps; the scaling difference is the difference between the mean LAI values generated using the two different upscaling methods; and the IDs correspond to the file names of the reference LAI maps.

Several quality indicators were employed to assess the reference maps and LAI products, including the RMSE, relative root mean square error (RRMSE), coefficient of determination (R²), and relative bias. The relative bias is the relative difference between the corresponding reference LAI and field LAI, defined as:

Relative bias = (mean LAI_ref − mean LAI_field) / mean LAI_field

where mean LAI_ref represents the mean value of the estimated reference LAI in each growth stage and mean LAI_field represents the mean value of the field LAI in each growth stage. Uncertainty is one of the most important indicators used to represent the accuracy of reference maps and is of great significance for product validation. The uncertainty was defined as:

Uncertainty = LAI_mean × RRMSE

where LAI_mean represents the mean value of the LAI within the 3 km × 3 km reference map and RRMSE represents the relative root mean square error between the generated and field-measured LAI in each growth stage.

Determination of the scaling difference using different upscaling methods. In the absence of scaling errors, Tian et al. (2003) found that the LAI obtained from coarse-resolution satellite data should be equal to the arithmetic average of the values obtained from fine-resolution data 60. Due to the heterogeneity of the land surface and the nonlinearity of the inversion model, scaling errors are inevitable in retrieving the LAI at coarse spatial resolution 61-63. To investigate the scaling errors inherent to the coarse-resolution LAI product, the differences between the U1 and U2 upscaling methods were obtained to partly quantify the errors in product validation. The upscaling method U1 is the so-called 'invert first and then average' method, in which the fine-resolution NDVI is calculated first and the fine-resolution LAI is then retrieved based on the semiempirical NDVI-based model; the fine-resolution LAI maps are then aggregated (i.e., upscaled) to generate the coarse-resolution LAI. The upscaling method U2 is the so-called 'average first and then invert' method: the fine-resolution SR image is aggregated to a coarse-resolution image to derive the coarse-resolution NDVI, and the semiempirical NDVI-based model is then used to retrieve the coarse-resolution LAI. The per-pixel difference between the coarse-resolution LAI images obtained using the two upscaling methods can be regarded as the spatial-scale difference 26,61. Details regarding scaling differences are provided in the Supplementary Information.
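To illustrate the two upscaling routes, here is a minimal sketch (our code; the block-aggregation layout is our own choice, and the Beijing 2004-04-01 parameters quoted above are used as illustrative defaults):

```python
import numpy as np

def lai_from_ndvi(ndvi, ndvi_inf=0.93, ndvi_bs=0.15, k_ndvi=1.58):
    # Inverted semiempirical model, Eq. (4).
    return -np.log((ndvi - ndvi_inf) / (ndvi_bs - ndvi_inf)) / k_ndvi

def scaling_difference(fine_ndvi, factor):
    """Difference between U1 ('invert first, then average') and
    U2 ('average first, then invert') on a 2-D fine-resolution NDVI
    array, aggregated in (factor x factor) blocks."""
    h, w = fine_ndvi.shape
    blocks = fine_ndvi.reshape(h // factor, factor, w // factor, factor)
    u1 = lai_from_ndvi(fine_ndvi).reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    u2 = lai_from_ndvi(blocks.mean(axis=(1, 3)))
    return u1 - u2  # nonzero wherever the coarse pixel is heterogeneous

ndvi = np.random.uniform(0.2, 0.9, (32, 32))  # synthetic 30 m NDVI patch
print(np.abs(scaling_difference(ndvi, 16)).mean())
```

Because Eq. (4) is nonlinear in NDVI, the two routes agree only over perfectly homogeneous pixels, which is exactly why the scaling difference is reported as a separate quality metric for each reference map.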
Data Records

On the basis of the selection rules introduced in the section 'Semiempirical NDVI-based model for generating fine-resolution LAI validation maps', a total of 80 fine-resolution LAI validation maps with a size of 3 km × 3 km were generated from the Landsat-5 TM and Landsat-8 OLI reflectance data; these maps are shown in Figures S9-S13 in the Supplementary Information. Detailed statistical metrics for these 80 fine-resolution maps are summarized in Tables 2-5. The scaling difference was taken as the difference between the mean LAI values generated using the two different upscaling methods introduced in Figure S8 in the Supplementary Information. The standard deviation reflects the spatial heterogeneity of the 3 km × 3 km fine-resolution LAI maps. The underestimation caused by the scaling difference for the Henan, Beijing, and Anhui study areas (which have relatively light soil substrates) and the overestimation for the Heilongjiang study area (where the soil background is dark) agree with the results of the investigation performed by Liu et al.: "underestimation for mixed pixels with bright non-vegetation components and an overestimation for those with dark non-vegetation components" 26,64.

Table 2 lists the statistical metrics of the fine-resolution LAI validation maps for Beijing. A total of 32 reference maps corresponding to eight growth stages were obtained between 2004 and 2007. The LAI for the 32 reference maps is relatively low, ranging from 0.273 to 2.257, with a mean uncertainty of 0.290. The spatial heterogeneity is relatively large, with a mean standard deviation of 0.720, which gives a relatively large scaling difference with a mean value of 0.046.

Table 3 lists the statistical metrics of the fine-resolution LAI validation maps for the study areas in Henan Province. Twenty reference maps corresponding to five growth stages were obtained from 2003 to 2004. The LAI for these 20 reference maps varies from 1.615 to 4.310, with a mean uncertainty of 0.364. The spatial heterogeneity is higher than that for the Beijing study area, with a mean standard deviation of 1.361. The scaling difference is still evident, with a mean value of 0.302.

Table 4 lists the statistical metrics of the fine-resolution LAI validation maps for Youyi Farm, Heilongjiang Province. Here, 20 reference maps corresponding to five growth stages were obtained from 2005 to 2006. The LAI in these maps is relatively low, ranging from 0.293 to 1.338, with a mean uncertainty of 0.189. At Youyi Farm, the size of the fields was much larger than that in the other study areas; the spatial heterogeneity is thus relatively small, with a mean standard deviation of 0.413.
The scaling difference is the smallest among all the study areas and has a mean value of 0.013. Table 5 lists the statistical metrics of the fine-resolution LAI validation maps for Longkang Farm, Anhui Province. These statistics are for eight reference maps corresponding to two growth stages in 2017. The LAI for these eight reference maps is relatively large, ranging from 2.190 to 4.651, with a mean uncertainty of 0.685. The spatial heterogeneity is similar to that in the Henan study area, with a mean standard deviation of 1.528. The scaling difference has a mean relative value of 0.553. The field measurements, published for public use, are available at Zenodo, https://doi.org/10.5281/zenodo.5091251. The dataset contains readme files, compressed files of the fine-resolution LAI maps, and files of statistics for the reference maps. The intermediate NDVI files and reference LAI maps derived using the U2 upscaling method are also provided 65 .
Technical Validation Performance of the semiempirical models. The semiempirical NDVI-based models used to generate the fine-resolution reference LAI maps were validated using field measurements and the LOOCV method for the four study areas. This process is illustrated in Figs. 4-7. The results of a statistical comparison of the field-measured and generated LAI are also displayed in the figures. In Figs. 4-7, the field-measured LAI values are compared with the LAI values derived by applying the semiempirical LAI model to Landsat TM/OLI SR data for the four study areas (Beijing, Henan, Heilongjiang, and Anhui). The results shown in Fig. 4 are characterized by slopes that are close to the 1:1 line, with RMSE values ranging from 0.25 to 0.72. As the results are displayed separately for each growth stage, the LAI values measured during the early growth stage have a wide distribution, with the result that the coefficient of determination for the regreening stage is low. Figure 5 displays the relationship between the field-measured LAI and the predicted LAI values for the Henan test area based on the fitted semiempirical model: in this case, the RMSE ranges from 0.31 to 0.92, and the RRMSE is less than 23.16%. Figure 6 shows a comparison of the field-measured and predicted LAI values for Youyi Farm, Heilongjiang Province. On May 5th, 2005, and June 6th, 2006, field measurements of both wheat and barley were performed at this site; the samples collected on June 14th, 2007, were of barley only. Since barley and wheat are crops with similar vegetation structures, the two crop types are not separated in this comparison. The RMSE for these data has a range of 0.22 to 0.37, and the RRMSE has a range of 18.25% to 36.78%. The plots displayed in Fig. 7 show the relationship between the field-measured and predicted LAI values for Longkang Farm, Anhui Province. The slopes here are close to the 1:1 line, and the RMSE has a range of 0.67 to 0.95.
Validation of MODIS LAI. The 80 reference LAI maps with a size of 3 km × 3 km derived from the two upscaling methods ( Figure S8 in Supplementary Information) and the corresponding field LAI measurements were employed to validate the MODIS LAI V6 product (MCD15A2H) for the four study areas. The validation results are illustrated in Fig. 8 and Table 6. In Fig. 8(a), the fine-resolution reference LAI maps (30 m) derived from Eq. (4) were compared with the MODIS LAI in the range of 3 km × 3 km, which refers to the U1 upscaling method.
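The LOOCV procedure can be sketched as follows. The Beer-Lambert-type functional form used here, LAI = −ln((NDVI∞ − NDVI)/(NDVI∞ − NDVI_soil))/k, is an assumption standing in for the paper's Eq. (4), the starting parameters are placeholders, and the NDVI samples are assumed to lie below the saturation value NDVI∞.

```python
import numpy as np
from scipy.optimize import curve_fit

def lai_from_ndvi(ndvi, ndvi_inf, ndvi_soil, k_ext):
    """Assumed Beer-Lambert-type inversion: NDVI saturates toward ndvi_inf as LAI grows."""
    return -np.log((ndvi_inf - ndvi) / (ndvi_inf - ndvi_soil)) / k_ext

def loocv_rmse(ndvi, lai, p0=(0.95, 0.10, 0.60)):
    """Leave-one-out cross-validated RMSE for one growth stage and study area."""
    ndvi, lai = np.asarray(ndvi, float), np.asarray(lai, float)
    errors = []
    for i in range(len(ndvi)):
        keep = np.arange(len(ndvi)) != i           # hold out sample i
        popt, _ = curve_fit(lai_from_ndvi, ndvi[keep], lai[keep], p0=p0, maxfev=10000)
        errors.append(lai_from_ndvi(ndvi[i], *popt) - lai[i])
    return float(np.sqrt(np.mean(np.square(errors))))
```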
To investigate how the scaling difference contributes to the discrepancies between the fine-resolution maps and the coarse-resolution products, the reference LAI maps at 500 m resolution were obtained based on the 'average first and then invert' (U2 upscaling) method with a size of 3 km × 3 km (as described in Figure S8). These reference LAI maps at 500 m resolution were compared with the MODIS LAI, as illustrated in Fig. 8(b). In addition, the field LAI measurements were directly compared with the corresponding MODIS LAI, as illustrated in Fig. 8(c). The results illustrated in Fig. 8(a) indicate that the MODIS LAI values are underestimated in comparison to the fine-resolution reference LAI data in the range of 3 km × 3 km, especially in the case of the Henan study area. Table 6 shows that the accuracy of the MODIS LAI product varies among the study areas: the values are severely underestimated for crops in Beijing, Henan, and Anhui (relative bias = −27.0%, −48.9%, and −10.8%, respectively), whereas the values are overestimated for the crops with a black soil background in Heilongjiang Province (relative bias = 56.9%). Due to the existence of surface heterogeneity, applying the model developed with 30 m data to 500 m data could result in some discrepancies. Since coarse-resolution LAI should be equal to aggregated fine-resolution LAI in the absence of scaling errors, validation using the reference LAI derived from the U2 method will result in artificially high accuracy 60 . However, by comparing the validation results from the U1 and U2 methods, the error due to the scale effect inherent to the coarse-resolution product can be at least partly quantified. In Fig. 8(b), the results gave an RMSE of 0.78 against the value of 0.91 that was obtained by applying the U1 ('invert first and then average') upscaling method to the reference LAI dataset in Fig. 8(a), which indicates that the scaling difference also contributes to the error in the coarse-resolution MODIS LAI product. When the scaling difference was taken into consideration and compensated for by applying the U2 upscaling method to the reference LAI dataset, the underestimates for the Beijing, Henan, and Anhui areas were reduced, giving relative biases of −24.0%, −43.0%, and 6.0%, respectively, compared with −26.9%, −48.9%, and −10.8% in Fig. 8(a), respectively. In terms of the accuracy of the MODIS LAI in Heilongjiang, since the land cover there is relatively uniform, its mean scaling difference is the lowest among the four study areas, and the RMSE and relative bias thus slightly increased from 0.52 to 0.53 and from 56.9% to 59.8%, respectively. A direct comparison with the field measurements ( Fig. 8(c)) produced much higher uncertainties (RMSE = 1.99, RRMSE = 76.8%, relative bias = −49.3%) than were found by using the upscaled reference LAI dataset. In this study, a highly accurate fine-resolution LAI dataset for Chinese croplands that could be used as a reference for coarse-resolution LAI products was derived from field measurements and fine-spatial-resolution satellite imagery (Landsat-5 TM and Landsat-8 OLI data). A semiempirical statistical model based on the Beer-Lambert law was used to derive fine-resolution LAI data that could be used for validation of the coarse-resolution LAI product at each growth stage. The parameters of each semiempirical model were estimated using the field LAI at each growth stage based on the curve-fitting algorithm and the LOOCV approach.
During this procedure, the performance of each semiempirical model was also investigated. Finally, eighty fine-resolution reference LAI maps with a size of 3 km × 3 km were generated for the study areas in four Chinese provinces. This fine-resolution reference LAI dataset was applied to assess the accuracy of MODIS LAI among these four study areas using the U1 upscaling method. The MODIS LAI was also compared to the reference LAI generated using the U2 upscaling method, through which the error due to the scale effect inherent to the coarse-resolution LAI product can be partly quantified. The direct comparison of the LAI data collected in the field and the MODIS LAI showed considerable uncertainty. Therefore, this study contributes to the validation of remote sensing LAI products by providing a set of fine-resolution reference LAI datasets based on destructive sampling methods and highlights the importance of using a fine-resolution reference LAI dataset based on direct field measurements. Such a dataset can bridge the gap between field measurements and coarse-resolution pixel data. Fig. 8 Validation results for the MCD15A2H LAI products obtained by applying (a) the U1 upscaling method to the LAI data from the 80 fine-resolution LAI maps and (b) the U2 upscaling method to LAI data from the 80 reference maps. (c) Validation results obtained using the corresponding field measurements. Table 6. Validation metrics for the MODIS LAI product using data from the fine-resolution reference LAI maps and field-measured LAI values in the four study areas. R²: coefficient of determination, RMSE: root mean square error, RRMSE: relative RMSE, RB (relative bias): ratio of the difference in the MODIS and fine-resolution LAI to the fine-resolution LAI.
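The per-area validation statistics reported in Table 6 can be computed with a sketch like the following, using the definitions from the table footnote above; the function and variable names are illustrative.

```python
import numpy as np

def validation_metrics(modis_lai, ref_lai):
    """R^2, RMSE, RRMSE and relative bias (RB) of MODIS LAI against reference LAI."""
    modis, ref = np.asarray(modis_lai, float), np.asarray(ref_lai, float)
    rmse = float(np.sqrt(np.mean((modis - ref) ** 2)))
    return {
        "R2": float(np.corrcoef(modis, ref)[0, 1] ** 2),
        "RMSE": rmse,
        "RRMSE": rmse / ref.mean(),
        "RB": (modis.mean() - ref.mean()) / ref.mean(),  # (MODIS - ref) / ref
    }
```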
7,196
2021-09-20T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Gain roll-off in cadmium selenide colloidal quantum wells under intense optical excitation Colloidal quantum wells, or nanoplatelets, show some of the lowest thresholds for amplified spontaneous emission and lasing among solution-cast materials and some of the highest modal gains of any known material. Using solution measurements of colloidal quantum wells, this work shows that under photoexcitation, optical gain increases with pump fluence before rolling off due to broad photoinduced absorption at energies lower than the band gap. Although gain induced by an electron-hole plasma is commonly found in bulk materials and epitaxial quantum wells, under no measurement conditions was the excitonic absorption of the colloidal quantum wells extinguished and gain arising from a plasma observed. Instead, like gain, excitonic absorption reaches a minimum intensity near a photoinduced carrier sheet density of 2 × 10¹³ cm⁻², above which the absorption peak begins to recover. To understand the origins of these saturation and reversal effects, measurements were performed with different excitation energies, which deposit differing amounts of excess energy above the band gap. Across many samples, it was consistently observed that less energetic excitation results in stronger excitonic bleaching and gain for a given carrier density. Transient and static optical measurements at elevated temperatures, as well as transient X-ray diffraction of the samples, suggest that the origin of gain saturation and reversal is a heating and disordering of the colloidal quantum wells which produces sub-gap photoinduced absorption. Results and discussion Light amplification in colloidal quantum wells. Figure 1 shows typical absorption, transmission electron microscopy, and emission of CdSe CQWs. Figure 1a shows the linear absorption (μ) of 4.5 monolayer (ML, 4 monolayers of Se and 5 monolayers of Cd) and 5.5 ML samples based upon literature reports 21 . The monolayer thickness defines the position of the strong excitonic transitions, which arise due to heavy-hole (HH), light-hole (LH), and split-off bands 22 . Atomic precision in thickness across the sample, despite polydisperse lateral dimensions as shown in Fig. 1b, yields narrow optical transitions in absorption and emission. Figure S1 contains transmission electron microscope images of other samples featured in this work. Fluence-dependent emission of dilute solutions of CdSe CQWs (Fig. 1c) shows a pronounced broadening and increased emission at lower energies attributed to biexcitonic emission 3 . In a dense film as in Fig. 1, an amplified spontaneous emission (ASE) feature also emerges above threshold fluences at energies ~ 50 meV lower than the photoluminescence peak. The origin of ASE in CdSe CQWs has been intensively studied. Underpinning the reported physics of CQW lasing is that they have exciton binding energies far greater than thermal energy at 300 K. Reported exciton binding energies for 4.5 ML and 5.5 ML CQWs are 160-200 meV [23][24][25] , which is sufficiently large that excitons are anticipated to dominate the physics of these samples up to the melting point 26 . As a consequence, gain and lasing in CQWs are observed from biexcitonic species (at times termed "excitonic molecules"), as shown in the cartoon in Fig. 1e 1,3,27 . The exact number of excitons per particle corresponding to the transparency condition (A₀ + ΔA = 0) and enabling gain depends on the lateral area of the CQWs, but corresponds to an electron-hole density of c.
2.5 × 10¹² cm⁻² in 4.5 ML CQWs 27 . All previous reports of ASE or lasing have been consistent with this biexcitonic mechanism. This is distinct from epitaxial quantum wells, in which biexciton lasing may be observed at low temperatures in some samples, but plasma-based lasing is ordinarily found at higher temperatures for which thermal energy is greater than the exciton binding energy 11,28,29 . In principle, gain from a plasma can also be observed above the Mott transition, at which electron-hole densities reach such a level that the available space per exciton is comparable to the Bohr radius 20 . Such high excitation densities destabilize excitons and result in an electron-hole plasma. Mott transitions have been observed in bulk and quantum forms of GaAs 16,30,31 , ZnO 32,33 , CdS 32 , and InGaAs 14 , and, despite very large exciton binding energies, in transition metal dichalcogenides 34 . The formation of an electron-hole plasma extinguishes excitonic absorption and is accompanied by a blue-shift of the photoluminescence and absorption associated with the continuous density of states of free carriers 14,19,30 . However, there is no unambiguous evidence from photoemission experiments or gain spectroscopy that full Mott transitions occur in CQWs, or at what densities they might occur. Only one report, based upon transient absorption spectroscopy, indicates the formation of a plasma, which nonetheless coexists with biexcitonic gain 17 . Gain spectroscopy of colloidal quantum wells. To examine the formation of an electron-hole plasma in CdSe CQWs, transient absorption was performed on several 4.5 ML and 5.5 ML samples at variable excitation fluence. To ensure, as best as possible, that the results do not reflect photocharging or irreversible sample changes, samples were vigorously stirred during measurements, and the data presented in this work represent reproducible transient spectra of multiple time delay scans. The results of these experiments, with transient spectra collected at 3 ps pump-probe delay, which allows relaxation of photoexcited carriers, are shown in normalized plots in Fig. 2a and b. (Corresponding data collected at 40 ps and 3 ns pump-probe delay may be found in Figs. S2 and S3.) Raw ΔA data are presented in Fig. S4. Several phenomena occur in both fluence-dependent series: (i) initially narrow bleaching features broaden substantially at carrier densities up to c. 2 × 10¹³ cm⁻²; (ii) the bleaching intensity of the light-hole feature continuously increases relative to the heavy-hole band edge bleach with increasing fluence; and (iii) at electron-hole densities greater than 2 × 10¹³ cm⁻², a broad photoinduced absorption appears at energies below the excitonic absorption. The photoinduced absorption or gain for the two samples are shown in Fig. 2c and d, respectively, with an expansion of the spectral region showing gain in Fig. 2e and f. As shown in Fig. 2c and d (and Fig. S4), the excitonic absorption feature of the CQWs first decreases, due to bleaching, but this effect saturates and then reverses with increasing fluence. A related effect is apparent in the gain. Earlier reports of the gain spectrum of CdSe CQWs show very similar gain spectra, including saturation, for electron-hole densities of c. 1 × 10¹³ cm⁻² and lower 1,17,27,35,36 , which corresponds roughly, depending on the CQW size and excitation photon energy, to fluences > 500 μJ·cm⁻² or exciton numbers > 30 per CQW.
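For orientation, the conversion from pump fluence to an electron-hole sheet density can be sketched as follows; the absorption cross-section and platelet area below are hypothetical round numbers chosen only for illustration, not values reported in this work.

```python
EV_TO_J = 1.602176634e-19  # J per eV

def photons_per_cm2(fluence_uj_cm2, photon_ev):
    """Incident photon areal density for a given fluence and photon energy."""
    return fluence_uj_cm2 * 1e-6 / (photon_ev * EV_TO_J)

def sheet_density_cm2(fluence_uj_cm2, photon_ev, sigma_cm2, area_cm2):
    """Electron-hole pairs per cm^2 of CQW plane, assuming one pair per absorbed
    photon: mean excitons per platelet (photon density x absorption cross-section)
    divided by the platelet's lateral area. sigma_cm2 and area_cm2 are assumptions."""
    return photons_per_cm2(fluence_uj_cm2, photon_ev) * sigma_cm2 / area_cm2

# Hypothetical example: 500 uJ/cm^2 at 3.50 eV, sigma ~ 1e-13 cm^2 and a
# 20 nm x 10 nm lateral area give a few times 10^13 pairs per cm^2.
print(f"{sheet_density_cm2(500, 3.50, 1e-13, (20e-7) * (10e-7)):.2e}")
```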
Most earlier literature does not report results for higher excitation densities, but in one report from Tomar et al. 17 , a second band of gain is observed at 2.45 eV for 4.5 ML samples. This was not observed in the present study for any of several samples, including (as shown in Fig. S5) core/shells: at the highest carrier densities, the photoinduced absorption results in substantially diminished gain bandwidth, similar to observed photoinduced absorptions in CdSe quantum dots 18 , and no second band of gain is observed. The absence of the second gain band suggests that an electron-hole plasma is not achieved and the gain mechanism remains excitonic. This is particularly surprising for core/shell samples, which are anticipated to have much reduced exciton binding energies. The trends apparent in Fig. 2 are quantified in Figs. 3 and 4 across three samples each of 4.5 ML and 5.5 ML thicknesses using two excitation photon energies, 3.50 eV and 2.72 eV (see Fig. S6). Here, data are presented as a function of the electron-hole sheet density of the samples, rather than fluence, which does not account for energy differences of the pump excitations, or exciton number, which does not account for differences in the CQW physical dimensions. These alternative representations may be found in Figs. S7 and S8. Individual points show the data for different CQW samples, and the solid and dashed lines show the smoothed averaged data of all the samples with either 3.50 eV (solid) or 2.72 eV (dashed) pump photon energy. Figure 3a and b show the normalized intensity of the first excitonic absorption associated with the heavy hole of 4.5 ML and 5.5 ML CQWs, respectively, as a function of the electron-hole density of the samples. Bleaching of the exciton was consistently greater under photoexcitation with 2.72 eV photons, compared to 3.50 eV photoexcitation, but in both cases, the bleaching saturates and reverses for electron-hole excitation densities greater than 2 × 10¹³ cm⁻². It is noted that a related phenomenon has been observed in core/shell CQW systems in saturable absorption experiments, attributed to potential exciton-exciton interactions or enhanced upconversion of higher-energy LH excitons from HH excitons 37 . This second explanation is consistent with data in Fig. 3c and d, showing larger LH to HH ratios. The ratio of LH to HH bleaching intensity is a function of carrier density via state filling and temperature (see below). As shown in Fig. 3c and d, the ratio of the LH bleaching intensity is consistently larger, for similar initial carrier density, for the 2.72 eV pump than the 3.50 eV pump. The stronger intensity of excitonic bleaching of both HH and LH transitions under 2.72 eV photon energy excitation may be explained by a higher effective quantum yield for bleaching with less energetic photons 38 , although reports of the energy-dependent quantum yield are contested [39][40][41][42] . A second effect accompanying more intense photoexcitation is captured in Fig. 3e and f. The observable gain in CQWs under the same photoexcitation conditions is analyzed at a few representative energies for the 4.5 ML and 5.5 ML samples in Fig. 4. Similar to earlier work 27 , peak gain values, achieved at the highest energies monitored in Fig. 4, reach values of 20,000-30,000 cm⁻¹, which is at least qualitatively consistent with the large gain coefficients observed in variable stripe measurements 7 .
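Gain coefficients of the magnitude quoted above can be estimated from transient absorption data with a relation of the following kind; this is a simplified sketch that assumes the total absorbance A₀ + ΔA and the optical path length through the CQW material are both known, and it ignores scattering and solvent contributions.

```python
import numpy as np

def material_gain_cm(a0, delta_a, path_cm):
    """Net material coefficient from absorbance: alpha = ln(10) * (A0 + dA) / L.
    Negative alpha corresponds to optical gain of magnitude |alpha|."""
    return np.log(10) * (np.asarray(a0) + np.asarray(delta_a)) / path_cm
```

In this convention, the transparency condition (A₀ + ΔA = 0) introduced earlier corresponds exactly to alpha = 0.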
In all observed cases, similar to the persistent excitonic absorption, a blueshift and broadening of gain associated with a Mott transition are not observed. Instead, at excitation densities greater than 2 × 10¹³ cm⁻², modal gain saturates and reverses at all measured wavelengths and is not observed at all for energies larger than the HH excitonic transition. Gain is consistently greater and saturates at slightly higher electron-hole densities using 2.72 eV photoexcitation as compared to 3.50 eV pumping. This effect is stronger in the case of 4.5 ML CQWs, for which the relative difference in excess energy of the pump excitation above the band gap is much larger. The absence of the spectral signatures of a Mott transition in CQWs under intense excitation is surprising. CdSe colloidal quantum dots show substantial quenching of the first (1S) excitonic absorption 18,43,44 , as do related transition metal dichalcogenides 34 . At electron-hole densities greater than 1 × 10¹³ cm⁻², the effective radius of carriers is less than one-half of the bulk CdSe Bohr radius (5.6 nm) 45 . One explanation is that the in-plane exciton size in CdSe CQWs is much smaller than the Bohr radius of bulk CdSe, comparable to the CQW thickness 23,46 , or less than one-fifth of the Bohr radius in these samples. Nonetheless, at still higher excitation intensity, a Mott transition remains possible, and at the highest excitation intensities used in this work, the effective radius available per exciton is < 1 nm. The persistent strength of excitonic absorption, even for such high carrier densities, and the absence of plasma formation provide evidence consistent with theoretical predictions of a degenerate quantum exciton gas 47 . Instead of the formation of an electron-hole plasma, photoinduced absorption results in gain reversal at all energies and a narrowing of gain bandwidth. At lower energies, such as 2.25 eV for 4.5 ML CQWs or 2.10 eV for 5.5 ML CQWs, the photoinduced absorption yields particularly large losses greater than 10,000 cm⁻¹. It should also be noted that all of the trends apparent at 3 ps pump-probe delay may also be observed at longer time delays. Analogous patterns of excitonic bleaching, broadening, and gain are observed in data collected at 40 ps pump-probe delay, which is shown in Figs. S9 and S10. The pattern of persistently stronger excitonic bleaching and gain in samples pumped with 2.72 eV pump photon energy compared to those with 3.50 eV pump photon energy is preserved at longer pump-probe delays, with differences in excitonic absorption becoming even larger. However, at longer pump-probe delays, the magnitude of most features is weaker and the gain bandwidth smaller, as excitonic recombination is rapid once population inversion is achieved. Thermal response under intense photoexcitation. Broad, "parasitic" photoinduced absorption was previously observed in CdSe colloidal quantum dot samples 18 . In that work, Malko et al. showed that the cross section of the photoinduced loss was fixed for all quantum dot sizes, completely suppressing gain in small quantum dots, but not in large quantum dots. Based upon the sensitivity of the gain to the solvent environment (or solid versus solution conditions), the photoinduced absorption which parasitized optical gain was attributed to extrinsic electronic effects on the quantum dots, such as interfacial trap sites, and not to thermal effects from quantum dot heating. Arguing against a thermal origin of gain reversal in CdSe quantum dots, Malko et al.
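The geometric argument about the space available per exciton reduces to r = 1/√(πn) in two dimensions; a short check reproduces the figures quoted in the text.

```python
import numpy as np

def radius_per_pair_nm(sheet_density_cm2):
    """Radius of the area available per electron-hole pair in a 2-D sheet (nm)."""
    return 1e7 / np.sqrt(np.pi * sheet_density_cm2)  # 1 cm = 1e7 nm

for n in (1e13, 2e13, 1e14):
    print(f"n = {n:.0e} cm^-2 -> r = {radius_per_pair_nm(n):.2f} nm")
# -> about 1.8 nm at 1e13 (less than half the 5.6 nm bulk Bohr radius)
#    and about 0.6 nm at 1e14 (below 1 nm, as stated in the text)
```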
reported no red-shifts of the photoluminescence at high intensities, which would be associated with the Varshni-like behavior of the CdSe band gap 18 . Although the phenomenon observed optically appears to be quite similar in CdSe CQWs and quantum dots, the details are distinct in many respects. In the case of CQWs, several lines of evidence implicate a thermal origin of the reduction of gain at high excitation intensities. A simple calculation based upon the heat capacity of bulk CdSe, assuming no heat dissipation into the environment, and using the excitation densities of the experiments presented here indicates that the temperature of the CdSe lattice can increase by 100 K or more for electron-hole densities greater than 1 × 10¹⁴ cm⁻², with more heating anticipated for a larger excess photon energy of the pump. (See Supporting Information Fig. S11.) For reference, pulsed excitation fluences 3-4 times greater than those used in this work (13-17 mJ·cm⁻²) are reported to reversibly melt bulk CdSe 48 , which has a substantially higher melting point 26,49 ; intensities comparable to those used here (as discussed below) were found by transient X-ray diffraction of nanocrystals to yield disordering 50 . The rate of heat dissipation to the environment is therefore critical. Heat outflow from CQWs to methylcyclohexane, which is used for the gain spectroscopy experiments, occurs, at least for small temperature differentials, on a time-scale of c. 160 ps for a 4.5 ML CdSe CQW sample 51 and c. 240 ps (Fig. S12) for a 5.5 ML CdSe CQW sample used in this work. At large temperature differentials, such as those in transient X-ray diffraction, heat loss to a solution environment is also on a time-scale of hundreds of picoseconds 52 . Dissipation of heat in the solid state is even slower 51,53,54 . Buildup of gain or ASE occurs with intraband relaxation in ~ 1 ps, based upon time-resolved studies of gain in Figs. S13 and S14 and literature data 2,17,27 . Because the time-scale of heat dissipation is much slower than the buildup of gain, lattice heating of the CQW occurs simultaneously with gain and ASE. At the same time, photoinduced heating of the CQWs may have a relatively small influence on time-integrated emission occurring over several nanoseconds, particularly in solutions. The data presented for 2.72 eV and 3.50 eV photon pump energy are at least indicative of the influence of heating arising from the larger excess energy of the 3.50 eV pump. The gain and excitonic bleaching with 2.72 eV photon energy are stronger and do not reverse as substantially for a given electron-hole density, compared to the 3.50 eV pump. Distinct from earlier reports on quantum dots, the photoluminescence and ASE band of CQWs red-shift appreciably at electron-hole densities greater than 2 × 10¹³ cm⁻² 7,17,27 . High fluence photoemission measurements far above the gain threshold were performed in a front face reflection geometry on semitransparent thin films of CQWs using a small excitation spot to suppress the intensity of ASE and avoid inner filtering effects on the emission. Figure 5 shows the results of high-intensity photoexcitation of CdSe CQW films, in which the band of ASE red-shifts to lower energy with progressively higher fluence. From Figs. 2 and 4, this red-shift is not well-explained by a red-shift in the gain spectrum. Indeed, in the case of 4.5 ML CQWs, the gain band begins to blue-shift as the available gain bandwidth decreases (Fig. 5a).
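The back-of-the-envelope lattice-heating estimate can be reproduced along these lines. The specific heat, density and platelet thickness used below are approximate bulk-CdSe-style values inserted as assumptions, and the energy deposited per pair depends on what fraction of the photon energy ultimately thermalizes.

```python
EV_TO_J = 1.602176634e-19
CP_J_G_K = 0.26       # approximate specific heat of bulk CdSe (assumption)
RHO_G_CM3 = 5.81      # approximate density of bulk CdSe (assumption)

def lattice_delta_t(sheet_density_cm2, heat_ev_per_pair, thickness_nm):
    """Temperature rise (K) of a CQW of given thickness, assuming no heat outflow
    and heat_ev_per_pair of energy deposited in the lattice per electron-hole pair."""
    heat_j_cm2 = sheet_density_cm2 * heat_ev_per_pair * EV_TO_J
    capacity_j_k_cm2 = CP_J_G_K * RHO_G_CM3 * thickness_nm * 1e-7  # nm -> cm
    return heat_j_cm2 / capacity_j_k_cm2

# 1e14 pairs/cm^2 in a ~1.4 nm platelet: roughly 70 K if only the ~1 eV excess
# energy of a 3.50 eV pump thermalizes, and several hundred K if the full photon
# energy is converted to heat (e.g., via Auger recombination).
print(lattice_delta_t(1e14, 0.95, 1.4), lattice_delta_t(1e14, 3.50, 1.4))
```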
Also noteworthy, the relative intensity of the ASE band compared to the excitonic and biexcitonic emission saturates at electron-hole densities of > 5 × 10¹³ cm⁻² for the 5.5 ML sample, and for the 4.5 ML sample, the relative intensity of ASE even decreases. This saturation and reduction of ASE intensity is consistent with the reduction of optical gain at higher electron-hole densities observed by gain spectroscopy on solutions. Complementing this, static absorption spectra of the 4.5 ML and 5.5 ML CQW samples were also collected (raw data in Fig. S15) and the thermal difference spectra (A_T − A_295 K) are shown in Fig. 6a and b, overlapped with a transient absorption spectrum collected at high fluence. As anticipated from the thermochromic behavior of CdSe 26 , increases in the static temperature of the CQWs lead to an increase in the absorption of the film at energies below the ambient band gap, qualitatively resembling the photoinduced absorption feature observed by transient spectroscopy. An important distinction should be noted: although the static absorption data show a typical red-shift with heating, the apparent peak of the exciton in gain spectra in Fig. 2 does not shift substantially. This apparent contradiction is explained, primarily, by the presence of stimulated emission from biexcitons (responsible for gain), which suppresses linear absorption μ on the red edge of the lowest excitonic absorption feature. This stimulated emission feature, which produces a negative ΔA signal, directly competes with red-shifted absorption that produces a positive ΔA signal, apparent in Fig. 6a and b at low energies. Other smaller sources of divergence can include Moss-Burstein filling, which blueshifts the spectrum of the CQWs at high sheet density, and any contributions from the attenuation of the excitation with depth due to Beer's law, which broadens the temperature profile of the resulting CQWs, resulting in a more gently sloping induced absorption feature in transient absorption measurements than in static thermal difference spectra. In addition to the changes below the band gap, both static absorption spectra and low-fluence (< 1 × 10¹² cm⁻²) transient absorption spectra (Figs. S15, 6c and d) collected at elevated sample temperatures show increases in the LH:HH ratio and the bandwidth of the photoinduced bleach, which are catalogued in Fig. 6e and f, respectively. In particular, the bandwidth of the transient bleach feature under low fluence at 500 K reaches 25-30 meV, close to the same values reached for photoexcited samples at room temperature with electron-hole densities greater than 1 × 10¹⁴ cm⁻². Such static data may be used to interpret the time-resolved data, implicating lattice heating (in addition to band filling) as an origin of higher LH:HH ratios, due to increased thermal occupation, and of broadening and redshifting of the absorption features. Finally, we highlight that there is strong evidence from dynamic measurements of crystallographic structure that CQWs undergo substantial heating and disordering under photoexcitation 50 . Transient X-ray diffraction patterns of 4.5 ML and 5.5 ML CdSe CQWs are shown in Fig. 7a and b. These data convey the change in X-ray diffraction scattering, ΔS versus q, 40 ps after photoexcitation with 3.10 eV photons, overlaid on the static, room temperature diffraction pattern of the sample.
As detailed elsewhere, the time-resolved ΔS signal can be broken into two contributions: thermal shifts, which result in close to symmetrical, derivative-like ΔS contributions, and disorder or phase transitions, which result in changes in the intensity of diffraction peaks 50,52,55,56 . Although previous work has highlighted that disordering occurs preferentially along the short axis of the CQWs 50 , Fig. 7c and d show a simplified integration of the ΔS signal attributable to disorder and thermal shift by summing contributions of all available diffraction peaks. The transient X-ray diffraction data show that, at electron-hole excitation densities comparable to those at which photoinduced absorption emerges in the optical experiments, CQWs undergo both pronounced heating and disordering. Also, the limited temporal dynamics of the transient X-ray diffraction signal (shown in Fig. S16) closely match the dynamics of photoinduced absorption in the same 5.5 ML sample at similar electron-hole densities. As noted above, heating produces a predictable bathochromic shift of the CQW band gap. The optical properties of CQWs in a molten or substantially disordered state have not been measured experimentally, but calculations of the disordered density of states of CdSe nanoparticles also show pronounced reductions of the band gap 52 . Therefore, the transient X-ray diffraction data indicating photoinduced heating of the CQW lattice are broadly consistent with attributing the observed parasitic photoinduced absorption to lattice heating and disordering. Conclusions Collectively, the data presented in this work do not show any indication that the CQWs undergo an electronic transition from an exciton gas to an electron-hole plasma. These data relate the lack of a Mott transition in CdSe CQWs under optical excitation to photoinduced heating, which at such intensities alters the structure and optoelectronic properties of the CQWs. Lattice heating results in saturated gain and, at still higher excitation densities, large optical losses due to photoinduced absorption of hot CQWs. Although resonant excitation at the HH transition is most likely to generate a Mott transition, due to the minimized energy in excess of the band gap, it remains unlikely that, without modification of the thermal interfaces of the CQW system, such an optical excitation scheme can generate a Mott transition, due to lattice heating arising from Auger processes 57,58 . This does not preclude the possibility of generating a unipolar plasma, which is potentially more promising. These results also emphasize the important role that heat dissipation can play in the performance of nanocrystal-based optoelectronics. Enhancements of the thermal outflow from CQWs to the environment should allow the realization of even higher levels of gain before saturation. These results further underline that under the high excitation conditions relevant for lasers and bright light-emitting diodes, thermal management is a critical element of device optimization.
5,303.2
2022-05-16T00:00:00.000
[ "Physics", "Materials Science" ]
Image motion with color contrast suffices to elicit an optokinetic reflex in Xenopus laevis tadpoles The optokinetic reflex is a closed-loop gaze-stabilizing ocular motor reaction that minimizes residual retinal image slip during vestibulo-ocular reflexes. In experimental isolation, the reflex is usually activated by motion of an achromatic large-field visual background with strong influence of radiance contrast on visual motion estimation and behavioral performance. The presence of color in natural environments, however, suggests that chromatic cues of visual scenes provide additional parameters for image motion detection. Here, we employed Xenopus laevis tadpoles to study the influence of color cues on the performance of the optokinetic reflex and multi-unit optic nerve discharge during motion of a large-field visual scene. Even though the amplitude of the optokinetic reflex decreases with smaller radiance contrast, considerable residual eye movements persist at the 'point of equiluminance' of the colored stimuli. Given the color motion preferences of individual optic nerve fibers, the underlying computation potentially originates in retinal circuits. Differential retinal ganglion cell projections and associated ocular motor signal transformation might further reinforce the color dependency in conceptual correspondence with head/body optomotor signaling. Optokinetic reflex performance under natural light conditions is accordingly influenced by radiance contrast as well as by the color composition of the moving visual scene. Material and methods Animals. Experiments were performed in vitro on semi-intact preparations of Xenopus laevis tadpoles (n = 46) and complied with the "Principles of animal care", publication No. 86-23, revised 1985, of the National Institutes of Health. All experiments were carried out in accordance with ARRIVE guidelines and regulations. Permission for these experiments was granted by the ethics committee for animal experimentation of the legally responsible governmental institution (Regierung von Oberbayern) under the license code 55.2-1-54-2532.3-59-12. In addition, all experimental methods were performed in accordance with the relevant guidelines and regulations of the Ludwig-Maximilians-University Munich. Xenopus larvae of either sex at stages 51-54 25 were obtained from the in-house animal breeding facility at the Biocenter-Martinsried of the Ludwig-Maximilians-University Munich. Animals were maintained in tanks with non-chlorinated water (17-18 °C) on a 12/12 light/dark cycle prior to experimentation. Experimental approach. Semi-intact preparations were obtained according to the procedure reported previously 10,26 . In brief, tadpoles were anesthetized in 0.05% 3-aminobenzoic acid ethyl ester (MS-222; Pharmaq) in frog Ringer solution (in mM: 75 NaCl, 25 NaHCO3, 2 CaCl2, 2 KCl, 0.5 MgCl2 and 11 glucose, pH 7.4) and decapitated. For experiments that employed eye motion recordings, the skin covering the dorsal head was removed, the soft skull tissue opened and the forebrain disconnected. This surgical procedure anatomically and functionally preserved the remaining CNS with the eyes and associated optic nerve, extraocular motor innervation and eye muscles. Such preparations allowed prolonged experimentation and in vivo-like activation of the OKR by horizontal large-field visual image motion under defined in vitro conditions 10,12 .
For electrophysiological recordings of RGC axons in these preparations, the optic nerve of the right eye was cleaned from surrounding connective tissue and transected before entering the optic chiasm. All extraocular muscles of this eye were transected at their proximal insertion site to immobilize the eye in its natural position within the head. After the surgery, all preparations were allowed to recover from the surgical intervention for three hours 27 . Eye motion capture and optic nerve recordings. Semi-intact preparations were mechanically secured with insect pins to the Sylgard floor of a Petri dish (5 cm in diameter). As described earlier 12 , the chamber, which was constantly perfused with oxygenated frog Ringer solution at a rate of 3.0-5.0 ml/min, was placed in the center of an open cylindrical screen with a height of 5 cm and a diameter of 8 cm, encompassing a horizontal visual field of 275°. Three digital light processing (DLP) video projectors (Aiptek V60), installed at 90° angles to each other, projected visual motion stimuli onto the screen 12,28 at a refresh rate of 60 Hz. For eye motion recordings, a CCD camera (Grasshopper 0.3 MP Mono FireWire 1394b, PointGrey, Vancouver, BC, Canada), mounted 20 cm above the center of the recording chamber, permitted on-line tracking of horizontal eye movements by custom-written software 29 . The position of both eyes was digitized at a sampling rate of 50 Hz and recorded along with the visual motion stimulus (Spike2 version 7.04, Cambridge Electronic Design Ltd., Cambridge, United Kingdom). The chamber was illuminated from above using an 840 nm infrared light source. An infrared longpass filter inside the camera ensured selective transmission of the respective wavelengths and a high contrast to outline the eyeballs for motion tracking and online analysis of induced eye movements. Electrophysiological recordings of multi-unit optic nerve spike activity were performed under the same experimental conditions as described previously 10 . In brief, the spike discharge of retinal ganglion cells was recorded extracellularly (EXT 10-2F, npi Electronics, Tamm, Germany) with glass microelectrodes that were filled with Ringer solution 10 . Electrodes were produced with a horizontal puller (P-87 Brown/Flaming, Sutter Instruments Company, Novato, CA, USA) and the tips were broken and individually adjusted to fit the diameter of the transected optic nerve 10 . Stimulus paradigms. A reliable estimate of color motion perception is provided by the performance of visuo-ocular motor responses evoked by moving chromatic and radiance contrast stimuli 12 . This method depends on finding the brightness ratio of two colors at which the response is minimal 30,31 . To identify this ratio, animals were presented with horizontal visual image motion stimuli using a pattern of alternating red and blue vertical stripes. With our digital projection system, the red and blue colors provided the maximal possible separation within the wavelength spectrum (Fig. 1B). The intensity of the red stripes was varied systematically, leading to variations in the OKR response due to resulting differences in radiance contrast. At the point of equiluminance (POE), the respective brightness of the optic scene appears to be homogeneous to the animal, such that the visual image is only structured by the color of the scenery.
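In outline, identifying the POE amounts to scanning the red radiance and locating the minimum of the OKR amplitude, as detailed in the following Point of equiluminance paragraph. The parabolic interpolation in this sketch is a simplified stand-in for the preparation-specific model fit described later in the Data analysis section, and it assumes the minimum does not sit at the edge of the scanned range.

```python
import numpy as np

def find_poe(red_radiance, okr_amplitude):
    """Estimate the POE (red radiance minimizing the OKR amplitude) and the
    residual chromatic response A_CR at that point from sampled measurements."""
    r = np.asarray(red_radiance, float)
    a = np.asarray(okr_amplitude, float)
    i = int(np.argmin(a))
    lo, hi = max(i - 1, 0), min(i + 2, len(r))  # three points around the minimum
    c2, c1, c0 = np.polyfit(r[lo:hi], a[lo:hi], 2)
    poe = -c1 / (2 * c2)                  # vertex of the fitted parabola
    a_cr = np.polyval([c2, c1, c0], poe)  # residual response at the POE
    return poe, a_cr
```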
At the POE, a visuo-motor response can either be absent, indicative of a color-blind motion perception system, or show a residual response, indicating that color information provides motion cues 12 . Here, we exploited the robust OKR of larval Xenopus 10 that is elicited by horizontal motion of a large-field visual scene ( Fig. 1A1-4) with a rectangular velocity profile of ± 10°/s and a frequency of 0.2 Hz. Point of equiluminance. The POE was determined by presenting a visual stimulus that consisted of alternating red and blue vertical chromatic stripes (Fig. 1A4). The radiance of the red stripes was varied over a range that extended from 0.29 to 3.18 W·sr⁻¹·m⁻², while the intensity of the blue stripes was maintained at a constant value of 4.66 W·sr⁻¹·m⁻². As a control condition to determine the amplitude of spontaneous eye oscillations or measurement noise, the amplitude of eye movements at the stimulus frequency while presenting a uniformly lit grey screen served as the reference level for comparison with the minimum OKR amplitude (Grey condition, radiance 3.21 W·sr⁻¹·m⁻²). Importantly, the POE was determined individually for each animal as the intensity value of the red stripes at which the OKR response was minimal. The mean POE was computed by averaging the separately obtained values of each animal. Interaction of color and motion at high light intensities. Four visual motion stimuli were presented at high intensity levels in random order to assess the interaction of color and radiance motion at high contrast levels. Stripes were white (18.7 W·sr⁻¹·m⁻²), red (3.39 W·sr⁻¹·m⁻²), blue (4.66 W·sr⁻¹·m⁻²) or black (0.187 W·sr⁻¹·m⁻²) and were presented in alternating combinations of white/black, red/black, blue/black and red/blue stripes ( Fig. 1A1-4). Retinal ganglion cell discharge at high light intensities. The encoding of colored motion stimuli at high intensities at the neuronal level was determined by recordings of the optic nerve discharge during horizontal motion of a large-field visual scene. The motion stimulus had a sinusoidal profile with a peak velocity of ± 10°/s and a frequency of 0.125 Hz, causing positional stimulus oscillations similar to those used to evoke eye movements. Retinal ganglion cell discharge near the POE. To determine whether the discharge of retinal ganglion cells is exclusively driven by radiance contrast, or whether particular units also respond to pure color contrast, the spike activity was determined in response to red/blue stripes at three different red color intensities close to the POE (1.14, 1.19 and 1.25 W·sr⁻¹·m⁻²), while the intensity of the blue stripes was kept constant at 4.66 W·sr⁻¹·m⁻². Spectral distribution of color stimuli. Individual red and blue colors were generated using the red and blue channels of the image projectors; white stripes were generated using all three color channels (for spectra see Fig. 1B,C). This procedure was chosen for the following reason: if red-black or blue-black stimuli caused larger responses than white-black stimuli, then the neuronal transformation of RGC signals into optokinetic eye movements must depend on specific chromatic contrasts rather than on pure radiance/luminance responses.
Thus, if, for example, red-responsive RGCs were exclusively responsible for eliciting the OKR, then it can be expected that the response to white-black stripes is equal to or even stronger than that to red-black stripes, because red-responsive RGCs would be activated even better with white light (see spectral composition in Fig. 1B). Spectra and radiance values were measured using a spectrometer (PhotoResearch SpectraScan PR655). Data analysis. To obtain a robust measure of the strength of the optokinetic response, the OKR magnitude was computed by fitting a triangular profile to the recorded eye position trace and evaluating the amplitude of the fit 12 . To account for the fact that the "actual" value of the POE might lie between the sampled responses at different intensities, the resulting intensity-amplitude curve was fitted with a function for the normalized OKR amplitude A, expressed in terms of the subjective brightness of the red and blue stripes. Here, L_Red and L_Blue represent the radiance values for red and blue, and c_Red and c_Blue are the relative sensitivities to red and blue, respectively. The parameters m and c were required to model the individual sensitivity of each preparation's optokinetic response to changes in contrast. This model was able to fit the observed data, particularly near the POE. The POE was identified as the radiance value of the red stripes at which the fitted function became minimal (A_CR), individually for each preparation. The chromatic response (CR) component of the optokinetic eye movement was then defined as the OKR amplitude at this point. Spike sorting. To determine the response properties of individual retinal ganglion cells, spike sorting was performed in MATLAB (R2017a) on multi-unit optic nerve spike discharge in selected experiments where individual units were identifiable. For each preparation, all trials were pooled following detection of the action potentials by a non-linear energy operator, the Teager-Kaiser energy operator 32 , with an optimized threshold 33 . Artifacts were identified as all segments of the dataset in which the electrical signal was larger than 1000 mV and were subsequently removed by replacing the respective values with zero. Spikes were then extracted in a 7 ms window around their respective peaks (200 samples at a sampling rate of 28.6 kHz). The collected spike shapes were subjected to singular value decomposition. The three largest singular values were then used for spike sorting. The values were clustered using k-means clustering (MATLAB 2017a). The number of clusters was determined by visual inspection of the 3D scatter plot of singular values to ensure that data were clustered into non-overlapping regions. Spike clustering was very clear in some cases and somewhat less clear in others; only clusters that could be clearly distinguished were taken into account for an evaluation of the respective spike activity of usually up to four units. The firing rate magnitude of individual RGCs was determined as the total count of action potentials during each trial. Since all trials had the same duration and velocity profile, this gave a robust estimate of single-unit activity during stimulation with differently colored stripe combinations. Statistical procedures. The significance level was chosen as α = 0.05 for all statistical tests. The response amplitudes of the OKR at the POE were compared to the reference values obtained from the grey condition using a two-sample t-test.
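The spike-sorting pipeline (Teager-Kaiser detection, snippet extraction, SVD features, k-means) can be sketched in a few lines. Thresholding details, artifact removal and the per-recording choice of cluster number are simplified here, and the original analysis was done in MATLAB rather than Python.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def teager_kaiser(x):
    """TKEO: psi[n] = x[n]^2 - x[n-1]*x[n+1]; psi[j] corresponds to x[j+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def sort_spikes(trace, fs=28600, win_ms=7.0, n_clusters=4, thresh_sd=5.0):
    """Detect spikes on the TKEO trace, cut 7 ms snippets (200 samples at
    28.6 kHz), project them onto the three largest singular vectors and
    cluster with k-means. Threshold choice here is a simple SD multiple."""
    x = np.asarray(trace, float)
    psi = teager_kaiser(x)
    thresh = thresh_sd * psi.std()
    # local maxima of the energy above threshold (one detection per spike)
    is_peak = (psi[1:-1] > thresh) & (psi[1:-1] >= psi[:-2]) & (psi[1:-1] >= psi[2:])
    peaks = np.flatnonzero(is_peak) + 2          # index back into `x`
    half = int(win_ms / 2 * fs / 1000)           # 100 samples per half-window
    valid = peaks[(peaks >= half) & (peaks < len(x) - half)]
    snippets = np.stack([x[p - half:p + half] for p in valid])
    u, s, _ = np.linalg.svd(snippets - snippets.mean(0), full_matrices=False)
    features = u[:, :3] * s[:3]                  # scores on 3 largest components
    _, labels = kmeans2(features, n_clusters, minit="++")
    return valid, labels
```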
The OKR amplitudes in response to white/black, blue/black, red/black and red/blue vertical stripe motion were compared using a repeated-measures ANOVA. As described earlier 12 , post-hoc tests were performed using paired t-tests between all conditions and the Bonferroni method to compensate for multiple comparisons. To reveal a possible linear interaction between color and radiance motion information, the correlation between the chromatic component of the OKR and the differences between the responses to red/blue, red/black, blue/black and white/black stimuli was computed. To account for potential mathematical coupling, the statistical significance of correlations was assessed by randomly re-sampling data points to estimate the distribution of expected correlation coefficients 12 . To determine the response magnitude of individual optic nerve fibers to colored compared to pure radiance stimuli, the color preference (g_Red,Blue), determined as the relative increase or decrease of activity in response to colored versus white stimuli, was measured by computing the log of the ratio of spike counts (SC) in either the red or blue color condition over the spike count in the white condition:

g_Red,Blue = ln(SC_Red,Blue / SC_White).

Presentation. Schemes, data and analysis plots were generated with MATLAB (R2017a) and assembled into figures. Results Determination of the point of equiluminance. Large-field visual motion stimulation with black/white-striped image patterns (Fig. 1A1) triggers robust conjugate ocular motor following responses of both eyes in semi-intact preparations of Xenopus laevis tadpoles ( Fig. 2A) 10 . Horizontally alternating constant-velocity motion stimulation with a black/red- or black/blue-striped pattern (Fig. 2B) elicited eye movements with comparable magnitudes in such preparations. The robustness of these responses allowed determining the POE between the two colors (Figs. 1C, 2C). The POE, at which the radiance contrast of the red stripes with respect to the blue stripes (radiance 4.66 W·sr⁻¹·m⁻²) vanished, was very similar across animals and occurred on average at a value of 1.22 ± 0.09 W·sr⁻¹·m⁻² (Fig. 2C), confirming previous findings that the relative sensitivities to these two component colors show little variance between animals 12 . The normalized amplitude of the OKR at the respective individual POE, separately determined for each animal, was 0.27 ± 0.15 with respect to the mean amplitude over all conditions (Fig. 2D). This residual eye movement magnitude, albeit small, was significantly greater than baseline oscillations (Two-sample t-test: t(42) = 3.02, p = 0.004, d = 0.86; Fig. 2D). As a novel finding, this indicated that color contrast is sufficient to detect motion of the visual scene and to consequently evoke an ocular motor response. Interaction of color and radiance during OKR performance. In addition to the OKR evoked by pure color contrast stimuli, high intensity color/black-striped motion stimuli provoked eye movements with amplitudes that were considerably larger compared to those elicited by a white/black-striped pattern. This is indicated by a significant main effect of stimulus color (Repeated-measures ANOVA: F (3, 102) = 46.62, p < 0.001, η² = 0.26; Fig. 3A) and successive post-hoc tests. This finding was rather surprising since the radiance of the white stripes was approximately four-fold that of the blue stripes and more than five-fold that of the red stripes (18.7 W·sr⁻¹·m⁻² compared to 4.66 W·sr⁻¹·m⁻² or 3.39 W·sr⁻¹·m⁻², respectively).
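The color-preference index can be computed directly from the spike counts. The natural logarithm is used in this sketch because the range quoted later in the Results (−0.42 to +0.59, corresponding to 66-181% of the white-stripe response) matches ln rather than log10; this is an inference from the numbers, not an explicit statement in the text.

```python
import numpy as np

def color_preference(sc_color, sc_white):
    """g = ln(SC_color / SC_white): > 0 means a stronger response to the colored
    stimulus, < 0 a preference for white."""
    return np.log(sc_color / sc_white)

print(np.exp(-0.42), np.exp(0.59))  # ~0.66 and ~1.81, i.e., the 66-181% range
```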
In contrast, the black stripes had the same intensity in all conditions. The effect persisted even when the radiance of white and colored stripes was matched (Two-sample t-test for blue: t(5) = − 2.90, p = 0.034, d = − 0.377). The additional color-related components, however, were not significantly correlated with the isolated chromatic OKR response at the POE (blue: ρ = 0.04, p = 0.430; red: ρ = 0.05, p = 0.428; Fig. 3B1,3). Bayes factor analysis 34 showed, however, that there is only anecdotal evidence for a lack of effect of response difference on chromatic response (B01 = 2.98 for red-black minus white-black; B01 = 2.95 for red-black minus blue-black). The fact that the additional color-related components of red and blue motion stimuli were strongly correlated (ρ = 0.71, bootstrap p < 0.011; Fig. 3B2) suggested a particular color motion sensitivity of some but not all preparations. Furthermore, comparison between color motion stimuli revealed significantly larger OKR amplitudes in response to a blue/black- than to a red/black-striped motion stimulus (Figs. 2B, 3A), indicating that the performance of the OKR in Xenopus tadpoles differs for motion stimuli with different color patterns. Variation of the pure radiance contrast of achromatic stimuli as previously employed 10 indicated that beyond a certain contrast level, OKR amplitudes saturate. Based on these previous findings, the high contrast stimuli described above fall into the saturation range and no local sensitivity to changes in contrast is expected. Nevertheless, the increase in OKR amplitude to color stimulus motion at distinctly lower radiance contrasts compared to black/white stimuli suggests the presence of a color-related component that influences the OKR performance. Optic nerve population responses to colored visual motion stimuli. Electrophysiological recordings of the optic nerve with suction electrodes consistently evoked a multi-unit discharge of retinal ganglion cells, albeit with bidirectional activity patterns during horizontal stimulus motion ( Fig. 4A and magnified response over one cycle). An earlier study demonstrated, for radiance motion stimuli, a close association between the population activity of the optic nerve and the speed of the large-field visual image motion 10 . This indicates that RGC population activity can be interpreted as a surround velocity estimate. Whether this also applies when comparing responses to stimuli with distinct wavelength components, which differentially excite retinal photoreceptors, was evaluated by a set of experiments with the same color combination stimuli as used for evaluating the performance of the OKR. Interestingly, at the population level, the relative neuronal response magnitude of the multi-unit optic nerve discharge (spike count) differed from the behavioral observations (two-sample Kolmogorov-Smirnov test, p < 0.001; compare Fig. 3A with Fig. 4C). Although the neuronal response to a high intensity red/blue-striped pattern was smaller than the response to the three other patterns (likely due to a smaller radiance contrast in this condition; Fig. 4C), the respective response magnitudes showed a different activation pattern compared to the OKR. This suggests that the mechanism that causes different optokinetic response amplitudes for different colors at high light intensities 12 differs from the mechanism responsible for adjusting the optokinetic response simply based on radiance contrast.
Optic nerve single-unit responses to colored visual motion stimuli. Spike sorting of the multi-unit optic nerve discharge and isolation of individual units using SVD-based spike sorting (MATLAB R2017a) revealed an alternative mechanism for the partially color-specific optokinetic response amplitude (see units 1-4 in Fig. 4B and corresponding PSTHs in Fig. 5). In fact, the presence of separate subgroups of retinal motion detectors, which respond preferentially to particular wavelengths, could be the origin of this color dependency. Mediation of visual motion signals in distinct color channels from the retina to the pretectum and differential coupling probability or strength within the OKR circuitry could explain the observed differential color sensitivity of this reflex. To test this hypothesis, responses of isolated single optic nerve units (n = 25) that modulated their firing rate with stimulus velocity were analyzed. The color preference was identified by evaluating the response magnitude to the four high intensity color stimuli (Fig. 4C). While the total spike count varied strongly between individual units, clear differences were encountered in the color preferences of the different units, ranging between − 0.42 and + 0.59 (10th and 90th percentile, respectively), corresponding to relative responses to colored stimuli between 66 and 181% of the same unit's responses to white stripes.
[Figure legend fragment (Fig. 3B): illustrates the difference in response magnitudes between blue-black and white-black (x-axis) and red-black and white-black (y-axis); despite the mathematical coupling, there is a significant correlation between blue/white and red/white response magnitudes (expected correlation due to coupling: ρ = 0.48).]
There was a roughly even split between units responding preferentially to either white (n = 12) or colored (red: n = 10, blue: n = 3) stripes. To support the finding that units can be separated into subtypes with preferential sensitivity to broad and narrow light spectra, respectively, Gaussian mixture models with one and two components were fitted to the units' color preference values. Despite the additional degrees of freedom, the two-component model provided higher explanatory value, indicated by a lower Akaike Information Criterion (14.6 versus 16.9; Fig. 4D). The distinction between units with color-sensitive and color-insensitive response characteristics was obvious and allowed clear identification of a neuronal population preferentially responding to achromatic stimuli. Among color-sensitive units, however, there was no clear distinction based on the response preference to either red or blue color. In contrast to the behavioral findings that animals with a strong preference for one color (expressed by their relative OKR amplitude) also show a strong response to the other color, there was no significant correlation between red and blue response preference at the level of RGC axons (ρ = 0.67, p = 0.12). Nevertheless, there was a considerable degree of variability in preference for colored or white stimuli (Fig. 4D). While most of the measured units were inactive during stimulation near the POE (Fig. 4E2), a small number of single units still clearly showed an activity profile that modulated with instantaneous stimulus velocity (see Fig. 4E1 for an example), suggesting that a set of RGC axons encodes and transmits color-dependent visual motion signals.
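The one- versus two-component comparison can be reproduced with a standard Gaussian-mixture fit scored by AIC, as sketched below; scikit-learn stands in here for the original MATLAB implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def compare_gmm_aic(color_prefs):
    """Fit 1- and 2-component Gaussian mixtures to the color-preference values
    and report the AIC of each (the lower AIC favors that model)."""
    x = np.asarray(color_prefs, float).reshape(-1, 1)
    return {k: GaussianMixture(n_components=k, random_state=0).fit(x).aic(x)
            for k in (1, 2)}
```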
Discussion The optokinetic reflex of Xenopus laevis tadpoles is systematically influenced by the color component of the moving visual scene. Even though eye movement amplitudes decrease with smaller radiance contrast, a residual optokinetic response at the point of equiluminance remains, even when accounting for inter-individual variance in the exact location of the POE. This color-contrast dependency is consistent with the larger OKR amplitudes elicited when black stripes were paired with colored instead of white stripes. The underlying neuronal computation is likely of retinal origin, since single optic nerve fibers can show considerable preferences for either colored or white motion stimuli. The residual response at the POE demonstrated that pure color contrast is a likely contributor to the visual motion detection mechanism, which forms the basis of the optokinetic reflex 10. In fact, the OKR at the POE, along with the residual directionally selective response of RGCs (see Fig. 4E1), suggests that retinal motion processing contributes at least in part to the observed behavioral responses. Accordingly, by varying the radiance of the red stripes, a point was reached where the behavioral response became minimal. Thus, the response at the POE for red-blue stripes indicates that the motor response is in fact elicited not just by radiance contrast, but also by color contrast. Anatomically, the necessary requirements for color vision have been shown to be present in adult Xenopus, which possess principal and thin rods as well as four types of cones with red-, blue- and ultraviolet-sensitive opsins 35. The current study provides evidence that Xenopus tadpoles not only possess but also recruit these anatomical structures during large-field visual motion to distinguish different colors for an adjustment of their ocular motor output. In fact, larval Xenopus can be trained to avoid specifically colored segments of the environment, indicating that color vision is behaviorally relevant for these animals 36. Visual motion-induced behavior at the point of equiluminance facilitates a determination of the dependency of visual motion perception on color 37. This method has been applied in particular to optomotor responses of animal models such as flies 31 or zebrafish 38,39, where motion vision was found to be "color-blind", at variance with earlier reports on amphibians such as frogs 13,14,40 demonstrating that orienting head/body movements in both larvae and adults of ranid frogs are color-sensitive. Whether this constitutes a fundamental difference between amphibians and zebrafish is plausible but has not yet been fully explored. Although visual motion-driven ocular (this study) as well as neck/limb motor behaviors 13,14 are color-sensitive in amphibians, the color sensitivity of the OKR, its short-latency neuronal circuitry, its approximately linear input-output relationship, and the spatial specificity and limited degrees of freedom of eye movements 41 make this ocular motor behavior a very sensitive assay. This is further facilitated by the possibility to isolate small responses based on their frequency characteristics and to robustly distinguish evoked reflexive from spontaneous motor activity.
Our experiments revealed a non-zero optokinetic response at the POE in the majority of preparations (26 out of 35), suggesting that color information indeed plays a role in low-level motion vision, even though the central targets of the recorded fibers and the functional implication have so far remained unidentified. The contribution of color to visual motion processing appears to be independent of the overall light level, provided these RGCs feed into the ocular motor reflex circuitry, since color influences the performance of the OKR also at high light intensities, when alternating white/black, red/black and blue/black stripes were presented. Such a color contribution to neuronal motion computation, however, is not restricted to brainstem levels but is also implemented in higher-order motion vision, such as human subjective speed estimation 15. While an equiluminant chromatic grating is perceived to be moving at only about half the speed of a corresponding luminance grating, a motion percept is still evoked in the absence of luminance contrast 17. The larger responses to stimuli combining black with either blue or red stripes, as compared to white/black stripes, were rather surprising, in particular since the radiance contrast of white versus black stripes was the highest. While obviously all tested stimulus patterns express sufficient radiance contrast for maximal optokinetic responses, colored motion stimuli likely recruit additional pathways, thereby leading to increased amplitudes of the OKR. Although direct causality is still lacking, this dependency of OKR amplitudes is consistent with the observed pattern in the population of optic nerve fibers 10. This indicates that radiance-dependent variations in the performance of the OKR potentially derive from retinal signal processing. At variance with the correlation between radiance magnitudes and optic nerve population activity 10, variations of the color composition of visual motion stimuli were generally unrelated to the population activity of retinal ganglion cells (Fig. 4C). This suggests that the observed differences in response amplitude are not caused by radiance contrast artifacts, such as reflections. Rather, the increase of the OKR amplitude during chromatic stimulation could be explained by retinal motion detectors with preferences for specific colors, which differ in how strongly they are coupled to the brainstem OKR circuitry 39. This hypothesis is supported by our observation of individual units at the optic nerve level with varying preferences for large-field visual motion stimuli in red, blue or white (Fig. 4B). Evaluating individual units based on how their responses change between white and red or blue stimuli, respectively, showed a relatively large separation between two types (see Fig. 4C). In ~50% of the recorded units, the spike activity decreased when color-striped visual motion stimuli were presented, while in the remaining 50% of the examined optic nerve fibers the spike discharge rate increased. This latter augmentation was typically similar for red- and blue-striped motion stimuli, although units with a preference for one of the two colors were found. This further demonstrates that a neuronal substrate for color-specific processing of motion information is implemented in the Xenopus retina, as in the zebrafish retina.
In the present study, however, it was not possible to clarify whether the analyzed optic nerve fibers were indeed connected via the accessory optic system to the relevant pretectal relay nucleus 4 and thus in fact contributed to the OKR response. While most of the optic nerve fibers investigated here (examples shown in Fig. 5) did not show the directional preference that would be required to sustain the OKR response (for an exception see Fig. 4E1), their responsiveness to both color and motion indicates that color-sensitive motion channels already exist in the amphibian retina and may constitute the origin of color-sensitive OKR responses, compatible with the color sensitivity of optomotor reactions of the head/body 40. From an ecological point of view, color information is advantageous for OKR performance. Color provides a number of additional cues about the environment and the visual motion within it and potentially extends the sensitivity of this motor behavior. Color increases the saliency of objects and thus facilitates motion perception 19. In addition, certain environmental features have invariant colors and, based thereupon, can be more easily distinguished from other objects. This plays a particular role in the distinction between visual motion of the entire scene (which is most likely caused by self-motion and therefore should elicit an optokinetic ocular motor response) and the motion of external objects, such as floating debris in aquatic environments. One particular example in this environment is the distinction between the blue sky, which always provides a world-stationary reference, and other environmental features that might themselves be moving, such as the surrounding fauna. Color cues can thus assist in performing a behavioral distinction between different sources of visual motion, in particular self- and object motion. Data availability The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
6,907.6
2021-04-19T00:00:00.000
[ "Biology" ]
Editorial overview: Systems neuroscience Systems neuroscience classically studies how neuronal circuits dynamically interact at varying spatial and temporal scales to process sensory information, represent the external environment to guide decision making, and execute movements. A recent explosion of techniques for studying neural dynamics has revolutionized this discipline and therefore our understanding of brain function. Indeed, these tools have provided unprecedented observations of neural activity and elegant means of manipulating specific components of these circuits, albeit in a more consolidated set of animal models. Such approaches have enabled systems neuroscientists to go beyond the description of reflexive behaviors or of the first stages of sensory processing, opening the way for an understanding of how cognitive processes such as memory are implemented at the level of large-scale interacting circuits to support adaptive behavior. This volume of Current Opinion in Neurobiology provides a snapshot of the state of this field, with an emphasis on the contextual modulation of multisensory processing, on memory circuits, and on the development of long-range interacting networks. It also offers an overview of recent methodological advances and new animal models, with an emphasis on a mesocircuit level of description. Perhaps the most rapidly maturing aspect of neuroscience is our ability to measure the activity of large neural ensembles in action. For instance, in the olfactory bulb, Chong and Rinberg use imaging methods capable of monitoring complex spatiotemporal patterns of activity during the performance of odor-guided behaviors and examine ways in which such activity can be experimentally manipulated or even recapitulated in order to test the importance of such codes. In their review, Pakan et al. discuss how the recent development of genetic tools and imaging techniques has led to the urgent need to standardize experimental conditions and analysis methods. Sensory processing represents the most commonly studied aspect of systems neuroscience. Although some sensory modalities, such as vision, may receive more experimental attention, tremendous progress is being made in other sensory systems as well. In his contribution, Gu explores the primate vestibular network, focusing on a substantial cortical representation that enables the perception of self-motion and spatial orientation. Additionally, Bokiniec et al. examine the circuitry underlying the perception of innocuous thermal stimuli and highlight recent advances that address the functional organization of networks underlying the processing of skin surface temperature. How "non-sensory" signals (arousal, experience, prediction, attention, social context, etc.) affect the processing of sensory inputs is an important and timely topic and a recent matter of debate. [Editor biography: Michael Long (NYU School of Medicine, USA) is an associate professor in the Neuroscience Institute at the NYU School of Medicine. He completed his graduate studies with Barry Connors (Brown University) and his postdoctoral work with Michale Fee (MIT). His laboratory studies the neural circuits that underlie skilled movements, often in the service of vocal interactions. To accomplish this, he has taken a comparative approach, examining relevant cellular and network mechanisms in the songbird, the rodent, and the human.] The issue of "contextual modulation", when a primary feedforward input interacts with modulatory influences arising from top-down networks, is therefore becoming a central focus in systems neuroscience. Pakan et al. discuss the impact of these 'non-sensory' variables, such as state-dependent and experience-dependent modulation of visual processing, implicating both corticocortical and thalamocortical pathways as well as neuromodulation. Khan and Hofer explore how such information enables bottom-up and top-down influences to be integrated in order to form predictions about the visual world. Batista-Brito et al. examine the microcircuitry that enables these contextual influences to be implemented in neocortex and focus on the role of inhibitory interneurons and neuromodulation in both normal and pathological brain processing. Kuchibhotla and Bathellier demonstrate the importance of such contextual information for adding perceptual richness to the auditory cortex and shaping behavioral responses. Although brain state can influence sensory perception, individual sensory streams (e.g., olfaction) often have a significant impact on ongoing brain function. Choi et al. review the processes underlying the integration of sensory information from multiple modalities, focusing on the dynamic modulation of these factors and the importance of this process for perception. Robbe examines the integration of sensory signals with motor information within the striatum and argues that the somatotopic organization may facilitate motor learning. Ben-Tov et al. use the archerfish, which can integrate sophisticated visual information to localize and target prey, to examine dynamic sensorimotor processes underlying an ethologically relevant behavior. Knafo and Wyart use advanced optical methods to identify the cell types involved in mechanosensory feedback in the larval zebrafish that are engaged in active locomotion. Although thalamocortical inputs to sensory structures have traditionally been thought to simply relay afferent information to the neocortex, several lines of evidence suggest that the thalamus may be carrying out a diverse set of functions. For example, Gent and Adamantidis discuss recent findings that implicate the thalamus in the regulation of sleep-wake states, expanding the view that such arousal states are primarily determined by brainstem structures. Antón-Bolaños et al. explore the role of extrinsic versus intrinsic factors in the establishment of functional networks during development, with a focus on the early dialogue between thalamic nuclei and sensory cortices, and Colonnese and Phillips examine the changes in inhibitory cell types that may enable the developmental switch from "pre-sensory" early thalamocortical coordinated activity to high-resolution sensory processing. Systems developmental neuroscience, an emerging subdiscipline at the interface between development and systems neuroscience, is particularly well represented by several reviews in this special issue, including the two cited above.
In that respect, the review by Valero and Menendez de la Prida exemplifies how much development may shape the functional structure of adult memory circuits. Indeed, it reviews recent evidence indicating a segregation between temporal and contextual information flows along the radial axis of the CA1 hippocampal region, an emergent structural outcome of development. This issue includes several additional reviews related to the systems neuroscience of cognition. Piskorowski and Chevaleyre highlight the critical role of CA2 in memory circuits, not only for social behavior but also regarding spatial information. They argue that CA2 may act as a conflict detector, comparing contextual information with internal representations. Mably and Colgin discuss how gamma oscillatory activity is involved in memory processes, providing a window into the brain's inner workings in health and disease and potentially a means for novel therapeutic interventions. [Editor biography sidebar, truncated: ... at Aix-Marseille University, a pioneering institute in the field of systems developmental neuroscience. After graduating in Mathematics and Physics from the Ecole Centrale Paris, she studied the functional rewiring of GABAergic circuits in epilepsy during her PhD with Drs. Bernard and Ben-Ari. As a postdoctoral fellow with Prof. Yuste at Columbia University, she pioneered the use of calcium imaging to study cortical circuit function. Her lab made seminal contributions to the understanding of how development scaffolds hippocampal circuits; they discovered "hub cells" and, more recently, "assemblies" forming the functional building blocks of hippocampal function.] The hippocampus and related structures have long been known to play a role in spatial navigation, and Maimon and Green use Drosophila as a model system to study the computational similarities between insect and mammalian head-direction systems in order to enrich existing models of the circuitry that provides directional information for such behaviors. Another advantage of modern techniques is the ability to examine the impact of long-range connectivity. A pair of reviews explore the importance of a specific interconnection, linking the prefrontal cortex to the amygdala, for two distinct brain processes. Yizhar and Klavir examine these interactions in the context of establishing and modifying associations between a cue and an outcome in the service of adaptive and maladaptive learning, while Rozeske and Herry link the connections between these structures to the rapid and flexible expression of fear behavior. Movement is the final outcome of nervous system function. Muscatelli and Bouret review recent literature regarding a highly important innate behavior, namely the suckling reflex, which must be initiated immediately after birth and is influenced by hypothalamic circuits that regulate feeding. Aranha and Vasconcelos examine female innate behaviors in Drosophila, specifically courtship responses and egg-laying decisions. For skilled behaviors, Yoshida and Isa present comparative results from rodents and primates, focusing on the role of the corticomotoneuronal pathway in enabling dexterous hand movements. Despite the impressive progress in systems neuroscience in recent years, many aspects of brain function remain poorly understood. For instance, although an enormous amount of effort within the field is presently being directed towards simple model organisms, Bansal et al.
tackle the complexity of the human brain by developing personalized models constrained by anatomical and functional data, providing a means for querying human brain function through 'virtual experiments' as well as a potentially powerful new tool for neurosurgical applications. Another potential gap in our knowledge is rooted in the paucity of circuit-oriented studies that focus on social neuroscience. Brecht et al. survey the existing literature, highlighting differences in sensory processing based on social context, and address sexually dimorphic aspects of nervous system function as early steps towards establishing a mechanistic understanding of the social brain. Collectively, these reviews reveal that systems neuroscience is experiencing an exciting period supported by unprecedented technological breakthroughs. It is in essence an interdisciplinary field, which is in turn giving rise to new subfields such as 'systems developmental neuroscience' or 'systems social neuroscience'. In the near future, systems neuroscientists across these various subfields will certainly need to work hand in hand with data and computational neuroscientists to standardize their experiments and analyses and to bridge the gap between data and understanding.
2,324
2018-10-01T00:00:00.000
[ "Economics" ]
MHD SLIP FLOW AND HEAT TRANSFER OVER AN EXPONENTIALLY STRETCHING PERMEABLE SHEET EMBEDDED IN A POROUS MEDIUM WITH HEAT SOURCE Steady two-dimensional laminar magnetohydrodynamic (MHD) slip flow and heat transfer of a viscous, incompressible and electrically conducting fluid over a flat, exponentially stretching, non-conducting porous sheet embedded in a porous medium with non-uniform permeability, in the presence of a non-uniform heat source, is investigated. The governing equations for the velocity and temperature distributions are solved numerically, and the effects of different physical parameters are shown through graphs. The rate of shear stress and the rate of heat transfer at the sheet are derived and discussed numerically, and their numerical values for various values of the physical parameters are presented in tables. INTRODUCTION The study of hydromagnetic, electrically conducting fluid flow involving heat transfer over a stretching porous sheet is of great importance in many processes, such as modern metallurgical and metalworking processes. This field has attracted the attention of many researchers because of its possible applications in soil science, astrophysics, geophysics, nuclear power reactors, etc. In the cooling process of nuclear fission reactors, liquid sodium is pumped around using electromagnetic forces. In medical science, an advanced method is used for the precise delivery of medicine to cancer-affected organs, in which MHD equations and finite element analysis are used to study the interaction between the magnetic fluid particles in the bloodstream and the external magnetic field. The study of fluid flow through porous media has become indispensable in the extraction of crude oil from the pores of rocks and in the filtration of solids from liquids. Fluid flow through porous media also has environmental applications, such as the flow of groundwater through soil and rocks, which is important for agriculture and pollution control. The suction/injection process is important in many engineering activities, such as thermal oil recovery and the design of thrust bearings and radial diffusers. Suction is also applied in chemical processes to remove reactants. In heat pumping technology, natural heat sources/sinks like air, ground and water are used; this technology is applied in compressors, refrigerators and air conditioners. Heat transfer of a continuous stretching surface with suction or blowing was analyzed by Chen and Char (1988). Kumaran and Ramanaiah (1996) discussed the flow over a stretching sheet. Heat and mass transfer in the boundary layers on an exponentially stretching continuous surface was studied by Magyari and Keller (1999). Elbashbeshy (2001) considered heat transfer over an exponentially stretching continuous surface with suction. Slip flow past a stretching surface was investigated by Andersson (2002). Miklavcic and Wang (2006) analyzed viscous flow due to a shrinking sheet. Hydromagnetic flow and heat transfer adjacent to a stretching vertical sheet with prescribed surface heat flux was studied by Aman and Ishak (2010). Pal and Hiremath (2010) considered computational modeling of heat transfer over an unsteady stretching surface embedded in a porous medium. Boundary layer flow and heat transfer over a stretching sheet with Newtonian heating was studied by Salleh et al. (2010). Sharma and Singh (2010) investigated steady MHD natural convection flow with variable electrical conductivity and heat generation along an isothermal vertical plate.
MHD boundary layer flow due to an exponentially stretching sheet with radiation effect was presented by Ishak (2011). Yao et al. (2011) studied heat transfer on a generalized stretching/shrinking wall with a convective boundary condition. Heat transfer in a fluid through a porous medium over a permeable stretching surface with thermal radiation and variable thermal conductivity was analyzed by Cortell (2012). Hayat (2012) considered three-dimensional flow of a Jeffery fluid over a linearly stretching sheet. Hydromagnetic boundary layer flow over a stretching surface with thermal radiation was discussed by Soid et al. (2012). Mandal and Mukhopadhyay (2013) presented a heat transfer analysis for fluid flow over an exponentially stretching porous sheet with surface heat flux in a porous medium. Slip effects on MHD boundary layer flow over an exponentially stretching sheet with suction/blowing and thermal radiation were shown by Mukhopadhyay (2013). Norhafizah et al. (2013) studied the numerical solution of flow and heat transfer over a stretching sheet with Newtonian heating using the Keller box method. Singh and Makinde (2015) presented a similarity solution for the combined effects of velocity slip and temperature jump on boundary layer flow over a moving surface. The MHD slip flow of a conducting Casson nanofluid over a convectively heated stretching sheet was numerically studied by Ibrahim and Makinde (2016a). Other relevant papers on MHD flow over a stretching sheet include Ibrahim and Makinde (2016b) and Khan et al. (2016). The aim of the paper is to investigate steady two-dimensional laminar MHD flow of a viscous, incompressible and electrically conducting fluid over a flat, exponentially stretching, non-conducting porous sheet in the presence of a non-uniform heat source. The governing equations of motion and energy are solved numerically using the fourth-order Runge-Kutta method along with a shooting technique. The effects of the Hartmann number, permeability parameter, Prandtl number, heat source parameter, velocity slip parameter, thermal slip parameter and suction parameter on the velocity and temperature distributions are investigated and shown through graphs. The rate of shear stress, as the skin friction coefficient, and the rate of heat transfer, as the Nusselt number, are derived and discussed numerically, and their values for various values of the physical parameters are presented in Table 1 and Table 2. MATHEMATICAL FORMULATION OF THE PROBLEM Steady two-dimensional laminar flow of a viscous, incompressible and electrically conducting fluid over a flat, exponentially stretching, non-conducting porous sheet embedded in a porous medium with non-uniform permeability is considered. The x-axis is taken along the stretching sheet and the y-axis normal to it. The fluid flow is confined to y > 0. The flow is generated by the action of two equal and opposite forces along the x-axis, so that the wall is stretched while keeping the origin fixed. The surface is assumed to be highly elastic and is stretched in the x-direction with the velocity U = U_0 e^{x/l}. A non-uniform magnetic field B*(x) = B_0 e^{x/2l} is applied along the y-direction. The magnetic Reynolds number is taken to be small, and therefore the induced magnetic field is neglected.
It is assumed that the temperature of the sheet, T_w, is variable and given by T_w = T_∞ + T_0 e^{x/2l}. A non-uniform heat source is also applied. All the fluid properties are assumed to be constant throughout the motion. Under these assumptions, the governing boundary layer equations [Bansal (1977), Bansal (1994), Schlichting and Gersten (2003)] are

∂u/∂x + ∂v/∂y = 0, (1)

u ∂u/∂x + v ∂u/∂y = ν ∂²u/∂y² − (σB*²/ρ) u − (ν/K*) u, (2)

u ∂T/∂x + v ∂T/∂y = (κ/(ρc_p)) ∂²T/∂y² + (Q*/(ρc_p)) (T − T_∞), (3)

where K*(x) is the non-uniform permeability of the medium and Q* = Q_0 e^{x/l} is the non-uniform heat source. The boundary conditions prescribe the suction velocity at the sheet, a partial velocity slip characterized by the velocity slip factor and a temperature jump characterized by the thermal slip factor at y = 0, together with u → 0 and T → T_∞ as y → ∞. METHOD OF SOLUTION Introducing similarity transformations (Eq. (5)) that identically satisfy the continuity Eq. (1), and using Eq. (5) in Eq. (6), the velocity components are obtained, where a prime denotes differentiation with respect to the similarity variable η. Substituting Eqs. (5) and (7) into Eqs. (2) and (3) yields the transformed momentum and energy equations (Eqs. (8) and (9)), in which τ_w denotes the wall shear stress. NUMERICAL SOLUTION The coupled nonlinear second-order ordinary differential Eqs. (8) and (9), along with the boundary conditions (10), are solved numerically using MATLAB. First, the boundary value problem is converted into a system of initial value problems (Eqs. (13) and (14) with boundary conditions (15)). To solve this system, the unknown initial values f''(0) and θ'(0) are required; these are estimated by the shooting technique, as sketched in the code below.
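The following is a minimal sketch of the shooting approach just described, written in Python with SciPy rather than MATLAB. The similarity equations used here are a standard form for MHD flow over an exponentially stretching sheet in a porous medium and stand in for the paper's Eqs. (8)-(9), whose exact coefficients were lost in extraction; M, K, Pr and lam are assumed parameter names, and the suction and slip terms in the wall conditions are omitted for brevity:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

M, K, Pr, lam = 1.0, 0.5, 0.7, 0.1  # Hartmann, permeability, Prandtl, heat source
eta_max = 6.0                        # numerical stand-in for eta -> infinity

def rhs(eta, s):
    # s = [f, f', f'', theta, theta']; illustrative momentum and energy equations
    f, fp, fpp, th, thp = s
    fppp = 2.0 * fp**2 - f * fpp + (M + K) * fp
    thpp = Pr * (2.0 * fp * th - f * thp) - lam * Pr * th
    return [fp, fpp, fppp, thp, thpp]

def residual(guess):
    fpp0, thp0 = guess  # unknown initial slopes supplied by the shooting step
    sol = solve_ivp(rhs, (0.0, eta_max), [0.0, 1.0, fpp0, 1.0, thp0],
                    rtol=1e-8, atol=1e-10)
    # far-field conditions: f'(inf) = 0 and theta(inf) = 0
    return [sol.y[1, -1], sol.y[3, -1]]

fpp0, thp0 = fsolve(residual, x0=[-1.5, -1.0])
print(f"skin friction f''(0) = {fpp0:.4f}, heat transfer -theta'(0) = {-thp0:.4f}")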
Figure 2 shows the effect of the permeability parameter on the velocity of the fluid. It is observed that the velocity increases with the permeability parameter. An increase in the permeability parameter denotes an increase in the porosity of the medium: as the porosity of the medium increases, the velocity of the fluid through that medium also increases, because the fluid has more space, and encounters less obstruction, to flow through the medium. The effect of the magnetic parameter (Hartmann number) on the velocity of the fluid is displayed in Fig. 3. Due to the magnetic field, a resisting force, called the Lorentz force, is generated in the flow. This force causes a decline in the velocity of the fluid; therefore, as the Hartmann number increases, the fluid velocity decreases. It is seen from Fig. 4 that the velocity decreases with an increase in the suction parameter. The suction parameter reflects the porosity of the sheet: as the size of the pores of the sheet increases, the flow through the sheet increases, which resists the flow along the sheet and thus reduces its magnitude. The effect of the velocity slip parameter on the fluid velocity is presented in Fig. 5; the velocity decreases as the velocity slip parameter increases. An increase in the Hartmann number implies an increase in the strength of the Lorentz force; due to this resisting force, the temperature of the fluid increases, as can be observed in Fig. 6. The temperature profile is shown for different values of the heat source parameter in Fig. 7. An increase in the heat source parameter indicates that external heat is supplied to the fluid, and this clearly enhances the fluid temperature. The effect of the velocity slip parameter is presented in Fig. 8, where the fluid temperature increases with the velocity slip parameter. The Prandtl number is the ratio of momentum diffusivity to thermal diffusivity. Figure 9 shows the effect of the Prandtl number on the fluid temperature: as the Prandtl number increases, the thermal diffusivity of the fluid decreases, and therefore the fluid temperature decreases. Figure 10 shows the effect of the suction parameter on the fluid temperature; the fluid temperature decreases as the suction parameter increases. The effect of the thermal slip parameter on the fluid temperature is presented in Fig. 11; the fluid temperature decreases as the thermal slip parameter increases. CONCLUSIONS Steady two-dimensional laminar boundary layer flow and heat transfer of a viscous, incompressible and electrically conducting fluid over a flat, exponentially stretching, non-conducting porous sheet in the presence of a non-uniform transverse magnetic field and a non-uniform heat source are analyzed numerically. The effects of different physical parameters on the fluid velocity, fluid temperature, skin friction coefficient and Nusselt number are investigated, and the following observations are made:
• As the Hartmann number increases, the fluid velocity decreases, whereas the fluid temperature increases.
• An increase in the heat source parameter results in an increase in the fluid temperature.
• The fluid velocity and fluid temperature decrease with the suction parameter.
• As the velocity slip parameter increases, the fluid velocity decreases, while the opposite behavior is seen for the fluid temperature.
• The fluid temperature increases with the heat source parameter, while it decreases with the Prandtl number.
• The skin friction coefficient and the Nusselt number increase with the permeability parameter.
2,636.2
2017-09-09T00:00:00.000
[ "Engineering", "Physics" ]
Short Sequence Chinese-English Machine Translation Based on Generative Adversarial Networks of Emotion With the steady growth of the global economy, communication between the countries of the world has become increasingly close. Because of its limited efficiency and other problems, traditional manual translation has gradually failed to meet people's current translation requirements. With the rapid development of machine learning and deep learning technologies, artificial intelligence has affected various industries, including the field of machine translation. Compared with traditional methods, neural network-based machine translation is highly efficient, so this field has attracted intensive research from many scholars. How to improve the accuracy of neural machine translation through deep learning technology is the core problem that researchers study. In this paper, a neural machine translation model based on a generative adversarial network is studied to make the translation results of the neural network more accurate and three-dimensional. The model uses adversarial thinking to consider the emotional direction of the sequence, so that the translation results are more humanized. We set up several experiments to verify the efficiency of the model, and the experimental results show that the proposed model is suitable for Chinese-English machine translation. Introduction Since the twenty-first century, the economic level of all countries in the world has greatly improved. In the context of economic globalization, cross-language communication between people of all countries has become more and more frequent. Different nations have their own customs and cultures, and there are great differences in language expression. How to communicate effectively across languages is a problem that must be faced and solved. Because of its limited efficiency and other problems, traditional manual translation has gradually failed to meet people's current translation requirements. Therefore, many people have turned their attention to machine translation, an important branch of natural language processing. Machine translation generates the target language, with the semantics of the source language unchanged, through computer and algorithmic techniques; that is, it achieves an equivalent conversion from one natural language to another [1,2]. Statistical machine translation mainly obtains the conversion rules between two natural languages by learning from a corpus, without the need to craft conversion rules manually. However, there are still many problems in statistical machine translation [6]. It relies too much on what the model learns from the corpus and places high demands on the accuracy of processing steps such as word alignment, word segmentation, and translation rule extraction [7]. In recent years, with the continuous maturation of artificial intelligence technology and the rapid development of machine learning and deep learning, deep learning has gradually been combined with different fields. How to improve the accuracy of neural machine translation through related deep learning technology is also a problem that researchers have been studying [8,9]. Deep learning techniques are used to deal with natural language problems, so that some problems faced in natural language processing have been well solved and good results have been achieved.
The application of deep learning technology provides many ideas and methods for improving the accuracy and efficiency of machine translation. At present, deep learning technology is mainly used in two models in machine translation [10]. The first is the statistical machine translation model framework, which adopts neural networks to improve and optimize the language model, reordering model, and other key modules in the framework. The second method is to construct the encoder and decoder with neural networks and use an end-to-end neural machine translation model to realize translation from the source language to the target language [11,12]. With the deepening of research, more and more neural network machine translation algorithms have been proposed. Rule-Based Machine Translation. With the birth of computers in the middle of the last century, machine translation began its exploration [13]. In 1954, IBM used a computer to translate several simple Russian sentences into English for the first time. Its translation system consisted of six translation rules and 250 words [14,15]. This experiment showed that the process of machine translation can be realized using a method based on dictionaries and translation rules. Although it was only a preliminary success, it aroused enthusiasm for machine translation research in the Soviet Union and other European research institutions and greatly promoted the progress of early machine translation research. However, machine translation was completely rejected in 1966 by a report titled LANGUAGE AND MACHINES, and machine translation research suffered a setback at that point [16,17]. With increasingly close exchanges between countries, communication barriers between different languages became more and more serious, and people's demand for machine translation grew. At the same time, the development of corpus linguistics and computer science provided new possibilities for machine translation. Since then, machine translation has entered a period of rapid development. After decades of evolution, it has passed through three stages, from rule-based machine translation to statistical machine translation and then to neural machine translation [18][19][20]. The earliest machine translation method is rule-based machine translation, which realizes the conversion between source language and target language by crafting translation rules. The process of rule-based machine translation mainly includes three steps: source language parsing, language conversion, and target language generation [21]. The first step parses the input source language to obtain a structural representation of the source sentence. The second step is language conversion: the structural representation of the source language is transformed into a structural representation of the target language through the formulated translation rules. In the third step, the representation of the target language is rendered into the target language by applying the corresponding generation rules, as in the toy sketch below.
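As referenced above, a toy sketch of the three rule-based steps (parse, transfer, generate); the tiny lexicon and the particle-dropping rule are invented for illustration, whereas real systems relied on large hand-built rule sets and far richer structural analysis:

SRC_LEXICON = {"我": "I", "读": "read", "红": "red", "书": "book"}
PARTICLES = {"了", "的"}  # aspect/attributive particles with no English counterpart here

def parse(sentence):
    return sentence.split()                          # step 1: trivial source analysis

def transfer(tokens):
    kept = [t for t in tokens if t not in PARTICLES]  # structural rule: drop particles
    return [SRC_LEXICON.get(t, t) for t in kept]      # lexical transfer via dictionary

def generate(words):
    return " ".join(words)                           # step 3: surface realization

print(generate(transfer(parse("我 读 了 红 的 书"))))  # -> "I read red book"

A real system would need many more rules (articles, tense, agreement), which is exactly the coverage problem discussed next.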
Early rule-based machine translation methods require manually written transformation rules. Although they achieve high translation accuracy for a small number of sentences, their coverage is limited, the system robustness is poor, and the approach is very sensitive to noise in the rules. The rule-based machine translation method can perform machine translation to a certain extent, but its application is very limited. This translation method depends almost completely on the language rules established by linguists, which has certain limitations in practical application. Moreover, because language is extensive and profound, it is difficult to list all the rules contained in its various forms. Therefore, the inability to obtain a complete set of language rules is the main problem facing rule-based machine translation research. Machine Translation Based on Statistics. In order to solve the problems of rule-based machine translation, statistical machine translation became the representative method of machine translation research. A landmark event was the launch of Google's free online automatic translation system, also known as Google Translate [22], which really brought the "high-flying" technology of machine translation into people's lives. Statistical machine translation is a data-driven approach that designs probabilistic models on large-scale parallel corpora to achieve automatic translation from the source language to the target language. Early statistical machine translation was word-based, learning model parameters from the words in the corpus. Later, phrases were used as the basic unit for learning model parameters, and syntax was subsequently used as the basis for building syntactically based statistical machine translation models to further improve translation accuracy. The statistical machine translation model is one of the most widely used machine translation models, because statistical models have shown excellent translation results in open-domain machine translation. A statistical machine translation model obtains the required parameters through statistical analysis and learning over a large parallel corpus, constructs the translation model, and then uses the model for translation. Koehn et al. took words as the basic unit of the statistical machine translation model, extracted corresponding words of the original language and target language from the corpus, and proposed a phrase-based statistical machine translation model [23]. Och and Ney proposed statistical machine translation based on the maximum entropy model and constructed the machine translation model through a log-linear model, illustrated in the sketch below [24]. Later, the processing unit of the translation model was extended beyond words, and a phrase-based statistical machine translation model was proposed [25].
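A minimal sketch of the log-linear framework just mentioned: candidate translations are scored by a weighted sum of feature functions, and the highest-scoring candidate wins. The feature values and weights below are made up for illustration:

import math

def log_linear_score(features, weights):
    # score(e) = sum_i lambda_i * f_i(e, f); higher is better
    return sum(weights[k] * v for k, v in features.items())

weights = {"log_p_tm": 1.0, "log_p_lm": 0.6, "word_penalty": -0.2}
candidates = {
    "machine translation is useful": {"log_p_tm": -4.1, "log_p_lm": -7.9, "word_penalty": 4},
    "the machine translate useful":  {"log_p_tm": -3.8, "log_p_lm": -10.5, "word_penalty": 4},
}
best = max(candidates, key=lambda c: log_linear_score(candidates[c], weights))
print(best)  # the fluent candidate wins thanks to the language-model feature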
All of the above statistical machine translation methods are syntactically based and take syntactic structure as the basic translation unit to construct translation models. Although the basic organizational structure of a sentence can be displayed through the syntax tree, the specific semantic information of the sentence cannot be expressed, which makes it difficult for the final translation to correctly represent the semantics of the original sentence. People therefore gradually turned their attention to the semantic understanding of the source and target languages in machine translation. In order to increase the discriminative power of translation rules, Aziz et al. integrated the semantic information generated from the source language as a feature into the existing translation model and marked the nonterminal symbols in the syntactic translation model to a certain extent with semantic role information [26]. Wu and Fung preprocessed the translation process to make use of semantic information, reordered the candidate translation list, and marked semantic information with semantic roles [27]. Zhai et al. [28], through the predicate meta-structure, made the statistical machine translation model preserve the semantic information of the original text to the maximum extent, made the semantics of the source language and target language more similar, and established a semantic translation model based on the transformation of the predicate meta-structure. The charm of language lies in the fact that the same words have different meanings in different situations. However, in the process of translation, these traditional machine translation models ignore the influence of contextual information on sentence semantics; they ignore the context in which a sentence exists and focus only on the translation of the sentence itself, which results in a lack of structural rationality and semantic coherence. Therefore, many researchers conduct machine translation research with the whole article as the translation unit. Xiong et al. [29] proposed a statistical machine translation model based on topic transformation in order to improve the quality of discourse-level statistical machine translation. Gong et al. maintained the semantic consistency of the same words and phrases in the whole document through a semantic caching technique based on cohesive properties [30]. Tu et al. made a preliminary exploration of discourse-level translation frameworks and proposed a statistical machine translation model that takes the rhetorical structure of discourse as the basic translation unit [31]. Statistical machine translation also has its problems. The independent parameter structure makes the parameters of the translation model independent of one another, so the model cannot consider the relatedness between words, which leads to data sparsity. The parameter optimization and training processes of the translation model are separate rather than unified. Since learning is carried out on a corpus, statistical machine translation is dependent on the corpus, and the quality of the corpus directly affects the final translation result. Without in-depth analysis of the source language, a model that does not handle syntactic and semantic components ignores the connection between words and context, which results in an inability to deal with long-distance dependence and produces semantically incoherent and unreasonable translations [32]. Neural Network Machine Translation. With the development of deep learning theory, researchers found that deep learning-related technologies can better solve these problems of statistical machine translation. Neural machine translation technology originated from the neural network probabilistic language model proposed by Bengio et al. in 2003 [33]. It represents discrete characters as continuous, dense, distributed vectors through neural networks, which effectively alleviates the problem of data sparsity. In 2013, Kalchbrenner and Blunsom [34] from Oxford University constructed an encoder-decoder structure using a CNN and an RNN. As the encoder, the convolutional neural network (CNN) can obtain historical information and process variable-length strings. As the decoder, the recurrent neural network (RNN) can directly model the translation probability. In earlier studies, deep neural networks were only used as an auxiliary method for language modeling, while their study was composed entirely of deep neural networks, which marked the independent application of deep learning methods in machine translation. Subsequently, Sutskever et al.
at Google proposed the RNN-RNN model on this basis, which later became the general sequence-to-sequence model. The model uses recurrent neural networks as the backbone of both the encoder and the decoder. Cho et al. [35] proposed that the Gated Recurrent Unit (GRU) could replace the LSTM for machine translation tasks. The GRU is in effect an optimization of the LSTM: it simplifies the internal structure, reduces the number of training parameters, and improves training efficiency. Understood abstractly, the sequence-to-sequence structure generates a semantic space; the source language and target language are mapped into this semantic space through neural network training, and the more semantically similar words are, the closer they lie in the semantic space. In 2014, Bahdanau, then at Jacobs University in Germany, proposed the attention mechanism, which effectively solved this problem and brought machine translation to a new height [36]. It gave the sequence-to-sequence ("S-S") model the ability to discriminate, so that it pays attention to the more relevant input information. The attention mechanism is essentially a small neural network trained at the same time as the S-S network. Luong et al. from Stanford proposed many variants of the attention mechanism, which further enhanced its representational ability. After the attention mechanism is introduced, the long-distance dependency problem can be handled better: the influence of a previous word on the current word can be obtained through the attention weights, and the representation vector of the current word can be better generated. With the proposal of the attention mechanism [37] and its rapid development in the image field, attention has gradually been combined with natural language processing. Especially in machine translation, the attention mechanism is introduced between the current state of the target language sequence and the hidden layer states of the source language sequence. The matching degree of these two states is measured by attention weights, so as to obtain a better representation vector of the target language; a minimal sketch of this computation follows below. The problems of long-distance dependence and incomplete representation of vector information are thus effectively mitigated [38]. Mi et al. used a penalty to improve the translation effect: if an already-translated part received too much attention, it was penalized, while the untranslated part was rewarded [39]. In order to obtain better translation results, Tang et al. selected the required rules through the attention mechanism during translation, but this also incurred high time complexity [40].
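The following numpy sketch illustrates the attention computation just described: the match between the current target-side state and each source-side hidden state is scored, normalized with softmax into attention weights, and used to form a context vector. The scaled dot-product scoring and all shapes are illustrative choices, not the exact variant of any paper cited here:

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
enc_states = rng.normal(size=(7, 16))   # 7 source positions, hidden size 16
dec_state = rng.normal(size=16)         # current target-side hidden state

scores = enc_states @ dec_state / np.sqrt(16)  # matching degree per source word
weights = softmax(scores)                      # attention weights over the source
context = weights @ enc_states                 # context vector for the current word
print(weights.round(3), context.shape)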
Researchers have never stopped improving the neural machine translation model and have made progress in improving the memory capacity of the model and expanding the depth of the translation model [41]. Although neural machine translation has surpassed statistical machine translation in many publicly evaluated translation tasks, its actual translation quality is still far from the level of human expert translation, and the neural machine translation model still needs to be optimized. Compared with phrase-based or rule-based statistical machine translation, neural machine translation lacks a basis for theoretical explanation, because deep learning itself is a "black box" approach. Besides, the complex network structure and the large number of parameters imply the need for large-scale, high-quality parallel corpus pairs. However, high-quality parallel corpora are often missing for many less-common language pairs. From the recurrent neural network with attention, to the convolutional neural network with attention, to the current mainstream Transformer model based on self-attention, the Transformer's parallel input combined with self-attention reduces the effective distance between any two input words to 1. This effectively alleviates the long-distance dependence problem and, at the same time, greatly improves computing speed. However, it also yields weaker representation of local information than RNNs and CNNs and damages relative position information. Beyond the Transformer model, there is still much room for improvement in neural machine translation. Network Framework. Bi-LSTM and Transformer models are widely used in various fields of artificial intelligence. How to further improve the translation quality of attention-based Bi-LSTM and Transformer neural machine translation models is the focus and the innovation of this paper. In this paper, a generative adversarial network is added to the neural machine translation model. The generator adopts the Bi-LSTM and Transformer neural machine translation models, respectively. The discriminator uses a convolutional neural network to judge the translation results and generates feedback that acts on the generator. Through the idea of adversarial generation, the performance of the generator, that is, the final translation quality of the machine translation model, is improved. Language is an important means of expressing emotion. Adversarial training methods can judge positive or negative emotion, so the translation results also carry emotional effect. Based on the end-to-end approach, the neural machine translation model adopts the encoder-decoder framework, which is used to encode and decode variable-length input and output sequences: the encoder corresponds to the input sequence and the decoder to the output sequence. The encoding stage encodes the whole source language sequence into a vector, and the decoding stage decodes the whole target language sequence by maximizing the probability of the predicted sequence. The encoder-decoder framework thus realizes the probability prediction of the target language through the encoding and decoding process. Assuming that the source language sequence is X = [x_1, x_2, ..., x_n] and the target language sequence is Y = [y_1, y_2, ..., y_m], the probability of generating the target language is

P(Y | X) = ∏_{n=1}^{m} p(y_n | y_1, ..., y_{n−1}, C), (1)

and the generation probability of each target-language word is calculated by the softmax function

p(y_n | y_1, ..., y_{n−1}, C) = softmax(ϕ(y_{n−1}, ..., y_1, C)), (2)

where C is the vector used to represent the source language sequence; it contains the relevant information of the source language sequence and is a fixed-dimension vector generated in the encoder stage. The ϕ function defines the possibility of generating the current target-language term y_n from the source language as well as the already-generated target translation. The purpose of introducing the softmax function is to generate the probability distribution of the target word and to ensure that the function values satisfy a probability distribution. c_s represents the source language context vector representation, c_t represents the target language context vector representation, Y represents the target language, and v_y represents the word vector representation of the target language. The known source language sentence and the generated target language words are used to predict the probability of the current target word. Since the source language sentences and generated target language sentences are very sparse, neural machine translation uses continuous representations to model the conditional probability of the current word in the target language.
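A toy illustration of formulas (1) and (2): the target-sentence probability is accumulated as a product of per-word softmax probabilities conditioned on the source vector C and the already-generated prefix. The scoring function, embeddings and sizes are hypothetical:

import numpy as np

rng = np.random.default_rng(0)
V, H = 5, 8                       # toy vocabulary and hidden sizes
E = rng.normal(size=(V, H))       # toy target-word embeddings
W = rng.normal(size=(V, 2 * H))   # projects [prefix state; C] to vocabulary logits
C = rng.normal(size=H)            # encoder summary vector of the source sentence

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

target = [2, 0, 4]                # a toy target word-id sequence
state = np.zeros(H)               # running representation of the generated prefix
log_prob = 0.0
for y in target:
    p = softmax(W @ np.concatenate([state, C]))  # formula (2): next-word distribution
    log_prob += np.log(p[y])                     # accumulates formula (1) in log space
    state = np.tanh(state + E[y])                # fold the emitted word into the prefix
print(f"log P(Y|X) = {log_prob:.3f}")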
RNN Neural Translation Model. Owing to the network structure of recurrent neural networks, which naturally fits sequence problems, an RNN can in theory process input sequences of any length. When processing sequence problems, recurrent neural networks can store timing information, keeping the history of the sequence in a hidden state. Therefore, the recurrent neural network structure is generally adopted to deal with sequence problems. The output of the recurrent neural network is a hidden layer state, which is used when processing the next step, and each step's output feeds into the next. This structure enables the recurrent neural network to process input sequence data well and to handle data samples with contextual dependencies. The hidden layer state at each moment is a function of all the hidden layer states at previous moments. A schematic diagram of the recurrent neural network, unrolled in time, is shown in Figure 1. As shown in Figure 1, the input to the network at time t consists of the hidden layer state h_{t−1} at the previous moment and the input x_t at the current moment. The hidden layer state h_t at the current moment can be calculated from h_{t−1} and x_t, and this computation is repeated until all inputs are consumed. In general, a zero vector is used as the initial hidden layer state. If the neural network contains only one hidden layer, the activation function of the hidden layer is generally the sigmoid function, represented by σ. For a batch of n samples, assuming that the size of the hidden layer is h and the dimension of the sample feature vector is x, the output of the hidden layer is

h_t = σ(W_{xh} x_t + W_{hh} h_{t−1} + b_h), (3)

where b_h and the W matrices represent the bias vector and the weights of the hidden layer, respectively. In the neural network, the output of the hidden layer is taken as the input of the output layer. Assuming that the dimension of the output vector corresponding to each sample is y, the final output is

o_t = W_{hy} h_t + b_y, (4)

ŷ_t = softmax(o_t). (5)
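A minimal numpy sketch of the recurrent update in formula (3) together with the output layer of formulas (4) and (5); the weight names follow the reconstruction above and the dimensions are illustrative:

import numpy as np

rng = np.random.default_rng(0)
x_dim, h_dim, y_dim = 4, 6, 3
W_xh = rng.normal(size=(h_dim, x_dim))
W_hh = rng.normal(size=(h_dim, h_dim))
b_h = np.zeros(h_dim)
W_hy = rng.normal(size=(y_dim, h_dim))
b_y = np.zeros(y_dim)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h = np.zeros(h_dim)                       # zero vector as the initial hidden state
for x_t in rng.normal(size=(5, x_dim)):   # a length-5 input sequence
    h = sigmoid(W_xh @ x_t + W_hh @ h + b_h)  # formula (3): recurrent update
    o = W_hy @ h + b_y                        # formula (4): output-layer input
    y = np.exp(o - o.max()); y /= y.sum()     # formula (5): softmax output
print(y.round(3))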
Transformer Neural Network Translation Model. The attention mechanism is used for machine translation tasks. Attention is applied directly within the encoder and decoder layers, which shortens the path along which information is transmitted. In addition, this attention approach can directly mine the semantic combination relationships between words inside a sentence and treat them as a semantic whole, making better use of word-combination and even phrase information in translation and better encoding the semantic match to target language words. The final experimental results show that, along with reduced computation and improved parallel efficiency, the translation results are also improved.

The Transformer consists of an encoder and a decoder. The encoder maps the natural language sequence into a hidden representation, that is, a mathematical expression containing the information of the natural language sequence; the decoder is responsible for mapping that hidden representation back to a natural language sequence. First of all, the input text is embedded: word-embedding processing transforms the text into high-dimensional real-valued vectors. To capture the sequential relationship between tokens, position embedding is introduced, using linear transformations of sine and cosine functions to provide position information to the model.

The Transformer encoder has N = 6 layers, and each layer includes two sublayers, as shown in Figure 2. The first sublayer is the multi-head self-attention mechanism, which is mainly used to calculate the self-attention values. The second sublayer is a simple fully connected network. A residual connection is added around each sublayer, and the output of each sublayer is

LayerNorm(x + Sublayer(x)),  (6)

where Sublayer(x) represents the mapping of the input x by the sublayer itself. To ensure dimensional consistency, all sublayers and the word embedding layers have the same output dimension.

The Transformer decoder is also composed of N = 6 layers, each containing three sublayers. The first sublayer is masked multi-head self-attention, which also computes self-attention; however, because decoding is a generation process, the outputs at positions later than i do not yet exist when position i is generated, so only positions earlier than i may be attended to, and mask processing is required. The second sublayer computes attention over the encoder output. The third sublayer is again a fully connected network, the same as the encoder's fully connected sublayer.

The encoder and decoder of the Transformer contain no recurrent or convolutional networks, so by themselves they cannot capture sequence order: if the rows of K and V are permuted, the attention result is unchanged. However, the order information is very important and represents the global structure of the sequence, so the relative or absolute position information of each word in the sequence must be supplied.
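A minimal sketch of the residual pattern in equation (6); the layer normalization used here is the standard Transformer choice and an assumption, since the excerpt does not spell it out:

import numpy as np

def layer_norm(x, eps=1e-6):
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def sublayer_output(x, sublayer):
    # Equation (6): residual connection around the sublayer, then normalize.
    # `sublayer` can be multi-head self-attention or the fully connected network.
    return layer_norm(x + sublayer(x))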
Generative Adversarial Network. The core idea of the generative adversarial network is derived from the Nash equilibrium of game theory; it is a two-player game in which the sum of the interests of both sides is constant. The generation problem is treated as competition between the generator and discriminator networks: the generator produces synthetic data from a given noise source (generally uniformly or normally distributed), and the discriminator distinguishes the generator's output from real data [42]. The former tries to produce ever more realistic data, while the latter, in turn, tries to distinguish real data from generated data ever more accurately. Thus, the two networks make progress through their confrontation and continue to compete as they improve, so the data produced by the generative network approaches the real data more and more closely, until the desired data can be generated. The adversarial network judges whether the text carries positive or negative emotion, so the final output also includes an emotional state that is more consistent with the characteristics of the language. The overall architecture of the model is shown in Figure 3, whose left half is made up of the generator G and the discriminator D.

Among them, G is our neural machine translation model, which generates target sentences. D discriminates between the sentences generated by G and human translations, and produces feedback. The right part carries out policy-gradient training of G; the final feedback is provided jointly by D and Q, where Q is the BLEU value. The generator G is modeled like a neural machine translation system: given a source sentence x, G defines how the target sentence y is generated. The generator uses exactly the same architecture as the neural machine translation model; notably, we do not assume a specific model structure for the generator G. To verify the effectiveness of the proposed method, the generator adopts Bi-LSTM and Transformer. Since the length of the target sentence produced by the generator is not fixed, the discriminator, a CNN, pads the generated sentence and converts it into a sequence of fixed length T, where T is the maximum length of the generator's output. Given the source sentence sequence [x_1, x_2, ..., x_T] and the target sentence sequence [y_1, y_2, ..., y_T], the matrices for the source and target sequences are built as

X_{1:T} = [x_1; x_2; ...; x_T], x_t ∈ R^k,
Y_{1:T} = [y_1; y_2; ...; y_T], y_t ∈ R^k.  (7)

When a convolution window of l words is applied, a series of feature maps is generated:

c_i = σ(w ⊗ X_{i:i+l-1} + b),  (8)

where ⊗ denotes the sum of element-wise products, b is the bias term, and σ is the activation function.

The BLEU value is applied to the generator as a specific objective. For the target sequence y_g generated by the generator and the real target sequence y_d, the n-gram precision of y_g is computed, and the result Q(y_g, y_d) is used as part of the final feedback. To facilitate the fusion of D and Q, the value range of Q(y_g, y_d) is 0-1, the same as the output of the discriminator. The objective of the generator G is defined as maximizing the expected feedback from the beginning state of the generated sequence,

J(θ) = Σ_{Y_{1:T}} G_θ(Y_{1:T} | x) · R^{G_θ}_{D,Q}(Y_{1:T-1}, x, y_T),

where θ denotes the parameters of the generator G, Y_{1:T} = Y_1, Y_2, ..., Y_T is the target sequence produced by the generator, x is the source sentence sequence, and Y* is the real target sentence sequence. The action-value function R^{G_θ}_{D,Q} from the source sentence sequence x to the target sequence indicates the feedback accumulated from that state; it is calculated by combining the probability output of the discriminator D with the output of the BLEU objective function Q as feedback.
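A minimal sketch of the fused feedback from D and Q; the excerpt says only that both signals lie in [0, 1] and are combined, so the convex combination with weight lam below is an illustrative assumption, with NLTK's sentence_bleu standing in for Q:

from nltk.translate.bleu_score import sentence_bleu

def feedback(d_prob, generated_tokens, reference_tokens, lam=0.7):
    # d_prob: discriminator's estimate (in [0, 1]) that the generated
    # sentence is a human translation.
    q = sentence_bleu([reference_tokens], generated_tokens)  # Q(y_g, y_d) in [0, 1]
    # Hypothetical fusion of the two feedback signals with weight lam.
    return lam * d_prob + (1.0 - lam) * q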
Experimental Analyses

The experimental models were implemented in the TensorFlow framework and run on GPUs. Training was stopped when the model ran ten evaluations on the test set without any improvement in performance. The BLEU value is used as the evaluation index of the translation results. To ensure the fairness of the experiment, 1 million sentence pairs were randomly selected from the LDC corpus as training data, and the source and target sentences were encoded with byte pair encoding, respectively, yielding about 36,000 source language tokens and 32,000 target language tokens. NIST04 is selected as the test set and NIST02 as the validation set. The hidden units of both the encoder and decoder are set to 512, and the word-embedding dimension is also set to 512. For the Transformer translation model, the basic structure of the model is used without any changes: the word-embedding dimension is 512, Dropout is 0.1, and the number of attention heads is 8; both encoder and decoder have a six-layer network structure. For the Bi-LSTM translation model, the number of hidden units of encoder and decoder is set to 512, and the word-embedding dimension is also 512; Dropout is not used when training the Bi-LSTM model.

Baseline Experiment. It can be clearly seen from Figure 4(a) that the BLEU score of the RNN model is low, indicating that the translations produced by the original RNN are not very good. This is because, in the original RNN translation structure, the encoder must compress the whole source language sentence into a vector of fixed dimension, from which the decoder then decodes the whole target language sentence. This requires the fixed-dimensional vector to contain all the information of the source sentence, which is obviously difficult to achieve, so it becomes the performance bottleneck of the original RNN as a machine translation model. Although the Bi-LSTM and Transformer models are better than the traditional RNN model, their results are still not ideal. The Bi-LSTM model, thanks to its internal bidirectional feature extraction over time, produces features with stronger temporal structure, so it reaches a maximum of 35.74 on NIST04 with an average BLEU value of 34.06. The Transformer, thanks to its attention mechanism, mines the potential connections between different time steps well; the features it obtains have stronger internal connections, and the overall result improves significantly. To show the changes across the three groups of experiments clearly, the experimental results are also presented in another form in Figure 4(b).

Generative Adversarial Network Model Experiment. Based on the baseline experiment, we select Bi-LSTM and Transformer, the two better-performing models, to combine with the generative adversarial network. The experimental results were grouped by the training parameter λ of the generative adversarial network (0, 0.7, 0.8, 1.0). As can be seen from the experimental results in Figure 5, when λ is 0.7 the Bi-LSTM model achieves its best effect, with a highest average value of 35.88. The behaviour of the four curves shows that the experimental model in this paper conforms to objective laws. The Transformer is currently the most outstanding model across the fields of artificial intelligence, and it improves greatly after the introduction of the GAN. As can be seen from Figure 6, the lowest BLEU value of the Transformer model with the generative adversarial network is 41.4, higher than the averages of the other models. When λ is 0.8, the model achieves its best result of 43.14, with an average value of 42.73. From the overall experimental results, the BLEU values of the Bi-LSTM and Transformer models follow essentially the same pattern as λ varies, and both change nonlinearly; this is important for our subsequent improvements. As a mode of expression closely tied to culture, language deserves richer features and models.
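For reference, the experimental setup described above can be summarized as a hypothetical configuration dictionary (the key names are illustrative, not from the paper's code):

config = {
    "framework": "TensorFlow",
    "training_pairs": 1_000_000,     # randomly sampled from the LDC corpus
    "subword_encoding": "byte pair encoding",
    "source_vocab_size": 36_000,
    "target_vocab_size": 32_000,
    "test_set": "NIST04",
    "validation_set": "NIST02",
    "embedding_dim": 512,
    "hidden_units": 512,
    "transformer": {"layers": 6, "attention_heads": 8, "dropout": 0.1},
    "bi_lstm": {"hidden_units": 512, "dropout": None},  # dropout not used
    "gan_lambda_grid": [0.0, 0.7, 0.8, 1.0],
    "early_stopping": "10 test-set evaluations without improvement",
}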
Conclusion

With the development of economic globalization, communication between countries, industries, and people is becoming more frequent and closer. Language is the tool of communication between people, so realizing fast and accurate conversion between different languages is vital. Machine translation is an important research direction in natural language processing, and the development of deep learning technologies has improved both its methods and its performance. As an efficient tool for language conversion, machine translation is of great practical significance for translating between languages while preserving the original semantics. Aiming at common neural machine translation models, this paper combines the generative adversarial network with machine translation and improves the translation quality of the translation models through adversarial training. In this paper, the classic neural network model and the attention-based Transformer model are studied; then the Bi-LSTM model and the Transformer model are each extended with a generative adversarial network, and the newly constructed models are analyzed. Through the adversarial idea of the generative adversarial network, feedback obtained from the discriminator D acts on the generator G to improve the translation quality of the translation model and to obtain two emotional attributes of opposite polarity; the effectiveness of the improved analysis method is verified through the final experiments. There are many hidden forms of emotion in language, and it is difficult for ordinary models to find the deep meaning of language, which is the biggest advantage of the model in this paper.

Data Availability

The raw data supporting the conclusions of this article will be made available by the authors without undue reservation.
7,861
2022-05-31T00:00:00.000
[ "Computer Science", "Linguistics" ]
Hierarchical Multi-Label Classification of Scientific Documents

Automatic topic classification has been studied extensively to assist in managing and indexing scientific documents in a digital collection. With the large number of topics available in recent years, it has become necessary to arrange them in a hierarchy, so automatic classification systems need to be able to classify documents hierarchically. In addition, each paper is often assigned to more than one relevant topic; for example, a paper can be assigned to several topics in a hierarchy tree. In this paper, we introduce a new dataset for hierarchical multi-label text classification (HMLTC) of scientific papers called SciHTC, which contains 186,160 papers and 1,234 categories from the ACM CCS tree. We establish strong baselines for HMLTC and propose a multi-task learning approach for topic classification with keyword labeling as an auxiliary task. Our best model achieves a Macro-F1 score of 34.57%, which shows that this dataset provides significant research opportunities on hierarchical scientific topic classification. We make our dataset and code for all experiments publicly available.

Introduction

With the exponential increase of scientific documents published every year, the difficulty of managing and categorizing them in a digital collection is also increasing. While the sheer number of papers is the most important reason, the problem can also be attributed to the large number of topics: it is very difficult to index a paper in a digital collection when there are thousands of topics to choose from. Fortunately, the large number of topics can be arranged in a hierarchy because, except for a few general topics, every topic can be seen as a sub-area of another topic. After arranging the topics in a hierarchy tree, categorizing a paper becomes much simpler, since there are only a handful of topics to choose from at each level of the hierarchy. However, manually assigning topics to a large number of papers is still very difficult and expensive, making an automatic system for hierarchical classification of scientific documents a necessity.

After arranging the topics in a hierarchy, the classification task no longer remains a multi-class classification, because in multi-class classification each paper is classified into exactly one of several mutually exclusive classes or topics. If the topics are arranged in a hierarchy, they are no longer mutually exclusive: a paper assigned to a topic node a in the hierarchy also gets assigned to topic node b, where b is a node in the parental history of a. For example, if a paper is classified into the area of natural language processing (NLP), it is also assigned to the topic of artificial intelligence (AI), given that AI is in the parental history of NLP. This non-mutual exclusivity among the topics makes the task a multi-label classification task.
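As a minimal illustration of this label propagation, the sketch below walks a toy parent map (a hypothetical fragment, not the real CCS tree) from a leaf topic up to the root:

parents = {"nlp": "ai", "ml": "ai", "ai": "ccs"}  # "ccs" is the (non-topic) root

def multi_labels(leaf):
    # Assigning a paper to `leaf` also assigns it to every ancestor topic.
    labels = [leaf]
    node = leaf
    while parents.get(node) and parents[node] != "ccs":
        node = parents[node]
        labels.append(node)
    return labels

print(multi_labels("nlp"))  # ['nlp', 'ai']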
Despite being an important problem, hierarchical multi-label topic classification (HMLTC) has not been explored to a great extent in the context of scientific papers. Most works on hierarchical and/or multi-label topic classification focus on news articles (Banerjee et al., 2019; Peng et al., 2018) and use the RCV-1 dataset (Lewis et al., 2004) for evaluation. This is partly because of a lack of datasets of hierarchically classified scientific papers, which hinders progress in this domain. Precisely, the existing multi-label datasets of scientific papers are either comparatively small (Kowsari et al., 2017) or the label hierarchy is not deep (Yang et al., 2018). Therefore, we address the scarcity of datasets for HMLTC on scientific papers by introducing a new large dataset called SciHTC, in which the papers are hierarchically classified based on the ACM CCS tree. Our dataset is large enough to allow deep learning exploration, comprising 186,160 research papers organized into 1,233 topics, which are arranged in a six-level-deep hierarchy. We establish several strong baselines for both hierarchical and flat multi-label classification on SciHTC. In addition, we conduct a thorough investigation of the usefulness of author-specified keywords in topic classification. Furthermore, we show how multi-task learning, with scientific document classification as the principal task and keyword labeling as the auxiliary task, can help improve the classification performance. However, our best models with SciBERT (Beltagy et al., 2019) achieve only a 34.57% Macro-F1 score, which shows that there is still plenty of room for improvement.

Related Work

To date, several datasets exist for topic classification of scientific papers. Kowsari et al. (2017) created a hierarchically classified dataset of scientific papers from the Web of Science (WoS). However, their hierarchy is only two levels deep and the size of their dataset is 46,985 papers, which is much smaller than its counterparts built from news data; in addition, there are only 141 topics in the entire hierarchy. The Cora dataset introduced by McCallum et al. (2000) is also hierarchically classified with multiple labels per paper and contains about 50,000 papers. Its hierarchy varies in depth from one to three and has 79 topics in total. However, the widely used version of Cora contains only 2,708 papers (Lu and Getoor, 2003) and is not hierarchical. Similarly, the labeled dataset for topic classification of scientific papers from CiteSeer (Giles et al., 1998) is also very small, containing only 3,312 papers with no hierarchy over the labels. Yang et al. (2018) created a dataset of 55,840 arXiv papers where each paper is assigned multiple labels using a two-level-deep topic hierarchy containing a total of 54 topics. Similar to us, Santos and Rodrigues (2009) proposed a multi-label hierarchical document classification dataset using the ACM category hierarchy. However, our dataset is much larger than theirs (which has ≈15,000 documents in their experimental setting). Furthermore, the dataset by Santos and Rodrigues (2009) is not available online and cannot be reconstructed, as the ACM paper IDs are not provided.
Recently, Cohan et al. (2020) released a dataset of 25,000 papers collected from the Microsoft Academic Graph (MAG) as part of their proposed evaluation benchmark for document-level research on scientific domains. Although the papers in MAG are arranged in a five-level-deep hierarchy (Sinha et al., 2015), only the level-one categories (19 topics in total) are made available with the dataset. In contrast to the above datasets, SciHTC has 1,233 topics arranged in a six-level-deep hierarchy. The total number of papers in our dataset is 186,160, which is significantly larger than all the datasets mentioned above. The topic hierarchy of each paper is provided by its respective authors. Since each paper in our dataset is assigned to all the topics on the path from the root to a certain topic in the hierarchy tree, our dataset can be referred to as a multi-label dataset for topic classification.

For multi-label classification, there are two major approaches: a) training one model to predict all the topics to which each paper belongs (Peng et al., 2018; Baker and Korhonen, 2017b; Liu et al., 2017); and b) training one-vs-all binary classifiers for each of the topics (Banerjee et al., 2019; Read et al., 2009). The first approach learns to classify papers into all the relevant topics simultaneously, and hence it is better suited to leverage the inter-label dependencies among the labels. However, despite being simpler and more time-efficient, it struggles with data imbalance (Banerjee et al., 2019). On the other hand, the second approach gives enough flexibility to deal with different levels of class imbalance, but it is more complex and not as time-efficient as the first. In general, the second approach takes additional steps to encode the inter-label dependencies among co-occurring labels; for example, in hierarchical classification, the parameters of the model for a child topic can be initialized with the parameters of the trained model for its parent topic (Kurata et al., 2016; Baker and Korhonen, 2017a; Banerjee et al., 2019). In this work, we take both approaches and compare their performance.

Besides these approaches, another line of work on hierarchical and/or multi-label classification in recent years is based on sequence-to-sequence models (Yang et al., 2018, 2019), which we explored in this work; however, these models failed to show satisfactory performance on our dataset. We also explored the hierarchical classification proposed by Kowsari et al. (2017), where a local classifier is trained at every node of the hierarchy, but this model also failed to achieve satisfactory performance. Onan et al. (2016) proposed the use of keywords together with traditional ensemble methods to classify scientific papers. However, since ground-truth keywords were not available for the papers in their dataset, the authors explored a frequency-based keyword selection, which gave the best performance; therefore, their application of keyword extraction methods to the classification task can be seen as a feature selection method.

Our SciHTC dataset, in addition to being very large, multi-labeled, and hierarchical, contains the author-provided keywords for each paper. In this work, we present a thorough investigation of the usefulness of keywords for topic classification and propose a multi-task learning framework (Caruana, 1993; Liu et al., 2019) that uses keyword labeling as an auxiliary task to learn better representations for the main topic classification task.
The SciHTC Dataset

We constructed the SciHTC dataset from papers published in the ACM digital library, which we requested from ACM. Precisely, the dataset provided by ACM has more than 300,000 papers. However, some of these papers did not have their author-specified keywords, whereas others did not have any category information, so we pruned all such papers from the dataset. In the end, 186,160 papers had all the necessary attributes and the category information. The final dataset was randomly divided into train, development, and test sets in an 80:10:10 ratio. The category information of the papers in our dataset is defined based on the category hierarchy tree created by ACM, named CCS (Computing Classification System). The root of the hierarchy tree is denoted 'CCS', and there are 13 nodes at level 1 which represent topics such as "Hardware," "Networks," "Security and Privacy," etc. Note that CCS itself does not represent any topic (or category); it is simply the root of the ACM hierarchy tree. There are 6 levels in the hierarchy tree apart from the root 'CCS'; that is, the maximum depth among the leaf nodes in the tree is 6. However, the depths of different sub-branches are not uniform, and there are leaf nodes in the tree with depth less than 6. Each paper in our dataset is assigned to one or more sub-branches of the hierarchy tree by its respective authors, with different depth levels and relevance scores among {100, 300, 500}, with 500 indicating the most relevant. The authors also provide a set of keywords relevant to their paper. Table 2 shows the assigned sub-branches and keywords of an example paper, both provided by the authors. Among the author-specified sub-branches, we only consider the sub-branch with the highest relevance score for each paper; thus, the categories in the first sub-branch (bolded line) in Table 2 are selected as the labels for the paper. However, considering all relevant sub-branches would present an even more interesting and challenging task, which can be explored in future work.

There are 1,233 different topics in total in our final dataset. However, we find that the distribution of the number of papers over the topics is very imbalanced, and a few topics (especially in the deeper levels of the hierarchy) have extremely low support (i.e., rare topics). Thus, for our experiments, we only consider the topics up to level 2 of the CCS hierarchy tree which have at least 100 examples in the training set. Figure 1 shows the number of papers in each of the 95 topics up to level 2 of the hierarchy tree in our dataset. We also report the explicit topic distribution (i.e., topic name vs. support) in Appendix A. Note that since there are 12 topics (among the 95 topics up to level 2 of the hierarchy) with fewer than 100 examples in the training set, we remove them and experiment with the remaining 83 topics. Although we do not use the topics with low support in our experiments, we believe that they can be potentially useful for hierarchical topic classification of rare topics. Therefore, we make available not only the two-level hierarchy dataset used in our experiments but also all relevant topics for each paper from the six-level hierarchy tree.
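A minimal sketch of the topic-pruning step just described, assuming a hypothetical topic_level map from topic name to its depth in the CCS tree:

from collections import Counter

def prune_topics(train_labels, topic_level, min_support=100, max_level=2):
    # train_labels: one list of topic labels per training paper.
    # topic_level: assumed map from topic name to its depth in the tree.
    counts = Counter(t for labels in train_labels for t in labels)
    return {t for t, c in counts.items()
            if c >= min_support and topic_level[t] <= max_level}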
Methodology

This section describes the hierarchical and flat multi-label baselines used in our experiments (§4.1); after that, it introduces our simple incorporation of keywords into the models (§4.2); lastly, it presents our multi-task learning framework for topic classification (§4.3).

Problem Definition

Let p be a paper, t be a topic from the set of all topics T, and n be the number of topics in T; and let x_p denote the input text and y_p denote the label vector of size n corresponding to p. For our baseline models, x_p is a concatenation of the title and abstract of p. The goal is to predict the label vector y_p given x_p such that y_p_t = 1 if p belongs to topic t, and y_p_t = 0 otherwise, i.e., identify all topics relevant to p.

Baseline Modeling

We establish both flat and hierarchical classification approaches as our baselines, as discussed below.

Flat Multi-Label Classification

We refer to classifiers that predict all relevant topics of a paper with a single model as flat multi-label classifiers. Although these models leverage the inter-label dependencies by learning to predict all relevant labels simultaneously, they do not consider the label hierarchy structure. In these models, all layers are shared until the last layer during training. Instead of softmax, the output layer consists of n nodes, each with sigmoid activation; each sigmoid output represents the probability of a topic t being relevant for a paper p. We use the following neural models to obtain representations of the input text: the traditional neural model Bi-LSTM (Hochreiter and Schmidhuber, 1997), and the pre-trained language models BERT (Devlin et al., 2019) and SciBERT (Beltagy et al., 2019).

Traditional Neural Models

We use a Bi-LSTM based model similar to Banerjee et al. (2019) as our traditional neural baseline. Specifically, we take three approaches to obtain a single representation of the input text from the hidden states of the Bi-LSTM and concatenate them before they are sent to the fully connected layers: element-wise max pooling, element-wise mean pooling, and an attention-weighted context vector. The attention mechanism is similar to the word-level attention mechanism of Yang et al. (2016). After the Bi-LSTM, we use one fully connected layer with ReLU activation followed by the output layer with sigmoid activation. The obtained representations are projected with n weight matrices W_t ∈ R^{d×1}. We also explore a CNN based model as another neural baseline and report its performance and architectural design in Appendix C.

Pre-trained Language Models

We fine-tune base BERT (Devlin et al., 2019) and SciBERT (Beltagy et al., 2019) using the HuggingFace transformers library. We use the "bert-base-uncased" and "scibert-scivocab-uncased" variants of BERT and SciBERT, respectively. Both of these language models are pre-trained on huge amounts of text: while BERT is pre-trained on the BookCorpus (Zhu et al., 2015) and Wikipedia, SciBERT is pre-trained exclusively on scientific documents. After obtaining the hidden state embedded in the [CLS] token from these models, we send it through a fully connected output layer to get the classification probabilities; that is, we project the [CLS] token with n weight matrices W_t ∈ R^{d×1}. The language model and classification parameters are jointly fine-tuned.
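A minimal PyTorch sketch of the shared-encoder, per-topic sigmoid output head described above; the class name and shapes are illustrative, not the authors' code:

import torch
import torch.nn as nn

class FlatMultiLabelHead(nn.Module):
    """One sigmoid output per topic on top of a shared encoder representation."""
    def __init__(self, hidden_dim, n_topics):
        super().__init__()
        self.out = nn.Linear(hidden_dim, n_topics)

    def forward(self, cls_embedding):
        # Independent per-topic probabilities, not a softmax over topics.
        return torch.sigmoid(self.out(cls_embedding))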
Figure 2: The architecture of our proposed multi-task learning model using BERT as the encoder. The model jointly learns two tasks: topic classification and keyword labeling. The shared layers are at the bottom, whereas the task-specific layers are at the top.

Hierarchical Multi-Label Classification

In this approach, we train n one-vs-all binary classifiers. As with flat multi-label classification, we use both traditional neural architectures and pre-trained language models, similar to the flat architectures described in §4.1.1, with two key differences. First, the output layer no longer contains n nodes: since we train binary classifiers, we change the architectures to have output layers with only one node with sigmoid activation. Second, to leverage the inter-label dependencies, we initialize the model parameters of a child node in the topic hierarchy tree with its parent node's trained model parameters, similar to Kurata et al. (2016), Baker and Korhonen (2017a), and Banerjee et al. (2019). An illustration of this method of leveraging the topic hierarchy to learn inter-label dependencies can be seen in Appendix B.

Incorporating Keywords

We aim to improve upon the baseline models described above by incorporating the keywords specified by the authors of every paper into the model. The keywords of a paper can provide fine-grained topical information specific to the paper and at the same time are indicative of its general (coarse-grained) topics. Thus, the keywords can be seen as a bridge between the general topics of a paper and the fine details available in it (see Table 2 for examples of general topics and keywords of a paper).

We incorporate the keywords by a simple concatenation approach: the input text x_p is extended with the keywords k_p specified by the authors of p,

x̃_p = [x_p; k_p].  (1)

We use the same network architectures as in §4.1, in both flat and hierarchical settings. Although this approach strikes by its simplicity and, as we will see in the experiments, improves over the baselines in §4.1 that use only the title and abstract as input, it is often the case that at test time the keywords of a paper are not available, which affects the results. Our aim is to build models that are robust even in the absence of keywords at test time. Our proposal is to explicitly teach the model to recognize the keywords in the paper that are indicative of its topics, using multi-task learning.

Multi-Task Learning with Keywords

We propose a neural multi-task learning framework for topic classification of scientific papers where the main task of topic classification is informed by the keyword labeling auxiliary task, which aims to identify the keywords in the input text.

Keyword Labeling

Given an input sequence (e.g., title and abstract), the objective is to predict a sequence of labels z = {z_1, ..., z_N}, where each label z_i is 1 (a keyword) or 0 (not a keyword), i.e., predict whether each word in the sequence is a keyword or not. During training, we do an exact match of the tokenized author-specified keywords in the tokenized input text (title+abstract) and set the keyword label z_i to 1 for the positions where we find a match in the input text and 0 otherwise.
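A minimal sketch of the exact-match keyword labeling just described; multi-token keywords mark every token of the matched span:

def keyword_labels(tokens, keyword_phrases):
    # keyword_phrases: list of tokenized keywords, e.g. [["neural", "network"]].
    z = [0] * len(tokens)
    for phrase in keyword_phrases:
        k = len(phrase)
        for i in range(len(tokens) - k + 1):
            if tokens[i:i + k] == phrase:
                z[i:i + k] = [1] * k  # mark the whole matched span
    return z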
Shared layers. The input of the model is the sequence x_p of N words. These words are first mapped into word embedding vectors (e.g., by summing word and positional embeddings in BERT), which are then fed into the encoder block that produces a sequence of contextual embeddings (one for each input token, including the [CLS] token in transformer-based models).

Task-specific layers. There are two task-specific output layers. The topic classification output layer works as discussed in §4.1. The output layer for keyword labeling consists of a fully connected layer with sigmoid activation, which predicts from each token's contextual embedding whether that word is a keyword or not.

Training

The model is optimized on both tasks. During training, the losses of the two tasks (main and auxiliary) are calculated and combined as a weighted sum, which is used to optimize the model parameters. The overall loss L(θ) of our model is

L(θ) = α L_1(θ) + β L_2(θ),

where L_1(θ) is the loss for topic classification, L_2(θ) is the loss for keyword labeling, and α and β are hyperparameters that scale the two losses.
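A minimal PyTorch sketch of the combined objective, using binary cross-entropy for both tasks; alpha = beta = 1 matches the best setting reported in Appendix D:

import torch.nn as nn

bce = nn.BCELoss()

def multitask_loss(topic_probs, topic_targets, kw_probs, kw_targets,
                   alpha=1.0, beta=1.0):
    l1 = bce(topic_probs, topic_targets)  # L1: topic classification
    l2 = bce(kw_probs, kw_targets)        # L2: keyword labeling
    return alpha * l1 + beta * l2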
Experiments and Results

We perform the following experiments. First, we study the difficulty of classifying topics of scientific papers in our dataset in comparison with related datasets (§5.1). Second, we show the impact of the hierarchy of topics on model performance and how incorporating keywords can further improve performance (§5.2). Third, we evaluate the performance of our proposed multi-task learning approach (§5.3). Implementation details are reported in Appendix D.

The WoS dataset does not have the titles of the papers, so only the abstracts are used as the input sequence. For the Cora and MAG datasets, as well as our SciHTC dataset, we use both title and abstract as the input sequence. We use the train, test, and validation splits released by Cohan et al. (2020) for the MAG dataset, but we split the other two datasets (Cora and WoS) in an 80:10:10 ratio, similar to ours, because they do not have author-defined splits. For this experiment, our goal was to compare the degree of difficulty of our dataset with respect to the other related datasets. We thus chose to experiment with both flat and hierarchical baseline Bi-LSTM models with 300D GloVe embeddings. On the MAG dataset, we only report the flat Bi-LSTM performance, since only level-1 categories are made available by the authors (with no hierarchy information). We experiment with the categories up to level 2 of the hierarchy tree for the other datasets. Table 3 shows the Macro-F1 scores of these models on the four datasets, along with the number of topics and the size of each dataset. We find that:

SciHTC is consistently more challenging than related datasets. As we can see from Table 3, both models (flat and hierarchical) show a much lower performance on SciHTC than on the other datasets. It is thus evident that the degree of difficulty is much higher on our dataset, making it a more challenging benchmark for evaluation. An inspection of the categories of the related datasets revealed that their categories are more easily distinguishable from each other. For example, the categories in WoS and MAG cover broad fields of science with small overlaps between them; they range from Psychology, Medical Science, and Biochemistry to Mechanical Engineering, Civil Engineering, and Computer Science. The vocabularies used in these categories/fields of science are quite different from each other, and thus the models learn to differentiate between them more easily. In contrast, all papers in our dataset are from the ACM digital library, are related to Computer Science, and are classified into more fine-grained topics than the ones in the above datasets. Examples of topics from our dataset include Network Architectures, Network Protocols, Software Organization and Properties, Software Creation and Management, Cryptography, and Systems Security. Therefore, it is more difficult for the models to learn and recognize the fine differences needed to classify the topics correctly, resulting in lower performance compared to the other datasets.

Impact of Hierarchy and Keywords

Next, we explore the usefulness of the hierarchy of topics and of keywords for topic classification on SciHTC. We experiment with all of our baseline models (flat and hierarchical) described in §4.1 and with the incorporation of keywords described in §4.2. Precisely, each model is evaluated twice: first using only the input sequence (title+abstract) without the keywords, and second by concatenating the input sequence with the keywords as in Eq. 1.
We run each experiment three times and report their average and standard deviation in Table 4. As we can see, the standard deviations of the performance scores are very low, which illustrates that the models are stable and easily reproducible. We make the following observations:

The hierarchy of topics improves topic classification. We can observe from Table 4 that all hierarchical models show substantially higher performance than their flat counterparts, regardless of whether keywords are used. Given that the flat models learn to predict all relevant labels for each document simultaneously, it is possible for them to learn inter-label dependencies to some extent. However, because the label hierarchy is unavailable to them, the nature of the inter-label dependencies is not specified for the flat models; as a result, they can learn spurious patterns among the labels which harm overall performance. In contrast, for the hierarchical models we can specify how the inter-label dependencies should be learned (by initializing a child's model with its parent's model), which helps improve performance, as we can see in our results.

Incorporating keywords brings further improvements. From Table 4, we can also see that the performance of all of our baseline models increases when keywords are incorporated into the input sequence. These results illustrate that the fine-grained topical information provided by the keywords of each paper is indeed beneficial for predicting its categorical labels (and thus has an add-up effect for identifying the relevant coarser topics). Moreover, keywords can provide additional information which is unavailable in the title and the abstract but is relevant to the rest of the content and indicative of the topic of the paper. This additional information also helps the models make better predictions.

Transformer-based models consistently outperform Bi-LSTM models, and SciBERT performs best. BERT and SciBERT show strong performance across all settings (hierarchical vs. flat, with keywords vs. without) in comparison with the Bi-LSTM models. Interestingly, even the flat transformer-based models outperform all Bi-LSTM based models (including the hierarchical ones). We believe this is because BERT and SciBERT are pre-trained on a large amount of text and are therefore able to learn better representations of the words in the input text. Comparing the two transformer-based models, SciBERT performs better than BERT. We hypothesize that this is because SciBERT's vocabulary is more relevant to the scientific domain and it is pre-trained exclusively on scientific documents; hence, it has better knowledge of the language used in scientific documents.
Multi-task Learning Performance

The results in Table 4 show that the keywords are useful for topic classification, but they assume that the keywords are available not only during training but also at test time. However, at test time the keywords of a paper are often unavailable. We now turn to the evaluation of models when keywords are not available at test time. We compare our multi-task approach (§4.3) with the models trained by concatenating the keywords into the input sequence (during training) but tested only on the input sequence without keywords. The motivation behind this comparison is to understand the difference in performance between models which leverage keywords during training in a manner different from our multi-task models but not at test time (the same condition as the models trained with our multi-task approach). These results are shown in Table 5. We found that:

Multi-task learning effectively makes use of keywords for topic classification. A first observation is that not making use of gold (author-specified) keywords at test time (but only during training, denoted KW^tr, through concatenation using Eq. 1) decreases performance (see Table 4, bottom half, and Table 5, top half). Remarkably, the multi-task models (which also do not use gold keywords at test time) are better at classifying the topics than the models that use keywords only during training through concatenation. In addition, comparing the models that do not use keywords at all with the multi-task models (top half of Table 4 and bottom half of Table 5), we can see that the multi-task models perform better. Furthermore, the performance of the multi-task models is only slightly worse than that of the models that use gold (author-specified) keywords both during training and at test time (see the bottom halves of Tables 4 and 5). These results indicate that the models trained with our multi-task learning approach learn better representations of the input text which help improve classification performance, thereby harnessing the usefulness of author-specified keywords even in their absence at test time.

Analysis and Discussion

From our experiments, it is evident that all of our hierarchical baselines outperform their flat counterparts. But it is not clear whether the performance gain comes from using the hierarchy to better learn the parent-child dependencies, or from allowing the models to focus on each class individually by training one-vs-all binary classifiers in our hierarchical setting, as opposed to one flat model for all the classes. In addition, our experiments also show that keywords can be used in multiple ways to improve topic classification performance; however, it is unclear whether keywords by themselves can achieve the optimal performance. Thus, we analyze our models in these respects with the following experiments.
Hierarchical vs. n-Binary

We conduct an experiment with SciBERT where we train a binary classifier for each class, similar to the hierarchical SciBERT model, but do not initialize it with its parent's model parameters, i.e., we do not make use of the topic hierarchy. We compare the performance of this n-binary-SciBERT model with the HR-SciBERT model in Table 6. We can see that the non-hierarchical approach with n binary models scores more than 2 percentage points lower in Macro-F1. The performance of deep learning models depends partly on how their parameters are initialized (Bengio et al., 2017). In the n-binary approach, since we initialize the model parameters for each class with a SciBERT model pre-trained on unsupervised data, the model is forced to learn from scratch to distinguish the examples belonging to each class from the examples of all other classes. In contrast, when the model parameters for a node in the topic hierarchy are initialized with its parent node's trained model (for the HR models), we start with a model which already knows a superset of the distinctive characteristics of the documents belonging to that node (i.e., the characteristics of the papers belonging to its parent node); in other words, the model does not need to learn to classify from scratch. Therefore, the hierarchical classification setup acts as a better parameter initialization strategy, which leads to better performance.

With Keywords vs. Only Keywords

We experiment with flat Bi-LSTM, BERT, and SciBERT models with only keywords as the input. A comparison of these keywords-only models with the models which use title, abstract, and keywords can be seen in Table 7. We can see a decline of ≈12%, ≈8%, and ≈5% in Macro-F1 for Bi-LSTM, BERT, and SciBERT, respectively, when only keywords are used as the input. Therefore, we can conclude that keywords are useful for topic classification, but their usefulness is most evident when other sources of input are also available.

Conclusion

In this paper, we introduce SciHTC, a new dataset for hierarchical multi-label classification of scientific papers, and establish several strong baselines. Our experiments show that SciHTC presents a challenging benchmark and that keywords can play a vital role in improving classification performance. Moreover, we propose a multi-task learning framework for topic classification and keyword labeling which improves performance over models that do not have keywords available at test time. We believe that SciHTC is large enough to foster research on designing efficient models and will be a valuable resource for hierarchical multi-label classification. In future work, we will explore novel approaches to further exploit the topic hierarchy and adopt few-shot and zero-shot learning methods to handle the extremely rare categories. We will also work on creating datasets from other domains of science with similar characteristics to SciHTC to allow further exploration.
Limitations

One potential limitation of our proposed dataset is that all of our papers are from the computer science domain, so it does not cover papers from other scientific areas. However, we see this as a strength of our dataset rather than a weakness. Other datasets are already available which cover a diverse range of scientific areas (e.g., WoS). In contrast, we address the lack of a resource which can be used to study hierarchical classification among fine-grained topics with potentially confusable classes. SciHTC can be used as a benchmark for judging models' ability to distinguish very subtle differences among documents belonging to closely related but different topics, which will lead to the development of more sophisticated models.

Figure 3: Architecture of our flat multi-label classification baseline using BERT. Here, BERT is the only layer shared across all topics. FF_t, σ, and ŷ_t^p denote the feed-forward layer for topic t, the sigmoid activation function, and the prediction indicating whether topic t is relevant for the input paper p (1 or 0), respectively.

A Label Distribution

The explicit label distribution, showing the number of papers belonging to each topic up to level 2 of the category hierarchy tree, is given in Table 9.

B Flat & Hierarchical Model Architectures

Figure 3 illustrates the architecture of our flat multi-label classification baselines. Here, we show BERT as the encoder to avoid clutter, but we also use Bi-LSTM, XML-CNN, and SciBERT as encoders, as described in Section 4. The encoder is shared by all topics and there is one feed-forward layer for each topic t = 1, 2, ..., n. A sigmoid activation is applied to each feed-forward layer's output to predict whether the corresponding topic is relevant to an input paper or not (1 or 0). An example of leveraging the topic hierarchy to learn inter-label dependencies is shown in Figure 4.

C CNN Models and Results

We follow the XML-CNN architecture proposed by Liu et al. (2017), which applies three convolution filters to the word embeddings of the input text. The outputs of the convolution filters are pooled with a certain window. The pooled output then goes through a bottleneck layer, where the dimensionality of the output is reduced to make it computationally efficient. The output of the bottleneck layer is then sent to the output layer for topic prediction.

Note that Bi-LSTM, BERT, and SciBERT give a contextual representation for every word in the input text, which can be used for sequence labeling. This is not necessarily true for a CNN. To ensure we have a representation of every word in the input text from the CNN filters, the filter sizes are selected such that the number of output vectors matches the length of the input text, as presented in (Xu et al., 2018). Having a corresponding representation for each token is necessary for our multi-task objective. The results of this model are shown in Table 8.

D Implementation Details

We started pre-processing our data by converting title, abstract, and keywords to lower case. Then, we removed the punctuation marks for the LSTM and CNN models. The text was tokenized using the NLTK tokenizer. After tokenizing the text, we stemmed the tokens using the Porter stemmer (https://www.nltk.org/howto/stem.html). Finally, we masked the words which occur fewer than two times in the training set with an <unk> tag; the remaining unique words were used as our vocabulary.
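A minimal sketch of this preprocessing pipeline with NLTK (an illustrative reconstruction; the punctuation removal applied for the LSTM and CNN models is omitted):

from collections import Counter
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def preprocess(texts, min_count=2):
    # Lowercase, tokenize, and stem every document.
    docs = [[stemmer.stem(w) for w in word_tokenize(t.lower())] for t in texts]
    counts = Counter(w for doc in docs for w in doc)
    # Mask words seen fewer than min_count times in the training set.
    return [[w if counts[w] >= min_count else "<unk>" for w in doc]
            for doc in docs]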
We address the class imbalance in our data by assigning the following weights to the examples of the positive class while training our CNN and LSTM based hierarchical classifiers: 1, 3, 5, 10, 15, ..., 40. The best weight was chosen based on the model's F1 score on the validation set. For the flat multi-label classifiers, we could not use this method because finding the optimal combination of weights would take exponential time. We also did not try this approach for hierarchical BERT and SciBERT because they are already very time-consuming and expensive to train. We tuned the sigmoid thresholds from 0.1 to 0.9 on the validation data, and the threshold with the highest performance score was chosen for every class separately. We tuned the loss scaling parameters [α, β] of our multi-task objective with the following values on the development set: [0.3, 0.7], [0.4, 0.6], [0.5, 0.5], [0.6, 0.4], [0.7, 0.3], [1, 1], and found that the models perform best with [1, 1].

For all our experiments, the maximum lengths of the input text (title+abstract) sequence and the keyword sequence were set to 100 and 15, respectively. We used pre-trained 300-dimensional GloVe embeddings (https://nlp.stanford.edu/projects/glove/) to represent the words for the LSTM and CNN based models. The hidden state size of the bidirectional LSTMs was kept at 72 across all our models. The fully connected layer after the Bi-LSTM layer has size 16 for the hierarchical models and 72 for the flat models; we tried to keep both at size 16, but the flat LSTM model showed very unstable performance with a hidden layer of size 16. The filter sizes for XML-CNN were chosen as 3, 5, and 9, with 64 filters of each size. The input text was padded by 1, 2, and 4 units for each of the filter windows, respectively. The pooling window was set to 32, and the bottleneck layer converted the pooled output to a vector of length 512.

We used binary cross-entropy as the loss function for both the classification and keyword labeling tasks in all our models. The Adam optimizer (Kingma and Ba, 2014) was used to train the models with mini-batch size 128. Except for the transformer-based models, the initial learning rate for all models was kept at 0.001; for BERT and SciBERT, the learning rate was set to 2e-5. The hierarchical LSTM and CNN based models were trained for 10 epochs each, with early stopping with patience 3. The flat LSTM and flat XML-CNN models were trained for 50 epochs with patience 10. The flat and hierarchical transformer-based models were fine-tuned for 5 and 3 epochs, respectively. We ran our experiments on NVIDIA Tesla K80 GPUs. The average training time was 2 days for the hierarchical LSTM models, plus an additional ∼24 hours with the multi-task approach. Hierarchical CNN models took ∼24 hours to train, with an additional 4-5 hours for the multi-task approach. The flat models took less than 1 hour to train for both LSTM and CNN. The flat transformer-based models took ∼14 hours to train on one GPU. We used 8 of the same NVIDIA Tesla K80 GPUs to train the hierarchical transformer-based models; it took ∼6 days to train all 83 binary models.
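A minimal sketch of the per-class sigmoid threshold search described above, using scikit-learn's F1 score on validation data (an illustrative reconstruction, not the authors' code):

import numpy as np
from sklearn.metrics import f1_score

def tune_thresholds(val_probs, val_labels):
    # val_probs, val_labels: arrays of shape (n_examples, n_topics).
    grid = np.arange(0.1, 0.91, 0.1)
    best = []
    for t in range(val_probs.shape[1]):
        scores = [f1_score(val_labels[:, t], val_probs[:, t] >= c) for c in grid]
        best.append(grid[int(np.argmax(scores))])
    return np.array(best)  # one tuned threshold per class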
Figure 1: Number of papers in each topic up to level two of the ACM CCS hierarchy tree.

In Figure 4, all models are binary classifiers for a single label from the topic hierarchy. θ_a, θ_b, and θ_c represent the model parameters of topics a, b, and c, respectively, where a is the parent topic of b and c in the hierarchy tree. Both θ_b and θ_c are initialized with θ_a to encode inter-label dependencies and then fine-tuned to predict whether topic b and topic c are relevant to an input paper or not.

Figure 4: Leveraging topic hierarchy to learn inter-label dependencies. Here, θ_a, θ_b, and θ_c are the model parameters of a parent class a and its children b and c, respectively. The child models θ_b and θ_c are initialized with the parent model θ_a and then fine-tuned on the data at child nodes b and c, respectively.

Table 1: Dataset splits. The number of examples in each set can be seen in Table 1.

Table 2: Category hierarchies with different relevance scores and keywords for a paper, both specified by the authors.

Table 3: Number of topics n, dataset size, and Macro-F1 of flat (F) and hierarchical (HR) Bi-LSTM models.

Table 4: Performance comparison between models which use keywords and models which do not. HR: hierarchical; KW: keywords. Best results (Micro-F1 and Macro-F1) are shown in bold. Here, * and # indicate statistically significant improvements of the HR models over their flat counterparts (*) and of the with-KW models over their without-KW counterparts (#), respectively, according to a paired t-test with significance level α = 0.05.

Table 5: Performance comparison between models which use keywords during training by concatenating them using Eq. 1 but not during testing, and models trained using multi-task learning, which also do not use keywords at test time. The superscript tr on KW^tr indicates that the keywords were concatenated only during training. Best results (Micro-F1 and Macro-F1) are shown in bold. Here, * indicates statistically significant improvements of the multi-task models over the KW^tr models according to a paired t-test with significance level α = 0.05.

Table 6: Performance comparison between the hierarchical SciBERT and n-binary SciBERT models, the latter of which does not learn the parent-child relationships.

Table 7: Performance comparison of flat Bi-LSTM, BERT, and SciBERT trained and tested with only keywords (no title and abstract) vs. trained and tested with title+abstract+keywords.

Table 8: Performance comparison among different CNN based models. Here, HR: hierarchical; KW: keywords. The superscript tr on KW^tr indicates that the keywords were used only during training.
9,353.4
2022-11-05T00:00:00.000
[ "Computer Science", "Biology" ]
The status of DECIGO

DECIGO (DECi-hertz Interferometer Gravitational wave Observatory) is the planned Japanese space gravitational wave antenna, aiming to detect gravitational waves from astrophysically and cosmologically significant sources mainly between 0.1 Hz and 10 Hz, and thus to open a new window for gravitational wave astronomy and for the universe. DECIGO will consist of three drag-free spacecraft arranged in an equilateral triangle with 1000 km arm lengths, whose relative displacements are measured by a differential Fabry-Perot interferometer; four units of such triangular Fabry-Perot interferometers are arranged in heliocentric orbit around the sun. DECIGO is a very ambitious mission: we plan to launch DECIGO in the era of the 2030s, after the precursor satellite mission B-DECIGO. B-DECIGO is essentially a smaller version of DECIGO: it consists of three spacecraft arranged in a triangle with 100 km arm lengths, orbiting 2000 km above the surface of the earth. It is hoped that the launch date will be in the late 2020s.

Introduction

The first direct detection of a gravitational wave (GW) was made by aLIGO [1], and the first results of LISA Pathfinder demonstrated a surprisingly low-noise level of free fall, exceeding pre-launch expectations [2]. Gravitational wave physics and astronomy have thus taken the step to the next stage. As in astronomy using electromagnetic waves, gravitational waves are expected to span a wide frequency spectrum across various bands. Terrestrial detectors such as aLIGO, aVIRGO, GEO, KAGRA, and ET are most sensitive in the audio frequency band, around 10 Hz to 1 kHz; the space-borne detector LISA, on the other hand, is most sensitive in the low-frequency region around mHz. At still lower frequencies, pulsar timing arrays such as PPTA and measurements of the polarized CMB (Cosmic Microwave Background) are also interesting options for accessing unique information about physics and the universe. The planned Japanese space gravitational wave antenna DECIGO may provide another new way to observe the universe, because only DECIGO will be sensitive to deci-Hz gravitational wave signals.

DECIGO

DECIGO (DECi-hertz Interferometer Gravitational wave Observatory) is the planned Japanese space gravitational wave antenna [3,4,5], originally proposed by Seto, Kawamura and Nakamura [6] to measure the acceleration of the universe through GWs from NS-NS binaries at z ∼ 1. DECIGO aims to observe gravitational waves from astrophysically and cosmologically significant sources mainly between 0.1 and 10 Hz, and thus to open a new window of observation for gravitational wave astronomy and for the universe. The scope of DECIGO is to bridge (Fig. 2) the frequency gap between the LISA [7] band and the band of terrestrial detectors such as advanced LIGO, advanced VIRGO, GEO, and KAGRA. The major advantage of DECIGO in specializing in this frequency band is that the expected confusion-limiting noise level, caused by unresolvable gravitational wave signals from many compact binaries such as white dwarf binaries in our Galaxy, is quite low above 0.1 Hz [8]; there is therefore a potentially extremely deep observational window in this band. Moreover, since DECIGO will be sensitive in the frequency range between the LISA band and the terrestrial band, it can serve as a follow-up for LISA by observing inspiraling sources that have moved above the LISA band, or as a predictor for terrestrial detectors by observing inspiraling sources that have not yet moved into the terrestrial band.
Pre-conceptual design The pre-conceptual design of DECIGO consists of three drag-free spacecraft that keep a triangular configuration using a formation-flying technique. The separation between spacecraft is designed to be 1,000 km, and their relative displacements are measured by a differential Fabry-Perot (FP) interferometer (Fig. 1). The laser source is planned to be a frequency-doubled Yb:YAG laser at λ = 515 nm with an output power of 10 W. Each mirror has a mass of 100 kg and a diameter of 1 m, with low-loss high-reflectivity coatings, which enables the finesse of the FP cavity to reach 10 with the green light. Three sets of such interferometers, sharing the mirrors as arm cavities, comprise one cluster of DECIGO. As shown in Fig. 1, four clusters, located separately in heliocentric orbit with two of them nearly at the same position, form the DECIGO constellation.

Sensitivity goal and science The target sensitivity of DECIGO, shown in Fig. 2, is designed to be limited by quantum noise over the whole frequency band: by radiation pressure noise below 0.15 Hz and by shot noise above 0.15 Hz. As shown in Fig. 2, the sensitivity goal is better than 10^-23 in strain between 0.1 and 10 Hz. To achieve this sensitivity, all practical noise sources must be suppressed well below this level, which imposes more stringent requirements than for LISA on some DECIGO subsystems, especially the acceleration noise of the mirrors and the frequency noise of the laser light.

Roadmap DECIGO is expected to be launched in the 2030s; before that, we plan to launch a precursor satellite, B-DECIGO. The major objective of B-DECIGO is to detect astrophysical GW signals and extract scientific results, in addition to demonstrating the key technologies required for DECIGO, just as LISA Pathfinder [2] did for LISA. The technical objectives of B-DECIGO are the demonstration of accurate formation flying, precision laser metrology with a long-baseline FP cavity, and drag-free control of multiple spacecraft, building on several fundamental precision-measurement technologies: drag-free control of the spacecraft, a stabilized laser system in space, precision laser metrology in space, and a test-mass lock mechanism. B-DECIGO is basically a small version of DECIGO but will have 100-km-scale FP cavities; it is therefore expected to have reasonable sensitivity to detect gravitational waves even with minimum specifications. We hope that it will be launched in the late 2020s.

B-DECIGO B-DECIGO is the re-defined space GW antenna mission serving as the first precursor satellite for DECIGO, succeeding the former Pre-DECIGO [11]. The objectives of B-DECIGO are, scientifically, to detect gravitational waves from promising astrophysical sources with modest optical parameters and, technologically, to demonstrate formation flight with three spacecraft, one of the key technologies for DECIGO. B-DECIGO is designed with a sensitivity about a factor of 10 more conservative than DECIGO over the whole frequency band. Accordingly, the optical parameters and noise requirements of B-DECIGO are less stringent than those of DECIGO, although the required acceleration noise level is still challenging compared with LISA Pathfinder and LISA.
B-DECIGO consists of three drag-free spacecraft containing freely falling mirrors, whose relative displacements are measured by a differential FP Michelson interferometer.

Pre-conceptual design Each spacecraft holds a pair of test-mass mirrors, 30 kg in mass and 30 cm in diameter, freely floating in spacetime. One test-mass mirror in one spacecraft and another test-mass mirror in a neighboring spacecraft are connected by a laser beam, forming a 100 km Fabry-Perot cavity with a finesse of 100, resulting in a cavity cut-off frequency of around 20 Hz. The three spacecraft are thus connected by three 100 km Fabry-Perot cavities, maintaining a 100 km triangular formation flight. A frequency-doubled, iodine-stabilized Yb:fiber DFB laser with a wavelength of 515 nm will be used as the light source. The laser light from the Yb:fiber DFB laser, with a wavelength of 1030 nm, is amplified with a YDFA (Yb-doped fiber amplifier) and then frequency-doubled with a nonlinear crystal to provide enough power to illuminate each Fabry-Perot cavity with 1 W. The frequency-doubled green light is then frequency-stabilized with reference to the saturated absorption of iodine molecules, so that its frequency noise contribution is low enough in the observational band of B-DECIGO. In order to keep the test masses freely floating in inertial spacetime as probes of GWs, and to avoid external force fluctuations on the test masses caused by unwanted coupling to spacecraft motion, each spacecraft is drag-free controlled with the pair of test-mass mirrors inside it as an inertial reference. The position and attitude of the spacecraft with respect to these test masses are drag-free controlled by feeding error signals back to the spacecraft. The formation flight of the three spacecraft, keeping the triangular shape, is realized by continuous feedback control: the laser interferometers measure deviations of the cavity lengths, which are fed back to the positions of the test-mass mirrors to maintain the lengths of the cavities. Since each spacecraft follows the test-mass positions inside it through the drag-free control scheme, an exact 100-km triangular formation is realized as a result.

One candidate orbit for B-DECIGO is a LISA-like cart-wheel orbit around the earth (Fig. 3). If the altitude of the spacecraft formation and the inclination angle of the orbital plane are selected properly, the reference orbit (the orbit of the center of mass of the three spacecraft) can be a sun-synchronous dawn-dusk circular orbit. In addition, by selecting an altitude between 2,000 and 3,000 km, the dawn-dusk orbit can be designed so that the spacecraft experience no eclipse, which is beneficial both for avoiding thermal shock and drift in the spacecraft and for keeping a continuous power supply from the sun. The orbital period of the formation-flight interferometer unit around the earth is about 124 min for an altitude of 2,000 km. Assuming this orbit, the orbital motion of the formation and the earth's annual motion around the sun make the antenna pattern of B-DECIGO for observing GWs change on a time scale of about 100 min. Owing to this variation of the antenna pattern, the parameter estimation accuracy for GW sources, such as sky localization, is expected to be improved.

Sensitivity goal and science Using the essential parameters above, the target sensitivity of B-DECIGO is set to 2 × 10^-23 Hz^-1/2 in strain in the current design (Fig. 4).
The noise curve is basically limited by the fundamental noise sources, namely the optical quantum noises of the interferometer: laser shot noise and radiation pressure noise in the high- and low-frequency bands, respectively. The external force noise level on the test-mass mirrors is set so as not to exceed these optical quantum noise levels, which places a critical requirement on it of 1 × 10^-16 N/Hz^1/2. With this sensitivity, mergers of binary black holes (BBHs) at z = 10 will be within the observable range of B-DECIGO, assuming optimal direction and polarization of the source and a detection SNR of 8.
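As a quick plausibility check of the orbit quoted above, the period of a circular earth orbit follows from Kepler's third law, T = 2π√(a³/μ). A minimal sketch with standard constants; the small discrepancy from the quoted ~124 min comes from rounding of the altitude and earth parameters:

```python
import math

MU_EARTH = 3.986004e5   # km^3/s^2, standard gravitational parameter of Earth
R_EARTH = 6378.0        # km, equatorial radius

def orbital_period_min(altitude_km):
    """Period of a circular orbit at the given altitude (Kepler's third law)."""
    a = R_EARTH + altitude_km                       # semi-major axis, km
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

print(orbital_period_min(2000.0))  # ~127 min, close to the quoted ~124 min
```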
2,335.2
2017-06-01T00:00:00.000
[ "Physics" ]
State-Feedback H∞ Control for LPV Systems Using a T-S Fuzzy Linearization Approach

This paper discusses the linear parameter varying (LPV) gain scheduling control problem based on the Takagi-Sugeno (T-S) fuzzy linearization approach. Firstly, the affine nonlinear parameter varying (ANPV) description of a class of nonlinear dynamic processes is defined; that is, at any fixed scheduling parameter, the corresponding system is affine nonlinear as usual. For such a class of ANPV systems, a developed T-S fuzzy modeling procedure is proposed to deal with the nonlinearity, instead of the traditional Jacobian linearization approach. More concretely, an evaluation system for the approximation ability of the newly developed T-S fuzzy modeling procedure is established. Consequently, an LPV T-S fuzzy system is obtained which can approximate the ANPV system with the required accuracy. Secondly, the notion of a piecewise parameter-dependent Lyapunov function is introduced, and then the stabilization problem and the state-feedback H∞ control problem of the LPV T-S fuzzy system are studied. The sufficient conditions are given in linear matrix inequality (LMI) form. Finally, a numerical example is provided to demonstrate the applicability of the above approaches. The simulation results show the high approximation accuracy of the LPV T-S fuzzy system to the ANPV system and the effectiveness of the LPV T-S fuzzy gain scheduling control.

Introduction It is well known that gain scheduling control is an efficient solution for the control of nonlinear dynamic processes [1,2]. In particular, thanks to its advantage in carrying forward stability and dynamic performance analysis, LPV gain scheduling control has been widely studied [3-12]. Currently, there are mainly two ways to realize LPV gain scheduling control: the linear fractional transformation (LFT) gain scheduling technique based on a scaled version of the small gain theorem [5-8], and the quadratic gain scheduling technique based on Lyapunov theory [9-12]. However, both adopt the Jacobian linearization approach to deal with the nonlinearity around each steady operating point, which means that the number of scheduling parameters equals the number of variables relevant to the nonlinearity. As a result, too many scheduling parameters are introduced, and too much computational burden is incurred in the control design; that is, when the gridding of the scheduling parameters and the parameterization of the decision variables in the LMIs are executed, the number of LMIs increases rapidly. Besides, due to the local linearization around each steady operating point, the control performance is restricted to a local region, which becomes a weakness when the controlled variables vary widely; it is also hard to guarantee system stability and control performance during the scheduling process. Hence, further study is required of an appropriate linearization approach for the ANPV system that can both efficiently reduce the number of scheduling parameters and enlarge the linearization range, so as to extend the effective region of the control performance.

On the other hand, the T-S fuzzy model was first proposed by Takagi and Sugeno in 1985 [13] and developed by Sugeno and Kang [14]; those models were data-based. To linearize the nonlinearity, a model-based T-S fuzzy modeling procedure was proposed by Kawamoto et al.
[15] via the nonlinear sector method, which is an exact modeling procedure with constant consequent parts and nonlinear membership functions. Then, Kluska [16] exploited the local approximation method to establish T-S fuzzy models with homogeneous linear consequent parts. In order to evaluate the approximation ability, Abonyi and Babuska [17] discussed ways to analyze the relation between the local and global approximation performance of the T-S fuzzy model. Subsequently, Teixeira and Zak [18] gave a T-S fuzzy modeling procedure with homogeneous linear consequent parts by solving a convex optimization problem, and Tanaka and Wang [19] obtained similar results through a uniform partition of the input space of the premise variables and proved the universal approximation ability. However, as far as we know, T-S fuzzy linearization of the ANPV system has rarely been studied in the literature.

If T-S fuzzy linearization is applied to the ANPV system, the choice of the scheduling parameters and the premise variables can be separated: the nonlinearity in the scheduling parameters can remain, while that in the premise variables is linearized within their varying regions. As a result, the number of scheduling parameters can be reduced. Meanwhile, because the T-S fuzzy linearization range can be extended to any chosen region, nonlinearity with widely varying state variables can be handled for nonlinear dynamic processes. For these reasons, it is valuable to study the T-S fuzzy modeling procedure for the ANPV system. Furthermore, in order to efficiently reduce the number of T-S fuzzy rules, a non-uniform partition method is utilized, and an evaluation system is established to adjust the approximation performance. The ANPV system can then be linearized on demand, and an LPV T-S fuzzy system with the required accuracy can be obtained. However, a challenging control problem for such a class of systems is correspondingly formulated. For T-S fuzzy control, piecewise Lyapunov functions have been introduced to improve solvability compared with approaches based on a common Lyapunov function [20-22]. In addition, considering the approximation error with respect to the ANPV system, T-S fuzzy H∞ control has also been studied [23-26]. Hence, T-S fuzzy control needs to be generalized to the LPV system while improving solvability by introducing the notion of a piecewise Lyapunov function. In addition, for the sake of improving the control performance, the approximation error of the LPV T-S fuzzy system with respect to the ANPV system should be considered.

In this paper, the T-S fuzzy modeling procedure with homogeneous linear consequent parts is developed for the ANPV system, with a non-uniform partition method utilized to reduce the number of T-S fuzzy rules. To adjust the approximation accuracy, an evaluation system for the approximation performance of the LPV T-S fuzzy system is established. Then, in order to improve the solvability of the LPV T-S fuzzy gain scheduling control, the notion of a piecewise parameter-dependent Lyapunov function is introduced, and the LPV T-S fuzzy gain scheduling control design based on piecewise parameter-dependent Lyapunov functions is studied. Meanwhile, taking the approximation error with respect to the ANPV system into consideration, sufficient conditions for the stabilization problem and the state-feedback H∞ control problem of the LPV T-S fuzzy system are given in LMI form. More concretely, the main contributions of the paper are listed as follows.
(i) The T-S fuzzy approximation with required accuracy for the ANPV system is studied, so that the nonlinearity of the ANPV system can be dealt with and an LPV T-S fuzzy system with the required accuracy can be obtained. (ii) The stabilization problem of the LPV T-S fuzzy system is studied by introducing piecewise parameter-dependent Lyapunov functions, and the sufficient conditions are given in LMI form. (iii) Considering the approximation error with respect to the ANPV system, the state-feedback H∞ control problem of the LPV T-S fuzzy system is studied based on piecewise parameter-dependent Lyapunov functions; the sufficient conditions are given in both Riccati inequality form and LMI form.

The rest of the paper is organized as follows. In Section 2, the developed T-S fuzzy modeling procedure utilizing the non-uniform partition method is proposed for the ANPV system, and the evaluation system for the approximation performance of the LPV T-S fuzzy system with homogeneous consequent parts is established; the LPV T-S fuzzy system with required accuracy is then obtained by applying these methods to the ANPV system. In Section 3, the notion of a piecewise parameter-dependent Lyapunov function is introduced and the stabilization problem of the LPV T-S fuzzy system is studied. In Section 4, the state-feedback H∞ control problem of the LPV T-S fuzzy system is studied and the sufficient conditions are given in both Riccati inequality form and LMI form. In Section 5, a numerical example is provided to demonstrate the applicability of the above approaches. Section 6 concludes the paper.

Establishment of LPV T-S Fuzzy System In this section, the ANPV description of a class of nonlinear dynamic processes is defined, a newly developed T-S fuzzy modeling procedure is proposed for it, and the corresponding evaluation system for the approximation performance is established. Both are combined to deal with the nonlinearity of the ANPV system and obtain an LPV T-S fuzzy system with the required accuracy.

For a class of nonlinear dynamic processes, if their nonlinear descriptions are available as mathematical models, the ANPV system can be obtained by choosing the proper variables as scheduling parameters. Its general form can be defined as in (1), with state equation ẋ = f(x, ρ) + g₁(x, ρ)u + g₂(x, ρ)w, where ρ is the vector of scheduling parameters, which may be system parameters, external inputs, or other parameters; x is the column vector of state variables of dimension n; and f(x, ρ), g₁(x, ρ) and g₂(x, ρ) are nonlinear functions of x and ρ. Then, following the approach in [18], the T-S fuzzy modeling procedure for (1) can be developed to linearize the nonlinearity of the ANPV system. Considering (1), around the zero state the Jacobian linearization approach can easily be executed to obtain a homogeneous linear model in a local region; for nonzero states, however, it is unavailable. Assume a nonzero operating state x₀ⁱ, which can be a steady or a transient one and corresponds to the ith fuzzy rule, i = 1, 2, ..., r. It should be noted that x₀ⁱ may include part or all of the state variables, and it is obtained from the partition of the input space of the premise variables. Firstly, we establish the homogeneous parameter-dependent linear model in the vicinity of x₀ⁱ of the form (2):
ẋ = A(ρ)x + B₁(ρ)u + B₂(ρ)w, (2) where u and w are arbitrarily varying inputs and A(ρ), B₁(ρ), B₂(ρ), ... are parameter-dependent matrices. Take the first equation of (1), ẋ = f(x, ρ) + g₁(x, ρ)u + g₂(x, ρ)w, as an example. Since f(x, ρ), g₁(x, ρ) and g₂(x, ρ) are nonlinear functions of the operating state at any ρ, whereas the matrices A(ρ), B₁(ρ) and B₂(ρ) depend only on ρ, and because u and w vary arbitrarily, the input matrices are uniquely determined at the operating state; the remaining condition (3) is the matching of f(x, ρ) with A(ρ)x. Define a_j(ρ)ᵀ, a row vector of n dimensions, as the jth row of the matrix A(ρ). Then condition (3) can be equivalently represented row-wise as f_j(x, ρ) = a_j(ρ)ᵀ x, (4) where f_j(x, ρ) is the jth row of f(x, ρ). Additionally, assume f(0, ρ) = 0 and f(x, ρ) ∈ C¹. Expanding f_j(x, ρ) of (4) around the operating state x₀ⁱ and neglecting second- and higher-order terms, we get f_j(x, ρ) ≈ f_j(x₀ⁱ, ρ) + ∇f_j(x₀ⁱ, ρ)ᵀ (x − x₀ⁱ), (6) where ∇f_j(x₀ⁱ, ρ) is the gradient (a column vector) of f_j(x, ρ) evaluated at x₀ⁱ. Substituting (5) into (6) yields a condition (7) on the coefficient vector a_j(ρ) that must hold for x arbitrarily close to x₀ⁱ. From (7), a_j(ρ) needs to be estimated, and in the vicinity of x₀ⁱ an optimization problem can be constructed by defining an optimal index to evaluate the estimation. Here, the notation Cᵏ represents the set of functions that are k-times continuously differentiable on the domain of the input variables.

Lemma 1. Consider the following constrained optimization problem: minimize J(a), where J : Rⁿ → R, J ∈ C¹, is a convex function on the feasible set Ω = {a ∈ Rⁿ : h(a) = 0}, with h : Rⁿ → R and h ∈ C¹. Assume that Ω is convex and that there exist a* ∈ Ω and λ ∈ R such that ∇J(a*) + λ∇h(a*) = 0, where λ is the Lagrange multiplier. Then a* is the optimal solution of J over Ω; this is a sufficient condition for the convex optimization problem [27]. In particular, when h(a) is affine linear, Ω is convex.

Considering the estimation of a_j(ρ) around x₀ⁱ ≠ 0 in (7), the optimal index for f_j(x, ρ) corresponding to the ith fuzzy rule can be defined as the squared deviation between a_j(ρ) and the gradient ∇f_j(x₀ⁱ, ρ) (9). The optimization problem (10) is then to minimize this index, with the interpolation condition (5), f_j(x₀ⁱ, ρ) = a_j(ρ)ᵀ x₀ⁱ, fulfilled as an equality constraint. The objective function in (10) is a quadratic function of the unknown coefficient vector a_j(ρ), so J(a_j(ρ)) is convex; the equality constraint h(a_j(ρ)) is linear in a_j(ρ), so the feasible set Ω = {a_j(ρ) ∈ Rⁿ : h(a_j(ρ)) = 0} is convex. Therefore, the problem is a convex optimization problem and can be solved according to Lemma 1. Computing the derivatives of J(a_j(ρ)) and h(a_j(ρ)) with respect to a_j(ρ) and substituting them into (9), the Lagrange multiplier λ_j(ρ) can be determined from the equality constraint (5); as a result, we obtain the column vector a_j(ρ) in closed form (16). The ith T-S fuzzy rule, corresponding to the operating state x₀ⁱ, can then be chosen as

Rule i: IF x₁ is M₁ⁱ and ... and x_p is M_pⁱ, THEN f̂_j(x, ρ) = a_jⁱ(ρ)ᵀ x,

where f̂_j(x, ρ) represents the ith local estimate of the jth row of f(x, ρ), a_jⁱ(ρ) is the corresponding coefficient (column) vector of the ith fuzzy rule, and M_kⁱ(x_k) represents the fuzzy set and membership function of the kth premise variable x_k in the ith fuzzy rule, k = 1, 2, ..., p.
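The closed form referred to in (16) is, in the style of Teixeira and Zak [18], the solution of a gradient-matching least-squares problem under an interpolation constraint. A minimal numerical sketch, assuming NumPy; the test function and operating point below are illustrative, not taken from the paper:

```python
import numpy as np

def local_consequent(f, grad_f, x0):
    """Local consequent vector in the Teixeira-Zak style (sketch).

    Solves  min_a ||a - grad_f(x0)||^2  s.t.  a^T x0 = f(x0),
    whose closed-form solution is
        a = grad_f(x0) + (f(x0) - grad_f(x0)^T x0) / ||x0||^2 * x0.
    """
    g = grad_f(x0)
    return g + (f(x0) - g @ x0) / (x0 @ x0) * x0

# Toy example: f(x) = x1*x2, linearized around x0 = (1, 2).
f = lambda x: x[0] * x[1]
grad_f = lambda x: np.array([x[1], x[0]])
x0 = np.array([1.0, 2.0])
a = local_consequent(f, grad_f, x0)
print(a, a @ x0, f(x0))  # a^T x0 matches f(x0) exactly
```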
The activation degree of the ith fuzzy rule under a group of premise variables is computed from the membership functions of the premise variables. The weight coefficients w_i(x) are computed and the center-of-gravity method is used for defuzzification, which fulfills w_i(x) ≥ 0 and Σ_{i=1}^{r} w_i(x) = 1. Finally, the overall T-S fuzzy model can be obtained as the weighted sum f̂_j(x, ρ) = Σ_{i=1}^{r} w_i(x) a_jⁱ(ρ)ᵀ x (21). Usually, based on (21), the evaluation of the overall approximation performance for the nonlinear function f_j(x, ρ) is defined as the approximation error between f_j(x, ρ) and f̂_j(x, ρ) (22).

Subsequently, a non-uniform partition method is utilized in the above T-S fuzzy modeling procedure, which can efficiently reduce the number of T-S fuzzy rules. Meanwhile, define the subset Θ₀ = {x : |x_k| < δ₀} on Θ, where δ₀ is a predefined positive scalar, and the zero-state coefficient vector a_j⁰(ρ) = ∂f_j(x, ρ)/∂x evaluated at the zero state x = 0. The nonzero states are represented by partitioning each premise variable with a positive scalar step, indexed by a serial partition number, and the corresponding subregions Θ_{l₁l₂⋯} are defined on Θ (as shown in Figure 1). The coefficient vector in (16) can then be represented on each subregion, and the T-S fuzzy rules can be chosen as follows: Rule 0 for the subset Θ₀ and Rule l₁l₂⋯ for the nonzero subregions. For Rule 0, the activation degree w₀(x) is 1 inside Θ₀ and 0 outside Θ₀; the activation degrees of the other rules under a group of premise variables are computed from the membership functions given for (26). The estimate f̂_j(x, ρ) can then be written as the corresponding weighted sum.

Here, the notion of a nonlinear measure along x_k in a partition region can be defined, quantifying how strongly f_j deviates from its local linear estimate over that region. Prescribing a bound on this measure, the proper length of each partition region can be chosen to guarantee that the measure does not exceed the bound; the lengths of the partition regions are thus determined.

So far, the above methods show how to establish the T-S fuzzy system; its approximation performance, however, is still unknown and is evaluated next. Combined with the above T-S fuzzy modeling procedure, a T-S fuzzy system with the required approximation accuracy can be obtained. Using (24), the approximation performance of the T-S fuzzy system can be evaluated over each subregion Θ_{l₁l₂⋯}: because x ∈ Θ_{l₁l₂⋯}, the maximum distance between x and any vertex point of the subregion is bounded, which in turn bounds the approximation error. As a result, the evaluation system for the approximation performance of the LPV T-S fuzzy system is established. It provides a useful way to attain the required accuracy of the T-S fuzzy approximation, and the nonlinearity in (1) can then be dealt with completely. Note that at each known operating state x₀ⁱ, one fuzzy rule is established, which approximates the local dynamics around x₀ⁱ; the ith T-S fuzzy rule can be represented as Rule i (34).

Remark 2. It is noted that the T-S fuzzy modeling procedure utilizes a non-uniform partition method to reduce the number of T-S fuzzy rules while guaranteeing the approximation accuracy. The evaluation system for the approximation performance of the LPV T-S fuzzy system can be used to balance the number of T-S fuzzy rules against the approximation performance of the LPV T-S fuzzy system.
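The blending step above (rule activations, normalization, weighted sum of local models) can be illustrated compactly. A minimal sketch assuming NumPy, with made-up triangular memberships and local consequent matrices:

```python
import numpy as np

def ts_blend(x, rules):
    """Center-of-gravity T-S inference (sketch, hypothetical helper).

    rules: list of (membership, A) pairs, where membership(x) >= 0 is the
    rule activation and A is the local consequent matrix (f_i(x) = A @ x).
    Returns the normalized weighted sum  sum_i w_i(x) * A_i @ x.
    Assumes at least one rule is active at x.
    """
    acts = np.array([mu(x) for mu, _ in rules])
    w = acts / acts.sum()                 # normalized weights, sum to 1
    return sum(wi * (A @ x) for wi, (_, A) in zip(w, rules))

# Toy example: two local models blended by triangular memberships on x[0].
tri = lambda c: (lambda x: max(0.0, 1.0 - abs(x[0] - c)))
rules = [(tri(0.0), np.array([[0.0, 1.0], [-1.0, -1.0]])),
         (tri(1.0), np.array([[0.0, 1.0], [-2.0, -1.5]]))]
print(ts_blend(np.array([0.4, 0.1]), rules))
```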
Piecewise Parameter-Dependent Quadratic Stabilization of the LPV T-S Fuzzy System The LPV T-S fuzzy system (34) represents a class of complex continuous-time systems in a novel form, having both fuzzy inference and locally analytic linear models. In this section, the notion of a piecewise parameter-dependent Lyapunov function is introduced for the stabilization problem of the LPV T-S fuzzy system. Firstly, the rth subspace S_r in the state space is defined from the partition structure (35)-(36). Note that two subspaces are generated around the ith T-S fuzzy rule for a single state x_k; a schematic for the state is shown in Figure 2. The relation between the rth subspace and the ith T-S fuzzy rule then follows. The overall model of the LPV T-S system in the rth subspace can be represented as (37) for x ∈ S_r, where the index set collects the membership functions that are nonzero and smaller than w_i(x) around the ith rule in the rth subspace. In the overall model, Ã(x, ρ), B̃(x, ρ), C̃(x, ρ) and D̃(x, ρ) represent the interpolation terms produced by interactions between the ith rule and the other rules in the rth subspace.

In order to find a piecewise parameter-dependent Lyapunov function which is continuous across the rth subspace boundary at a fixed ρ, a constant matrix F_r is constructed from the structure information of the rth subspace, fulfilling F_r x = F_s x for x ∈ S_r ∩ S_s, r, s = 1, 2, ..., N. (39) The piecewise parameter-dependent Lyapunov function candidates that are continuous across the subspace boundaries can then be parameterized as V_r(x, ρ) = xᵀ P_r(ρ) x with P_r(ρ) = F_rᵀ T(ρ) F_r, (40)-(41), where T(ρ) is a symmetric matrix that characterizes all the P_r(ρ) together.

In order to carry forward the control design, upper bounds for the interpolation terms in (37) can be defined as in (42). Since all the information in the interpolation terms of (42) is a priori knowledge, there are many ways to acquire these upper bounds; one simple way is to evaluate them at the state where the membership value equals 0.5.

Definition 3. On the compact set Ψ ⊂ Rˢ, one has finite non-negative numbers {ν_k}, k = 1, ..., s. The bounded-variation set of scheduling parameters can then be defined as the set of trajectories ρ(·), piecewise continuous and once differentiable, whose rates satisfy |ρ̇_k| ≤ ν_k.

Proof. Define the Lyapunov function V(x, ρ) as in (40). From (41), and using (50) and (51), there exist constants α > 0 and β > 0 such that α‖x‖² ≤ V(x, ρ) ≤ β‖x‖². Thus, using conditions (47) and (49), V(x, ρ) is positive and continuous across the subspace boundaries. If it can be guaranteed that the system (37) is asymptotically stable in each subspace, global asymptotic stability of the system (34) is attained; next, we demonstrate that V(x, ρ) guarantees asymptotic stability in each subspace. If there exist a constant ε > 0 and matrices with appropriate dimensions, the matrix inequality (53) can be obtained. From (53), define the parameter-dependent positive scalar ε(ρ); in many cases, ε(ρ) can be set to a constant value, and the subscripts of all such scalars are marked only to show their meaning. Then, from (53), the decay condition (54) follows for the system (37); besides, via the Schur complement lemma, (55) follows from (48). From (54) and (55), the derivative of V along the trajectories is negative, which implies that V decays at an exponential rate; moreover, there exists a constant κ > 0 bounding the state trajectory, which completes the proof of this theorem.
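At a frozen scheduling parameter, the search for a piecewise quadratic Lyapunov function of the parameterized form P_r = F_rᵀ T F_r (with continuity across boundaries built in by construction) reduces to an LMI feasibility problem. A minimal sketch, assuming cvxpy with an SDP-capable solver such as SCS; the local matrices A_r and the continuity matrices F_r are toy placeholders, not the paper's data:

```python
import cvxpy as cp
import numpy as np

def sym(M):
    """Symmetrize an expression so the SDP constraint is well posed."""
    return (M + M.T) / 2

A = [np.array([[0.0, 1.0], [-1.0, -1.0]]),
     np.array([[0.0, 1.0], [-2.0, -1.5]])]   # local closed-loop matrices
F = [np.eye(2), np.eye(2)]                   # continuity matrices (toy)

T = cp.Variable((2, 2), symmetric=True)      # shared parameterization
eps, cons = 1e-3, []
for Ar, Fr in zip(A, F):
    Pr = Fr.T @ T @ Fr                       # P_r = F_r^T T F_r
    cons += [sym(Pr) >> eps * np.eye(2),                      # V_r > 0
             sym(Ar.T @ Pr + Pr @ Ar) << -eps * np.eye(2)]    # decay in r
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print(prob.status)
```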
Remark 6. It is noted that the sufficient conditions in LMI form are easy to solve. At a fixed ρ, conditions (47) and (49) guarantee that the Lyapunov function is positive and piecewise continuous across each subspace boundary, and the Lyapunov functions solved from condition (48) guarantee that the system (37) is asymptotically stable in each subspace; together, the conditions guarantee that the system (34) is globally asymptotically stable. Besides, in many cases of T-S fuzzy control based on a common Lyapunov function, it is hard to find the common Lyapunov function, or such a function does not exist at all; the approach above therefore improves the solvability of the LPV T-S fuzzy gain scheduling control.

State-Feedback H∞ Control Design of the LPV T-S Fuzzy System In this section, the state-feedback H∞ control of the LPV T-S fuzzy system is studied. Considering the approximation error, a controller synthesis approach with H∞ performance is presented. In the rth subspace, the state-feedback controller can be represented as u = K_r(ρ)x (60). Substituting (60) into (34), the closed-loop system can be obtained as (61); in the rth subspace, the system (61) can be represented as (63). When substituting (37) into (1) and considering the approximation error, the closed-loop nonlinear system in the rth subspace can be represented as (66), according to (37) and (65).

Remark 10. Based on Theorem 7, conveniently solvable conditions are obtained in LMI form in Theorem 9. Some parameter-dependent variables, such as the scalars ε(ρ), can be set to constant values in many cases, by which the computation can be further simplified. The subscripts of all such scalars are marked clearly in Theorems 5, 7 and 9 in order to show their meaning.

Remark 11. From Theorem 9, the sufficient conditions in LMI form have been given. Although solving the LPV T-S fuzzy gain scheduling control design differs from solving the existing LPV gain scheduling control design, most of the procedures are similar. According to [9,28,29], the solution of the existing LPV gain scheduling control design has been given; its main procedures are as follows. Firstly, depending on the structure of the parameter dependence, a gridding process over the scheduling parameter may or may not be executed; in particular, for affine-linear parameter dependence, the multiconvexity property can be used to guarantee the continuity properties of the continuously scheduled variables instead of the gridding process, and the number of LMIs can be reduced, which simplifies the computation. Secondly, the parameterization of the decision variables in the LMIs needs to be executed. Through these two steps, the infinite family of LMIs with infinitely many decision variables is reduced to a finite one; by solving the LMIs, the piecewise parameter-dependent functions can be obtained and the controller can then be determined.

In the numerical example, the parameter-dependent Lyapunov matrices and the decision variables X_r(ρ) and Y_r(ρ) in the rth subspace are parameterized affinely in ρ. Besides, the gridding process is avoided when solving for the parameter-dependent Lyapunov functions in the rth subspace: utilizing the multiconvexity property in [30], it suffices to validate the LMIs at the vertexes of the parameter set. Here, the two vertexes at ρ = −2 and ρ = 2 are tested, respectively. Following the design algorithm with γ = 0.9, and with the parameter-dependent scalar fixed at 10, the solution to the LMIs can be obtained. The state-feedback controller is then determined as u = K_r(ρ)x, where K_r(ρ) = Y_r(ρ) X_r(ρ)⁻¹ and r = 1, 2, 3, 4.
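The change of variables K = Y X⁻¹ used above turns state-feedback H∞ synthesis into an LMI at each frozen grid point, via the standard bounded-real-lemma formulation. The following is a sketch under that assumption (cvxpy with SCS assumed; the plant matrices and γ are illustrative placeholders, not the paper's example):

```python
import cvxpy as cp
import numpy as np

A   = np.array([[0.0, 1.0], [1.0, -1.0]])
B1  = np.array([[0.0], [1.0]])         # disturbance input w
B2  = np.array([[0.0], [1.0]])         # control input u
C1  = np.array([[1.0, 0.0]])           # performance output z
D11 = np.zeros((1, 1)); D12 = np.array([[0.1]])
gam = 2.0                              # candidate H-infinity level

X = cp.Variable((2, 2), symmetric=True)
Y = cp.Variable((1, 2))
AX = A @ X + B2 @ Y
M = cp.bmat([[AX + AX.T,        B1,               (C1 @ X + D12 @ Y).T],
             [B1.T,             -gam * np.eye(1), D11.T],
             [C1 @ X + D12 @ Y, D11,              -gam * np.eye(1)]])
M = (M + M.T) / 2                      # enforce exact symmetry for the SDP
prob = cp.Problem(cp.Minimize(0),
                  [X >> 1e-4 * np.eye(2), M << -1e-6 * np.eye(4)])
prob.solve(solver=cp.SCS)
if prob.status in ("optimal", "optimal_inaccurate"):
    K = Y.value @ np.linalg.inv(X.value)   # recovered state-feedback gain
    print("gamma feasible, K =", K)
else:
    print("infeasible at this gamma")
```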
Because the effective performance region of the controller with respect to x₁ is set as [−1, 1], the scheduling values of ρ are chosen to vary linearly from 0.3 to 1.3 with an interval of 0.1 in order to validate the effectiveness of the controller. The state responses of x₁ and x₂ and the evaluation response of z are shown in Figures 4, 5 and 6, respectively. Meanwhile, in order to validate the effectiveness of the controller during the scheduling process of ρ, ρ is scheduled from 0.5 to 1 at 0.5 seconds, which keeps the variation rate of ρ at 1; the responses of x₁, x₂ and z are shown in Figures 7, 8 and 9, respectively.

Figures 4, 5 and 6 demonstrate the effectiveness of the parameter-dependent controller: at the different scheduling points of ρ, the system is stabilized by the state-feedback controller with H∞ performance bound γ = 0.9. Figures 7, 8 and 9 validate the performance when switching at different points; the time-domain response curves illustrate the stability and control performance of the system under continuous scheduling. All the above results confirm the applicability of the sufficient conditions in Theorems 5, 7 and 9 for the control design of the LPV T-S fuzzy system.

Conclusion In order to improve the existing LPV gain scheduling control, the T-S fuzzy modeling procedure was adopted to deal with the nonlinearity, and the relevant control design was studied in this paper. For the ANPV description of nonlinear dynamic processes, a developed T-S fuzzy modeling procedure with homogeneous linear consequent parts was proposed to carry out the linearization. As a result, greater flexibility was obtained, by which the number of scheduling parameters could be reduced and the approximation accuracy could be raised in any chosen region. Moreover, the non-uniform partition method was utilized and the evaluation system for the approximation ability of the newly developed T-S fuzzy modeling procedure was established; in this way, the number of fuzzy rules could be decreased while guaranteeing the approximation performance, and the LPV T-S fuzzy system was obtained with the required accuracy. For the LPV T-S fuzzy gain scheduling control, the notion of a piecewise parameter-dependent Lyapunov function was introduced to improve solvability. Subsequently, the stabilization problem and the state-feedback H∞ control problem of the LPV T-S fuzzy system were studied over the operating subspaces X_r = {x | w_r(x) > w_j(x), j = 1, 2, ..., N, j ≠ r}, with shared boundaries {x | w_r(x) = w_j(x), j = 1, 2, ..., N, j ≠ r}, and the sufficient conditions were given in LMI form.
5,903.8
2013-09-30T00:00:00.000
[ "Engineering", "Computer Science" ]
EURASIP Journal on Wireless Communications and Networking 2005:3, 284–297 © 2005 Ioannis Dagres et al.

Flexible Radio: A Framework for Optimized Multimodal Operation via Dynamic Signal Design

The increasing need for multimodal terminals that adjust their configuration on the fly in order to meet the required quality of service (QoS) under various channel/system scenarios creates the need for flexible architectures capable of performing such actions. The paper focuses on the concept of flexible/reconfigurable radio systems, and especially on the elements of flexibility residing in the PHYsical layer (PHY). It introduces the various ways in which a reconfigurable transceiver can be used to provide multistandard capabilities, channel adaptivity, and user/service personalization. It describes specific tools developed within two IST projects aiming at such flexible transceiver architectures. Finally, a specific example of a mode-selection algorithmic architecture is presented which incorporates all the proposed tools and therefore illustrates a baseband flexibility mechanism.

INTRODUCTION The emergence of speech-based mobile communications in the mid 80s and their exponential growth during the 90s have paved the way for the rapid development of new wireless standards, capable of delivering much more advanced services to the customer. These services are, and will be, based on much higher bit rates than those provided by GSM, GPRS, and UMTS. The new services (video streaming, video broadcasting, high-speed Internet, etc.) will demand much higher bit rates/bandwidths and will have strict QoS requirements, such as the received BER and the end-to-end delay. The new and emerging standards (WiFi, WiMax, DVB-T, S-DMB, IEEE 802.20) will have to compete with those based on wired communications and overcome the barriers posed by the wireless medium in order to provide seamless coverage and uninterrupted communication.

Another emerging issue pertains to the equipment that will be required to handle the plethora of new standards. It is highly unlikely that the user will have a separate terminal available for each introduced standard. In many cases the use of a specific standard will be dictated by factors such as the user location (inside buildings, in a busy district, or in a suburb), the user speed (pedestrian, driving, in a high-speed train), and the required quality (delay sensitivity, frame error rate, etc.). There might also be cases in which it would be preferable for a service to be delivered over a number of different standards (e.g., WiFi for video, UMTS for voice), based on criteria related to the terminal capabilities (say, power consumption) and the network capacity constraints. Therefore, the user equipment has to keep up with the rapid development of new wireless standards by providing enough flexibility and agility to be easily upgradeable (with, perhaps, the modification/addition of specific software code but no other intervention in hardware).

We note that flexibility in the terminal concerns both the analog/front-end (RF/IF) and the digital (baseband) parts. The paper will focus on the issues pertaining to baseband flexibility and will discuss its interactions with the procedures taking place in the upper layers.
DEFINITIONS OF RADIO FLEXIBILITY The notion of flexibility in a radio context may be defined as an umbrella concept encompassing a set of non-overlapping (in a conceptual sense) postulates or properties, each of which must be defined individually and clearly for the overall definition to be complete: adaptivity, reconfigurability, modularity, scalability, and so on. The presence of any subset of such features would suffice to attribute the qualifying term flexible to any particular radio system [1]. These features are termed "non-overlapping" in the sense that the occurrence of any particular one does not predicate or force the occurrence of any other; for example, an adaptive system may or may not be reconfigurable. Additional concepts can also be added, such as "ease of use" or "seamless operation from the user's standpoint," as long as these attributes can be quantified and identified in a straightforward way, adding a new and independent dimension of flexibility. Reconfigurability, for instance, which is a popular dimension of flexibility, can be defined as the ability to rearrange various modules at a structural or architectural level by means of a non-quantifiable change in configuration. Adaptivity, on the other hand, can be defined as the radio system's response to changes by properly altering the numerical values of a set of parameters [2,3]. Thus, adaptive transmitted (Tx) power or adaptive bit loading in OFDM naturally falls in the latter category, whereas dynamically switching between, say, a turbo-coded and a convolutional-coded system in response to some stimulus (or information) fits better under the code-reconfigurability label, simply because that type of change implies a circuit-design change, not just a numerical parameter change. Furthermore, the collection of adaptive and reconfigurable transmitted-signal changes in response to some channel-state-information feedback may be termed dynamic signal design (DSD). Clearly, certain potential changes may fall in a grey area between definitions.

A primitive example of flexibility is the multiband operation of current mobile terminals, although this kind of operator-driven flexibility is not of great research interest from the physical-layer point of view. A more sophisticated version of such a flexible transceiver would be one that has the intelligence to autonomously identify the incumbent system configuration, together with the further ability to adjust to its circumstances and select its appropriate mode of operation accordingly. Software radio, for example, is meant to exploit reconfigurability and modularity to achieve flexibility. Other approaches may encompass other dimensions of flexibility, such as adaptivity in radio resource management techniques.
FLEXIBILITY SCENARIOS In response to the demand for increasingly flexible radio systems from industry (operators, service providers, equipment manufacturers, chip manufacturers, system integrators, etc.), government (military communication and signal-intelligence systems), and various user demands, the field has grown rapidly over the last twenty years or so (perhaps more in certain quarters) and has intrigued and activated R&D departments, academia, research centers, and funding agencies. It is now a rapidly growing field of inquiry, development, prototyping, and even fielding. Because of the enormity of the subject matter, it is hard to draw solid boundaries that exclusively envelop the scientific topic, but it is clear that terms such as SR, SDR, reconfigurable radio, and cognitive/intelligent/smart radio are at the center of this activity. Similar arguments would include work on flexible air-interface waveforms and/or generalized (and properly parameterized) descriptions and receptions thereof. Furthermore, an upward look (from the physical-layer "bottom" of the communication-model pyramid) reveals an ever-expanding role for research on networks that include reconfigurable topologies, flexible medium-access mechanisms, inter-layer optimization issues, agile spectrum allocation [4], and so on. In a sense, ad hoc radio networks fit the concept, as they do not require any rigid or fixed infrastructure. Similarly, looking "down" at the platform/circuit level [5], we see intense activity on flexible and malleable platforms and designs best suited for accommodating such flexibility. In other words, every component of the telecommunication and radio universe can be seen as currently participating in radio-flexibility R&D work, making the field exciting as well as difficult to describe completely.

Among the many factors that motivate the field, the most obvious is the need for multistandard, multimode operation, in view of the extreme proliferation of different, mutually incompatible radio standards around the globe (witness the "analog-to-digital-to-wideband-to-multicarrier" evolution of air interfaces across the cellular-system generations). The obvious desire for a single end device handling this multitude in a compatible way is at the root of the push for flexibility. This incorporates the desire for "legacy-proof" functionality, that is, the ability to handle existing systems in a single unified terminal (or single infrastructure access point), regardless of whether the radio system is equipped with all the related information prestored in memory or whether this is software-downloaded to a generically architected terminal; see [6] for details. In a similar manner, "future-proof" systems would employ flexibility in order to accommodate yet-unknown systems and standards with relative ease (say, by a mere resetting of the values of a known set of parameters), although this is obviously a harder goal to achieve than legacy-proofness. Similarly, economies of scale dictate that radio transceivers employ reusable modules to the degree possible (hence the modularity feature). Of course, truly optimized designs for specific needs and circumstances lead to "point solutions," so that flexibility of the modular and/or generic waveform-design sort may imply some performance loss. In other words, the benefit of flexibility may come at some cost, but hopefully the tradeoff is still favorable to flexible designs.
There are many possible ways to exploit the wide use of a single flexible reconfigurable baseband transceiver, either on the user side or on the network side. One scenario is the idea of location-based reconfiguration, for either multiservice ability or seamless roaming. A flexible user terminal can be capable of reconfiguring itself to whichever standard prevails (if more than one can be received) or exists (if it is the only one) at each point in space and time, either to be able to receive the ever-available (but possibly different) service or to receive the same service seamlessly. Additionally, the network side can make use of the future-proof reconfiguration capabilities of its flexible base stations for "soft" infrastructure upgrading: each base station can be easily upgraded to each current and future standard. Another interesting scenario involves the combined reception of the same service via more than one standard in the same terminal. This can be envisaged either in terms of "standard selection diversity," according to which a flexible terminal will be able to download the same service via different air-interface standards and always sequentially (in time) select the optimum signal (to be processed through the same flexible baseband chain), or in terms of service segmentation and standard multiplexing, meaning that a flexible terminal will be able to collect frames belonging to the same service via different standards, thus maximizing throughput for that service, or to receive different services (via different standards) simultaneously. Finally, another flexibility scenario involves peer-to-peer communication, whereby two flexible terminals have the advantage of reconfiguring to a specific PHY (according to conditions and optimization criteria) and establishing a peer-to-peer ad hoc connection.

The aforementioned flexibility scenarios point to the fact that the elements of wireless communications equipment (on board both future terminals and base station sites) will have to fulfill much more complicated requirements than the current ones, both in terms of multistandard capabilities and in terms of the intelligence features to control those capabilities. For example, a flexible terminal in any of the aforementioned scenarios must be able to sense its environment and location and then alter its transmission and reception parameters (frequency band, power, modulation, and other parameters) so as to dynamically adapt to the chosen standard/mode. This could in theory allow a multidimensional reuse of spectrum in space, frequency, and time, overcoming the various spectrum usage limitations that have slowed broadband wireless development, and thus lead to one vision of cognitive radio [7], according to which radio nodes become radio-domain-aware intelligent agents that determine optimum ways to provide the required QoS to the user.
It is obvious that the advantageous operation of a truly flexible baseband/RF/IF platform will eventually require sophisticated MAC and RRM functionalities. These will have to regulate the admission of new users into the system, the allocation of a mode/standard to each, the conditions of a vertical handover (from one standard to another), and the scheduling mechanisms for packet-based services. The criteria for assigning resources from a specific mode to a user will depend on various parameters related to the wireless channel (path loss, shadowing, fast fading) and to the specific requirements imposed by the terminal capabilities (minimization of power consumption and transmitted power), the generated interference, the user mobility, and the service requirements. That cross-layer interaction will serve the ultimate goal of increasing multiuser capacity and coverage while keeping the power requirements of all flexible terminals at the minimum required level.

Transmission schemes and techniques Research exploration of the next generation of wireless systems involves the further development of technologies like OFDM, CDMA, MC-CDMA, and others, along with the use of multiple antennas at the transmitter and the receiver. Each of these techniques has its particular benefits in a specific environment: for example, OFDM is used successfully in WLAN systems (IEEE 802.11a), whereas CDMA is used successfully in cellular 2G (IS-95) and 3G (UMTS) systems. The selection of a particular technique depends on the operational environment of each system. In OFDM, the available signal bandwidth is split into a large number of subcarriers, orthogonal to each other, allowing spectral overlap without interference. The transmission is divided into parallel subchannels whose bandwidth is narrow enough to make them effectively frequency-flat. A cyclic prefix is used to combat ISI, in order to avoid (or simplify) the equalizer [8].

The combination of OFDM and CDMA, known as MC-CDMA [9], has gained attention as a powerful transmission technique. The two most frequently investigated types are multicarrier CDMA (MC-CDMA), which employs frequency-domain spreading, and multicarrier DS-CDMA (MC-DS-CDMA), which uses time-domain spreading of the individual subcarrier signals [9,10]. As discussed in [9], MC-CDMA using DS-spread subcarrier signals can be further divided into multitone DS-CDMA, orthogonal MC-DS-CDMA, and MC-DS-CDMA with no subcarrier overlapping. In [11,12], it is shown that these three types of MC-DS-CDMA schemes, with appropriate frequency spacing between adjacent subcarriers, can be unified in the family of generalized MC-DS-CDMA schemes.

Multiple antennas with transmit and receive diversity techniques have been introduced to improve communication reliability via the diversity gain [13]. Coding gain can also be achieved by appropriately designing the transmitted signals, which led to the introduction of space-time codes (STC). Combined schemes have already been proposed in the literature: MIMO-OFDM has gained a lot of attention in recent years and has been intensively researched; generalized MC-DS-CDMA with both time- and frequency-domain spreading is proposed in [11,12]; and efforts on MIMO MC-CDMA can be found in [14-18].
Dynamic signal design Flexible systems do not just incorporate all possible point solutions for delivering high QoS under various scenarios; they possess the ability to make changes not only at the algorithmic but also at the structural level in order to meet their goals. Thus, the DSD goal is to bring the classic design procedure of the PHY layer into the intelligence of the transceiver and to initiate new system architectural approaches capable of providing the tools for on-the-fly reconfiguration. The module responsible for all optimization actions is herein called the supervisor, elsewhere also known as the controller and the like.

The difference between adaptive modulation and coding (AMC) and dynamic signal design (DSD) is that AMC is a design approach whose main focus is developing algorithms for numerical parameter changes (constellation size, Tx power, coding parameters), based on appropriate feedback information, in order to approach the capacity of the underlying channel. The type of channel code in AMC is predetermined for various reasons, such as the known performance of a given code in a given channel, compatibility with a given protocol, fixed system complexity, and so on. Due to the variety of channel models, system architectures, and standards, there is a large number of AMC point solutions that succeed in the aforementioned capacity goal.

In a typical communication system design, the algorithmic choice of the most important functional blocks of the PHY layer is made once, at design time, based on a predetermined and restricted set of channel/system scenarios. For example, the channel waveform is selected based on the channel (fast fading, frequency selective) and the system characteristics (multi/single-user, MIMO). Truly flexible transceivers, on the other hand, should not be restricted to one specific scenario of operation, so the choice of channel waveform, for instance, must be broad enough to adapt either parametrically or structurally to different channel/system conditions. One good example of such a flexible waveform is fully parametric MC-CDMA, which can adjust its spreading factor, the number of subcarriers, the constellation size, and so on (a sketch is given after this section). Similarly, MIMO systems that are able to change the number of active antennas or the STC, on top of a flexible modulation method like MC-CDMA, can provide a large number of degrees of freedom to code designers.

With respect to the latter point, we note that STC design has relied heavily on the pioneering work of Tarokh et al. [19], where the design principles were first established. Recent overall code design approaches divide coding into inner and outer parts (see Figure 1) in order to produce easily implementable solutions [20,21]. Inner codes are the so-called ST codes, whereas outer codes are the classic SISO channel codes. Each entity tries to exploit a different aspect of the channel properties in order to improve the overall system performance; inner codes usually try to capture the diversity offered by the channel. There are several forms of diversity that a system can offer, such as time, frequency, and space. The ability to change the number of antennas, the number of subcarriers, the spreading factor, and the ST code provides great control for reaching the diversity offered by the current working environment. There are many STCs presented in the literature which exploit one form of diversity in a given system/environment. All these point solutions must be taken into account in order to design a system architecture that efficiently incorporates most of them.
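The following is a small illustrative sketch (not the WIND-FLEX/Stingray code) of the "fully parametric MC-CDMA" idea above: the spreading factor, number of subcarriers, and constellation are all tunable knobs of a single waveform. NumPy assumed; all parameter values are made up:

```python
import numpy as np

def walsh(n):
    """Walsh-Hadamard matrix of size n (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def mc_cdma_symbol(bits, sf=8, n_sc=64, code_idx=1):
    """Frequency-domain spreading of 4-QAM symbols + OFDM modulation."""
    assert n_sc % sf == 0
    # Map bit pairs to 4-QAM symbols.
    syms = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)
    code = walsh(sf)[code_idx]
    chips = np.concatenate([s * code for s in syms])  # spread across carriers
    return np.fft.ifft(chips)                         # OFDM modulator

bits = np.random.randint(0, 2, 2 * (64 // 8))  # one 4-QAM symbol per group
tx = mc_cdma_symbol(bits)
print(tx.shape)
```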
Outer channel codes must also be chosen so as to obtain the best possible overall system performance. In some cases, the diversity gain of the cascaded coding can be analytically derived, based on the properties of both coding options [20]. Even in these idealized scenarios, however, individually maximizing the diversity gain of each code does not necessarily optimize overall performance. This means that, in order to maximize the overall performance of the system, a careful tradeoff is necessary between multiplexing gain, coding gain, and SNR gain.

New channel estimation methods must also be developed in order to estimate not only the channel gain values but also other related inputs (see Table 1). For example, the types of diversity that can be exploited by the receiver, or the correlation factor between multiple antennas, are important inputs for choosing the best coding option. Another input is the channel rate of change (Doppler), normalized to the system bandwidth, needed to evaluate the feedback delay. In most current AMC techniques, this kind of input information has not been employed, since the channel characteristics have not been treated as system design variables.

FLEXIBILITY TOOLS The paper is based on techniques developed in two IST projects, WIND-FLEX and Stingray. The main goal of WIND-FLEX was the development of flexible (in the sense of Section 2) architectures for indoor, high-bit-rate wireless modems; OFDM was the signal modulation of choice [22], along with a powerful turbo-coded scheme. The Stingray project targeted a Hiperman-compatible [23] MIMO-OFDM system for fixed wireless access (FWA) applications. It relied on a flexible architecture that exploited the channel state information (CSI) provided by a feedback channel from the receiver to the transmitter, driven by the needs of the supported service.

In the following sections, the key algorithmic choices of both projects are presented; they can be incorporated in a single design able to operate in a variety of environments and system configurations. Since a flexible transceiver must operate under starkly different channel scenarios, the transmission-mode-selection algorithm must rely solely on instantaneous channel measurements and not on the average behavior of a specific channel model. This imposes the restriction of low channel dynamics in order to retain the benefit of feedback information. In both designs, a maximum of one bit per carrier is allowed for feedback information, along with the mode selection number. The simplicity of this feedback information makes both designs robust to channel estimation errors and feedback delay.
AMC in WIND-FLEX The WIND-FLEX (WF) system operates in the 17 GHz band and has been measured to experience high frequency selectivity within its 50 MHz channel width. The result is strong performance degradation due to a few subcarriers experiencing deep spectral nulls; even with a coding scheme as powerful as turbo codes, the degradation is unacceptable. The channel is fairly static over a large number of OFDM symbols, allowing the efficient design of adaptive modulation algorithms to deal with this performance degradation. In order to keep implementation complexity at a minimum, and also to minimize the required channel feedback traffic, two design constraints have been adopted: the same constellation size for all subcarriers, and the same power for all subcarriers within an OFDM symbol, although both of these parameters are adjustable (adaptive). Two algorithms have been proposed to optimize the performance. The first algorithm (Figure 2) evaluates the Tx power required for a specific code, constellation, and channel realization to achieve the target BER. If the required power is greater than the maximum available/allowable Tx power, a renegotiation of the target QoS (lowering the requirements) takes place. This approach exhibits low complexity and limited feedback-information requirements. The relationship between uncoded and coded BER performance in an OFDM system is given in [24] for turbo codes and can easily be extended to convolutional codes. An implementation of this algorithm is described in [25].

The large SNR variation across the subcarriers of OFDM degrades system performance even when a strong outer code is used. To counter this, the technique of weak subcarrier excision (WSCE) is introduced as a way to exclude a certain number of subcarriers from transmission. The second proposed algorithm employs WSCE along with the appropriate selection of code/constellation size; this is called the coded weak subcarrier excision (CWSCE) method.

In the WIND-FLEX channel scenarios, performance improved when a fixed number of excised subcarriers was used. The bandwidth penalty introduced by this method was compensated by the ability to use higher code rates. In Figure 3, bit error rate (BER) simulation curves are shown for the uncoded performance of fixed WSCE and are compared with the bit-loading algorithm presented in [26] for the NLOS channel scenario. {Rate 1} and {Rate 2} are the system throughputs when using 4-QAM with 10% and 20% WSCE, respectively. The BER performance without bit loading or WSCE is also plotted for a 4-QAM constellation.
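The core of fixed-percentage WSCE is a simple gain-sorting step. A minimal sketch assuming NumPy; the channel model and percentages below are illustrative:

```python
import numpy as np

def wsce_mask(H, excise_frac=0.1):
    """Boolean mask of subcarriers kept after excising the weakest ones.

    Drops the excise_frac fraction of subcarriers with the smallest channel
    gains; constellation and power stay uniform on the survivors.
    """
    n = len(H)
    n_drop = int(round(excise_frac * n))
    order = np.argsort(np.abs(H))      # weakest gains first
    keep = np.ones(n, dtype=bool)
    keep[order[:n_drop]] = False
    return keep

# Toy Rayleigh channel, 20% excision as in the {Rate 2} curve above.
H = (np.random.randn(64) + 1j * np.random.randn(64)) / np.sqrt(2)
keep = wsce_mask(H, 0.2)
print(keep.sum(), "of", len(H), "subcarriers used")
```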
There is a clear improvement just from using a fixed WSCE scheme, and only a marginal loss in comparison to the nearly optimum bit-loading algorithm. Based on the average SNR across the subcarriers, a semianalytic computation of the average and outage capacity of the effective channel is possible, in order to evaluate a performance upper bound for a system employing such WSCE plus uniform power loading. The use of an outer code helps to come close to this bound. We note that the average capacity of an OFDM system without power-loading techniques is

C_av = E_H[ (1/N) * sum_{k=0}^{N-1} log2( 1 + (E_s/N_0) * |H_k|^2 ) ],

where the expectation operator E_H is over the stochastic channel. For a system employing WSCE, the summation is over the used carriers only, along with the appropriate transmit-energy normalization. These capacity results are based on the "quasistatic" assumption. For each burst, it is also assumed that a sufficiently large number of bits are transmitted, so that the standard infinite time horizon of information theory is meaningful. In Figure 4, the system average capacity (SAC) and the 1% system outage capacity (SOC) of the WF system employing various WSCE scenarios are presented. Here, the definitions are as follows.

(i) SAC (system average capacity). This is equivalent to the mean or ergodic capacity [27] applied to the effective channel. It serves as an upper bound for systems of unbounded complexity or latency that use a specific inner code.
(ii) SOC (system outage capacity). This is the 1% outage capacity of the STC-effective channel.
(iii) AC and OC. These are the average capacity and outage capacity of the actual sample-path channel.

The capacity of an AWGN channel is also plotted as an upper bound for a given SNR. In the low-SNR region, the capacity of a system employing as much as 30% WSCE is higher than that of a system using all carriers without power loading. At high SNR, the capacity loss asymptotically approaches the bandwidth percentage lost to WSCE. The capacity using adaptive WSCE is also plotted. In some channel realizations, in the low-to-medium SNR region, a 30% to 50% WSCE is needed. This result motivates the design of the second algorithm.
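A Monte Carlo sketch of the SAC/SOC evaluation just described, assuming a unit-energy Rayleigh multipath channel, uniform power loading, and re-spending the energy of the excised carriers on the surviving ones. The sizes and channel model are illustrative, not the WF parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N, TAPS, TRIALS = 256, 16, 2000   # subcarriers, channel taps, channel draws

def sac_soc(snr_db, excise_frac):
    """System average capacity and 1% system outage capacity (bit/s/Hz) of an
    OFDM link with weak-subcarrier excision and uniform power loading."""
    es_n0 = 10.0 ** (snr_db / 10.0)
    caps = np.empty(TRIALS)
    for t in range(TRIALS):
        taps = rng.standard_normal(TAPS) + 1j * rng.standard_normal(TAPS)
        h = np.fft.fft(taps, N) / np.sqrt(2 * TAPS)          # unit-energy Rayleigh
        g = np.sort(np.abs(h) ** 2)[int(excise_frac * N):]   # excise weakest carriers
        # normalizing by N (not g.size) charges the bandwidth loss of WSCE
        caps[t] = np.sum(np.log2(1 + es_n0 * (N / g.size) * g)) / N
    return caps.mean(), np.percentile(caps, 1)

print(sac_soc(snr_db=10, excise_frac=0.30))
```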
The impact of CWSCE is the ability to choose between different code rates for the same target rate, a feature absent from the first algorithm. Assume an ordering of the different pairs {code rate, constellation size} based on the SNR necessary to achieve a certain BER performance. It is obvious that this ordering also applies to the throughput of each pair (a system will not include pairs that need more power to provide lower throughput). For each of these pairs, the fixed percentage of excised carriers is computed so that they all provide the same final (target) throughput.

The block diagram of the CWSCE algorithm is given in Figure 5. The respective definitions are as follows:

(i) x_i, i = 1, ..., l, is one of the system-supported constellations;
(ii) y_i, i = 1, ..., M, is one of the supported outer channel codes. These can be totally different codes, such as turbo, convolutional, or LDPC codes, or the codes resulting from puncturing one mother code, or both;
(iii) z_i, i = 1, ..., n, are the resulting WSCE percentages for the n competing triplets;
(iv) Pos(z_i) are the positions of the z_i% weakest gains;
(v) H is the vector of the estimated channel gains in the frequency domain;
(vi) N_0 is the estimated power spectral density of the noise;
(vii) RUB_i, i = 1, ..., n, is the required uncoded BER for constellation x_i and code y_i;
(viii) PTx_i, i = 1, ..., n, is the required Tx power for the ith triplet.

The algorithm calculates the triplet that needs the minimum Tx power for a given target BER. If the minimum required power is greater than the maximum available/allowable Tx power, it renegotiates the QoS. Transmit-power adaptation is usually avoided, although it can be handled with the same algorithm. The triplet selection will still be the one that needs the minimum Tx power. The extra computational load is mainly due to the channel-tap sorting. Proper exploitation of the channel correlation in frequency (coherence bandwidth) can reduce this complexity overhead. Instead of sorting all the channel taps, one can sort groups of highly correlated taps. These groups can be restricted to have an equal number of taps. There are many sorting algorithms in the literature, with different performance-versus-complexity characteristics, that can be employed depending on implementation limitations.

Simulation results using algorithm 1 for adaptive transmission-power minimization are presented in Figure 6. The performance gain of the proposed algorithm is shown for 4-QAM and code rates 1/2 and 2/3. Performance is plotted for no adaptation, as well as for algorithm 1, in an NLOS scenario. The performance over a flat (AWGN) channel is also shown for comparison, since it represents the coded performance limit (given that these codes are designed to work for AWGN channels). The main simulation system parameters are based on the WIND-FLEX platform. It uses a parallel-concatenated turbo code with variable rate via three puncture patterns (1/2, 2/3, 3/4) [28]. The recursive systematic code polynomial used is (13,15) in octal. Perfect channel estimation and zero phase noise are also assumed.

In addition to the transmission power gain, the adaptive schemes practically guarantee the desired QoS for every channel realization. Note that in the absence of adaptation, users experiencing "bad" channel conditions will never get the requested QoS, whereas users with a "good" channel will correspondingly end up spending more power than needed for the requested QoS. By adopting these algorithms, one computes (for every channel realization) the exact power needed for the requested QoS, and thus can either transmit with minimum power or negotiate a lower QoS when channel conditions do not allow transmission. An average 2 dB additional gain is achieved by using the second algorithm versus the first one.
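A compact sketch of the competitive-triplet evaluation, with hypothetical {x_i, y_i, z_i} entries and required-SNR figures standing in for the real RUB_i-derived values; the average post-excision SNR is used as the power criterion for brevity. The grouped sort mirrors the complexity shortcut mentioned above.

```python
import numpy as np

# Hypothetical triplets: (constellation bits, code rate y_i, WSCE fraction z_i,
# required average post-excision SNR in dB for the target BER). The z_i are
# assumed precomputed so that all triplets yield the same net throughput.
TRIPLETS = [
    (2, 1 / 2, 0.00, 6.0),
    (2, 2 / 3, 0.25, 7.5),
    (4, 1 / 3, 0.00, 8.5),
]

def select_triplet(h, n0, p_max):
    """CWSCE sketch: the triplet needing the least Tx power wins; None means
    even the best triplet exceeds p_max and the QoS must be renegotiated."""
    best, best_p = None, float("inf")
    for bits, rate, wsce, snr_req_db in TRIPLETS:
        g = np.sort(np.abs(h) ** 2)[int(wsce * h.size):]  # drop Pos(z_i%) gains
        p_tx = 10 ** (snr_req_db / 10) * n0 * g.size / g.mean()
        if p_tx < best_p:
            best, best_p = (bits, rate, wsce), p_tx
    return best if best_p <= p_max else None

def group_sort(gains, group_len):
    """Complexity shortcut from the text: sort equal-sized groups of adjacent
    (highly correlated) taps by their mean gain instead of every single tap."""
    blocks = gains[: gains.size // group_len * group_len].reshape(-1, group_len)
    return np.argsort(blocks.mean(axis=1))
```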
Adaptive STC in Stingray

As mentioned, Stingray is a Hiperman-compatible 2 × 2 MIMO-OFDM adaptive system. The adjustment rate, namely the rate at which the system is allowed to change the Tx parameters, is chosen to be once per frame (one frame = 178 OFDM symbols), and the adjustable sets of Tx parameters are (1) the selected Tx antenna per subcarrier, called transmission selection diversity (TSD), and (2) the {outer code rate, QAM size} set.

The antenna selection rule in TSD is to choose, for every carrier k, to transmit from the Tx antenna T(k) with the best performance from a maximum-ratio combining (MRC) perspective. For the second set of parameters, the optimization procedure is to choose the set that maximizes the system throughput (bit rate), given a QoS constraint (BER).

In order to identify performance bounds, TSD is compared with two other rate-1 STC techniques, beamforming and Alamouti. Beamforming is the optimal solution [29] for energy allocation in an N_T × 1 system with perfect channel knowledge at the transmitter side, whereby the same symbol is transmitted from both antennas, multiplied by an appropriate weight factor in order to get the maximum achievable gain for each subcarrier. Alamouti's STBC is a blind technique [30], where during each OFDM symbol period two OFDM signals are simultaneously transmitted from the two antennas.

Each of the three STC schemes can be treated as an ordinary OFDM SISO system producing (ideally) N independent Gaussian channels [31]. This is the effective SISO-OFDM channel. For the Stingray system (2 × 2), the corresponding effective SNR (ESNR) per carrier is

ESNR_k = (E_s / (2 N_0)) * sum_{i,j} |H_{i,j}(k)|^2 for Alamouti,
ESNR_k = (E_s / N_0) * lambda_max(k) for beamforming,

where lambda_max(k) is the square of the maximum singular value of the channel matrix at subcarrier k, H_{i,j}(k) is the frequency response of the channel between Tx antenna i and Rx antenna j at subcarrier k = 0, 1, ..., N − 1, and N_0 is the one-sided power spectral density of the noise in each subcarrier.

In Figure 7, BER simulation curves are presented for all inner code schemes and a 4-QAM constellation. Both perfect and estimated CSI scenarios are presented. The channel estimation procedure uses the preamble structure described in [32].

For all simulations, path delays and the power of channel taps have been selected according to the SUI-4 model for intermediate environment conditions [33]. The average channel SNR is employed in order to compare adaptive systems that utilize CSI. Note that this average channel SNR is independent of the employed STC. Having normalized each Tx-Rx path to unit average energy, the channel SNR is equal to the inverse of the noise power at any one of the receivers. Alamouti is the scheme most sensitive to estimation errors. This is expected, since the errors in all four channel taps are involved in the decoding procedure. Based on the ESNR, a semianalytic computation of the average and outage capacity of the effective channel is possible, in order to evaluate a performance upper bound for these inner codes. In Figure 8, the average capacity and the 1% outage capacity of the three competing systems are presented. For comparison, the average and outage capacity of the 2 × 2 and 1 × 1 systems with no channel knowledge at the transmitter and perfect knowledge at the receiver are also presented. It is clear that all three systems have the same slope of capacity versus SNR. This is expected, since the rate of all three schemes is one. A system exploiting all the multiplexing gain offered by the 2 × 2 channel would be expected to have a slope similar to that of the capacity of the real channel (AC, OC). It is also evident that the cost of not targeting full multiplexing is a throughput loss compared to that achievable over MIMO channels. On the other hand, the goal of high throughput incurs the price of either enhanced feedback requirements or higher complexity. Comparing the three candidate schemes, we conclude that beamforming is a high-complexity solution with considerable feedback requirements, whereas Alamouti has low complexity with no feedback requirement. TSD has lower complexity than Alamouti and, in comparison with beamforming, a minimal feedback requirement. The gain over Alamouti is approximately 1.2 dB, while the loss compared to beamforming is another 1.2 dB.
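A sketch of the three effective-SNR rules for randomly drawn 2 × 2 channels, following the effective-SISO expressions above; the TSD rule picks, per carrier, the Tx antenna with the larger MRC gain. The i.i.d. Rayleigh statistics are illustrative, not the SUI-4 model.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
# H[k] is the 2x2 channel at subcarrier k, indexed as [rx, tx]
H = (rng.standard_normal((N, 2, 2)) + 1j * rng.standard_normal((N, 2, 2))) / np.sqrt(2)

def esnr_per_carrier(H, es_n0):
    """Per-carrier ESNR of the three rate-1 schemes for the 2x2 case."""
    fro2 = np.sum(np.abs(H) ** 2, axis=(1, 2))               # ||H_k||_F^2
    alamouti = es_n0 / 2 * fro2                               # Tx power split in two
    lam_max = np.linalg.svd(H, compute_uv=False)[:, 0] ** 2   # max eig of H^H H
    beamforming = es_n0 * lam_max
    mrc_per_tx = np.sum(np.abs(H) ** 2, axis=1)               # sum over Rx antennas
    tsd = es_n0 * mrc_per_tx.max(axis=1)                      # best Tx antenna T(k)
    return alamouti, beamforming, tsd

a, b, t = esnr_per_carrier(H, es_n0=10.0)
print(np.all(b >= t) and np.all(t >= a))   # beamforming >= TSD >= Alamouti
```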
For all schemes, frequency selectivity across the OFDM tones is limited due to the MIMO diversity gain. That is one of the main reasons why bit loading and WSCE gave marginal performance gains here. The metric for selecting the second set of parameters was the effective average SNR at the receiver (meaning the average SNR at the demodulator after the ST decoding). The system performance simulation curves based on the SNR at the demodulator (Figure 9) were the basis for the construction of the Tx mode table (TMT), which consists of SNR regions and {code rate, constellation size} sets for all the QoS operation modes (BER) that will be supported by the system. The selected inner code is TSD, and the outer code is the same as that used in the WF system. Since perfect channel and noise-power knowledge is assumed, the ESNR is in fact the real prevailing SNR. This turns out to be a good performance metric, since the outer (turbo) code performance is very close to that achieved on an AWGN channel with equivalent SNR. Ideally, an estimation process should be included for assessing system performance as a function of the actual measured channel, which would then be the input to the optimization. Using this procedure in Stingray, the related SNR fluctuation resulted in marginal performance degradation.

Based on those curves, and assuming perfect channel-SNR estimation at the receiver, the derived TMT is presented in Table 2. By use of this table, the average system throughput (ST) for various BER requirements is presented in Figure 10. The 1% system outage capacity is a good measure for evaluating the throughput of the system and is also plotted in the same figure. The average capacity is plotted as well, in order to show the distance from the performance upper bound.

The system throughput is very close to the 1% outage capacity, but it is 5 to 7 dB away from the performance limit, depending on the BER level. Since the system is adaptive, the 1% outage is probably not a suitable performance target for this system. The SNR gain achieved by going from one BER level to the next is about 0.8 dB. This marginal gain is expected due to the performance behavior of turbo codes (very steep performance curves in the BER regions of interest).
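A toy version of the lookup that such a table drives; the SNR-region edges and mode sets below are placeholders, not the values of Table 2.

```python
# Hypothetical TMT: lower SNR-region edges (dB) -> {outer code rate, QAM size},
# ordered by increasing throughput.
TMT = [(0.0, (1 / 2, 4)), (8.0, (2 / 3, 4)), (11.0, (1 / 2, 16)), (15.0, (3 / 4, 16))]

def select_tx_mode(avg_esnr_db):
    """Return the highest-throughput mode whose SNR region contains the
    measured average ESNR; the first entry acts as the fallback mode."""
    mode = TMT[0][1]
    for edge_db, candidate in TMT:
        if avg_esnr_db >= edge_db:
            mode = candidate
    return mode

print(select_tx_mode(12.3))   # -> (0.5, 16) under these illustrative edges
```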
Flexible algorithms for phase noise and residual frequency offset estimation

Omnipresent nuisances such as phase noise (PHN) and residual frequency offsets (RFO), which are the result of a nonideal synchronization process, compromise the orthogonality between the subcarriers of OFDM systems (both SISO and MIMO). The resulting effect is a common error (CE) for all the subcarriers of the same OFDM symbol, plus intercarrier interference (ICI). Typical systems adopt CE compensation algorithms, while the ICI is treated as an additive, Gaussian, per-subcarrier uncorrelated noise term [34]. The phase-impairment-correction schemes developed in Stingray and WF can be implemented either by the use of pilot symbols or by decision-directed methods. They are transparent to the selection of the space-time coding scheme, and they are easily adaptable to any number of Tx/Rx antennas, down to the 1 × 1 (SISO) case. In [35, 36] it is shown that the quality of the CE estimate, which is typically characterized by the variance of the estimation error (VEE), drastically affects the performance of ST-OFDM schemes. In [34, 35, 36] it is shown that the VEE is a function of the number and position of the subcarriers used for estimation purposes, of the corresponding channel taps, and of the pilot modulation method (when pilot-assisted methods are adopted). Figure 11 depicts the dependence of the symbol error rate of an Alamouti STC-OFDM system with tentative decisions on the number of subcarriers assigned for estimation purposes. It is clear that this system is very sensitive to the estimation error, and therefore to the selection of the corresponding "pilot" number. Additionally, the working range of the decision-directed approaches is mainly dictated by the mean CE and the SNR, which should be such that most of the received symbols are within the bounds of correct decisions (i.e., the resulting error from the tentative decisions should be very small). This may be difficult to ensure, especially when transmitting high-order QAM constellations. An improved supervisor has to take into account the effect of the residual CE error on the overall system performance when selecting the optimal triplet, by inserting its effect into the overall calculations. Two approaches can be followed for system optimization. When the system protocol forces a fixed number of pilot symbols loaded on fixed subcarriers (as in Hiperman), the corresponding performance loss is calculated and the possible triplets are decided. It is noted that an enhanced supervisor device could decide on the use of adaptive pilot modulation in order to minimize estimation errors by maximizing the received energy, since the pilot modulation may significantly affect system performance. Figure 12 depicts the effect of the pilot modulation method for the 2 × 2 Alamouti ST-OFDM system with 8 pilots and 256 subcarriers, assuming independent compensation per receiver antenna, for a realization of an SUI-4 channel. Three modulation methods are considered: randomly generated pilots (RGPs), orthogonally generated pilots (OGPs), and a fixed pilot pattern (FPP), where the same pilots are transmitted from every Tx antenna. Thus, the selection of the pilot modulation scheme is another parameter to be decided, since it affects system performance in a significant way. On the other hand, when the system protocol allows for a variable number of pilot symbols, the optimization procedure becomes more complex. After a training period of some OFDM symbols, the mean CE can be roughly estimated.
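A minimal sketch of a pilot-based CE estimate of the kind discussed above, assuming a single common rotation per OFDM symbol; the weighting by the pilot references is what ties the VEE to the pilot positions and modulation. The function names and the single-rotation model are simplifying assumptions.

```python
import numpy as np

def ce_estimate(y, x_pilots, h, pilot_idx):
    """Common-error estimate for one OFDM symbol: the single rotation that
    best aligns the received pilots with their references h_p * x_p.
    Pilots on strong carriers weigh more, lowering the VEE."""
    ref = h[pilot_idx] * x_pilots
    return np.angle(np.sum(y[pilot_idx] * np.conj(ref)))

def ce_compensate(y, theta_hat):
    """Undo the common rotation on every subcarrier of the symbol."""
    return y * np.exp(-1j * theta_hat)
```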
Using this estimate, and taking into account that the whole OFDM symbol is loaded with the same QAM constellation, it can be decided whether a specific constellation is robust to the CE, so that the decision-directed methods (based on tentative decisions) are reliable. For the constellations where pilot-symbol use is necessary, the supervisor has to select appropriately the position and the number of pilot symbols.

TOWARDS A FLEXIBLE ARCHITECTURE

As already mentioned, a flexible transceiver must be equipped with appropriately robust solutions for all possible, widely ranging environments and system configurations. Targeting the universally best possible performance translates into high complexity. A first step towards a generic flexible architecture should be one that efficiently incorporates simple tools in order to deliver not necessarily the best possible, but an acceptable performance under disparate system/channel environments. The aforementioned CWSCE and TSD methods belong to this category of flexible (partial) solutions. The capacity penalty for their use (compared to the optimal solutions) has been shown herein to be small. Both require common feedback information (1 bit/carrier) and can be incorporated appropriately in a system able to work under a variety of antenna configurations, when such limited feedback information is available. When feedback information is not available, CWSCE has the appropriate modules for mode selection (algorithm 1) in the SISO case, while Alamouti can be the choice for the MIMO case. Both STC schemes transform the MIMO channel into an inner SISO one, allowing for the use of AMC (mode selection) techniques designed for SISO systems. In the Stingray system, as already explained, the average ESNR at the demodulator is a sufficient metric for choosing the Tx mode, whereas WIND-FLEX uses the uncoded BER. Employing TMT tables with the required uncoded BER and {code rate, constellation size} sets for all the QoS operation modes in MIMO systems will increase the complexity, but it will permit the seamless incorporation of both systems into one single common architecture. The uncoded performance of the effective channel is thus the only metric that needs to be used for choosing the Tx mode, and it can be computed for a variety of STC options. Furthermore, the fully parametric PHN and RFO algorithms mentioned above are transparent to the selection of the ST coding scheme and can provide the appropriate information about their performance under different environments/modes.

The overall block diagram of a proposed architecture for the mode selection algorithm is given in Figure 13. It is meant to work for all systems employing one or two antennas at the Tx/Rx. The related parameters are defined as follows:

(i) PN(x_i), i = 1, ..., l, is the number of pilots needed for a specific PHN/RFO performance, when the operation mode enables a variable number of pilots;
(ii) H_EF is the vector of the estimated effective channel gains in the frequency domain (STC dependent);
(iii) PCE: pilot carrier excision (an enhancement of the WSCE module, which provides the pilot positions for a given number of used pilots).
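A toy piece of the bookkeeping such a supervisor performs, illustrating why a variable pilot number must enter the triplet evaluation: pilots and excised carriers both eat into the net rate, so a better CE estimate is bought with throughput. All numbers are illustrative.

```python
def net_throughput(bits_per_symbol, code_rate, wsce_frac, n_pilots, n_subc=256):
    """Net rate (bits per subcarrier use) of a candidate triplet after
    subtracting excised carriers and pilot overhead."""
    data_carriers = n_subc - int(wsce_frac * n_subc) - n_pilots
    return bits_per_symbol * code_rate * data_carriers / n_subc

# Trading 8 extra pilots against a denser constellation:
print(net_throughput(2, 2 / 3, 0.10, 8), net_throughput(4, 1 / 3, 0.10, 16))
```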
Here, WSCE is active only when the system is 1 × 1. For all other Tx-Rx antenna choices, all subcarriers are assumed "on." When only a fixed number of pilot symbols is permitted (e.g., when a specific protocol is used), the PHN/RFO estimator provides the VEE for each constellation choice to the Tx power evaluation module. In a peer-to-peer communication system, where two flexible terminals could have the possibility of reconfiguring to a specific PHY, the number of pilots can be allowed to change, and the optimum solution depends on the constellation size. The competitive-triplet evaluation must take this variable pilot number into account. The supervisor module is responsible for this optimization procedure. The best choice depends not only on the channel/system characteristics but also on the selected optimization criteria, such as maximizing the throughput, minimizing the Tx power, and so on.

CONCLUSIONS

The scientific field of radio flexibility is growing in importance and appeal. Although still in a fairly nascent form for commercial use, flexible radio possesses attractive features and attributes that require further research. The present paper presents the flexibility concept, definition, and related scenarios while, in parallel, exploring in some depth the tool of dynamic signal design for instantiating some of these attributes in a specific application environment. Two design approaches are presented (based on the WF and Stingray projects), and the key algorithmic choices of both are incorporated into one flexible design capable of successfully operating in a variety of environments and system configurations. It is evident that physical-layer flexibility requires not only novel system architectures but also new algorithms that efficiently utilize existing and/or new modulation/coding techniques and that can be adjusted to various environment and system scenarios, in order to offer QoS close to that delivered by corresponding point-optimal solutions.
Figure 8: System average capacity and system 1% outage capacity of different STC options.

Figure 13: Block diagram of the proposed algorithm for mode selection.

Table 1: Flexible design tools and inputs. Inner codes try to get diversity/multiplexing/SNR gain, while outer codes try to get diversity/coding gain. The best choice of an inner/outer code pair relies on channel characteristics, complexity, and feedback-requirement (CSI) considerations.

Table 2: Transmission mode table in the case of perfect channel SNR estimation.
High Coulomb Efficiency Sn–Co Alloy/rGO Composite Anode Material for Li-ion Batteries with Long Cycle Life

The low cycle performance and low Coulomb efficiency of tin-based materials confine their large-scale commercial application in lithium-ion batteries. To overcome the volume expansion of pristine tin, Sn–Co alloy/rGO composites have been successfully synthesized by chemical reduction and sintering methods. The effects of sintering temperature on the composition, structure and electrochemical properties of the Sn–Co alloy/rGO composites were investigated by experimental study and first-principles calculation. The results show that the Sn–Co alloys are composed of a large number of CoSn and trace CoSn2 intermetallics, which are uniformly anchored on graphene nanosheets. The sintering treatment effectively improves the electrochemical performance, especially the first Coulomb efficiency. The first charge capacity of the Sn–Co alloy/rGO composite sintered at 450 °C is 675 mAh·g−1, and the corresponding Coulomb efficiency reaches 80.4%. This strategy provides a convenient approach to synthesizing tin-based materials for high-performance lithium-ion batteries.

Introduction

Lithium-ion batteries have attracted much attention in the field of portable electronic devices and electric vehicles because of their excellent characteristics, such as high energy density, high working voltage and long cycle life. However, the theoretical specific capacity of the most commonly used commercial graphite anode is only 372 mAh·g−1, which obviously limits the improvement of the lithium storage capacity of lithium-ion batteries [1]. Therefore, exploring a new generation of anode materials with high capacity has become one of the important research fields of lithium-ion batteries. Metallic tin has a high theoretical capacity (990 mAh·g−1, Li4.4Sn) [2], which makes it one of the most likely candidates for anode materials. However, the lithium storage process of metallic tin is complex and accompanied by a huge volume change (up to 300%), which leads to serious structural damage to the tin and the continuous formation of solid electrolyte interphase (SEI) films on the surface of freshly broken tin particles [3,4]. These processes aggravate the low Coulomb efficiency and poor cycle performance of tin [5,6]. In order to address the volume expansion of tin, one strategy is to use various forms of carbon materials with high electrical conductivity as carriers to synthesize nanocomposites containing Sn and carbon, such as activated carbon [7], carbon nanotubes [8,9], carbon fibers [10,11] and graphene [12,13]. In particular, graphene, with its high mechanical strength, is used as a carrier to construct a volume-expansion buffer structure, which can significantly improve the electrochemical performance of the composite electrode. In addition, the strategy of introducing O into metals to synthesize complex metal oxides can also significantly improve the structural stability and electronic properties of metal oxide electrode materials [14,15]. For example, Fe0.74Sn5@RGO composites have been prepared by a chemical reduction method. Corresponding to the FeSn5 crystal structure, the composite structure composed of defective Fe0.74Sn5 nanoparticles dispersed on graphene accommodates the volume change and shortens the transport distance of Li ions and electrons. The first reversible capacity and Coulomb efficiency of the Fe0.74Sn5@RGO composite are 957 mAh·g−1 and 62.9%, keeping 674 mAh·g−1 after 100 cycles.
In addition, the introduction of polymers into alloys or metal oxides can also improve the structural stability of electrode materials, which has become a new research field of polymer-based composites promising for practical applications [29,30].

Based on the above literature, the formation of nanocomposite structures and the synthesis of Sn-M alloys are the optimal strategies to improve the electrochemical performance of tin-based materials. However, a common disadvantage of these tin-containing anode materials is their low first Coulomb efficiency, which cannot meet the requirements of the new generation of lithium-ion batteries. It is therefore urgent and important to synthesize tin-based materials with long cycle life and high Coulomb efficiency. Here, Sn-Co alloy/rGO composites have been successfully prepared by chemical reduction followed by sintering treatment, using graphene oxide as a carrier, in which Sn-Co nano-alloys are uniformly anchored on graphene. This structure has a variety of functions and advantages: (1) The synthesized nano-sized Sn-Co alloy has a higher resistance to structural destruction because of its small particle size. (2) The introduction of Co atoms into the Sn-Co alloy as an inert medium helps to buffer the volume expansion of metallic Sn. (3) The buffering effect provided by the good mechanical properties of graphene can further improve the structural stability of the Sn-Co alloy, while the good electrical conductivity of graphene ensures the good electronic conductivity of the electrode. (4) Sintering treatment can increase the grain and particle sizes of the Sn-Co alloy, thus improving the first Coulomb efficiency of the Sn-Co alloy/rGO composites. The results show that Sn-Co alloy/rGO composites have good cycle performance as anode materials for lithium-ion batteries, especially a high first Coulomb efficiency.

Microstructure and Composition

The microstructure analysis of the synthesized Sn-Co alloy/rGO composite is shown in Figure 1. As can be seen from Figure 1a, the Sn-Co alloy/rGO composite without sintering is composed of pure Sn and Co metals, according to the standard X-ray diffraction data. After sintering at 400 °C, the phase of the Sn-Co alloy/rGO composite transformed into CoSn intermetallics, while a small amount of unreacted pure Sn metal remained. After sintering at 450 °C, a small amount of CoSn2 intermetallics was newly formed in the Sn-Co alloy/rGO composite. With a further increase of the sintering temperature, the phase composition of the Sn-Co alloy/rGO composite no longer changes, but the full width at half maximum (FWHM) of all the XRD peaks of the composite decreases, indicating that the grains of the CoSn and CoSn2 intermetallics continue to grow or that the degree of crystallization increases. Similar phase compositions have also been reported for Sn-Co alloys; for example, CoSn and CoSn2 intermetallics coexist in Sn-Co alloy prepared by mechanical ball milling [31]. However, there is no obvious carbon diffraction peak in the XRD spectrum in Figure 1a, which may be due to the amorphous structure of the graphene. The typical SEM and TEM images in Figure 1b,c show the surface morphologies of the Sn-Co alloy/rGO composite after sintering at 500 °C, and the morphologies of the other samples are shown in Figure S1. It can be seen that the Sn-Co alloy/rGO composite consists of nearly spherical particles with diameters of about 10 to 100 nm.
From the HRTEM image in Figure 1d, it is found that the main regions of the Sn-Co alloy/rGO composite are (201) planes of CoSn intermetallics with an interplanar distance of 0.201 nm, while a very small part of the regions are (211) planes of CoSn2 intermetallics with an interplanar distance of 0.253 nm. Figure 1e shows the energy dispersive X-ray spectroscopy (EDS) results of the Sn-Co alloy/rGO composite sintered at 500 °C, in which the atomic ratio of Sn to Co is close to 1:1. This is basically consistent with the raw material ratio used in the synthesis and with the results of the XRD analysis.

During the synthesis of the Sn-Co alloy/rGO composite, Sn2+ and Co2+ in the solution are electrostatically adsorbed on graphene oxide near the oxygen-containing functional groups, such as hydroxyl, carboxyl and epoxy groups [32,33]. Then, these metal ions and the graphene oxide are reduced to metallic Sn, Co and graphene, respectively, by NaBH4 reduction. In the subsequent sintering process, Sn and Co atoms diffuse on the graphene to form an Sn-Co alloy consisting of a large number of CoSn intermetallics and a small amount of CoSn2 intermetallics. The schematic sketch of the Sn-Co alloy/rGO composite is shown in Figure 2a.

Figure 2. The schematic sketch of the Sn-Co alloy/rGO composite (a); the crystal structure (atomic population) (b) and density of states (e) for pure Sn; the crystal structure (atomic population) (c) and density of states (f) for CoSn2 intermetallics; the crystal structure (atomic population) (d) and density of states (g) for CoSn intermetallics.
In order to further analyze the electronic structures of pure Sn metal and of the CoSn2 and CoSn intermetallics in the Sn-Co alloy/rGO composite at the atomic level, the electronic densities of states and atomic populations of the three materials were calculated by first principles; the results are shown in Figure 2b-g. As can be seen from Figure 2b-d, there is no charge transfer between Sn atoms in pure tin, and all Sn atoms share the charge equally. For CoSn2 intermetallics, each Sn atom transfers 0.24 e to the Co atom on average. Similarly, for CoSn intermetallics, the average charge transfer from each Sn atom to the Co atom is 0.37 e. It can be inferred that the Sn-Co bond in the CoSn2 and CoSn intermetallics is a metallic bond with a certain ionic character, according to the difference in electronegativity between the Sn and Co elements [34]. It can be seen from Figure 2e-g that the density of states of pure tin near the Fermi level is mainly contributed by the s and p orbitals of the Sn atoms, while the density of states of the CoSn2 and CoSn intermetallics is mainly contributed by the s and p orbitals of the Sn atoms and the p and d orbitals of the Co atoms, with the Co atoms contributing more. Therefore, pure tin and the CoSn2 and CoSn intermetallics all have high densities of states near the Fermi level, and the density of states increases from 0.25 to 0.37 with increasing Co content. The results show that all of them have good electrical conductivity, and that the electrical conductivity of the Sn-Co alloy increases with the Co content.

Figure 3 shows the electrochemical performance of the Sn-Co alloy/rGO composites. In order to analyze the lithium intercalation mechanism of the electrode materials during charge and discharge, the Sn-Co alloy/rGO composites were tested by cyclic voltammetry, and the results are shown in Figure 3a. It can be seen from Figure 3a that there are obvious reduction peaks at 0.9~1.1 V and below 0.8 V for the Sn-Co alloy/rGO composite without sintering, which is similar to the CV curve of pure tin [35]. The peak at 0.9~1.1 V is usually attributed to irreversible reactions forming the SEI layer on the surface of the active material particles [36]. The peak below 0.8 V corresponds to the reaction of pure tin with lithium to form LixSn alloys (Sn + xLi+ + xe− → LixSn, 0 ≤ x ≤ 4.4) [37]. In the reverse scan, obvious oxidation peaks were observed at 0.55 V, 0.68 V, 0.76 V and 0.81 V, at which the corresponding LixSn alloys are dealloyed to form Li3.5Sn, LiSn, Li2Sn5 and pure Sn metal [38]. After sintering, the oxidation peaks of the Sn-Co alloy/rGO composites shift to the left, and a wide oxidation peak appears in the range of 0.52~0.65 V, which is mainly due to the formation of the Sn-Co alloy during sintering. This is similar to the results of Zheng et al. [39], in which the oxidation peak of an Sn-Co alloy appears at 0.5~0.6 V. Figure 3b shows the first charge-discharge curves of the Sn-Co alloy/rGO composites at 100 mA·g−1. It can be seen from Figure 3b that the Sn-Co alloy/rGO composite shows a weak plateau at about 1.1 V and a sloping plateau below 0.80 V, which correspond to the reduction peaks in the CV curve in Figure 3a. It can also be obtained from Figure 3b that the first charge and discharge capacities of the Sn-Co alloy/rGO composite without sintering are 995 and 595 mAh·g−1, respectively, and the corresponding first Coulomb efficiency is 59.8%.
After sintering, the first charge capacity of the Sn-Co alloy/rGO composite decreases gradually, while the first discharge capacity increases at first and then decreases. For sintering at 500 °C, the first charge capacity of the Sn-Co alloy/rGO composite is 840 mAh·g−1, the first discharge capacity reaches its maximum of 675 mAh·g−1, and the corresponding first Coulomb efficiency reaches 80.4%. This may be due to the increase in the grain size of the Sn-Co alloy in the Sn-Co alloy/rGO composites, which leads to the increase in Coulomb efficiency. It is well known that nanomaterials have the disadvantage of low Coulomb efficiency [40], so increasing the grain and particle sizes is an effective strategy to improve the Coulomb efficiency.

Figure 3c shows the rate performance of the Sn-Co alloy/rGO composites sintered at different temperatures. It is found that the discharge capacity of the Sn-Co alloy/rGO composite sintered at 500 °C is 675, 552, 425 and 311 mAh·g−1 at 100, 200, 1000 and 5000 mA·g−1, respectively. When the current density returns to 100 mA·g−1, the discharge capacity of the Sn-Co alloy/rGO composite reaches 580 mAh·g−1, which shows a good rate performance. Figure 3d shows the cycle performance of the Sn-Co alloy/rGO composites sintered at different temperatures at 100 mA·g−1. The discharge capacity of the Sn-Co alloy/rGO composite without sintering is only 303 mAh·g−1 after 100 cycles. After sintering, the cycle performance of the Sn-Co alloy/rGO composites increases at first and then decreases with increasing sintering temperature. The cycle performance of the Sn-Co alloy/rGO composite sintered at 450 °C is the best, with a discharge capacity of 508 mAh·g−1 after 100 cycles. A long cycle test of the Sn-Co alloy/rGO composite sintered at 450 °C was carried out at 200 mA·g−1, and the results are shown in Figure 3e. It can be seen that the capacity of the Sn-Co alloy/rGO composite decreases from 622 mAh·g−1 to 443 mAh·g−1 after 500 cycles, and the capacity retention rate is 71.2%.
Therefore, the Sn-Co alloy/rGO composites prepared by chemical reduction followed by sintering treatment show good cycle performance and, in particular, a high first-cycle Coulomb efficiency compared with the literature, as shown in Table S2.

Electrochemical Performance

In order to investigate the interface properties of the electrode materials, the AC impedance spectra of the Sn-Co alloy/rGO composites sintered at different temperatures were measured, and the results are shown in Figure 4a. The internal resistance Rs, the impedance of lithium-ion diffusion through the SEI, RSEI, and the charge transfer impedance between the active material and the electrolyte, Rct, obtained by fitting an equivalent circuit model [41], are recorded in Table 1.

Figure 4. The AC impedance spectra (a) and fitting curves (b) of the Sn-Co alloy/rGO composites; the diffusion direction (c) and diffusion energy barrier (f) for pure Sn; the diffusion direction (d) and diffusion energy barrier (g) for CoSn2 intermetallics; the diffusion direction (e) and diffusion energy barrier (h) for CoSn intermetallics.

The diffusion coefficient of lithium ions can be calculated by the formula [42]

D_Li = (1/2) * [ (V_m / (F * S * σ)) * (dE/dx) ]^2 ,

where V_m is the molar volume (cm3·mol−1), F is the Faraday constant (9.6485 × 10^4 C·mol−1), S is the electrode surface area (cm2), σ is the Warburg coefficient, which is the slope of the fitting line in Figure 4b, and dE/dx is the slope of the Coulomb titration line. From the data in Table 1, it can be seen that with increasing sintering temperature, the Rs and RSEI of the Sn-Co alloy/rGO composites decrease gradually, while Rct decreases at first and then increases, which is mainly due to the formation of the CoSn2 intermetallics. However, when the sintering temperature exceeds 500 °C, the grains grow easily, which is not conducive to the diffusion of lithium ions in the solid phase and finally leads to a decrease of the diffusion coefficient.
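A numeric sketch of this formula; every input value below is an illustrative placeholder rather than a fitted value from Table 1 or Figure 4b.

```python
F = 9.6485e4      # Faraday constant, C/mol
V_M = 16.3        # molar volume, cm^3/mol (hypothetical)
S = 1.54          # electrode surface area, cm^2 (hypothetical)
SIGMA = 25.0      # Warburg coefficient, Ohm s^-1/2 (hypothetical slope)
DEDX = 1.2        # slope of the Coulomb titration line, V (hypothetical)

# D_Li = (1/2) * [ V_m/(F*S*sigma) * dE/dx ]^2, in cm^2/s
d_li = 0.5 * (V_M / (F * S * SIGMA) * DEDX) ** 2
print(f"D_Li = {d_li:.2e} cm^2/s")   # ~1.4e-11 cm^2/s for these inputs
```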
In order to clarify the lithium-ion diffusion in the pure Sn, CoSn2 and CoSn phases of the Sn-Co alloy/rGO composites on the atomic scale, the diffusion energy barriers of lithium atoms in these phases were calculated by first principles. The diffusion directions and diffusion energy barriers are shown in Figure 4c-h. It can be seen that the diffusion energy barriers of lithium ions in pure tin and in the CoSn2 and CoSn intermetallics are anisotropic. For example, the diffusion energy barrier of lithium ions along the Z-axis in pure tin is the lowest compared with the X- and Y-axes, only 0.11 eV, which is close to the energy barrier of Sn (0.04 eV) reported in the literature [43]. The diffusion energy barriers of lithium ions in CoSn2 intermetallics along the X-axis and Z-axis are relatively low, at 1.72 eV and 1.83 eV, respectively. For CoSn intermetallics, the diffusion energy barrier of lithium ions along the Y-axis is the lowest, but its value is as high as 3.11 eV. It can be concluded that the diffusion energy barrier of lithium ions increases in the following order: pure Sn < CoSn2 < CoSn. Hence, the addition of cobalt to tin can effectively improve the cycle performance, but excess cobalt will significantly hinder the diffusion kinetics of lithium atoms in the Sn-Co alloy.

Preparation of Materials

Synthesis of rGO. Graphene oxide (GO) was prepared by an improved Hummers method [44]. Firstly, 5 g NaNO3 and 230 mL concentrated H2SO4 were added to a 500 mL flask in an ice water bath. When the solution temperature dropped to 0 °C, 10 g graphite was added to the flask and stirred for 15 min. Then, 40 g KMnO4 was added to the flask within 30 min and stirred at 10~15 °C for 90 min. The solution was then heated to 35~40 °C and stirred for 30 min. Third, 700 mL of deionized water was added to the flask within 30 min, keeping the temperature of the solution between 90 °C and 95 °C by controlling the rate of water addition. Then, H2O2 with a mass fraction of 5 wt.% was added to the flask until no more bubbles appeared, and the mixture was filtered while hot. Finally, the filter cake was dissolved in 5 wt.% HCl solution, stirred evenly and filtered (repeated 3~4 times), and then washed to neutral with deionized water to obtain the required GO.

Synthesis of Sn-Co alloy/rGO composites. Firstly, the Sn-Co alloy/rGO composite precursors were prepared by chemical reduction as follows: 22.6 g stannous chloride (SnCl2·2H2O) and 23.8 g CoCl2·6H2O were fully dissolved in 200 mL deionized water, then 1 g GO was added to the solution and ultrasonicated for 2 h; a certain amount of sodium citrate and polyvinylpyrrolidone (PVP) was then added to the solution, which was dispersed uniformly by 30 min of ultrasonication. The resulting solution was labeled solution A. Subsequently, 0.15 g NaOH was dissolved in 50 mL deionized water, then 1 g NaBH4 was added as a reducing agent, and the obtained solution was labeled solution B. In an ice water bath, solution B was slowly added to solution A with stirring, and stirring was continued for 2 h; the mixture was then filtered and washed with water until neutral pH, and the obtained powder was dried at 60 °C for 24 h in vacuum. Finally, the dried product was sintered at 400~600 °C for 2 h in a tube furnace under argon protection, and the target Sn-Co alloy/rGO composites were obtained.

Materials Characterization

The crystal structure of the composites was characterized by X-ray diffraction (XRD, Shimadzu XRD-6100, Cu Kα radiation, λ = 0.1542 nm). The surface morphology and microstructure of the materials were observed by scanning electron microscopy (SEM, JEOL JSM-7500F) and transmission electron microscopy (TEM, JEOL JEM-2010) with an energy dispersive X-ray spectrometer (EDS).

Electrochemical Measurements

The working electrode was prepared from the active material, conductive agent (acetylene black) and polyvinylidene fluoride (PVDF) in a mass ratio of 85:5:10, with an active-material loading of ~3.0 mg/cm2.
2032 coin cells for the lithium storage performance tests were assembled using a metallic Li sheet (1.0 mg) as the counter electrode, a Celgard 2400 polypropylene membrane as the separator, and 0.04 mL of 1.0 mol/L LiPF6 in EC+DMC+DEC (volume ratio 1:1:1) as the electrolyte. Galvanostatic discharge-charge (GCD) tests were performed on a battery test system (Sunway, BTS-5 V 10 mA) in the voltage range 0.01~2.00 V (vs. Li+/Li). For the long cycle test, three coin cells were each charged and discharged for 100 cycles, and the capacity retention between the maximum and minimum values was taken as the result of the long cycle performance. The cyclic voltammetry (CV) curves were recorded using an electrochemical workstation (Chenhua, CHI604E) at a scan rate of 20 mV·s−1 in the voltage range 0.01~2.00 V. The electrochemical impedance spectroscopy (EIS) measurements were also carried out on the CHI604E electrochemical workstation with frequencies ranging from 100 kHz to 10 mHz.

Theoretical Calculation

According to the phase composition of the Sn-Co alloy/rGO composite, the charge density and density of states of pure Sn metal and of the CoSn2 and CoSn intermetallics in their lowest-energy configurations were calculated using the CASTEP software package [45], a plane-wave pseudopotential method based on density functional theory, with the spin-polarization effect taken into account. The generalized gradient approximation (GGA) of Perdew-Burke-Ernzerhof (PBE) [46] was employed for all the calculations. The electronic wave functions were expanded in a plane-wave basis set with a kinetic energy cutoff of 500 eV, and the interactions between ionic cores and valence electrons were described by ultrasoft pseudopotentials [47]. For pure Sn metal, CoSn2 and CoSn intermetallics, k-point meshes [48] of 8 × 8 × 16, 8 × 8 × 7 and 11 × 11 × 12, respectively, were chosen for optimizing the geometric configurations and analyzing the electronic properties. The transition states (TS) and barriers in (2 × 2 × 2) supercells of pure Sn metal and of the CoSn2 and CoSn intermetallics were calculated using the nudged elastic band (NEB) method [49].
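A minimal ASE-based sketch of such a NEB barrier calculation. Everything beyond the PBE functional and the 500 eV cutoff is an assumption: relaxed endpoint structures (with stored energies) are taken to exist in initial.traj and final.traj, a CASTEP installation is taken to be available, and the image count and file names are illustrative.

```python
from ase.io import read
from ase.neb import NEB                 # ase.mep.NEB in newer ASE releases
from ase.optimize import BFGS
from ase.calculators.castep import Castep

# Relaxed Li-in-supercell endpoints; their energies are read from the files.
initial, final = read("initial.traj"), read("final.traj")
images = [initial] + [initial.copy() for _ in range(5)] + [final]
neb = NEB(images, climb=True)           # climbing image locates the saddle
neb.interpolate()

for image in images[1:-1]:
    calc = Castep()
    calc.param.xc_functional = "PBE"
    calc.param.cut_off_energy = 500     # eV, as in the text
    image.calc = calc

BFGS(neb, trajectory="neb.traj").run(fmax=0.05)
energies = [img.get_potential_energy() for img in images]
barrier = max(energies) - energies[0]   # diffusion energy barrier, eV
```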
Data sharing is not applicable to this article.
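To convey how strongly the computed NEB barriers suppress lithium mobility, a simple Arrhenius estimate can be attached to the values quoted above (1.72 and 1.83 eV for CoSn2, 3.11 eV for CoSn). The sketch below is our illustration, assuming a common attempt frequency for every path, which is a strong simplification of the actual diffusion kinetics.

```python
import math

K_B = 8.617333e-5   # Boltzmann constant, eV/K
T = 300.0           # K, assumed room temperature

# Lowest diffusion barriers quoted in the text (eV)
barriers = {
    "CoSn2 (X-axis)": 1.72,
    "CoSn2 (Z-axis)": 1.83,
    "CoSn (Y-axis)": 3.11,
}

# Relative Arrhenius hop rates, normalised to the CoSn2 X-axis path and
# assuming the same attempt frequency for every path.
ref = barriers["CoSn2 (X-axis)"]
for path, ea in barriers.items():
    rel = math.exp(-(ea - ref) / (K_B * T))
    print(f"{path}: Ea = {ea:.2f} eV, relative rate ~ {rel:.1e}")
# The 1.39 eV gap between CoSn2 and CoSn corresponds to a rate penalty of
# more than twenty orders of magnitude at room temperature, illustrating
# why excess cobalt hinders lithium diffusion.
```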
COMPONENTS OF THE BUDGET SYSTEM OF UKRAINE AS FACTORS OF FINANCIAL AND ECONOMIC SECURITY

The purpose of this study is to analyze the components of the budget system of Ukraine as factors of financial and economic security in order to identify negative trends in the context of the implementation of decentralization reform. It is argued that research in this direction should start with an analysis of the conceptual apparatus and the structural relationships between categories. At the top level of the hierarchy is the category of national security of Ukraine, which, according to current legislation, means the protection of state sovereignty, the constitutional order and other national interests of the country from real and potential threats. The category of financial and economic security is also often used in the scientific literature. Given the above classification, in this case we are talking about the financial security of the country as a factor of economic security. Methodology. To stimulate economic development, the practice of modern budget regulation provides for a planned deficit, which is a source of local and public debt. Depending on the areas of its financing, a distinction is made between domestic and foreign, local and national debts. The relationship between the above indicators, which determines the level of budget security of the country, one of the most important factors of financial stability, was identified in this work. Results. According to the results of the analysis, practical recommendations on the budget policy of Ukraine as a factor of financial and economic security should take into account the following steps: against the background of a growing social burden on the budget, it is necessary to continue the redistribution of budget funds in favour of the regions, which will increase their level of financial autonomy and reduce the amount of transfer payments; to pursue a strict restriction policy to prevent the growth of the state budget deficit and an uncontrolled increase in debt; the problem of pension provision increases the burden on the state budget every year, and it is necessary to take measures to create a cumulative system of state and non-state pension insurance. Practical implications. The practical implications show that in 2016 the public debt of the consolidated budget of Ukraine reached a record 81% of GDP. However, effective economic and budgetary policy made it possible to reduce it to 50.3% by 2019, which was a positive development; moreover, the share of external debt was 29.2%. The high budget deficit in 2020 will lead to an increase in debt to 58.7% of GDP, which offsets the previous positive changes. It is determined that at the beginning and at the end of the study period the expenditures of the pension system of Ukraine were equal to about 10% of GDP. At the same time, financing from own revenues decreased from 8% to 6%, which is negative. The situation became most critical after 2013, when this indicator began to decline rapidly, increasing the burden on the state budget. Value/originality. The value of the work lies in an analysis of the components of the budget system of Ukraine as factors of financial and economic security which, in contrast to existing analyses, is based on the need for further implementation of the decentralization reform and allows practical recommendations for budget regulation to be developed.
Introduction

At the beginning of a recession, issues related to the mechanisms of forming a strategy for ensuring the financial and economic security of the country always gain special relevance in research. In this context, security means the stability of a dynamic system within acceptable limits. On the other hand, security means the ability of the economic system to withstand both external and internal negative manifestations. Research in this area should begin with an analysis of the conceptual apparatus and the structural relationships between categories. At the top level of the hierarchy is the category of national security of Ukraine, which, according to current legislation (On the National Security of Ukraine), means the protection of state sovereignty, the constitutional order and other national interests of the country from real and potential threats. Issues of national security risks have been studied by such scientists as V. Bilous, A. Buteiko, Yu. Nikitin, O. Kostenko, H. Sytnyk and others (Nikitin, 2015). Regarding the provision of sustainable socio-economic development, these issues belong to the sphere of economic security (Lekar, 2012). V. Heiets, V. Honcharova, S. Lekar, V. Muntian, O. Skoruk and S. Shkarlet were engaged in the development of its essence, its constituent elements and the problems of public administration. The issue of methodological assessment of the level of economic security of the state has also received attention at the legislative level (Methodical recommendations for calculating the level of economic security of Ukraine). According to the current methodological recommendations, it includes production, demographic, energy, foreign economic, investment and innovation, macroeconomic, food, social, and financial security. The category of financial and economic security is also often used in the scientific literature. Given the above classification, in this case we are talking about the financial security of the country as a factor of economic security. Its essence, evaluation methods and role in the context of national interests were studied in the works of O. Baranovskyi, A. Kalantai, O. Melykh, A. Sukhorukov and others (Melykh, 2013; Kalantai, 2012). The remit of the country's financial system includes monetary policy, the stable functioning of the banking and non-banking financial sectors, ensuring the financial stability of public finances on the basis of balanced revenues and expenditures of budgets at all levels, and more. Given the above, the purpose of this study is to analyze the components of the budget system of Ukraine, as factors of financial and economic security, in order to identify negative trends in the context of decentralization reform.

Components of the budget system

According to the decentralization reform, which has been actively implemented in the budget system of Ukraine from 2015 to the present day, the main components of the budget system are the local budgets of amalgamated territorial communities of settlements, cities and regions, and the State Budget. According to the Budget Code (Budget Code of Ukraine), their sources of income are tax and non-tax revenues. Among tax revenues, local budget revenues do not include import duties and value added tax. However, they have revenues from local taxes and fees that do not belong to the State Budget. The situation is similar with non-tax revenues from the NBU, which can be included only in State Budget revenues.
The goal of local self-government is the development of regions and common social standards of living throughout Ukraine. This goal is pursued through generally accepted standards of budget provision per person living in a particular region. On the other hand, the volume and structure of expenditures of the State Budget of Ukraine are determined by the socio-economic development strategy adopted at the state level. The absolute difference between the expenditures and revenues of budgets at all levels determines the volume of their deficit or surplus. In relative terms, at the local level the absolute difference obtained is compared with the value of gross regional product, at the state level with the volume of gross domestic product, and so on. Given that each region has its own unique natural and climatic conditions, structure and location of productive forces and level of economic development, regions differ significantly in terms of budget revenues. This means that today most regions do not have the opportunity to provide the necessary level of social living standards at their own expense. In order to equalize uneven territorial development and cover part of the deficit of local budgets, intergovernmental transfers are made to them from the State Budget free of charge and irrevocably. In their economic essence, they are equalization grants.

Indicators of budget regulation efficiency

To stimulate economic development, the practice of modern budget regulation provides for a planned deficit, which is a source of local and public debt. Depending on the areas of its financing, a distinction is made between domestic and foreign, local and national debts. The ratio between the above indicators determines the level of budget security of the country, which is one of the most important factors of financial stability. Table 1 shows the results of calculations of indicators of the efficiency of budget regulation according to data for 2004-2019. The decentralization reform provides for greater financial autonomy, so that the regions are able to solve socio-economic problems on the ground. Column (2) of Table 1 shows that from 2004 to 2014 the volume of revenues to local budgets in relative terms showed chaotic dynamics. During 2015-2019, their share in the country's GDP had a steady upward trend and increased from 6.1% to 7.6%. The share of local budget expenditures provided by transfers from the State Budget of Ukraine in 2015 was a record 59.1%. By 2019, this figure had decreased to 46.4%. Thus, it can be stated that the system of public financial management is gradually undergoing changes aimed at developing the autonomy of the regions. However, the financial dependence of local communities on the center remains quite high today and needs further changes in this direction. From the point of view of budget security, the data for the first three quarters of 2020 need special attention. Here the volume of revenues to local budgets amounted to 7.8% of GDP. At the same time, intergovernmental transfers were only 3.9%, which is much less than in previous years, according to column (3) of Table 1. This indicates a saving of money by the central government and the need to reduce costs on the ground. Despite the inefficiency of public administration, Ukraine remains one of the leaders among Western European countries in terms of the influence of the public sector on the redistribution of GDP, according to column (4) of Table 1.
As we can see, over the last decade, government consolidated budget expenditures have remained almost unchanged at about 34-35% of GDP. With an aging population and increased spending on social and pension benefits, this indicator will tend to grow. That is why, at this stage, it is extremely important:
- firstly, to prevent the growing role of state regulation in the distribution of the public product;
- secondly, to continue the transfer of powers and financial resources to local communities, in accordance with the decentralization reform.
The stimulating role of the budget deficit is that the return on the efficient use of borrowed funds may exceed the cost of raising them. That is why the dynamics of this indicator is given in column (5). In world practice, the acceptable level of budget deficit is 2-3% of GDP. As can be seen, in 2015-2019 Ukraine almost met these restrictions. However, in 2020 the consolidated budget deficit reached its highest level since 2004, at 8.4% of GDP. Inefficient use of borrowed funds leads to the problem of public debt, the dynamics of which is shown in Figure 1. As can be seen from Figure 1, in 2016 the public debt of the consolidated budget of Ukraine reached a record 81% of GDP. However, effective economic and budgetary policy made it possible to reduce it to 50.3% by 2019, which was positive. Moreover, the share of external debt was 29.2%. The high budget deficit in 2020 will lead to an increase in debt to 58.7% of GDP, which offsets the previous positive changes.

Consideration of the budget of the pension fund

The budget of the pension fund needs special attention, as a significant part of State Budget expenditures is used each year to cover its deficit. Prolongation of the working age is a forced measure associated with a gradual increase in life expectancy. According to the WHO, life expectancy in Ukraine is 72.5 years: 67.6 years for men and 77.1 years for women. On the other hand, the constant growth of the actual subsistence level and the need to adjust the average level of pension provision per capita have increased the burden on the pension fund. Thus, in the prices of June 2020, the actual subsistence level was 3,974 UAH. The consequence of these trends is the formation of a budget deficit of the pension fund. According to 2019 data, the share of own revenues in its total expenditures was only 55.7%. The main source of financing the deficit is the State Budget of Ukraine. The dynamics of these indicators for 2004-2019 is shown in graphical form in Figure 2. In order to exclude the inflation factor, bringing total expenditures to the prices of the base year allowed us to calculate the average annual growth rate of this indicator, which was equal to +4.1%. In fact, this means that real pension insurance payments, in comparable prices, increased 1.8 times from 2004 to 2019. At the same time, the pension fund's own income increased on average by +1.8% annually, or 1.3 times over the entire period. This increase in the deficit was offset by the state budget, whose share in the total expenditures of the pension fund during the study period increased from 16.5% to 44.1% and in 2019 amounted to 182,270 million UAH. Preservation of these trends in the future carries significant risks for ensuring the financial and economic security of the state. To confirm this, it should be noted that pension expenditures are the most important item of state budget expenditures.
In recent years, their share has ranged from 22% to 30%. That is why one of the conditions for the stability of such a system is economic growth at a faster pace, which will maintain this ratio. To this end, we have built a graph of the dynamics of the expenditures of the pension fund by source of income, as a percentage of gross domestic product (Figure 3). As can be seen, at the beginning and at the end of the study period, the expenditures of the pension system of Ukraine were about 10% of GDP. At the same time, financing from own revenues decreased from 8% to 6%, which was negative. The situation became most critical after 2013, when this indicator began to decline rapidly, increasing the burden on the state budget. Summarizing the above, the problem of pension provision is directly related to the stability of the budget system of Ukraine as a factor of financial and economic security. It can be stated that the possibilities of the first-level pension system of Ukraine, based on the principles of solidary (pay-as-you-go) compulsory pension insurance, have by now been completely exhausted. That is why the urgent issue is the introduction of the second and third levels as soon as possible, which provide for the creation of a cumulative system of state and non-state pension insurance. On the other hand, accelerating economic growth can significantly reduce the burden on the state budget.

Conclusions

Thus, according to the results of the analysis, practical recommendations on the budget policy of Ukraine, as a factor of financial and economic security, should take into account the following steps:
1. Against the background of the increasing social burden on the budget, it is necessary to maintain the level of state redistribution of gross domestic product at the expense of its expenditures.
2. It is necessary to continue the redistribution of budget funds in favour of the regions, which will increase their level of financial autonomy and reduce the amount of transfer payments.
3. To pursue a strict restrictive policy to prevent the growth of the State Budget deficit and an uncontrolled increase in debt.
4. The problem of pension provision increases the burden on the State Budget every year. It is necessary to take measures to create a cumulative system of state and non-state pension insurance.
Economic growth will help to mitigate these budget problems and their impact on the country's financial security; recession will exacerbate them. Thus, the scientific novelty of this work is the analysis of the components of the budget system of Ukraine as factors of financial and economic security, which, in contrast to existing analyses, proceeds from the need for further implementation of the decentralization reform and allows practical recommendations for budget regulation to be developed.
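The growth-rate arithmetic quoted earlier for the pension fund (real payments up 1.8 times over 2004-2019 at about +4.1% per year; own income up 1.3 times at about +1.8% per year) follows from the standard compound-growth formula g = (V_end/V_start)^(1/n) - 1. The sketch below is our illustration of that check and is not part of the original analysis.

```python
# Compound annual growth rate implied by a total growth ratio over n years.
def cagr(ratio: float, years: int) -> float:
    return ratio ** (1.0 / years) - 1.0

n = 15  # 2004 -> 2019

# Figures quoted in the text: real payments grew 1.8x, own revenues 1.3x.
print(f"pension payments: {cagr(1.8, n):.1%} per year")  # ~4.0%, vs +4.1% quoted
print(f"own revenues:     {cagr(1.3, n):.1%} per year")  # ~1.8%, as quoted
```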
Sensitivity of remotely sensed trace gas concentrations to polarisation

Current and proposed space missions estimate column-averaged concentrations of trace gases (CO2, CH4 and CO) from high resolution spectra of reflected sunlight in absorption bands of the gases. The radiance leaving the top of the atmosphere is partially polarised by both reflection at the surface and scattering within the atmosphere. Generally, the polarisation state is unknown and could degrade the accuracy of the concentration measurements. The sensitivity to polarisation is modelled for the proposed geoCARB instrument, which will include neither polarisers nor polarisation scramblers to select particular polarisation states from the incident radiation. The radiometric and polarimetric calibrations proposed for geoCARB are outlined, and a model is developed for the polarisation properties of the geoCARB spectrographs. This model depends principally upon the efficiencies of the gratings for polarisations parallel and perpendicular to the rulings of the gratings. Next, an ensemble of polarised spectra is simulated for geoCARB observing targets in India, China and Australia from geostationary orbit at longitude 110° E. The spectra are analysed to recover the trace gas concentrations in two modes, the first denied access to the polarimetric calibration and the second with access. The retrieved concentrations using the calibration data are almost identical to those that would be obtained with polarisation scramblers, while the retrievals without calibration data contain outliers that do not meet the accuracies demanded by the mission.

Introduction

The Greenhouse gases Observing SATellite (GOSAT) launched by the Japan Aerospace Exploration Agency estimates column-averaged concentrations of CO2 and CH4 from high resolution spectra of reflected sunlight in absorption bands of CO2, CH4 and O2. Similarly, NASA's second Orbiting Carbon Observatory (OCO-2) estimates CO2 from CO2 and O2 spectra. While GOSAT measures two orthogonal polarisations, OCO-2 measures only one. In contrast, geoCARB (Sawyer et al., 2013; Mobilia et al., 2013; Kumer et al., 2013; Polonsky et al., 2014; Rayner et al., 2014), proposed to measure CO2, CH4 and CO from a geostationary platform, will have inherent sensitivity to polarisation, principally through the diffraction gratings, but will not have any hardware (like GOSAT) or adopt any flight manoeuvres (like OCO-2) to select specific polarisations. The question arises as to whether the sensitivity of the instrument to polarisation causes significant error in retrieved gas concentrations.

This paper uses the following methodology to address this issue. First, in Sect. 2 a model is developed for the polarising properties of the geoCARB spectrographs. The model depends on parameters characterising the optics and the potentially non-linear responses of the detectors; the procedure by which these parameters will be determined during pre-flight calibration of geoCARB is outlined in Sect. 3. A simplified model that requires only the absolute efficiencies of the gratings is described in Sect. 4.
Next a numerical simulator is flown over a model world to generate an ensemble of polarised spectra that captures much of the variability seen in the real world. For each spectrum in the ensemble, the Stokes vector is computed at the entrance aperture of geoCARB above the atmosphere, and the intensities falling upon the detectors are simulated using the simplified model. For these simulations, described in Sect. 5, geoCARB is assumed to be at longitude 110° E and three frames of data are considered. The first is centred on Agra in India (27.18° N, 78.02° E), and consists of 1001 pixels observed simultaneously in the 4 s integration time of geoCARB. The pixels are aligned approximately north-south, and include ocean in the south and the Himalaya in the north. The second and third frames, similarly consisting of 1001 pixels, are centred on Wuhan in China (30.35° N, 114.17° E) and Alice Springs in Australia (23.42° S, 133.52° E). In order to include a variety of illumination and observation geometries, each frame is sampled three times per day, the first 3 hours before solar noon, the second at solar noon, and the third 3 hours after. Four days are simulated close to the solstices and equinoxes.

In Sect. 6 the simulated signals, computed taking into account the polarising properties of the surface, clouds, aerosols and molecules, are passed to the inversion algorithm that estimates the column-averaged concentrations of CO2, CH4 and CO, respectively denoted XCO2, XCH4 and XCO. The inversion algorithm is denied access to the polarising properties of the surface and the atmosphere. Instead it assumes that the surface is Lambertian and non-polarising, but it generates polarising elements internally as it allocates and distributes clouds and aerosols while attempting to match its prediction of the intensity incident upon the detector with the "true" intensity from the simulator. The source of polarisation within the retrieval algorithm is via scattering by clouds, aerosols and molecules. Statistics of the differences between the retrieved and true concentrations of CO2, CH4 and CO are analysed in Sect. 7.

The polarisation sensitivity of the geoCARB spectrometers imposes strong, wavelength dependent signatures upon the spectra, which raises the question as to whether such signatures might cause unacceptably large errors in retrieved concentrations of CO2, CH4 and CO. Two experiments are conducted to assess this risk.

In the first, the inversion algorithm is denied access to the polarisation model of the instrument, thereby forcing it to assume that the measured signal represents the intensity at the top of the atmosphere. Although there is some degradation of accuracy for the retrieved concentrations of CO2, CH4 and CO, the errors are not as large as might be expected, because the retrieval algorithm tries to attribute the wavelength signatures caused by the polarisation sensitivity of the gratings to the wavelength dependence of other geophysical parameters, especially the surface albedo. As the objective of the geoCARB mission is to measure trace gas concentrations, and not to measure albedos, the outcome of this experiment is marginally acceptable.
In the second experiment, the radiometric and polarimetric responses of geoCARB are assumed to be calibrated before launch, and the results are made available to the retrieval algorithm. In this case geoCARB returns trace gas concentrations with accuracy equal (on average) to that of a similar instrument equipped with polarisation scramblers. The latter ensure that the intensity reaching the detectors is the same (apart from a scaling factor) as the intensity arriving at the scan mirror. Thus, provided pre-flight calibration characterises both the radiometric and polarimetric responses of geoCARB, polarisation scramblers should not be needed. This is a fortunate result, because scramblers almost certainly would degrade the spatial resolution and increase both the instrument complexity and cost.

Polarisation model

The purpose of the polarisation model is to predict the signal at the detector from the Stokes vector of radiation arriving at the entrance aperture of geoCARB. Despite the complexity of the optical layout of geoCARB, shown in Fig. 1, in order to formulate the polarisation model it suffices to divide the optics of geoCARB into three logical assemblies, the first two being the moving scan mirrors (north-south and east-west), and the third being the fixed telescope and grating spectrograph. The division is shown schematically in Fig. 2, which also indicates the coordinate system used by geoCARB. All quantities in the polarisation model depend on wavelength, but the dependence is not shown explicitly in order to simplify the notation.

The transformation of the Stokes vector S = (I, Q, U, V)^T incident on the north-south scan mirror to the Stokes vector arriving at the detector is described by a Mueller matrix M,

S_detector = M S, with M = M3 M2 M1 R0. (1)

The factor R0 rotates the plane of reference for polarisation from that used by the radiative transfer model to the reflection plane of the north-south scan mirror. It has the form of the standard Stokes rotation matrix,

R0 = R(η0), with R(η) = [[1, 0, 0, 0], [0, cos 2η, sin 2η, 0], [0, -sin 2η, cos 2η, 0], [0, 0, 0, 1]],

where η0 is the angle between the two planes, and generally η0 ≠ 0. For the radiative transfer calculation, the reference plane for nadir viewing contains the ray from the sun to the target and the normal at the target. For non-nadir viewing, the normal and the ray from the target to the satellite are used. The rotation R0 is essentially a geometric quantity, and the degree of polarisation is preserved by the rotation.

The factor M1 represents the north-south scan mirror. It has the form

M1 = R(η1) B(φ1) A(p1, q1),

where

A(p, q) = (1/2) [[p^2 + q^2, p^2 - q^2, 0, 0], [p^2 - q^2, p^2 + q^2, 0, 0], [0, 0, 2pq, 0], [0, 0, 0, 2pq]]

accounts for Fresnel reflection at the mirror surface, with p = |r∥| and q = |r⊥|, where r∥ and r⊥ are the reflection coefficients for linearly polarised light parallel and perpendicular to the plane of reflection. The factor B(φ1) accounts for the phase shift caused (principally) by the optical coating of the mirror. The matrix B has the general form

B(φ) = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, cos φ, sin φ], [0, 0, -sin φ, cos φ]],

where the angle φ is the advancement of the phase of light linearly polarised parallel to the reflection plane relative to light linearly polarised perpendicular to the reflection plane. Because the matrices A(p, q) and B(φ) commute, the order in which they are written is immaterial. The final factor R(η1) accounts for the rotation through angle η1 between the reflection planes of the north-south and east-west scan mirrors. The reflection coefficients r∥ and r⊥ and the phase shift φ are functions of wavelength and the angle of incidence, which must be characterised during radiometric and polarimetric calibration.
The factor M2 also has the form

M2 = R(η2) B(φ2) A(p2, q2),

where now p2, q2 and φ2 refer to properties of the east-west scan mirror. The angle η2 appearing in the rotation R(η2) is the angle between the reflection plane of the east-west scan mirror and the reference plane for the spectrograph. The latter is defined by the optic axis and the projection of the long axis through the spectrograph slit onto the east-west scan mirror.

Finally, the factor M3 in Eq. (1) describes the telescope and grating spectrograph assembly. Despite the optical complexity of the system, as indicated in Fig. 1, it may be represented by a single matrix because the assembly is fixed,

M3 = [[m00, m01, m02, m03], [m10, m11, m12, m13], [m20, m21, m22, m23], [m30, m31, m32, m33]], (9)

whose elements are to be determined via calibration.

Let S0 denote the Stokes vector incident on the north-south scan mirror, as computed by the radiative transfer model. Let S1, S2 and S3 similarly denote the Stokes vectors immediately before the east-west scan mirror, the telescope/spectrograph assembly and the detector. During pre-flight calibration of geoCARB, the reflection coefficients and phase shifts, pi, qi and φi, associated with the scan mirrors will be determined as functions of wavelength and angle of incidence, so the matrices A and B will be known. Furthermore, because the geometry of observation will be known, so too will the angles η0, η1 and η2 appearing in the rotation matrices. Thus, the Mueller matrices R0, M1 and M2 associated with the scan mirrors, and hence the Stokes vectors S1 and S2, can be calculated. We assume that the detector responds only to the intensity incident upon its surface. Because the intensity reaching the detector is

I3 = m00 I2 + m01 Q2 + m02 U2 + m03 V2,

where I2, Q2, U2 and V2 may be considered known, only the elements m00, m01, m02 and m03 of the first row of the Mueller matrix for the telescope/spectrograph assembly must be determined by the pre-flight polarimetric calibration. How this will be done is outlined in the next section.

The output potential v from the detector is assumed to be a (mildly) non-linear function of the intensity incident upon the detector, v = g(I3). For example, the function g might be a polynomial in the intensity, such as g(I) = g0 + g1 I + g2 I^2, where the coefficients g0, g1 and g2 are to be determined during the pre-flight radiometric calibration. In summary, the polarisation model requires the following:

1. geometric calculations to provide the rotation angles η0, η1 and η2;
2. optical properties of the scan mirrors;
3. the elements m00, m01, m02 and m03 of the first row of the Mueller matrix for the telescope/spectrograph assembly;
4. parameters (such as g0, g1 and g2) that characterise the response of the detector to the intensity incident upon it.

Once these quantities have been specified, the calculation reduces to a simple matrix transformation of the Stokes vector incident upon the north-south scan mirror.
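For readers who wish to experiment with the model, the chain of transformations above is easy to prototype. The sketch below is our illustration, not flight software: it builds the rotation, reflection and retardation Mueller matrices in the textbook forms given above, with invented values for p, q, φ, the rotation angles and the first row of M3, and propagates a Stokes vector through to the detector intensity.

```python
import numpy as np

def rotation(eta):
    """Mueller matrix R(eta) rotating the polarisation reference plane."""
    c, s = np.cos(2 * eta), np.sin(2 * eta)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]], dtype=float)

def mirror(p, q, phi):
    """Mirror Mueller matrix B(phi) A(p, q) for amplitude reflectances
    p (parallel) and q (perpendicular) and phase advance phi."""
    a = 0.5 * np.array([[p*p + q*q, p*p - q*q, 0, 0],
                        [p*p - q*q, p*p + q*q, 0, 0],
                        [0, 0, 2*p*q, 0],
                        [0, 0, 0, 2*p*q]])
    b = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, np.cos(phi), np.sin(phi)],
                  [0, 0, -np.sin(phi), np.cos(phi)]])
    return b @ a  # A and B commute, so the order is immaterial

# Assumed (made-up) optical parameters for the two scan mirrors
eta0, eta1, eta2 = 0.3, 0.1, 0.2            # rotation angles, rad
M1 = rotation(eta1) @ mirror(0.98, 0.96, 0.05)
M2 = rotation(eta2) @ mirror(0.98, 0.96, 0.05)

S0 = np.array([1.0, 0.10, 0.05, 0.0])       # Stokes vector at the aperture
S2 = M2 @ M1 @ rotation(eta0) @ S0          # Stokes vector entering the telescope

# Only the first row of M3 matters for the detected intensity (assumed values)
m_row = np.array([1.0, 0.07, 0.0, 0.0])     # m00, m01, m02, m03
I3 = m_row @ S2
print(f"intensity at the detector: {I3:.4f}")
```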
Radiometric and polarimetric calibration

During radiometric and polarimetric calibration, the north-south and east-west scan mirrors will be set at their central positions (θns = π/4 and θew = π/4) so that the instrument points to nadir along the negative u axis, as shown schematically in Fig. 3. Unpolarised light from a well-calibrated integrating sphere will be directed along the optic axis onto the scan mirror through a linear polariser that can be rotated about the optic axis through angle θ. For the calibration configuration, the plane used to define the incident Stokes vector is the u-w plane, which also is the plane of reflection for the north-south mirror.

The Stokes vector after reflection from the north-south mirror will be

S1 = M1 P(θ) S0,

where S0 = (I0, 0, 0, 0)^T is the Stokes vector for unpolarised light leaving the integrating sphere,

P(θ) = (1/2) [[1, c, s, 0], [c, c^2, cs, 0], [s, cs, s^2, 0], [0, 0, 0, 0]] (16)

is the Mueller matrix for the linear polariser inclined at angle θ, and c = cos 2θ and s = sin 2θ. The plane containing the incident and reflected beams at the east-west mirror is the v-w plane, perpendicular to the corresponding plane for the north-south mirror; therefore η1 = π/2, and a straightforward calculation yields S1. The Stokes vector leaving the east-west mirror and arriving at the entrance aperture of the telescope is

S2 = M2 S1. (19)

In the calibration configuration no rotation occurs between the east-west scan mirror and the telescope/spectrograph assembly, so η2 = 0 and the matrix R(η2) is the identity, and Eq. (19) simplifies accordingly. The intensity component of the Stokes vector incident upon the detector will then be

I3 = m00 I2 + m01 Q2 + m02 U2 + m03 V2, (21)

where the components of S2 = (I2, Q2, U2, V2)^T depend on the polariser angle θ and on the mirror parameters, with corresponding output potential v = g(I3) from the detector.

In practice the linear polariser will be set at angles θ1, θ2, ..., θn, and for each angle the output potential will be measured as the incident intensity I0 is stepped over the range likely to be encountered by geoCARB in space, I01, I02, ..., I0k. Thus, for angle θi and incident intensity I0j, there will be a corresponding output potential vij. Provided that the north-south and east-west scan mirrors have been characterised well, the nk measurements of vij will provide an over-determined system of equations for the elements m00, m01, m02 and m03 of the Mueller matrix as well as the parameters (such as g0, g1 and g2) that define the function g. Solution of the over-determined system in a least-squares sense will characterise both the polarimetric and radiometric sensitivity of the spectrograph from the entrance aperture of the telescope through to the output from the detector.

It is important to note the role played by the phase delays φ1 and φ2 in Eq. (21). If φ1 ≈ φ2, as is likely to be the case with similar coatings on the mirrors, then m03 will be difficult to determine because its coefficient in Eq. (21) will be close to zero. That might not be a serious problem in practice, because the surface and atmosphere generate very little circular polarisation. However, if necessary, a well-characterised retarder could be introduced to the calibration set-up between the integrating sphere and the linear polariser to ensure a significant component of circular polarisation, thereby leading to a more accurate determination of m03. These matters will be addressed during the phase A study for geoCARB.

Once geoCARB is in flight, the stability of the polarimetric calibration will be monitored using observations of sunglint in a manner similar to that devised for GOSAT by O'Brien et al. (2013).
Simplified configuration

In order to assess the polarisation sensitivity of geoCARB with information presently available, we consider a simplified (and idealised) configuration in which

- the mirrors are perfectly reflecting, so that pi = qi = 1, and the phase delays φ1 and φ2 are equal;
- the polarising properties of the telescope/spectrograph assembly are dominated by the grating;
- the intensity reflected from the grating when illuminated with plane-polarised light inclined at angle θ = π/4 to the rulings is the average of the intensities at θ = 0 and θ = π/2.

In practice, the last assumption requires that incident radiation linearly polarised parallel to the grating rulings should not produce any diffracted light linearly polarised perpendicular to the rulings, and vice versa. With these assumptions, Eq. (21) for the intensity arriving at the detector during calibration with the polariser at angle θ reduces to

I = (I0/2)(m00 + m01 cos 2θ + m02 sin 2θ), (26)

where for notational simplicity we have omitted the subscript from I3.

Polarimetric calibration

If we assume that the atmosphere generates little circular polarisation, then only three parameters are required to characterise the instrument, namely m00, m01 and m02. In principle only three measurements are needed to fix their values, which for definiteness we assume to be the responses I(1), I(2) and I(3) to unpolarised intensity I0 with the linear polariser at angles 0, π/4 and π/2. Substitution of these angles in Eq. (26) leads to

I(1) = (I0/2)(m00 + m01), I(2) = (I0/2)(m00 + m02), I(3) = (I0/2)(m00 - m01). (27)

The first and third equations yield

m00 = E∥ + E⊥ and m01 = E∥ - E⊥,

where the ratios E∥ = I(1)/I0 and E⊥ = I(3)/I0 are the absolute efficiencies of the grating for linearly polarised light parallel and perpendicular to the rulings. Thus, the coefficients m00 and m01 can be expressed simply in terms of the grating efficiencies measured by the manufacturer. Figure 4 shows the absolute efficiencies of the gratings, measured in the O2 A-band and the weak CO2 band, and predicted in the strong CO2 band and the CO band.

The last assumption concerns the sensitivity of the grating to the U component of the radiation incident upon it. The second of the relations in Eq. (27) shows that I(2) = (I0/2)(m00 + m02). Therefore, the requirement that I(2) should be the average of I(1) and I(3) forces m02 = 0, which completes the characterisation of the simplified spectrograph. Without this requirement, m02 could be determined from the measurement I(2).

In-flight operation

Once in flight, the intensity falling upon the detector of the simplified instrument in response to the Stokes vector S = (I, Q, U, V)^T at the top of the atmosphere will be simply

I = m00 I0 + m01 Q0, (32)

where (I0, Q0, U0, V0)^T = R(η0) S is the Stokes vector rotated into the reference frame of the instrument. The Stokes component U0 does not appear in Eq. (32) because m00 and m01 are the only non-zero Stokes coefficients. The angle between the reference planes used by the radiative transfer code and the instrument is η0; it is a purely geometric quantity that depends upon the orbit and the scan geometry. For example, Fig. 5 shows the angle η0 for pixels in the frames through Agra, Wuhan and Alice Springs.
The variation in η0 is small when the target is close to the longitude of geoCARB, but elsewhere can be large. If we define the equivalent unpolarised intensity

I = I0 + (H - V) Q0 / 2, (34)

where H and V are the grating efficiencies normalised so that H + V = 2, then Eq. (32) reduces to this form once the overall radiometric scale m00 has been absorbed by the radiometric calibration. Thus, the intensity reaching the detector for this idealised instrument is identical to that generated by unpolarised intensity incident upon the north-south scan mirror.

Pseudo measured spectra

An ensemble of spectra was generated for targets in frames passing through Agra, Wuhan and Alice Springs, as described in Sect. 1. Only land targets were selected for this study because generally the oceans are too dark at the geoCARB wavelengths. The meteorology at each target was based on forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF), interpolated to the time and location of each observation. Surface properties were derived from MODIS and POLDER, which respectively provided the bidirectional reflectance distribution function and polarising properties (Nadal and Breon, 1999). Clouds and aerosols were derived from CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations). The vertical profiles of CO2 in the simulator were derived from the Parameterised Chemical Transport Model (PCTM) (Kawa et al., 2004). For CO, the background profiles were drawn from the Measurements of Pollution in the Troposphere (MOPITT) mission (Deeter et al., 2003, 2007a, b). Profiles of CH4 were taken from a snapshot of the global CH4 distribution calculated with the TM5 chemical transport model (Krol et al., 2005). In each case, the profiles were interpolated to the times and locations of the geoCARB observations. The specific dates for the simulations were the 21st of March, June, September and December in 2012; the equinoxes and solstices were chosen to capture the seasonal dependence, and the only significance of the year 2012 is that data were already on hand for the geophysical variables; we expect similar results for other years. Three observations were simulated for each day, at local solar noon, 3 hours earlier and 3 hours later. Generally the methods were identical to those described by Polonsky et al. (2014), except that superimposed on the column concentrations of CO2, CH4 and CO were random variations drawn from gaussian distributions with standard deviations of 3.0, 0.1 and 0.01 ppm, respectively. The random variations were added simply to augment the parameter space sampled by the simulations. Similarly, the simulations were performed twice, once with both cloud and aerosol enabled and once with only aerosol, the aim being to generate a larger ensemble of "almost clear" scenes with which to test the sensitivity of the retrieval algorithm to polarisation. This approach is reasonable because moderately cloudy scenes are rejected by the algorithm.

Histograms of surface pressure and the column-averaged concentrations of CO2, CH4 and CO are shown in Fig. 7 for the ensemble of pixels in the frames over Agra, Wuhan and Alice Springs. Figure 8 presents histograms of the optical depth at the O2 A-band of cloud liquid water, cloud ice and aerosol. The histograms in blue represent the entire ensemble; those in red show the ensemble members that pass the post-processing filter.
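Under the simplified model, the calibration and in-flight steps reduce to a few lines of arithmetic: the grating efficiencies fix m00 and m01, and the detected signal is the combination I0 + (H - V)Q0/2 quoted for Experiment 2 below. The sketch that follows is our illustration with invented efficiency and Stokes values; it is not the geoCARB processing code.

```python
import numpy as np

# Grating efficiencies for light polarised parallel / perpendicular to the
# rulings (invented values; the real curves appear in Fig. 4).
eff_par, eff_perp = 0.92, 0.80

# Normalise so that H + V = 2; the overall throughput (eff_par + eff_perp)/2
# is absorbed by the radiometric calibration.
H = 2.0 * eff_par / (eff_par + eff_perp)
V = 2.0 * eff_perp / (eff_par + eff_perp)

# In-flight: rotate (Q, U) through the geometric angle eta0 into the
# instrument frame, then form the equivalent unpolarised intensity (Eq. 34).
I, Q, U = 1.00, 0.04, 0.02                 # assumed top-of-atmosphere Stokes terms
eta0 = np.deg2rad(25.0)                    # assumed reference-plane rotation
Q0 = Q * np.cos(2 * eta0) + U * np.sin(2 * eta0)
signal = I + 0.5 * (H - V) * Q0
print(f"H = {H:.3f}, V = {V:.3f}, detected signal = {signal:.4f}")
```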
For each target, all components of the Stokes vector S = (I, Q, U, V)^T were computed at the top of the atmosphere, with the reference plane for polarisation defined by the local normal and the direction to the satellite at the target. The spectral channels, their widths and the signal-to-noise ratios were as described for geoCARB by Polonsky et al. (2014). In particular, the instrument line shape functions were assumed to be independent of polarisation. The polarisation model for the idealised instrument was applied to the Stokes vector to calculate the intensity falling upon the detector. As shown earlier, the response of the detector is identical to that produced by unpolarised light at the entrance aperture with intensity given by Eq. (34). Because H and V depend strongly upon wavelength, the measured spectrum contains an artefact arising from the polarisation sensitivity of the gratings.

The Stokes vector S was computed using a three-step approach: calculate the exact contribution to S from first-order scattering (1OS); calculate the multiply scattered radiance I at the top of the atmosphere (Ims); calculate the contributions from second-order scattering to Q and U, as well as the polarisation corrections from second-order scattering to I (2OS). By combining the results of these calculations, the Stokes vector at the top of the atmosphere can be estimated reasonably accurately for nearly clear scenes (Natraj and Spurr, 2007). The 1OS and 2OS terms used code developed by Natraj and Spurr (2007). Calculation of the first-order component of I used the TMS (truncated multiple scattering) correction of Nakajima and Tanaka (1988), and all three first-order scattering terms include the direct beam scattered from the surface. The multiply scattered intensity term Ims is calculated using the successive orders of interaction (SOI) radiative transfer model (Heidinger et al., 2006) with slight updates for the infrared. The SOI model employs the delta-M phase function truncation technique of Wiscombe (1977) (O'Dell et al., 2006). Lastly, the technique of low-streams interpolation (LSI) developed by O'Dell (2010) was used to compute the Stokes vector on a 0.01 cm⁻¹ spectral grid; high accuracy, but widely spaced, radiances were interpolated to the fine spectral grid using a two-stream solver of the radiative transfer equation. Generally, in simulations of this type, random noise would be added to the unpolarised intensity in accordance with the noise model for geoCARB, and the resulting signal would be regarded as a measurement (or measured spectrum).

However, in this study random noise was not added, for the following reason. For every retrieval, differences between the true and retrieved values of the parameters can arise via many mechanisms, including the following:
1. differences between the absorption coefficients and radiative transfer models used for the forward simulation and for the retrieval algorithm;
2. the influence of the prior and algorithm controls, such as the stopping condition;
3. random noise added to the simulated spectra.
The last source is the best understood, and its magnitude can be quantified easily by the posterior uncertainties returned by the retrieval algorithm, the calculation of which uses the instrument signal-to-noise ratio. Furthermore, random noise in the spectra generally will not cause a bias, because the radiative transfer problem can be linearised in the vicinity of the true solution. Consequently, we can concentrate on the biases introduced by factors other than random noise (such as the first two items listed above). Since the model errors and the random noise (items 1 and 3) are statistically independent, including the effects of random noise simply widens the bias distribution by the width of the random uncertainty. As the focus of this study is the bias introduced by polarisation effects, it was judged that the effects would be easier to spot in the narrower error distributions calculated without random noise.

Trace gas recovery

Optimal estimation was used to match "measured" (in reality simulated) and modelled spectra, as described by Polonsky et al. (2014) for the baseline configuration of geoCARB. In addition to the trace gas (CO2, CH4 and CO) concentrations, the state vector contained many other parameters describing the surface, the atmosphere and the scattering properties of aerosol and cloud. All were adjusted iteratively during the matching process.

In contrast to the measured spectra, which were computed using polarising surfaces with directional reflectance, the modelled spectra assumed that the surfaces were non-polarising and Lambertian, with albedo varying linearly with wavelength. An estimate for the albedo was derived from the spectra using a selection of frequencies, mostly in the continuum, and a radiometric model that assumed the atmosphere was free of cloud and aerosol. The estimate so obtained was then used as both the first guess and the prior in Rodgers' optimal estimation. Thus, while the modelled surface was based on reasonable prior information, it differed in detail from the measured surface. This difference ensured that simulation followed by retrieval was not a circular process, and in fact was open to the range of errors we expect with real data.

Similarly, the measured spectra used cloud and aerosol profiles observed by CALIPSO, whereas the modelled spectra assumed two types of aerosol plus liquid water and ice clouds with effective radii of 8 and 70 µm, respectively. The vertical profiles of particulates were assumed to be gaussian in shape. The optical thicknesses of aerosol, cloud liquid water and cloud ice, in addition to the heights and widths of the vertical distributions, were adjusted when fitting modelled to measured spectra. Thus, the modelled aerosol and cloud could differ significantly from the aerosol and cloud from CALIPSO used in the simulation of the measured spectra, again breaking the circularity of the simulation-retrieval process.

For each day, each observation time and each (approximately) north-south scan line (through Agra, Wuhan or Alice Springs), the prior profile of CO2 was taken to be the average of the profiles at all of the target pixels along the scan line. This was judged to be a fair prior, neither too optimistic nor too pessimistic, and indicative of the accuracy possible with large-scale averages predicted by general circulation models. Prior profiles of CH4 and CO were calculated similarly.
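The optimal estimation referred to here is the standard Rodgers formulation, in which the state vector is updated iteratively with Jacobians of the forward model. A minimal Gauss-Newton step of that general scheme is sketched below on a toy linear problem; this is our generic illustration, not the geoCARB retrieval code, and all sizes and covariances are invented.

```python
import numpy as np

def gauss_newton_step(x, y, Fx, K, Sa_inv, Se_inv, xa):
    """One Gauss-Newton update of the optimal-estimation cost
    (y - F(x))^T Se^-1 (y - F(x)) + (x - xa)^T Sa^-1 (x - xa)."""
    A = K.T @ Se_inv @ K + Sa_inv
    b = K.T @ Se_inv @ (y - Fx) - Sa_inv @ (x - xa)
    return x + np.linalg.solve(A, b)

# Toy linear forward model y = K x (for a linear model one step converges)
rng = np.random.default_rng(0)
n_state, n_meas = 5, 50
K = rng.normal(size=(n_meas, n_state))     # Jacobian of the forward model
x_true = rng.normal(size=n_state)
y = K @ x_true                             # noise-free synthetic measurement

xa = np.zeros(n_state)                     # prior mean
Sa_inv = np.eye(n_state) / 10.0**2         # inverse prior covariance (weak)
Se_inv = np.eye(n_meas) / 0.01**2          # inverse measurement covariance

x = xa.copy()
for _ in range(3):
    x = gauss_newton_step(x, y, K @ x, K, Sa_inv, Se_inv, xa)
print(np.allclose(x, x_true, atol=1e-3))   # True: retrieval recovers the state
```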
At the completion of the optimal estimation, a post-processing filter (PPF) is applied to reject cases where the model approximation to the spectra is poor. This may happen for many reasons, but the majority of cases occur when the optical properties assumed for aerosol and cloud do not match those used to simulate the spectra. The experiments in this study used the same PPF as Polonsky et al. (2014). The PPF checks χ² in the bands used to retrieve XCO2, the retrieved aerosol optical depth at the blue end of the O2 A-band and the number of degrees of freedom for signal in the retrieved profile of CO2. Each check involves comparison with a fixed, preset threshold. If any check fails, the scene is rejected. Only results that pass the PPF are shown.

The functions H(λ) and V(λ), derived from the efficiencies of the gratings for polarisations parallel and perpendicular to the slits, were approximated by linear functions of wavelength, normalised so that H(λ) + V(λ) = 2 by definition. The coefficients α and β of the linear approximations are listed in Table 1. Quadratic approximations produce almost identical results. Over all bands, H and V vary by approximately 15%, so their dependence on wavelength is strong, but slow in comparison with the rate at which the gas absorption spectrum varies.

Experiment 1

In the first experiment, the measured spectrum was taken to be I as defined in Eq. (34). Knowledge of H and V was denied to the retrieval algorithm, so it attempted to match I using only the intensity at the top of the atmosphere. Thus, the measured spectrum contains not only the intensity but also the slowly varying wavelength dependence of (H - V), upon which is superimposed the rapid wavelength dependence of Q0, while the retrieval algorithm attempts to fit the measured spectrum with the intensity. In a sense this experiment represents the worst case, because it assumes that no pre-flight polarimetric calibration has been performed.

The degree of polarisation, defined by

P = sqrt(Q^2 + U^2 + V^2) / I,

varies strongly across the absorption spectrum, peaking at the line centres and falling to a background level, determined principally by the surface and Rayleigh scattering, in the continuum between the lines. At wavelengths in the cores of the lines, photons are likely to have been scattered higher in the atmosphere by molecules, clouds and aerosols, which typically have stronger polarisation signatures than the surface.

Figure 9 shows the mean and standard deviation of the degree of polarisation in the O2 A-band for the ensemble of soundings in the frames passing through Agra, Wuhan and Alice Springs on the selected days and observation times. In order to illustrate the degree of polarisation likely to be encountered in the almost clear conditions required by the retrieval algorithm, the mean and standard deviation in the left-hand panel of Fig. 9 were computed from the ensemble with cloud disabled. Thus, in this ensemble, polarisation is generated by the surface and by scattering from aerosols and molecules, but not from clouds. The right-hand panel applies to the ensemble with cloud enabled.
Experiment 2

In the second experiment, the retrieval algorithm was given access to the instrument Mueller matrix, which for the simplified instrument amounts to knowing the functions H and V derived from the grating efficiencies. Thus, the retrieval algorithm computes I0 + (H - V)Q0/2 and uses this to match the measured spectrum. We stress, however, that the retrieval algorithm assumes a non-polarising, Lambertian surface and fixed types of aerosol and cloud whose scattering properties are specified, so its ability to reproduce the measured Stokes vector at the top of the atmosphere is limited.

For reference, the results of this experiment are compared with those from an instrument with an ideal polarisation scrambler, where the measured spectrum is the intensity and the retrieval algorithm attempts to fit the measured spectrum with its internally generated representation of the intensity.

Results

Histograms of the biases in retrieved XCO2, XCH4, XCO and surface pressure are shown in Fig. 10, while the means and standard deviations of the errors are listed in Table 2. For comparison, Table 2 also lists the results obtained for an instrument equipped with ideal polarisation scramblers. The histograms for the case with scramblers are almost indistinguishable from those for Experiment 2, and therefore are not shown.

The effect of ignoring the polarising properties of the gratings is apparent in the histograms of Fig. 10. The histograms for retrieved XCO2 and surface pressure are broader, with outliers well beyond the targets set for the geoCARB mission. The impact on retrieved XCH4 and XCO is smaller, for reasons presently unknown, but is still significant. While the differences in the average biases shown in Table 2 appear small, they nevertheless are important, because even small biases on large spatial scales can lead to significant errors in surface fluxes of CO2.

Figure 10 shows that the retrieval algorithm can account for the spectral slope introduced by the gratings, provided that the spectrographs are calibrated before launch. However, there are hidden side effects. For example, the slopes of the surface albedos across the spectral bands of geoCARB, retrieved simultaneously with the gas concentrations, are not as accurate as for the idealised, unpolarised case. This is demonstrated in Fig. 11 for the slope of the O2 A-band albedo. The upper panel shows the correlation between the true and retrieved slopes for the unpolarised case. The correlation is tight, indicating that this parameter is well determined. The lower panel is for Experiment 2 with geoCARB. Although the functions V(λ) and H(λ) have been supplied to the retrieval algorithm, and although the trace gas concentrations have been retrieved well, there clearly is ambiguity in the slope of the albedo. Because the aim of geoCARB is to retrieve trace gas concentrations, this ambiguity is not a serious concern.
Conclusions

In this study column-averaged concentrations of CO2 were retrieved from spectra measured at the top of the atmosphere by a geoCARB-like instrument. The ability of the retrieval algorithm to predict the polarisation state is limited because internally it assumes that the surface is non-polarising and Lambertian and that aerosols and clouds are composed from fixed types whose scattering (and polarising) properties are assigned, fixed and usually inconsistent with the real atmosphere. This inability leads to an irreducible minimum error when the algorithm is applied to a realistic ensemble of surfaces and atmospheres.

For an instrument that is sensitive to the degree of polarisation, rather than just to the radiant intensity, the error in retrieved trace gas concentrations is expected to be larger. The reason is that the retrieval algorithm will have difficulty matching the measured spectrum, which is a linear combination of the elements I, Q, U and V of the Stokes vector with coefficients (Stokes coefficients) that are specific to the instrument and the viewing geometry. The Stokes coefficients generally vary slowly with wavelength, though the changes over a band may be large. Thus, the measured spectrum will mix the slow wavelength variation of the Stokes coefficients with the rapid variation inherited from the Stokes components. Unless the retrieval algorithm can imitate this wavelength dependence, errors in XCO2, XCH4 and XCO can be expected.

The experiments in this study show that errors caused by unknown polarisation do arise. However, generally they are small, though they remain significant for XCO2. They are not disastrous because the retrieval algorithm allows the surface albedo to vary linearly with wavelength over each band, and it adjusts the slope during the retrieval. This adjustment of surface albedo with wavelength compensates to a large degree for the wavelength dependence of the Stokes coefficients. Thus, even in the presence of significant polarisation at the entrance aperture, geoCARB should recover reliable estimates for both trace gas concentrations and the band-averaged surface albedo, but it might assign the slope of the surface albedo incorrectly.

Through radiometric and polarimetric calibration before launch using the procedure defined in this study, errors from polarised surfaces and clouds can be reduced to negligible levels compared with other systematic biases in the retrieval algorithm. If in the future the latter can be reduced, then polarisation biases would need to be re-examined.

Figure 1. Optical layout for geoCARB. The primary beam splitter divides the long- and short-wave spectrometer arms. Each Littrow spectrometer feeds two separate focal plane arrays.
Figure 2. The schematic shows the coordinate system and orthogonal unit vectors u, v and w used for geoCARB. The nadir direction from the centre of the north-south scan mirror to the centre of the earth defines the negative u axis. The positive v axis points eastward along the equator. In the schematic, it is represented by the arrowhead emerging from the page in the centre of the east-west scan mirror. The w axis, defined by w = u × v, points to the north. The optical bench is parallel to the satellite platform, and its normal vector is parallel to u. The image of the slit on the east-west scan mirror is indicated by the red rectangle. The slit also is parallel to u. The north-south scan mirror rotates about the v axis through the angle denoted θns in the schematic. The east-west scan mirror rotates about the u axis through angle θew (not shown).

Figure 3. During calibration, both mirrors will be set to their central positions with θns = θew = π/4, corresponding to nadir observation. Unpolarised light from a calibrated integrating sphere will be passed through a linear polariser along the optic axis to the north-south scan mirror. The polariser will be rotated about the u axis so that the plane of polarisation makes an angle θ with the u-w plane, as shown in the upper-right insert. When θ = 0, the plane of polarisation (after reflections) is parallel to the slit; when θ = π/2, the plane of polarisation is perpendicular to the slit. The output potential v(θ) from the detector will be monitored as a function of θ.

Figure 4. Absolute efficiencies of the gratings, measured in the O2 A-band and weak CO2 band, and predicted in the strong CO2 band and CO band. There are two gratings, each used in two orders of diffraction to serve two bands. The vertical lines define the O2 A-band and CO2 weak band.

Figure 6. Functions V(λ) and H(λ) for the gratings, measured in the O2 A-band and weak CO2 band, and predicted in the strong CO2 band and CO band. There are two gratings, each used in two orders of diffraction to serve two bands. The vertical lines define the O2 A-band and CO2 weak band.

Figure 7. Histograms of XCO2, XCH4, XCO and surface pressure for the ensemble of soundings in the simulation. The surface pressure histogram covers a wide range because the frame passing through Agra includes the Himalaya.

Figure 8. Histograms of the optical depth of cloud liquid water, cloud ice and aerosol for the ensemble of soundings in the simulation. The histograms in blue refer to the whole ensemble; those in red apply after the post-processing filter.

Figure 10. Histograms of the biases in XCO2, XCH4, XCO and surface pressure for the ensemble of soundings in the simulation. The red and blue histograms apply to Experiments 1 and 2, respectively.

Table 1. Coefficients in the linear approximations to H(λ) and V(λ). Wavelength λ is assumed in nm.

Table 2. Means (µ) and standard deviations (σ) of the biases δXCO2, δXCH4, δXCO and δps in retrieved XCO2, XCH4, XCO and surface pressure from the two experiments. The row labelled "unpolarised" contains reference results obtained for an instrument equipped with ideal polarisation scramblers.
9,587.2
2015-11-23T00:00:00.000
[ "Environmental Science", "Physics" ]
A New Perspective on Challenges in Truth-telling to Patients

Background and Objectives: Patient autonomy is a recognized principle in modern medical ethics; truth-telling to the patient therefore holds special importance for its contribution to this principle. In practice, however, several challenges emerge that can lead to different responses. This difference is more marked in studies conducted in Eastern and Muslim countries due to variations in cultural and religious beliefs. Truth-telling is a challenging concept with respect to placebos, medical errors, and delivering bad news on diagnosis and treatment to patients.

Introduction

The physician-patient relationship is based on mutual trust, and factors that obscure this trust may disturb the relationship. Truth-telling corresponds with respect for individuals. Moreover, it is essential in establishing and maintaining trust between the patient and physician. Honesty with patients was not addressed in classical medical ethics, beginning with Hippocrates and his principles of medical ethics, nor in the Declaration of Geneva and the early editions of the American Medical Association (AMA) code of medical ethics [1]. The reason was that non-maleficence was held in such high value that it justified lying to patients. In modern medical ethics, beneficence and non-maleficence are regarded as essential principles. Accordingly, truth-telling is considered a rule in the modern codes of medical ethics [2]. Truth-telling can be defined as refraining from lying, deception, misinformation, and non-disclosure [3]. It must be observed in all communications between the physician and patient regarding diagnosis and treatment. However, physicians might adopt a different approach. Concern about patients' health following their awareness of and reaction to the truth led to physicians' paternalistic view of patients in the past century, and occasionally in the contemporary era. The emphasis on patient autonomy is the basis for truth disclosure in the physician-patient relationship in modern medical ethics. The swift change in today's world toward patient autonomy and informed consent originated in the West. This shift has impacted Asian and East Asian regions, and it opposes withholding information in any form. However, there remains a certain difference in truth-telling to patients between physicians in Europe and the USA and those in Asia and Muslim countries, as confirmed by several studies. For example, in a review of several studies conducted in Middle Eastern countries, the authors concluded that healthcare providers' viewpoints on patients and their families were oriented toward withholding information to protect them from psychological repercussions. Moreover, few educational programs on physician-patient communication skills are available in these countries [4]. A study examined the codes of medical ethics on disclosure in cases of terminal disease in 14 Muslim countries and found that the approaches to truth-telling varied greatly. The codes did not cover information disclosure in 5 countries, and in 7 countries they condoned withholding information from patients. The ethical codes mandated disclosure in one country and advocated non-disclosure in another [5]. In the majority of the studies, truth-telling is limited to life-threatening diseases or delivering the news of the diagnosis.
However, truth-telling impacts all aspects of the physician-patient relationship, including the use of placebos and the disclosure of medical errors. Initially, the common challenges in truth-telling are introduced. Next, the unaddressed debates in this area, medical errors and placebos, are discussed. Then, the prominent approach in Islamic countries is evaluated to understand the patients' view on hearing the truth and whether or not physicians act accordingly. The study also investigated how patients and their families tend to receive the truth considering the predominant culture and religion. Moreover, insight into the patients' and their families' perspectives can help manage truth-telling, particularly with bad news; therefore, physicians' performance across different societies and cultures was studied. Finally, the Islamic view is presented using the Verses of the Holy Quran and the Narrations from the Ahl al-Bayt (the Household of the Prophet). It is worth noting that these Honorable Verses and Noble Narrations are the predominant sources of decision-making in Shi'ite discourse.

Methods

This study was performed as a non-systematic review of library and online sources using databases such as Google Scholar, PubMed, Ovid, and Springer, with the following keywords: "physician-patient relationship, truth-telling, placebo, medical error disclosure, and Islamic approach". Preference was given to more recent papers. Furthermore, Persian e-books and research papers were obtained from the Noormags website.

Results

The reviewed articles have studied the challenge of truth-telling in the field of health mainly in three situations: placebo use, medical error reporting, and how to tell bad news were the most challenging truth-telling cases discussed in these articles. In the second part of the discussion, the viewpoints of the Islamic countries on this topic were analyzed, followed by the domestic articles and, finally, the Islamic point of view.

Challenges in truth-telling

Truth-telling to the patient involves challenges under certain circumstances. Prevalent cases include telling the truth about using placebos, medical errors, and delivering bad news to patients.

Truth-telling in the use of placebos

Using placebos requires deliberate deception or non-disclosure of the whole truth. Evidence suggests that in some cases, placebos can create the desired effects even with full or partial disclosure of information. However, since the placebo effect generally diminishes with the patient's awareness of the truth, the question arises of whether or not it is ethically acceptable to use a placebo without truth-telling. Using a placebo in clinical research seems to be ethically justifiable, as constant research is a necessity for the progress of the medical sciences. Medical research is mostly experimental and falls within the philosophical context of logical positivism. Therefore, medical researchers in these studies, especially clinical trials, are forced to use a placebo. However, using a placebo is ethically justified only when the subjects are told that they may receive a substance without therapeutic effects as they are randomly classified into groups [6]. Some argue that only the non-deceptive use of a placebo can be justifiable, because the trust between physician and patient is more valuable than any placebo effect [7].
In clinical trials, patients deeply trust the clinical researchers and participate in the process even when the odds of personal benefit are negligible. Therefore, the researchers should further establish trust by honoring ethical standards [8]. Concerning placebo use in treatment, the situation is different: some believe that using a placebo cannot be condoned, as it disturbs a physician-patient relationship based on honesty and trust. Thus, placebo use is harder to justify in treatment.

Truth-telling in medical error disclosure

Truth-telling with regard to medical errors is an essential aspect of a physician's professional commitment. In cases of medical error, non-disclosure (even if it is given an acceptable-sounding name, like confidentiality) is deemed unethical and should be evaluated. The disclosure of non-harmful medical errors is not mandatory; however, it is preferred for creating an atmosphere of honesty between the patient and the physician [9]. In any case, disclosure becomes essential as the risk or amount of harm to the patient increases; as the level of harm decreases, disclosure becomes less crucial [10]. A study assessed physicians' perspectives on the disclosure of medical errors and found that 90% of physicians considered error disclosure a major challenge in healthcare and stressed the necessity of a comprehensive system for reporting medical errors. In this paper, >75% of the physicians supported the disclosure of major medical errors, while only half of them supported the disclosure of minor errors [11]. The fear of patient complaints against physicians, the loss of professional reputation among peers, and emotional responses by the patients' families may contribute to the non-disclosure of medical errors to patients [12]. However, ignoring major errors can seriously threaten the medical profession and community [10]. Hiding a medical error is undoubtedly a deception that can undermine trust in physicians [13]. Therefore, error disclosure should not conflict with truth-telling; however, measures should be taken to minimize the repercussions of disclosure.

Truth-telling in delivering bad news or a diagnosis

There are numerous arguments about disclosing the diagnosis of terminal or refractory diseases to patients. The perspective on, and beneficence toward, the patient determines the trajectory of such decisions. By respecting patient autonomy, all information on the diagnosis, treatment, and prognosis of the disease, as well as the costs and effectiveness of each available treatment, must be provided to the patient. In practice, however, the realization of this right is met with numerous professional, emotional, and psychological challenges [14]. That is why some professionals do not recognize truth-telling as an absolute duty; rather, they believe that it should be balanced against other ethical considerations, like non-maleficence [15]. The results of the majority of studies on truth-telling to patients in Iran indicated that physicians and healthcare teams preferred not to tell the truth and even considered it wrong, fearing that disclosing the truth could disappoint patients and disturb the treatment process [16,17]. The same studies reported that patients preferred to be fully informed of their situation. Meanwhile, several studies revealed that patients who were unaware of the diagnosis experienced a better quality of biopsychosocial life [18].
However, contemporary medical ethics has turned towards respecting patient rights, and this shift has impacted Eastern societies, including our country. This point was illustrated in a study conducted in 2010 in Isfahan, Iran, in which 90% of the surveyed physicians supported informing cancer patients of the truth during the early stages of the disease, and 70% supported it in the advanced stages [19]. Patients have the right to be fully informed of their situation; in return, it is the physicians' obligation to tell the truth. In some cultures, withholding the truth from patients is preferred as a form of protective deception, and families also expect the physicians' cooperation in hiding the truth from the patient. Truth-telling is significant in terms of respecting patient autonomy and the right to make decisions; it also strengthens the physician-patient relationship and helps the treatment process. A patient who is aware of the diagnosis cooperates better with the medical team in choosing diagnostic and treatment methods. However, hiding the whole or part of the truth is justified if truth-telling would result in serious harm to patients, such as depression, isolation, or suicide [20]. A conflict may arise between the patient's autonomy and the principle of non-maleficence. While in some cases the patient can be prepared through step-by-step disclosure, there are always cases where this conflict cannot be resolved. Metaphors, rather than the harsh truth, have been suggested as a strategy for truth-telling [21]. However, the practical efficacy of this strategy remains questionable. The right to information is recognized in the Patient's Rights Charter of the Islamic Republic of Iran. In paragraphs 1-2, it is emphasized that healthcare must be delivered based on honesty [22]. However, the Charter does not explicitly address the delivery of bad news. For a Muslim patient, awareness of the truth, especially regarding a terminal disease, is valuable in terms of preparation for death and use of the final days of life. Patients can take advantage of their remaining days to compensate for the past, seek forgiveness, pay back their debts, and make a will. Will-making is highly recommended in Islam [23]. It is also a good time for repentance, which is acceptable until the moment of death according to the hadiths from the fourteen Infallibles [24]. Although truth-telling is essential, there are always cases where it is justified or necessary to withhold the truth. Therefore, a framework is required to serve as a measure under particular circumstances. Considering the Islamic and cultural background of Iran, the Islamic perspective on the justification or necessity of withholding the truth, or even lying occasionally, can help define ethical standards.

Truth-telling in Muslim countries

Studies in Saudi Arabia indicated that most patients prefer their diagnosis to be disclosed to their relatives. Moreover, the organizational law of the country (Article 20, 1990) asserts that in cases of terminal disease, the physician can decide whether to disclose the truth or withhold it from the patient. In Libya, per Article 17, Act 1986, the physician is required to tell the truth to the patient under all circumstances, even in cases of terminal disease [25]. A study in the UAE reported that its citizens responded differently to truth-telling based on the nature of the truth.
They prefer to know the truth about curable diseases, while they prefer not to know personally about refractory or terminal diseases (with <50% odds of survival over 6 months) [26]. In Lebanon, physicians prefer not to share information regarding cancer and refractory diseases with the patients; they believe that such an approach serves the patients better given their culture [27]. The physicians in Egypt employ a similar approach based on their culture and conditions. Additionally, research indicated that surgeons believe they should refrain from disclosing information to patients with refractory diseases, as it disappoints them [28]. Physicians in Turkey also retain the legal right to decide whether or not they should share information with patients after examining them [29]. Contrarily, several Muslim countries employ a different approach that may reflect the dominant culture and traditions in those countries. For instance, in Kuwait, according to Act 1981, physicians are obligated to inform the patients of their clinical status and to disclose the truth even during the advanced stages of the disease; under no circumstances is it permissible to deny the truth in order to raise the patient's hope.

Iranian view on truth-telling

A study on Iranian patient preference revealed that, in internal medicine and general surgery, >84% of patients preferred to receive the information, while only 56.2% were satisfied with the extent of the disclosed information [30]. Kazemiyan et al. explored physicians' perspectives on truth-telling to patients with refractory diseases: 35% of the respondents believed that the patients had the right to be informed of their disease, 6% did not recognize any right of this type for the patients, and 59% believed this right to be subject to specific cases and the realization of certain circumstances. This study identified the patient's cultural class as an essential factor in the physician's decision [31]. Taveli et al. evaluated 142 Iranian patients in Tehran and concluded that only 48% of patients with gastrointestinal cancer were aware of their diagnosis [32]. Meanwhile, 90.4% of patients under cancer treatment in the Cancer Institute wanted to know their diagnosis, but only 39% were provided with sufficient information by their physicians. Moreover, 61.2% of families believed that the patient must be informed of the diagnosis, and 84% of the family companions preferred to know the diagnosis if they themselves were diagnosed with cancer [33]. However, Kazemi et al. suggested that 72% of physicians maintain that the decision on disclosing the truth to patients can differ based on their sociocultural status [14]. In a review article, Zahedi et al. suggest that the dominant culture in the Iranian community is that physicians prefer to share information regarding the status of patients with their family members instead of the patients themselves [23]. According to paragraph 2-1-4 of the Patients' Rights Charter of the Islamic Republic of Iran, the physician is obligated to provide all the information regarding the diagnosis, the treatment methods and their possible adverse effects, the prognosis, and the progress of the disease [22]. If the patient refuses to receive this information, it should not be provided to them. However, the Charter notes that the information must be presented at a suitable time and place, considering the patient's condition.
In that regard, how the news is delivered to the patient is of paramount importance. Moreover, the special conditions of the patient and the risk of serious harm should be accounted for.

Truth-telling in the Islamic view

Honesty is highly valued in Islam. The keyword "Sidq" (truthfulness) and its derivatives are repeated 155 times, in 144 Honorable Verses of 49 Blessed Chapters of the Holy Quran. The Holy Quran introduces the events of the day as a means of testing to distinguish the truthful from the false claimants 1 . Elsewhere, He promises the truthful rewards for their valuable and ethical conduct 2 . Another noteworthy point is that the Holy Quran calls the opposite of the truthful hypocrites 3 and, elsewhere, unbelievers 4 . This manner of dealing with the matter underscores the significance of honesty in the Holy Quran. Numerous accounts of the importance of honesty and of refraining from lying are available in the Noble Narrations. Evaluating these Narrations indicates that, in addition to strengthening the relationship between man and his Creator and bringing him closer to the heavenly sublime, veracity can also guide human relations towards benevolence. A statement attributed to Imam Ali (PBH) asserts: "The honest are close to bliss and prosperity, and liars are on the verge of downfall and humiliation" [34]. He (PBH) also said: "God bestows honesty upon a servant He loves" [35]. When a physician who has been nurtured with religious teachings such as "honesty is the pillar of Islam and the pillar of faith" [36] or "honesty is the best way in everything" [37], and who recognizes them as the foundation of his or her faith and virtue, encounters a patient who was subjected to a medical error or who faces a life-threatening disease, truth-telling is only natural unless there is a good reason for acting otherwise.

In deontological ethics, Kant asserts that honesty is an absolute concept, i.e., not annulled under any circumstances. He condemns lying even when it is a victim's only escape from a murderer. Contrarily, from the utilitarian standpoint, lying is in itself a neutral act, judged by the harm or benefit it may bring. In every situation, the best action is the one that brings the most benefit to humans, whether that benefit is obtained by lying or honesty is the only way to attain it. Accordingly, utilitarians recognize white lies and believe that these lies are generally beneficial and in compliance with the principle of utility [38]. The Islamic view is different to an extent: while it strongly condemns lying and recommends honesty, it does not accept the severe harming of a faithful or Muslim human. As the prophet Muhammad (PBH) said:

1. "We certainly tried those that were before them and assuredly God knows those who speak truly, and assuredly He knows the liars." Blessed Chapter of Al-Ankabout, Honorable Verse 3.
2. "That God may recompense the truthful ones for their truthfulness, and chastise the hypocrites, if He will, or turn again unto them. Surely God is All-forgiving, All-compassionate." Blessed Chapter of Al-Ahzab, Honorable Verse 24.
3. ibid
4. "That He might question the truthful concerning their truthfulness, and He has prepared for the unbelievers a painful chastisement." Blessed Chapter of Al-Ahzab, Honorable Verse 8.

"Dear Ali, the Lord in the heavens views lying (to promote) good as a friend and honesty (to promote) evil as an enemy" [39].
Imam Ja'far Al-Sadiq (PBH) said: "If a Muslim asks another Muslim a question and is told a truth that results in his suffering, the Lord will count the respondent among the liars; and if a Muslim asks another Muslim a question and is told a lie that benefits him or her, the Lord will count the respondent among the honest" [40]. As understood from the general outlook of these accounts, withholding the truth, and even lying, to prevent harm and corruption is not only justifiable but might even be necessary in some cases. In another account, Imam Reza (PBH) said: "If someone tells his faithful brother a truth that harms him, God counts him among the liars, and if someone lies to his faithful brother to benefit him, God counts him among the honest" [41]. This emphasis on preventing harm to a Muslim is in concordance with the "La-Zarar" (No Harm) principle in Shi'ite fiqh. Based on this principle, the Holy Lawgiver (God) announces that harmful rules are not His. Therefore, this rule can render other rules null if they lead to harm. It is even argued that this rule can confirm rules that cannot be proved on other grounds [42]. Essentially, any individual religious duty that brings harm in any way is ruled out in the Islamic Shari'a, and Islam does not condone causing harm and damage to others [43].

Discussion

This view in Islamic studies establishes a guideline of moderation that has never been addressed so thoroughly in any other philosophical discipline. This form of protection provided for humans within the framework of Sharia reflects the value of humanity in Islam. Notably, corruption or harm to a Muslim individual or the Muslim community must be regarded as a serious matter. The permission for "white lies" cannot become an excuse for small and major "beneficial" lies throughout life. As Legenhausen delicately noted: "An individual who adheres to divine prudence uses even white lies with the utmost care; a faithful person lies rarely and only out of absolute necessity (i.e., in defense of beliefs or to establish peace among people)" [38]. Essentially, the philosophy of medicine is based on helping patients and reducing their pain. Considering the unique condition of each patient, a perfect solution cannot be prescribed for all of them, nor can a single strategy manage all cases. The emphasis on truth-telling in the Islamic view is no less than in other schools of thought. In addition to the benefits that truth-telling brings, Islam considers truth-telling a manner of human excellence towards God. However, Islam is a guideline for human life at all times, and it addresses the different aspects of human existence. Therefore, it has evaluated the various conditions of man under different circumstances and never demands anything from him that he cannot endure. Despite numerous practical challenges and conflicts with the principle of beneficence to patients, the principle of autonomy is the prevailing approach to truth-telling to patients, especially in the West. Truth-telling is supported and justified in terms of the physician's professional duty and the better results that it brings for patients. However, it is not logical to view it as a perfect, infallible practice. A physician's measure in truth-telling must be to present the most benefit while preventing or minimizing harm to the patient. Moreover, causing severe and certain harm to a subject by telling the truth is not permissible from a religious perspective.
Considering the importance of truth-telling and its impact on the establishment of trust and the improvement of the physician-patient relationship, it is necessary to find a way to minimize harm to the patient while avoiding lies, deception, and misleading statements as much as possible. Additionally, the same argument can be used to limit non-disclosure to cases in which the patient would face severe, irreparable harm. As a result, the question arises as to what constitutes severe harm and damage to patients and how to minimize it. Multiple approaches to communication in the physician-patient relationship have been proposed to minimize harm to patients; these are outside the scope of this research. However, by employing these methods and proper communication skills, the truth can be presented to patients as fully as possible. Furthermore, the sociocultural environment, the patients' opinions, and the patients' families and their support can help in managing information disclosure.

Conclusion

Considering the great significance of truth-telling, all possible approaches to telling the truth should be evaluated, and truth-telling should be given priority; however, in specific circumstances such as the examples mentioned, other options can exceptionally be considered. Thus, if disclosing the truth would expose the patient to certain, serious biopsychological risks, withholding the whole or a part of the truth might be advisable, and in infrequent cases even lying to protect the patient can be justified.

Compliance with ethical guidelines

There were no ethical considerations to be considered in this research.

Funding

This research did not receive any grant from funding agencies in the public, commercial, or non-profit sectors.
5,605.6
2021-06-01T00:00:00.000
[ "Medicine", "Philosophy" ]
GPS/INS Integrated Navigation System for Autonomous Robot

ABSTRACT Nowadays, autonomous robots are capable of replacing people in hard work or in dangerous environments, so this field is developing rapidly. One of the most important tasks in controlling these robots is to determine their current position. The Global Positioning System (GPS) was originally developed for military purposes but is now widely used for civilian purposes such as mapping, navigation for land vehicles, marine applications, etc. However, GPS has some disadvantages: the update rate is low, and sometimes the satellites' signal is suspended. Another navigation system, the Inertial Navigation System (INS), can determine position, velocity and attitude from the subject's own state, namely its acceleration and rotation rate. Essentially, INS is a dead-reckoning system, so it has a huge cumulative error. An effective method is to integrate GPS with INS, at the center of which is a nonlinear estimator (e.g., the Extended Kalman filter) that determines the navigation error, from which the position of the object can be updated more accurately. To improve accuracy further, this paper proposes a new method which combines the original integrated GPS/INS with tri-axis rotation angle estimation and velocity constraints. The experimental system is built on a low-cost IMU with a tri-axis gyroscope, accelerometer and magnetometer and a GPS module to verify the model algorithm. Experimental results show that the rotation angles estimator helps us to determine the Euler angles correctly, thereby increasing the quality of the position and velocity estimation. In practice, the accuracy of the roll and pitch angles is 2 degrees, while the error of the yaw angle is still large. The achieved horizontal accuracy is 2 m when the GPS signal is stable and 3 m when the GPS signal is lost for a short period. Compared with individual GPS, the error of the integrated system is about 10% smaller.

INTRODUCTION

For autonomous robots (such as USVs, UAVs, AUVs, etc.) to work in a stable and efficient manner, navigation is one of the most important issues to be aware of. The Global Positioning System (GPS) was originally developed for military purposes but is now widely used for civilian purposes such as mapping, navigation for land vehicles, marine applications, etc. However, GPS has some disadvantages: the update rate is low, and sometimes the satellites' signal is suspended. Another navigation system, the Inertial Navigation System (INS), can determine position, velocity and attitude from the subject's own state, namely its acceleration and rotation rate. Essentially, INS is a dead-reckoning system, so it has a huge cumulative error. An effective method is to integrate GPS with INS, at the center of which is a nonlinear estimator (e.g., the Extended Kalman filter) that determines the navigation error, from which the position of the object can be updated more accurately 1 . Depending on the "depth" of the interaction and the information shared between the GPS and INS, there are several integration methods: uncoupled integration, loosely coupled (LC), tightly coupled (TC) and deeply integrated 2 . In the uncoupled method, the GPS output is used as the "reset" signal for the INS. When there is no GPS solution (position and velocity), the integrated system uses the INS to estimate. This method has the simplest structure, but the system cannot estimate the sensors' drift, so it is not commonly used.
In the LC method, GPS solutions are compared with the inertial estimates in order to calculate the error state of the object 3,4 . In the TC method the integration is "deeper": the raw measurements of the GPS (pseudorange and Doppler measurements) are combined directly with the calculated INS estimates in an appropriate filter 4,5 . Both LC and TC systems operate in closed loop, i.e., position, velocity and attitude errors and sensor drifts are fed back to the INS and GPS to correct the navigation solution. The loosely coupled model is simpler than the tightly coupled one. The structures of the tightly coupled model and the deep integration are very complex, so we do not discuss them in this paper.

In estimating Euler angles, conventional INS systems use the tri-axis angular rate to calculate these angles. However, MEMS (Micro-Electro-Mechanical Systems) IMUs often have large disturbances, so their errors accumulate rapidly. The INS mechanization update of the rotation angles is therefore usable only over a short period. In this paper we use a tri-axis Euler angles estimator. The centerpiece of this estimator is a two-stage Extended Kalman filter, which uses accelerometer and magnetic field values to correct the angles evaluated from the rotation rate 6 .

This paper introduces the construction of a loosely coupled GPS/INS integrated navigation system. Euler angles estimation and velocity constraints are used to improve accuracy. We use MATLAB/Simulink software to simulate and analyze data. The experimental system is built on a low-cost IMU with a tri-axis gyroscope, accelerometer and magnetometer and a GPS module to verify the model algorithm. The update rate of the integrated system is equal to the INS rate of 100 Hz, and the rate of the GPS is 10 Hz. The data acquisition and processing system is implemented on an ARM Cortex-M4 microcontroller.

Inertial navigation system

INS is a navigation system that uses tri-axis inertial sensors (gyroscope, accelerometer and magnetometer) to calculate the orientation and position of an object. This system does not need an external reference, so it can calculate continuously without interruption. In this paper, the outputs of the inertial sensors are the three components of the gyroscope, the three components of the accelerometer and the three components of the magnetometer in the body frame, denoted by f^b, ω^b, m^b respectively. Figure 1 describes the INS mechanization in the NED frame 3 . The INS uses rotation rate and acceleration values from the IMU sensor to update attitude, velocity and position by the following formula:

$$\dot{r}^n = D\,v^n, \qquad \dot{v}^n = C^n_b f^b - \left(2\Omega^n_{ie} + \Omega^n_{en}\right)v^n + g^n, \qquad \dot{C}^n_b = C^n_b\left(\Omega^b_{ib} - \Omega^b_{in}\right) \tag{1}$$

In this formula, vector r^n = [φ λ h]^T is the position vector, whose components are geographic latitude, longitude and altitude (height), respectively. Vector v^n is the velocity vector in NED coordinates. Matrix C^n_b is the direction cosine matrix (DCM, or rotation matrix) from the body frame to the NED frame. The symbols ω and Ω denote the angular rate and its skew-symmetric form, and matrix D is the transition matrix from the NED frame to latitude, longitude and altitude:

$$D = \mathrm{diag}\!\left(\frac{1}{R_M + h},\ \frac{1}{(R_N + h)\cos\varphi},\ -1\right)$$

where R_M and R_N are the meridian and transverse radii of curvature of the Earth. Formula (1) is written in continuous form. In the experiment, we discretize it for simplicity of calculation. Because of this discretization, the update process always has error. On the other hand, the IMU sensor has other types of error, like bias and scale factor. Thus, the INS errors accumulate rapidly. To improve the accuracy of the navigation estimation, we use a tri-axis Euler angles estimator 6 . It is structured as a two-stage cascaded Extended Kalman filter (Figure 2).
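Before detailing the cascaded filter, here is a minimal sketch of the discretized mechanization update in (1); it omits the Earth-rate and transport-rate (Coriolis) terms and assumes a spherical Earth, so the simplifications and names below are ours, not the paper's.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def ins_step(r, v, C, f_b, w_b, dt, R_e=6.378e6):
    """One discretized mechanization step in the NED frame.

    r = [lat, lon, h] (rad, rad, m); v = NED velocity (m/s);
    C = body-to-NED DCM; f_b = specific force (m/s^2); w_b = angular rate (rad/s).
    Earth-rate and transport-rate terms are omitted for brevity.
    """
    lat, lon, h = r
    D = np.diag([1.0 / (R_e + h),                 # lat_dot  = v_N / (R + h)
                 1.0 / ((R_e + h) * np.cos(lat)), # lon_dot  = v_E / ((R + h) cos lat)
                 -1.0])                           # h_dot    = -v_D
    g_n = np.array([0.0, 0.0, 9.81])              # gravity in NED (down positive)
    r_new = r + D @ v * dt                        # position update
    v_new = v + (C @ f_b + g_n) * dt              # velocity update
    C_new = C @ (np.eye(3) + skew(w_b) * dt)      # first-order attitude update
    return r_new, v_new, C_new
```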
These filters use the acceleration and magnetic field measured by the IMU to correct the Euler angles (roll, pitch and yaw). Precisely, the EKF-1 first combines the gyroscope and accelerometer measurements to calculate the Earth's gravity vector in the NED frame, from which it determines the roll and pitch angles. Next, the EKF-2 uses the gyroscope and magnetic field measurements and the determined roll and pitch to calculate the yaw angle. In experimental conditions, the accuracy of roll and pitch is 1 degree and the accuracy of the yaw angle is 3 degrees.

Rotation rate and acceleration measurements can be affected by noise sources such as bias, scale factor, non-orthogonality and some other types. Some types of error can be identified and calibrated in the laboratory environment. Some types are unpredictable and have to be modeled as random noise. Among these factors, bias has the greatest effect on the measurement value of the IMU. Modeling the remaining types of noise (except bias) as white noise (denoted by the symbol η), we have the estimation equation:

$$\hat{f}^b = \tilde{f}^b - b_a - \eta_a, \qquad \hat{\omega}^b = \tilde{\omega}^b - b_g - \eta_g \tag{3}$$

In (3), the symbols f̂ and ω̂ are the estimated values of acceleration and angular velocity, f̃ and ω̃ are the values measured by the IMU sensor, the symbol b is the bias and η denotes the other types of noise (modeled as white noise).

Loosely Coupled scheme

The loosely coupled model, also referred to as "decentralized" filtering, consists of two estimators. The first one is a nonlinear estimator. It combines the INS estimation results with the GPS results to estimate the position, velocity and attitude errors and the IMU sensor's errors. The second is the GPS filter. It uses the pseudorange and Doppler measurement values from the GPS module to determine position and velocity. Figure 3 shows the diagram of the loosely coupled model. In today's GPS modules, there is usually a built-in GPS data processor, which can calculate position, velocity and some other information from the GPS raw data. In the LC model, position and velocity are fed into the nonlinear filter. The filter used in this paper is the Extended Kalman filter, which is suitable for nonlinear systems. The measurement values from the IMU sensor (angular rate and acceleration), after being processed by the Euler angles estimator and the INS mechanization, are compared with the position and velocity from the GPS. The difference between the two results is used as the input of the EKF. The integrated system works in closed loop: the estimated error values are fed back to adjust the state of the INS system and to compensate the IMU measurements. This closed-loop model is suitable for MEMS IMUs, which have large disturbances.

The error state vector δx of the EKF in this model is composed of the position error δr^n, the velocity error δv^n, the attitude error ε, the accelerometer bias error δb_a and the gyroscope bias error δb_g. Deriving the INS mechanization function and keeping the first-order terms 3 , we have the process model equation:

$$\delta\dot{x} = F\,\delta x + u$$

In matrix F, τ_ba and τ_bg are the correlation time vectors of the accelerometers and gyroscopes, determined based on the Gauss-Markov model. The components of vector u are white noises, with covariance determined by the formula

$$q = \frac{2\sigma^2}{\tau} \tag{7}$$

In the above formula, σ is the standard deviation of the Gauss-Markov noise. Matrix Q is called the spectral density matrix, and its components are, respectively, the covariances of the accelerometer, gyroscope, accelerometer bias and gyroscope bias noises. These values can be determined from the datasheet of the sensor 5 .
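A small sketch of how such a first-order Gauss-Markov bias, with driving-noise density q = 2σ²/τ from (7), can be simulated to sanity-check the chosen Q entries; the parameter values below are arbitrary and not taken from any sensor datasheet.

```python
import numpy as np

def gauss_markov(sigma, tau, dt, n, rng):
    """First-order Gauss-Markov process x' = -x/tau + w, with PSD q = 2*sigma**2/tau."""
    q = 2.0 * sigma ** 2 / tau
    std_w = np.sqrt(q * dt)          # discrete driving-noise std (valid for dt << tau)
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = (1.0 - dt / tau) * x[k - 1] + std_w * rng.standard_normal()
    return x

rng = np.random.default_rng(0)
bias = gauss_markov(sigma=0.01, tau=300.0, dt=0.01, n=200_000, rng=rng)
print(bias.std())                     # approaches sigma over long records
```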
The measurement model of the EKF is the difference between the INS results (position and velocity) and the GPS results:

$$z = \begin{bmatrix} r^n_{INS} - r^n_{GPS} \\ v^n_{INS} - v^n_{GPS} \end{bmatrix} = H\,\delta x + \varepsilon$$

In the above equation, the symbol ε is the measurement noise. Its covariance matrix R can be obtained from the GPS processing. The operation of the EKF is divided into two stages: prediction and update. The Kalman gain is computed first in the update stage. Then the state variables (δx) and the error covariance (P) are updated based on the prior estimates and their error covariance. After each correction, the error state vector should be reset to zero. When there is a GPS outage, we can use velocity constraints (Figure 4) to estimate the errors 4 . Vehicles essentially move in the forward direction. If the vehicle neither jumps off the ground nor slides on the ground, its velocity along the axes perpendicular to the forward direction (y-axis and z-axis) is almost zero. So we have two velocity constraints:

$$v^b_y \approx 0, \qquad v^b_z \approx 0$$

SIMULATION RESULTS

In simulation, we use the FlightGear simulation software 7 to create the data file and MATLAB/Simulink to process it. The GPS signal is disturbed with noise to study the noise suppression of the estimator. The standard deviation of the noise is 2.5 m in each horizontal axis and 5 m in the vertical axis. Simulations were made in two cases: with and without the Euler angles estimator. The resulting accuracy is about 0.64 meters, and the velocity error is within 0.1 m/s. We can conclude that the estimator has good filtering capability. Next, we examine the quality of the system when the GPS signal is lost for intervals of 3, 5 and 10 seconds. From Table 2, we can conclude that when there is a GPS outage, the error of the system is larger than in the normal case (GPS fix). In addition, the longer the GPS outage, the larger the horizontal error. Using an Euler angles estimator helps to reduce the errors.

Hardware development

We built an experimental system to verify the implemented algorithm. The hardware (Figure 5) consists of the IMU sensor ADIS16405 from Analog Devices 8 , a GPS module from U-blox 9 and the microcontroller STM32F407 (ARM Cortex-M4) from STMicroelectronics 10 .

Results

For MEMS IMU sensors, the amplitude of the noise is huge, so if we do not use the Euler angles estimator the results are poor: the attitude, position and velocity errors are enormous. The estimated trajectory (red dots in Figure 6) does not have the same shape as the reference one (black line). In contrast, when we use the angles estimator, the errors are smaller and the accuracy is higher. The horizontal error of our GPS/INS system is 1.69 m, while the error of the individual GPS system is 1.93 m. Thus, the GPS/INS algorithm reduces the error by over 10%. On the other hand, the update rate of the GPS is only 10 Hz, while the integrated GPS/INS update rate is 10 times higher (100 Hz). We can see the effect of the high update rate in Figure 6. Because the GPS has the low update rate of 10 Hz, there are visible spaces between the green dots (GPS-only), and if the vehicle moves very fast, the GPS cannot describe the vehicle's trajectory accurately. By contrast, the blue dots (GPS/INS) approximately form a continuous line. From the above results, it can be concluded that the angles estimator improves the accuracy of the navigation system and that the integrated GPS/INS system performs better than the single GPS system (Table 3). Next, assuming the GPS signal is lost for a period of 5 seconds, we analyze the accuracy of the implemented GPS/INS system in cases with and without the velocity constraints.
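To show how the two velocity constraints above can be used during an outage, the sketch below treats them as pseudo-measurements in a generic EKF update; the state ordering (velocity error in components 3:6) and the noise level are our assumptions, not values from the paper.

```python
import numpy as np

def velocity_constraint_update(x, P, C_bn, v_n, sigma_v=0.05):
    """EKF update with the pseudo-measurements v_y^b = v_z^b = 0.

    x, P : error state (velocity error assumed in components 3:6) and covariance
    C_bn : body-to-NED DCM estimate; v_n : current NED velocity estimate
    """
    v_b = C_bn.T @ v_n                    # velocity resolved in the body frame
    z = v_b[1:3]                          # observed violation of the constraints
    H = np.zeros((2, x.size))
    H[:, 3:6] = C_bn.T[1:3, :]            # Jacobian w.r.t. velocity error only
    R = (sigma_v ** 2) * np.eye(2)        # pseudo-measurement noise
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(x.size) - K @ H) @ P
    return x, P
```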
We simulate GPS outages in two cases: GPS lost on a straight line and on a curved line.

CONCLUSIONS

In this paper, we have implemented a loosely coupled GPS/INS integrated navigation system. The main algorithm in this system is the Extended Kalman filter. We combined the EKF with an Euler angles estimator and velocity constraints to improve accuracy. The rotation angles estimator helps us to determine the Euler angles correctly, thereby increasing the quality of the position and velocity estimation. In practice, the accuracy of the roll and pitch angles is 2 degrees, while the error of the yaw angle is still large. The achieved horizontal accuracy is 2 m when the GPS signal is stable and 3 m when the GPS signal is lost for a short period. Compared with individual GPS, the error of the integrated system is about 10% smaller. In addition, a strong point of the GPS/INS is that its update rate reaches 100 Hz, which is 10 times higher than that of the GPS alone. When there is a long GPS outage, the LC algorithm's result is very poor, so we need to use the tightly coupled model. In the future, we will study this model, point out its advantages and disadvantages, and compare it with the original model. After that, we will find the optimal switching method between the two models.
3,558.8
2020-04-12T00:00:00.000
[ "Engineering", "Computer Science" ]
Determination of FVIIa-sTF Inhibitors in Toxic Microcystis Cyanobacteria by LC-MS Technique

The blood coagulation cascade involves the human coagulation factors thrombin and activated factor VII (fVIIa). Thrombin and fVIIa are vitamin-K-dependent clotting factors associated with bleeding, bleeding complications and disorders, and they are the factors through which treatment with vitamin-K antagonists causes excessive bleeding. In this research, we explored different strains of toxic Microcystis aeruginosa and cyanobacteria blooms for probable fVIIa-soluble Tissue Factor (fVIIa-sTF) inhibitors. The algal cells were subjected to acidification and reversed-phase (ODS) chromatography-solid phase extraction, eluted from water to 100% MeOH in 20% MeOH increments, except for M. aeruginosa NIES-89, from the National Institute for Environmental Studies (NIES), which was eluted in 5% MeOH increments as an isolation procedure to separate aeruginosins 89A and B from co-eluting microcystins. The 40%-80% MeOH fractions of the cyanobacterial extracts are active against fVIIa-sTF. The fVIIa-sTF active fractions from cultured cyanobacteria and cyanobacteria blooms were subjected to liquid chromatography-mass spectrometry (LC-MS). The 60% MeOH fraction of M. aeruginosa K139 exhibited an ion at m/z 603 [M + H]+ attributed to aeruginosin K139, and the 40% MeOH fraction of M. aeruginosa NIES-89 displayed ions at m/z 617 [M − SO3 + H]+ and m/z 717 [M + H]+, which were attributed to aeruginosin 89. Aeruginosins 102A/B and 298A/B were also observed in other toxic strains of M. aeruginosa with positive fVIIa-sTF inhibitory activity. The active fractions contained cyanobacterial peptides of the aeruginosin class as fVIIa-sTF inhibitors detected by LC-MS.

Introduction

The blood coagulation cascade [1-5] is composed of intrinsic, extrinsic and common pathways involving human coagulation factors. It is initiated by vascular injury and tissue factor (TF) exposure, which triggers the extrinsic pathway [2]. The extrinsic pathway involves the activated factor VII-tissue factor (fVIIa-TF) complex, activated by Ca2+, cephalin or phospholipid [6]. The activation of the fVIIa-TF complex triggers the activation of factor X (fX) to activated factor X (fXa), leading to the generation of activated factor II (fIIa), i.e., thrombin [2]. Thrombin generation needs the fVIIa-TF complex, which initiates coagulation and has become the target of therapeutic studies [7].

Analysis of the fVIIa-sTF active extracts of M. aeruginosa K139 and NIES-89 by LC-MS [25] gave a good lead for the active compounds present as fVIIa-sTF inhibitors (Figure 2, Table 2). We have isolated aeruginosin K139 (30) but, unfortunately, complete chemical shift assignments were not determined [26]. The paper by Nishizawa et al. [24] published the chemical structure of aeruginosin K139 (30) by MS elucidation; however, the stereochemistry of the compound was not deduced. Aeruginosin K139 (30) will be elucidated completely in our next paper. Moreover, aeruginosin K139 (30) has a chemical structure similar to aeruginosin 602 (31) reported by Welker et al. [27]. Aeruginosins K139 (30) and 602 (31) have the identical fragmentation pattern reported by Nishizawa et al. [24] and Welker et al. [27], and both compounds were elucidated using the LC-MS technique. However, for consistency, this paper will refer to the aeruginosin with m/z 603 [M + H]+ as aeruginosin K139 (30).

We tested other toxic Microcystis strains for the presence of aeruginosins. Aeruginosins could also be found in some other strains of toxic Microcystis, together with aeruginopeptins and microcystins. Indeed, the M. aeruginosa M228 strain was positive in the fVIIa-sTF assay. The aeruginopeptins or microcystin-YR (20), with tR 14.9-18.4 min, co-existed with the active compounds. However, the pure compounds of aeruginopeptins and microcystins were also tested (Table 1).

The EC50s, calculated by Biodatafit [28], of the 40% MeOH fraction of M. aeruginosa NIES-89 containing aeruginosins 89A/B (26/27) were 0.010 µg/mL and 7.123 µg/mL for thrombin and fVIIa, respectively. Thus, the 40% MeOH fraction of M. aeruginosa NIES-89 had a computed thrombin/fVIIa ratio of 0.001. The dual inhibitory activity of aeruginosins 89A/B (26/27), and also K139 (30), against the thrombin and fVIIa enzymes makes the aeruginosins good candidates for fVIIa-sTF inhibitors.

We have detected aeruginosins 98A (36) and B (37) from M. aeruginosa NIES-98. The MeOH fractions from the aforementioned cyanobacteria are not active in the fVIIa-sTF assay. Thus, from our readings, we compare the fVIIa-sTF inhibitory activity of the aeruginosins to that of phenylamidine. Kadono [29] has noted the importance of the phenylamidine P1 moiety in fVIIa inhibition, which confers inhibitory activity against fVIIa-sTF. The presence of the cyclic amino alcohol moiety in aeruginosins may contribute to efficient binding against fVIIa; however, this hypothesis needs to be established by a structure-activity relationship study and is the subject of another paper. Based on Kadono's paper [29], inhibitors "1-5", with a linear structure and containing three peptide bonds, exhibit both thrombin and fVIIa inhibitory activities. The number of peptide bonds contributes to the fVIIa inhibitory activity of the compounds and lessens their thrombin inhibition. The addition of one more peptide bond gives promising fVIIa-TF inhibitory activities. This additional peptide bond has been noted in inhibitors "2" to "5" [29] and in the aeruginosins. The presence of the P3 moiety in aeruginosins has certain effects on the inhibition of fVIIa and thrombin. The fVIIa and thrombin enzymes have the same catalytic triad Ser195-His57-Asp102, S1 pocket, and activation site Arg-Ile [30,31].

Aeruginosins from toxic Microcystis cyanobacteria are a class of fVIIa-sTF inhibitors with thrombin-inhibiting activity. The aeruginosins could be developed into a specific fVIIa-sTF inhibitor that may avoid bleeding and bleeding complications. Some common fVIIa scaffolds from our review [19] have been identified, and we have correlated them to the scaffolds of cyanobacterial origin. The arginine and its derivatives (argininal and argininol) are essential for fVIIa-sTF inhibition. In addition, structure-activity relationship (SAR) studies will be done in order to deduce the most active scaffold in aeruginosin. We hope to establish a particular SAR between the basic P1 arginine of aeruginosins and the fVIIa enzyme. We will also consider the interaction between the fVIIa enzyme and the P3 moiety, as proposed in the study. Furthermore, synthesis and modifications are planned to make it specific for fVIIa. Assays involving a combination of co-factor(s) and enzymes (TF-fVIIa-fXa-fIIa, etc.) will be performed for a better diagnostic test of the specificity of the aeruginosins.

Culture Condition

Five-liter to ten-liter cyanobacterial cultures of 50 M. aeruginosa and Anabaena strains were grown in M. aeruginosa (MA) medium and in C medium with N-Tris(hydroxymethyl)methyl-3-aminopropanesulfonic acid (TAPS) rather than Tris(hydroxymethyl)aminomethane (CT medium) [39] for the fVIIa-sTF and thrombin inhibitory assays. The M. aeruginosa K139 strain was grown in C medium with Bicine in preference to Tris(hydroxymethyl)aminomethane (CB medium) [24]. The M. aeruginosa strains were obtained from the Microbial Culture Collection, National Institute for Environmental Studies (NIES), Japan, unless otherwise indicated. The cultures were grown in 5-L glass bottles with aeration at 20 °C for 2-4 weeks under continuous light, except M.
aeruginosa NIES-89, which was grown under a 12L:12D cycle. The algal cells were centrifuged using a Kubota 7000 centrifuge at 9000 rpm before lyophilization. The lyophilized cells were stored at −30 °C until micro-extraction.

Extraction

The freeze-dried (lyophilized) algal cells (100 mg) were extracted with 3 mL (×3) of 5% acetic acid, homogenized for 30 min, and centrifuged using a Kubota 5920 at 4000 rpm. The resulting supernate was evaporated in vacuo at 40 °C. The supernate was eluted by solid phase extraction (SPE) using a Sep-Pak Vac 6 mL (1 g) C18/tC18 cartridge (Waters). Increasing concentrations of MeOH, from water to 100% MeOH in 20% increments, were used to elute the supernate. For M. aeruginosa NIES-89, 5% MeOH increments were used to separate the aeruginosins from the microcystins. The cyanobacterial extracts and pure peptides from Microcystis were subjected to in vitro assays. Standard microcystins were bought from Wako Pure Chemical Industries, Ltd., Osaka, Japan. The thrombin assay was performed following the procedure of Anas et al. [22,40,41], in parallel with the fVIIa and fVIIa-sTF assays. The crude MeOH fractions active against fVIIa-sTF were subjected to an LC-MS experiment to determine the active compounds present.

Serine Protease Inhibitory Assays

All assay experiments were performed cold, at 4 °C in an ice bucket, until pre-incubation and reaction at 37 °C.

Thrombin Inhibitory Assay

Thrombin inhibitory assays were performed following the procedure of Anas et al. [22,40,41] using 1 mg/mL and 100 µg/mL concentrations with H2O, 50% EtOH or 100% EtOH as solvents. The final concentration in each assay was 100 µg/mL and 10 µg/mL, respectively. Leupeptin, from the Peptide Institute, Osaka, Japan, was used as a positive control. Bz-Phe-Val-Arg·pNA HCl was purchased from Bachem AG (Bubendorf, Switzerland) and used as a substrate. The solvents H2O, 50% EtOH, and 100% EtOH were used as negative controls. Pure compounds were tested at a final concentration of 1 µg/mL unless otherwise indicated.

FVIIa and FVIIa-sTF Assays

Preparation of L-α-Cephalin or 3-sn-Phosphatidylethanolamine Buffer

The fVIIa and fVIIa-sTF assays used an L-α-cephalin buffer solution. The fVIIa-sTF assay was performed following the procedure of Nakagura et al. [42] with modification. The L-α-cephalin buffer solution was prepared as follows. Buffer (A): 500 mL of water was added to 6.057 g of Tris(hydroxymethyl)aminomethane (Nacalai Tesque, Kyoto, Japan) to make a 100 mM Tris-HCl solution; 4.383 g of NaCl (Nacalai Tesque) was added to the resulting solution to make 100 mM NaCl, and 500 mg of bovine serum albumin (BSA) (Sigma, A7284, St. Louis, MO, USA) was added. The pH was adjusted to 7.40. Buffer (B): 200 mL of Buffer A was added to 0.3329 g of CaCl2 (Nacalai Tesque). The resulting solution (Buffer B) was adjusted to pH 7.48 before being stored at 4 °C in preparation for the next day's experiment. 30 µg/mL of 3-sn-phosphatidylethanolamine from bovine brain (Sigma, USA) or L-α-cephalin was added to Buffer B on the day of the experiment.

FVIIa Assay

80 µL of 3-sn-phosphatidylethanolamine buffer, 50 µL of 100 mM fVIIa enzyme in buffer, and 20 µL of sample solution were dispensed into each well of a 96-well plate (Iwaki: 3881-096, Tokyo, Japan). The 96-well plate with the solution was pre-incubated at 37 °C for 5 min, in parallel with 1 mM Chromozym t-PA (N-methylsulfonyl-D-Phe-Gly-Arg-4-nitroanilide acetate), from Roche Diagnostics (Mannheim, Germany), dissolved in water as a substrate.
50 µL of the substrate was then added, and the mixture was agitated to start the reaction. The absorbance was read at 405 nm using a Thermo Scientific Multiskan FC microplate photometer until favorable binding was observed.

FVIIa-sTF Assay

The same buffer preparation as for the fVIIa assay was used for the fVIIa-sTF inhibitory assay. The fVIIa:sTF ratio was 0.30 µg/mL : 0.39 µg/mL, prepared as described in Section 3.3.2.

Preparation of FVIIa Enzyme

The human factor VIIa (HFVIIa) enzyme, purchased from Enzyme Research Laboratories, South Bend, IN, USA, was adjusted with 20 mM Tris-HCl/0.1 M NaCl, pH 7.4. The final enzyme concentration should be 95.06 µg/mL. The 100 µL enzyme solutions were stored in plastic cryogenic vials (Iwaki: 2712-002, Tokyo, Japan) at −80 °C until use. The fVIIa enzyme solution (95.06 µg/mL, 100 µL) was added to 7.822 mL of 3-sn-phosphatidylethanolamine buffer during assay preparation.

Preparation of Soluble Tissue Factor (sTF or F3-28H)

The sTF, Recombinant Human Soluble Tissue Factor (F3-28H) or Human F3, was purchased from Creative Biomart, Shirley, NY, USA. The sTF was diluted with 10 mM PBS, pH 7.4, to make 1 µM (25.624 µg/mL), transferred in 300 µL volumes into plastic cryogenic vials (Iwaki: 2712-002, Tokyo, Japan), and stored at −80 °C until use. The sTF solution (25.624 µg/mL, 300 µL) was added to 4.7 mL of the 3-sn-phosphatidylethanolamine buffer in an amber bottle before use.

FVIIa-sTF Assay Procedure

30 µL of buffer, 100 µL of fVIIa-sTF, and 20 µL of sample solution were added to each well of a 96-well plate. The solution was pre-incubated at 37 °C for 5 min, together with 1 mM Chromozym t-PA in water as a substrate. 50 µL of the substrate was added to start the reaction, the plate was agitated, and the absorbance was monitored at 405 nm using a Thermo Scientific Multiskan FC microplate photometer. The initial and final readings were noted over 40 min.

LC-MS Preparation of Samples and Determination of fVIIa-sTF Active Compounds

Acetonitrile (99.8% purity) was purchased from Nacalai Tesque; Ultrapure Water (LC/MS grade) and Formic Acid (abt. 99%, LC/MS grade) were purchased from Wako Pure Chemical Industries, Ltd., Osaka, Japan. The reversed-phase C18 (ODS) methanol fractions that were positive in the fVIIa-sTF assays were subjected to LC-MS and dereplicated to identify the active compounds present. 100 µL of a 100 µg/mL EtOH solution of the positive ODS fractions was transferred to a small vial. The EtOH solution was evaporated in vacuo at 40 °C before 100 µL of 10% MeCN was added to make up a 100 µg/mL solution for LC-MS analysis. The LC-MS analysis was performed using a Thermo Finnigan LCQ Deca XP Plus LC-MS instrument with an Agilent 1100 Series capillary liquid chromatography system. The samples were analyzed using a solvent gradient from 10% MeCN with 0.1% HCOOH to 100% MeCN with 0.1% HCOOH over 60 min. The analysis used a reversed-phase Super-ODS column (TSK-gel, TOSOH Bioscience, Tokyo, Japan; 50 × 2 mm), with a flow rate of 0.2 mL/min, a 30 °C column oven, a 200 °C capillary temperature, and UV detection at 220 nm. Solvent optimization of the M. aeruginosa NIES-89 40% MeOH fraction used gradient elution from 10% MeCN with 0.1% HCOOH to 15% MeCN with 0.1% HCOOH over 60 min using the aforementioned conditions and parameters. The LC-MS data were processed in Xcalibur Qual Browser ver. 1.2-1.3. The total ion chromatogram (TIC) and extracted ion chromatogram (EIC) were treated, and peaks were identified for the probable compounds present.
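As a sketch of how the initial and final 405-nm readings reduce to a percent-inhibition figure (this is the standard calculation for chromogenic substrate assays; the absorbance values below are invented):

```python
def percent_inhibition(a0_sample, a40_sample, a0_control, a40_control):
    """Inhibition of substrate turnover relative to the solvent (negative) control."""
    d_sample = a40_sample - a0_sample      # absorbance change with inhibitor
    d_control = a40_control - a0_control   # absorbance change without inhibitor
    return 100.0 * (1.0 - d_sample / d_control)

print(percent_inhibition(0.12, 0.45, 0.11, 0.92))  # ~59% inhibition (made-up data)
```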
Conclusions This research paves a new avenue for the study of toxic Microcystis and its role in medical research. We demonstrate the importance of the serine protease inhibitory peptides, the aeruginosins, from toxic Microcystis strains and relate them to the blood coagulation cascade using the LC-MS technique. Aeruginosins, found in the 40% to 80% MeOH ODS fractions in this study, are potent fVIIa-sTF inhibitors, and six aeruginosins were detected by LC-MS. The 40% MeOH fraction of M. aeruginosa NIES-89, containing a mixture of aeruginosins 89 A (26) and B (27), displays an EC50 value of 7.123 µg/mL in the fVIIa inhibitory assay and a thrombin inhibitory activity of 0.010 µg/mL. Aeruginosin 89 A (26)/B (27) thus has dual inhibitory activity against thrombin and fVIIa, with a thrombin/fVIIa inhibition ratio of 0.001. Increasing the thrombin/fVIIa ratio could make aeruginosin more specific to fVIIa, which could be achieved by peptide modification; to this end, future work will subject the different aeruginosins presented in this paper to a structure-activity relationship (SAR) study. This research is our preliminary study of aeruginosins as probable fVIIa-sTF inhibitors of the blood coagulation cascade. Our future research aims to establish concrete fVIIa-sTF scaffolds from cyanobacteria, specifically Microcystis, toward a new drug that could inhibit fVIIa with fewer bleeding complications.
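As a quick arithmetic check of the dual-activity claim, the reported 0.001 ratio follows directly from the two activity values, assuming the ratio is defined as the thrombin inhibitory activity divided by the fVIIa EC50:

```python
thrombin_activity = 0.010  # µg/mL, 40% MeOH fraction of M. aeruginosa NIES-89
fviia_ec50 = 7.123         # µg/mL, same fraction, fVIIa inhibitory assay
print(round(thrombin_activity / fviia_ec50, 4))  # -> 0.0014, i.e. ~0.001
```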
Silicon Lens Optimization to Create Diffuse, Uniform Illumination from Incoherent THz Source Arrays Arrays of terahertz (THz) sources provide a pathway to overcoming the radiation power limitations of single sources. Several independent sources of THz radiation may be implemented in a single integrated circuit, thereby realizing a monolithic THz source array of high output power. Integrated THz sources must generally be backside-coupled to extended hemispherical dielectric lenses in order to suppress substrate modes and extract THz power. However, this lens also increases antenna gain and thereby produces several non-overlapping beams. This is because individual source pixels are relatively large. Hence, their spatial separation on-chip translates to angular separation in the far-field. In other words, there are gaps in their field of view into which very little THz power is projected. Therefore, they cannot homogeneously illuminate an imaging target. This article presents a simple, practical, and scalable method to convert arrays of incoherent THz sources into a diffuse, uniform illumination source without the need for reducing pixel size. Briefly, individual beam divergence is optimized by tailoring the dimensions of the extended hemispherical dielectric lens such that the far-field beams of adjacent source pixels overlap and combine to form a uniform far-field beam. We applied this method to an incoherent 8 × 8-pixel THz source array radiating 10.3 dBm at 0.42 THz as a proof of concept and thereby realized a 10.3-dBm 0.42-THz diffuse, uniform illumination source that was then deployed in a demonstration of THz active imaging. Introduction High-power, uniform-profile illumination sources operating at high frequencies are a much-needed but missing element in camera-based terahertz (THz) active imaging applications to produce high-quality images with a high, homogeneously distributed signal-to-noise ratio (SNR) and high spatial resolution. However, the radiation power of individual THz sources decreases drastically with respect to increasing frequency [1,2]. To address this, several sources may be operated together to realize source arrays of greater overall radiation power. In this regard, silicon-integrated technologies are viable as they offer a high fabrication yield to incorporate such arrays into a single chip. In recent years, both coherent [3][4][5][6][7][8][9][10] and incoherent [11][12][13][14][15] single-chip THz source arrays have been reported. A salient difference between these two approaches is that coherent source arrays require an on-chip synchronization mechanism for phase-locking. Although this may incur some restrictions upon performance and array size, recent works have produced highly scalable devices [6,9]. Despite ongoing advances in efficient on-chip synchronization methods, incoherent devices remain more scalable and produce higher radiation power [12]. Aside from array size and radiation power, there are other crucial differences between coherent and incoherent approaches. Coherent radiation can prove detrimental to image fidelity in many situations, as speckles form due to wave interference [16,17], which is not the case with incoherent radiation [13]. Furthermore, unlike communications and radar, amplitude-only THz imaging does not require a coherent signal. For these reasons, the objective of this work is to explore incoherent THz source arrays for active imaging applications. 
A uniform-amplitude beam profile is desired in order that an imaging target is homogeneously illuminated, yielding a constant dynamic range across the imaging plane. However, in contrast to this aim, contemporary incoherent THz source arrays produce several non-overlapping beams [12,13], which produces dead zones in the imaging plane, as shown in Fig. 1(a). These dead zones have two causes: the source array pixels are large relative to a wavelength, and pixel pitch translates into an angular separation of free-space beams. Thus, according to [18], overlapping beams can only be achieved with a lens-coupled source array if the pixel cells are small enough that they can be placed close together. However, the size of source pixels cannot be easily reduced due to the required power generation networks. Therefore, another solution is required: increasing the beam overlap through an optimized silicon lens, yielding a uniform beam, as shown in Fig. 1(b). In the past, significant research effort has been dedicated to the maximization of directivity for a single radiating element coupled to an extended hemispherical silicon lens, as well as to associated problems [19-26]. It has also been shown that beams from multiple small coherent sources incorporated into a single chip can be combined into a single Gaussian beam [27]. However, creating diffuse, uniform illumination from large-scale arrays of incoherent THz sources has not yet been demonstrated. Figure 1: (a) contemporary incoherent THz source arrays [12,13] produce directional, non-overlapping beams that lead to undersampling of the image plane; (b) the primary objective of this study: a uniform-intensity, high-power THz illumination source. To this end, this article presents a method to convert an arbitrary array of THz sources into a diffuse, uniform illumination source by tailoring the geometry of the backside-coupled lens, without the need for any modification to the integrated circuit itself; a proof of concept is realized using a 0.42-THz device with a power of 10.3 dBm. Silicon-only single-shot THz active imaging is demonstrated together with a CMOS camera based on the focal plane array (FPA) from [28]. Previous attempts at incoherent power-combining of THz source arrays in free space have relied upon multi-device scaling [29] or lossy external optics [30]. Both of these approaches are bulky and inefficient. In comparison, the approach taken here is a single-chip-to-single-lens solution, with significant advantages in terms of both compactness and efficiency. THz Source Array The following is a summary of the incoherent silicon-integrated source array chip used here, which has been presented in detail in [12]; the reader is referred to the cited work for more details. The chip is implemented in a commercial SG13G2 SiGe BiCMOS process available from IHP Microelectronics, with 350-GHz/450-GHz ft/fmax SiGe HBT transistors. The chip incorporates 8 × 8 incoherent THz sources. Together, they radiate a total radiation power of 10.3 dBm at ∼0.42 THz. Each source pixel is composed of a power generation network coupled to an on-chip circular slot antenna, and each source pixel cell thereby occupies a die area of 365 µm × 365 µm. For THz-range power generation, free-running fundamental Colpitts oscillators, followed by a common-collector doubler, are used. The oscillators are not mutually phase-locked. Consequently, the radiation frequencies of the source pixels are not the same, making the source array an incoherent device.
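Because the 64 sources are mutually incoherent, their radiated powers add linearly, so the per-pixel power follows from simple dBm arithmetic. A small sketch, assuming equal power per pixel:

```python
import math

def dbm_to_mw(p_dbm):
    return 10 ** (p_dbm / 10)

def mw_to_dbm(p_mw):
    return 10 * math.log10(p_mw)

total_dbm, n_pixels = 10.3, 8 * 8
per_pixel_mw = dbm_to_mw(total_dbm) / n_pixels  # incoherent powers add linearly
print(round(mw_to_dbm(per_pixel_mw), 1))        # -> -7.8 dBm per source pixel
```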
The difference in the radiation frequency has two causes: process variations and variations in bias. Related to this, mutual pixel coupling, which can cause two or more pixels to lock onto the same oscillation frequency, is mitigated by placing grounding shields around the source pixels and by the silicon lens. The grounding shields prevent leakage of electromagnetic fields, whereas the lens suppresses undesired substrate modes [31] by acting as a semi-infinite substrate. According to [32], an array atop a silicon lens is subject to mutual pixel coupling on the order of −25 to −30 dB. In the source array, the isolation is further enhanced by the architecture of the source pixels. In particular, the frequency doubler after the oscillator prevents any out- and in-coupling of the fundamental oscillator signal. The lens not only suppresses unwanted substrate modes but also improves the front-to-back radiation ratio and imparts mechanical rigidity and thermal stability to the silicon chip [22]. For these reasons, the vast majority of integrated THz sources are coupled to extended hemispherical silicon lenses [2,33-35]. Simulation Model For pre-estimation of the optimal lens extension length required for a minimum −3-dB beam overlap among all source pixel pairs, and thus to create a uniform beam, numerical simulations were utilized based on the theory from [22,23]. It is noted that full-wave simulations are impractical because the silicon lens is electrically large. Our simulation model accounts for Fresnel losses, which is to say, the amplitude pattern of the fields on the external surface of the lens accounts for reflection losses. These losses are spatially dependent, and hence reflection impacts the overall far-field radiation pattern and directivity. Aside from Fresnel losses, other reflection-related effects are not modeled; specifically, the fields that are back-reflected into the lens volume and undergo multiple subsequent reflections before ultimately being radiated via the lens surface. Nevertheless, multiply reflected radiation is anticipated to contribute primarily to sidelobes, and hence it is of little interest to this work, as we aim to produce overlap between main lobes. The main difference to the mathematics provided in the cited works is the radiation pattern of the lens-internal antenna. We employ a more abstract beam pattern in the interest of generality, namely the rotationally symmetric power pattern U(θ) = cos^q(θ), where the value q makes it possible to set any desired internal beam divergence, θ_HPBW,int, parametrically: q = ln(0.5) / ln(cos(θ_HPBW,int / 2)). Simulation Results In order to obtain a comprehensive picture of achievable far-field radiation patterns, combined far-field radiation patterns were determined for a range of normalized lens extension lengths from L/R = 0.0 to L/R = 0.6, where L is the lens extension length and R is the lens radius, as shown in Fig. 2(a). To this end, single-pixel far-field radiation patterns were computed for all 8 × 8 source pixels located at the base of a 15-mm (71.7λ) diameter extended hemispherical silicon lens. The summation of these single-pixel beams then produced the combined far-field radiation patterns. The unnormalized power was used in this case to account for differences in directivity. The permittivity of the silicon lens was set to 11.67. It is noted that a precise permittivity value is critical to avoid a systematic offset in the lens extension length.
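A minimal numerical sketch of this model is given below: the exponent q is derived from a chosen internal beamwidth, and a combined pattern along a D-plane cut is formed by incoherently summing shifted single-pixel beams. The 8° per-pixel beamwidth and 7° angular pitch are illustrative placeholders, not values from the paper.

```python
import numpy as np

def q_from_hpbw(hpbw_deg):
    """Exponent of a cos^q power pattern with the given half-power beamwidth."""
    return np.log(0.5) / np.log(np.cos(np.radians(hpbw_deg) / 2))

def cosq_power(theta, q):
    """cos^q power pattern; clipped to zero beyond +/-90 degrees."""
    return np.clip(np.cos(theta), 0.0, None) ** q

theta = np.radians(np.linspace(-40, 40, 1601))
q = q_from_hpbw(8.0)                               # hypothetical 8-deg beam
offsets = (np.arange(8) - 3.5) * np.radians(7.0)   # hypothetical 7-deg pitch
combined = sum(cosq_power(theta - o, q) for o in offsets)  # incoherent sum
inside = combined[np.abs(theta) <= offsets.max()]
print(10 * np.log10(combined.max() / inside.min()))  # ripple over the FoV (dB)
```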
The intended comprehensive picture is best conveyed through cuts along the diagonals of the combined far-field radiation patterns (D-plane cuts) as a function of the normalized lens extension length, shown in Fig. 2(b). As seen from this figure, the D-plane is oversampled for small values of the lens extension length. The same applies to large values, while medium values create gaps in the field of view (FoV) into which very little power is projected. The latter case corresponds to a lens extension that is optimized for maximum directivity. It is also observed that small values of lens extension length allow the generation of uniform or flat-top illumination, as the edges of the combined far-field radiation pattern are sharply separated. In contrast, large values smear the edges. Thus, large extensions have the effect of producing a more Gaussian illumination. The differences in the generated beam shape arise because a hemispherical "focal surface" with inverted orientation to the hemispherical surface of the lens exists inside the dielectric, with a different far-field direction associated with each point on this surface. In normal operation, the lowest point of this focal surface corresponds to the lens extension. To be appropriately focused, off-axis pixels would have to be shifted upward in the positive z-direction to compensate for the curvature of the focal surface. If we either shorten or increase the lens extension, the directivity of the center pixel is reduced because it is essentially defocused. When the lens extension is increased, the directivity of the off-axis pixels is reduced more than that of the center pixel because the off-axis pixels are farther away from the focal surface. When it is shortened, on the other hand, the off-axis pixels are raised in the positive z-direction; they are then closer to the focal surface than the center pixel, and their reduction in directivity is smaller than that of the center pixel. Ultimately, the desired beamform determines whether the lens extension needs to be increased or shortened, with the former yielding a Gaussian beam and the latter a uniform beam. Another noteworthy point observable from the simulated D-plane cuts is that the FoV decreases with increasing lens extension length. A small FoV is desirable for realizing small f-number quasi-optical systems for THz active imaging, which are easier to align than those with large f-numbers. Since the overarching aim was to create uniform illumination from the source array at a minimal FoV, a lens extension length just above the intermediate position that causes gaps in the FoV is the best option, as uniform illumination is then produced at a minimal FoV. Following the above discussion, the normalized lens extension length of 0.244 was selected for fabrication and testing to create uniform illumination from the source array, as indicated by the dashed line. In addition, two other cases with normalized lens extension lengths of 0.284 and 0.367 were selected for fabrication and testing in the interest of providing a more complete and comprehensive picture. The first case (0.284) represents the aplanatic case for silicon, which is associated with zero coma and zero spherical aberration for a central pixel [19]; it is intended to fill some gaps in the FoV. The second case (0.367) is intended to undersample the FoV due to non-overlapping beams; it corresponds to the lens design implemented in the previous works [12,13].
Figure 3 shows the experimental setup employed for far-field radiation pattern characterization. For relative power measurements, the radiated THz signal of the source arrays with different lens extension lengths was successively collected with a SiGe HBT THz direct power detector with 700-V/W responsivity and 8-pW/√Hz NEP at around 0.42 THz [36]. A far-field distance separated the source arrays and the detector. Specifically, the distance was 70 cm, which is above the Fraunhofer distance of 2D²/λ = 63 cm for a lens diameter of 15 mm and the source array radiation frequency of ∼0.42 THz. The detector was fixed in place and connected to a spectrum analyzer via a 40-dB voltage amplifier. The source arrays were mounted onto a six-axis table-top robot arm in order to facilitate a rotational raster scan. Each source array was scanned over a ±36° × ±36° sector of the hemisphere to capture the far-field beams of all source pixels. The measured combined far-field radiation patterns are shown in Fig. 4(d)-(f). It can be seen from these results that the beam overlap (or fill factor) indeed increases with decreasing lens extension length, as intended. The source arrays coupled to extended hemispherical silicon lenses of 0.367, 0.284, and 0.244 normalized lens extension length cover experimentally tested FoVs of 50°, 55°, and 55°, with 3-dB fill factors of 6.5%, 53.4%, and 99.98%, respectively. The corresponding peak-to-peak ripple values are ∼20 dB, 8.6 dB, and 3.5 dB. Thus, the primary objective of this study, namely the realization of a high-power, diffuse, uniform-intensity THz source, has been achieved. Finally, given that the measured combined far-field radiation patterns are in excellent agreement with their simulated counterparts, we may conclude that the simulation model has been successfully validated. Single-Shot Imaging Setup An illustration and photograph of the experimental setup with which silicon-only THz active imaging was performed are shown in Fig. 5(a) and (b), respectively. That is to say, both the source and the detector of THz waves are silicon-based integrated circuits. The two source arrays coupled to the lenses of the highest and lowest extension length were successively deployed in a collimated beam setup that consists of a series of optics and a commercially available CMOS THz camera from Ticwave GmbH, Wuppertal, Germany. This camera is based on the FPA from [28]. It is noted that a variety of THz applications have been demonstrated with this camera, ranging from shadow imaging [28] over light-field imaging [37-39] to source characterization [40-43], among others. The total radiation power of the source array is 10.3 dBm [12], and the video-rate camera NEP at the source array radiation frequency of 0.42 THz is 2.5 µW [44]. All components were fixed within a cage system to ensure mechanical rigidity. The source arrays and the camera are operated via USB. The presented imaging system is portable, spanning an overall system size of 136 mm × 50 mm × 50 mm. The key enabling factor in implementing this compact, portable THz active imaging system is the incoherent operation of the source array and camera. It is noted that the proposed active imaging setup based on the THz uniform illumination source array is not tied to a CMOS camera. Other direct power detectors, such as microbolometers [45], CMOS-NEMS [46], SiGe HBTs [47], and Schottky barrier diodes [48], can replace the CMOS detectors, provided that they can be integrated into a single chip.
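The far-field condition and the two pattern metrics quoted above can be reproduced in a few lines. A minimal sketch, with the 3-dB fill factor and peak-to-peak ripple defined as assumed here (fraction of scanned directions within 3 dB of the peak, and maximum minus minimum over the scanned FoV):

```python
import numpy as np

c, f, D = 3e8, 0.42e12, 15e-3
wavelength = c / f                 # ~0.71 mm
print(2 * D**2 / wavelength)       # ~0.63 m, so the 70-cm stand-off is far-field

def fill_factor_3db(pattern_db):
    """Fraction of scanned directions within 3 dB of the pattern peak."""
    p = np.asarray(pattern_db)
    return float(np.mean(p >= p.max() - 3.0))

def ripple_pp_db(pattern_db):
    """Peak-to-peak ripple across the scanned field of view, in dB."""
    p = np.asarray(pattern_db)
    return float(p.max() - p.min())
```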
The beam propagation within the imaging setup works as follows. The optical train consists of two PTFE lenses. At the source array side, a 50-mm diameter collimating PTFE lens with an f-number of 0.75 (L1), which closely corresponds to the source array FoVs of 50° and 55°, is employed for collimation. The source array lens center, where the central rays of the source pixel beams cross [37], coincides with the focal point of L1. The source array emits diverging wavefronts emanating from the focal point of L1; consequently, collimated waves are provided in the object plane, where they illuminate an imaging target. The camera lens center, where the central rays of the camera pixel beams cross [37], coincides with the focal point of the right collimating PTFE lens (L2; diameter of 50 mm and f-number of 1), which closely corresponds to the 46° FoV of the camera. Thus, the object plane is projected onto the camera lens. Ultimately, an object under illumination appears as an inverted image on the FPA surface. Since the pixel beams of the source array and the pixel beams of the camera are collimated (or planarized) in the object plane, far-field conditions prevail exactly at this position. Moving away from this position will cause an object to be out of focus; such defocusing produces blurred images. Imaging Results A 3-mm thick, 50-mm diameter metal plate with a T-shaped cut-out served as the imaging object, as shown in Fig. 6(a). This object blocked most of the beam, allowing only a small portion of the THz power to pass. Figure 6(b) and (c) show THz images of the marked portion of the object acquired with the camera using the 2.75-mm and 1.83-mm extended hemispherical silicon lens-coupled source arrays, respectively. It can be seen that the THz image depicted in Fig. 6(b) is undersampled, whereas the one depicted in Fig. 6(c) is oversampled. Hence, the "T" is only recognizable in the latter. Raw images without any image processing applied are shown. Each image was acquired over a ∼30-second time span, as the CMOS camera was operated at 30 fps and 1024 frames were averaged. Frame-averaging was applied to increase the SNR [49]. Diffraction at the internal edges of the "T" may play a role in the blurring observed at the internal object edges, which results in a degradation of the overall image quality. It should be noted that this diffraction-induced degradation is not due to the imaging system itself but to the tiny slit width of the imaging object, which is close to the wavelength of just under 1 mm. Conclusion In this article, a practical and scalable method to operate a single-chip array of incoherent THz sources as a uniform illumination source has been demonstrated. For demonstration, this method was applied to the source array from [12], thereby realizing a 0.42-THz source that distributes 10.3 dBm evenly over its FoV. Briefly, our method is based upon optimization of the geometry of a backside-coupled silicon lens. Silicon-only THz active imaging has been successfully demonstrated together with a CMOS THz camera based on the FPA presented in [28]. All single-element integrated THz sources suffer from low radiation power and are relatively large in terms of wavelength [1,2]. Furthermore, lens coupling is ubiquitous among THz sources [2,33-35,50].
For these reasons, the presented technique of reduced-extension-length hemispherical silicon lens-coupled THz source arrays is of general utility as a free-space power-combining technique; for example, also for incoherent source arrays of resonant tunneling diodes [14] or photomixers [15]. Furthermore, when the sources that constitute the array are mutually phase-locked, this would naturally yield coherent power-combining. This manner of scalable coherent power-combining could be essential to photo-mixing emitters, which exhibit broad frequency tunability but currently suffer from low radiation power at high frequencies [27,34,35]. An array of photo-mixing sources could be fed from a single high-power beating laser, thereby leading to innate synchronization among adjacent sources.
Aggregation of Classifiers: A Justifiable Information Granularity Approach In this study, we introduce a new approach to combining multiple classifiers in an ensemble system. Instead of using the numeric membership values encountered in fixed combining rules, we construct interval membership values associated with each class prediction at the level of the meta-data of an observation by using concepts of information granules. In the proposed method, the uncertainty (diversity) of the findings produced by the base classifiers is quantified by interval-based information granules. The discriminative decision model is generated by considering both the bounds and the length of the obtained intervals. We select ten and then fifteen learning algorithms to build a heterogeneous ensemble system and conduct experiments on a number of UCI datasets. The experimental results demonstrate that the proposed approach performs better than the benchmark algorithms, including six fixed combining methods, one trainable combining method, AdaBoost, Bagging, and Random Subspace. Introduction In supervised learning, the relationship between the feature vectors and class labels of training observations is exploited to learn the discriminative decision model. As data gathered from different sources can vary quite substantially, a learning algorithm that achieves high accuracy on one dataset can perform less well on another. Experiments have shown that there is no single learning algorithm that performs well on all data, and it is difficult to know a priori which learning algorithm is suitable for a particular dataset. Hence, research on how to combine several learning algorithms into a single framework to obtain a better discriminative decision model has generated a great deal of interest [1-3]. In many classification systems, the outputs usually reflect the probabilities of an observation belonging to given classes. However, in many practical situations, one may not be able to associate a precise probability with every event, particularly when only limited information is available. In this case, interval probabilities with lower and upper bounds provide a more general and flexible way to describe the uncertainty of the underlying knowledge [4]. Interval probability models have been successfully applied to many applications involving probabilistic and statistical reasoning, especially when there is conflict between different sources of information [5]. In ensemble systems, each learning algorithm uses a different methodology to learn a base classifier on a given training set, thereby introducing uncertainty to the outputs. In ensemble learning, the meta-data of an observation reflects the agreements and disagreements between the different base classifiers. A combiner that can explicitly represent knowledge with uncertainty is therefore desirable. Several combiners that exploit this idea have been proposed, such as the fuzzy integral in neural networks [6] and the Decision Template [7]. In this study, instead of dealing with the precise numerical membership values encountered in traditional classification systems, we propose a novel combining algorithm that captures the uncertainty in the outputs of the base classifiers in an explicit manner using the notion of information granularity. Information granules and Granular Computing are directly attributed to the pioneering work by Zadeh [8-10] and were further developed in [11-15].
Specifically, the predictions of the base classifiers are processed by justifiable information granularity to generate interval class memberships associated with the class labels. As mentioned before, interval values are a flexible way to describe the uncertainty in the underlying knowledge. Therefore, the proposed algorithm is more general than existing ensemble systems, since it can output both interval values and crisp class memberships. Our experiments have confirmed that it performs significantly better than many existing ensemble systems. The paper is organized as follows. In Section 2, we briefly discuss ensemble methods, with a focus on heterogeneous ensemble systems. The concept of justifiability in the design of information granules is also emphasized. In Section 3, a novel fixed combining method based on the idea of justifiable granularity is discussed. Experimental results are presented in Section 4, in which we compare the results of the proposed method to a number of benchmark algorithms on twenty-one datasets. Finally, conclusions are presented in Section 5. Heterogeneous ensemble systems and fixed combining methods There are many taxonomies of ensemble methods that focus on different factors and views of ensemble systems [1,16-18]. In [17], six strategies were introduced to build a sound combining system. The rationale behind these strategies is that "the more diverse the training set, the base classifiers, and the feature set, the better the performance of the ensemble system". • Different classifiers (also called the Heterogeneity scenario [19]): A set of different learning algorithms is used on the same training dataset to generate different base classifiers; a combiner then makes a decision from the outputs (called Level-1 data or meta-data) of these classifiers [24-30]. This approach focuses on the algorithms used to combine the meta-data so as to achieve higher accuracy than any single base classifier. Consider an ensemble of K base classifiers and a set of M class labels; the output of the k-th classifier for an observation x is a vector of supports P_k(c_m|x), one for each class c_m. There are two popular types of output for each k = 1, …, K: • Crisp (Boolean) label: returns only the class label, i.e., P_k(c_m|x) ∈ {0, 1} and Σ_{m=1}^{M} P_k(c_m|x) = 1. • Soft label: returns the posterior probabilities that x belongs to the classes, i.e., P_k(c_m|x) ∈ [0, 1] and Σ_{m=1}^{M} P_k(c_m|x) = 1. In this work, we focus only on the soft label. In this case, the posterior probability reflects the support of a class for an observation. The meta-data of an observation x is defined in the form of the K × M matrix P(x) = [P_k(c_m|x)], k = 1, …, K, m = 1, …, M (1a), while the meta-data of all N training observations is defined as the N × (M × K) posterior probability matrix whose i-th row is the row-wise concatenation of P(x_i) (1b). Justifiable Information Granularity If the probability distribution of data is known in advance, it is easy to represent the data by its distribution function. However, this information is usually unavailable in many real-world applications, and point estimates such as the mean, median, and skewness are often used to describe the data. Nevertheless, in many scenarios, pointwise information is less useful for subsequent reasoning [13]. Instead, information granularity explicitly models the inherent uncertainty present in the data. An information granule, formalized here as an interval, is designed according to two requirements: • Experimental evidence: The designed information granule Ω should reflect the existing experimental data, so that the numeric evidence accumulated within the bounds of Ω attains the highest value. When the granule is formalized as a set (interval), the more data included within the bounds of the granule, the more legitimate this set becomes.
• Sound semantics: This requirement implies that the information granule should have well-defined semantics and exhibit high specificity. The smaller (more compact) the information granule (higher information granularity), the better (higher specificity) it is. For example, if the information granule comes in the form of an interval, knowledge expressed as the interval [2, 4] is regarded as more specific than knowledge residing within the interval [0, 10]. The principle of justifiable granularity is about constructing an information granule in the form of an interval that satisfies the two requirements outlined above. It is noted that the two requirements apply to the form of information granule proposed in this paper; in fact, there are several different approaches to formalizing information granules, such as in [45,46]. For an interval Ω = [a, b], with a and b the lower and upper bounds of the interval, respectively, the experimental evidence is quantified by the number of data included within the bounds of Ω, while the specificity is quantified by a decreasing function of the interval length S = |b − a|. It is obvious that the two requirements are in conflict, since increasing the cardinality results in a reduction of the specificity. A compromise can be reached by using the product of these two characteristics: V(·) = card{x ∈ H : x ∈ Ω} × exp(−α S) (3), where α ≥ 0 controls the trade-off between evidence and specificity. To build the information granule Ω on a given dataset H, we select the median (denoted by med(H)) as the numerical representative of the experimental data. Then, Ω = [a, b] is formed by specifying its lower and upper bounds such that a ≤ med(H) ≤ b. Since the upper and lower bounds are constructed independently, we only discuss the procedure for finding b (a is determined in the same way). Based on (3), we have V(b) = card{x ∈ H : med(H) ≤ x ≤ b} × exp(−α|med(H) − b|) (4). The optimal upper bound of the interval is determined by maximizing the value of V(b), i.e., b_opt = arg max_b V(b) (5). The optimal lower bound is found in the same manner: a_opt = arg max_a V(a) (6). Algorithm 1 summarizes the construction of the information granule: for each candidate bound taken from the data on the corresponding side of the median, compute V as in (4) and retain the bound that maximizes it (7). The Proposed Framework We now construct a combining method based on the concept of information granularity for the classification problem. In the proposed method, justifiable granularity is applied to the meta-data of an observation to form the interval class memberships, and the predicted label is then obtained via a translation to numerical class memberships. As the generated interval class memberships depend on the parameter α, the performance of the method depends on α too. In the training process described in Algorithm 2, we first introduce a method to find the optimal value of α from a candidate set A by exploiting the meta-data of the training observations. In this algorithm, we divide the training set D into T disjoint parts D_1, …, D_T, where D = D_1 ∪ … ∪ D_T and |D_1| ≈ ⋯ ≈ |D_T|, with the corresponding complements D̄_1, …, D̄_T in which D̄_t = D − D_t. Then, T-fold cross-validation is applied to the training set D such that the meta-data of the observations in D_t is obtained from the base classifiers generated by running the K learning algorithms on the associated part D̄_t (denoted by BC in Algorithm 2). The meta-data of all training observations in D forms an N × (M × K) matrix as in (1b), in which the i-th row is the prediction (meta-data) for training observation x_i. For each x_i, we apply the principle of justifiable granularity to its meta-data to construct the interval membership values, and then predict the class label of x_i based on a discriminative decision model operating on the intervals. In (1a), the m-th column is the output of the K classifiers predicting x_i to be in the m-th class. For each value of α in A, we apply Algorithm 1 to the meta-data of x_i to obtain the interval class memberships
[P_L(c_m|x_i), P_U(c_m|x_i)], m = 1, …, M (8), where P_L and P_U denote the lower and upper interval bounds. Reasoning can be done directly on the interval membership values, e.g., using interval arithmetic [47], to form the final classification result. In this paper, we introduce a transformation from the intervals in (8) to numerical class memberships using the following expression: NCM_m(x_i) = f(P_L(c_m|x_i), P_U(c_m|x_i)) × h(|P_U(c_m|x_i) − P_L(c_m|x_i)|) (9), where NCM_m(x_i) denotes the numerical class membership of x_i in class c_m, f(·,·) is a function that generates the numerical representation of the interval from its lower and upper bounds, and h(·) is a decreasing function of the interval length that reflects the specificity (or weight) of the numerical value generated by the de-granularization process. In this work, the function f(·,·) is chosen as the midpoint of the interval: f(P_L, P_U) = (P_L + P_U)/2 (10), while h(·) is given by one of three expressions: the constant h1(Δ) = 1 (11), the reciprocal h2(Δ) = 1/Δ (12), and a bounded decreasing function h3 of the interval length Δ (13). The Boolean class label of x_i is then predicted as the class with the maximum class membership grade: ŷ(x_i) = arg max_{m=1,…,M} NCM_m(x_i) (14). Since x_i is a training observation, its class label y_i is known in advance. After looping this procedure over all training observations, the classification error rate associated with each α ∈ A can be computed as err(α) = (1/N) Σ_{i=1}^{N} I[ŷ(x_i) ≠ y_i], in which I[Θ] = 1 if Θ is true and 0 otherwise. The optimal value of α is the one that minimizes err(α). This optimal value is used as the input of the next algorithm to predict the class labels of unlabeled observations. In the classification process, for an unlabeled observation x*, we use the trained base classifiers BC_1, …, BC_K to obtain the meta-data of x* as in (1a). In detail, the meta-data of x* associated with base classifier BC_k is obtained in the form of the vector (P_k(c_1|x*), …, P_k(c_M|x*)), in which P_k(c_m|x*) is the posterior probability that x* belongs to class c_m as given by BC_k. After that, the interval membership values for each class are computed from the meta-data as in (8), i.e., [P_L(c_m|x*), P_U(c_m|x*)], m = 1, …, M. Finally, the classification is obtained by (14). We arrive at the following classification process based on justifiable granularity (Algorithm 3: predicting the label of an unlabeled observation): for each unlabeled observation, obtain its meta-data, construct the interval class memberships with the optimal α, and assign the class label by (14). Clearly, the proposed method described above is a trainable combining method, because the meta-data of the training observations is exploited to find the value of α in the training process. If a specific value of α is used, the proposed method becomes a fixed combining method in which the labels in the meta-data of the training set are not used to train the combiner. In the experiments, we evaluate the proposed method in both cases, i.e., as a trainable and as a fixed combining method. Datasets and Experimental Settings To evaluate the performance of the proposed method, we carried out experiments on twenty-one UCI datasets, as shown in Table 2. These datasets are often used to assess the performance of classification systems [48].
Table 2. Datasets used in the experiments (name, # of features, # of observations, # of classes):
Abalone 8 4174 3
Artificial 10 700 2
Australian 14 690 2
Blood 4 748 2
Bupa 6 345 2
Contraceptive 9 1473 3
Dermatology 34 358 6
Fertility 9 100 2
Haberman 3 306 2
Heart 13 270 2
Penbased 16 10992 10
Pima 8 768 2
Plant Margin 64 1600 100
Satimage 36 6435 6
Skin_NonSkin 3 245057 2
Tae 20 151 3
Texture 40 5500 10
Twonorm 20 7400 2
Vehicle 18 946 4
Vertebral 6 310 3
Yeast 8 1484 10
We performed extensive comparative studies with a number of existing algorithms. Ten learning algorithms, including the classifier of [49], the Nearest Mean Classifier, and the Logistic Linear classifier [50], were chosen to construct the heterogeneous ensemble system. These learning algorithms were chosen to ensure the diversity of the ensemble system.
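Before turning to the experimental comparison, the core computational steps just described (the interval construction of Algorithm 1 and the de-granularization of (9)-(14)) can be summarized in a short sketch. This is a minimal reading of the method, not the authors' Matlab code; in particular, h3 is assumed here to be 1 − Δ, since only its bounded, decreasing character is specified above.

```python
import numpy as np

def justifiable_interval(values, alpha):
    """Algorithm 1: build [a, b] around the median of `values` by maximizing
    V(bound) = card{x between median and bound} * exp(-alpha * |median - bound|)."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)

    def best_bound(candidates):
        best, best_v = med, -np.inf
        for b in candidates:
            lo, hi = min(med, b), max(med, b)
            card = np.sum((values >= lo) & (values <= hi))
            v = card * np.exp(-alpha * abs(med - b))
            if v > best_v:
                best, best_v = b, v
        return best

    a = best_bound(values[values <= med])   # optimal lower bound, eq. (6)
    b = best_bound(values[values >= med])   # optimal upper bound, eq. (5)
    return a, b

def numerical_membership(a, b, h="h3"):
    """De-granularization (9)-(13): interval midpoint weighted by h(length)."""
    length = b - a
    weight = {"h1": 1.0,
              "h2": 1.0 / max(length, 1e-12),   # eq. (12)
              "h3": 1.0 - length}[h]            # assumed bounded decreasing form
    return ((a + b) / 2.0) * weight

# Toy meta-data: K = 10 classifiers, M = 3 classes (rows sum to 1).
rng = np.random.default_rng(0)
meta = rng.dirichlet(np.ones(3), size=10)
ncm = [numerical_membership(*justifiable_interval(meta[:, m], alpha=1.0))
       for m in range(meta.shape[1])]
print(int(np.argmax(ncm)))                     # predicted class label, eq. (14)
```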
The proposed method is compared to the benchmark algorithms with respect to the classification error rate and the F1 score (the harmonic mean of precision and recall) [51]. We performed 10-fold cross-validation and ran the test 10 times to obtain 100 test results for each dataset. All source code was implemented in Matlab running on a PC with an Intel Core i5 2.5 GHz processor and 4 GB of RAM. To assess the statistical significance of the results, i.e., to determine whether the difference in classification error rate is statistically meaningful, we used the Wilcoxon signed-rank test [52] (level of significance set to 0.05) to compare the classification results of our approach and each benchmark algorithm. The influence of α and h We first analyzed the influence of the parameters on the classification results. Here, we evaluated the effect of α on the classification error rate by setting this parameter to one of the values in {0, 0.1, 0.2, …, 3.9, 4}. For each dataset, we ran the proposed method for each value of α and reported the classification error rate corresponding to the three functions h1, h2, and h3. The relationships between α and the classification error rate on some datasets are displayed in Fig. 2. Several observations can be made. First, it is interesting to see that the three h functions have very similar error rate profiles in the proposed ensemble system on the two-class datasets. Meanwhile, on the other datasets, the error rates related to h1 and h3 are nearly equal and are lower than that of h2. For example, on Contraceptive, Vehicle, Tae, and Yeast, the error rates related to h2 are 3-5% higher than those of h1 and h3. It is noted that h2 is more sensitive to the interval length than the others. Specifically, if the interval length Δ is too small, the function h2 returns very large values, because lim_{Δ→0} 1/Δ = +∞. Since some information granule intervals can be very small (see Table A.1), we suggest using h1 or h3 to generate the numerical class memberships from the interval-based information granules. In the subsequent discussion, we only report the classification results for h3. Comparison with the benchmark algorithms The means and variances of the error rates and F1 scores of the ten learning algorithms, the benchmark algorithms, and the proposed method (using h3) are reported in Tables A.2 to A.7. We first compared the average ranking of the proposed method to the ten learning algorithms [52]. Table 3 shows the average rankings of the ten learning algorithms and the proposed methods with respect to the error rate and F1 score on the experimental datasets; Proposed CV10 and Proposed Specific10 denote the proposed method with the cross-validated value of α and with a specific value of α, respectively. The statistical test results in Table 4 show that the proposed method is significantly better than all benchmark algorithms on the experimental datasets. This demonstrates the benefit of using information granules to capture the uncertainty in class label prediction, as opposed to just using pointwise information in the meta-data. Note that our framework is able to return not only the numerical class memberships for class label prediction but also the interval membership values that reflect the uncertainty associated with the class predictions of the base classifiers. In detail, the proposed method with cross-validation clearly outperformed all six fixed combining rules. Proposed CV10 also outperformed the trainable combining method Decision Template (12 wins vs 1 loss for error rate, and 7 wins vs 3 losses for F1 score).
It also achieved better results than the three homogeneous ensemble methods: Bagging (12 wins vs 3 losses for error rate, and 12 wins vs 6 losses for F1 score), Random Subspace (16 wins vs 2 losses for both error rate and F1 score), and AdaBoost (18 wins vs 2 losses for error rate, and 16 wins vs 3 losses for F1 score). When the specific value α = 1 was used, the proposed method was still better than all the fixed combining rules. Proposed Specific10 also outperformed AdaBoost (17 wins vs 2 losses for error rate, and 14 wins vs 4 losses for F1 score), Bagging (9 wins vs 3 losses for error rate, and 8 wins vs 6 losses for F1 score), and Random Subspace (15 wins vs 2 losses for error rate, and 13 wins vs 2 losses for F1 score). It also outperformed Decision Template by 10 wins vs 2 losses for error rate and 6 wins vs 3 losses for F1 score. Time complexity analysis In the case of using the specific value α = 1, the time complexity of training the base classifiers is equal to that of fixed combining counterparts like the Sum Rule and Product Rule. Meanwhile, in the case of using the optimal value of α, the overall time complexity of the proposed method using cross-validation is on the order of O(max(max_{k=1,…,K} O(A_k) × T, N × K × M × |A|)), in which O(max_{k=1,…,K} O(A_k) × T) is the complexity of generating the meta-data of the training set by running T-fold cross-validation with the K learning algorithms A_k of complexity O(A_k), and O(N × K × M × |A|) is the complexity of obtaining the interval class memberships of the N training observations over the candidate set A. The time complexity of the testing process is O(K × M) per observation. Based on the experimental results, our testing process is slightly more complex than other fixed combining methods, with a longer running time. Different numbers of learning algorithms To further demonstrate the effectiveness of the proposed method, five additional learning algorithms were added to form an ensemble of fifteen base classifiers (detailed results for Proposed CV15 and Proposed Specific15 can be found in the supplementary material). First, the average rankings shown in Table 5 indicate the outstanding performance of the proposed method compared to the 15 learning algorithms, where Proposed CV15 ranks first with average rankings of 2.90 and 3.52 for error rate and F1 score, respectively, closely followed by Proposed Specific15 (with rankings of 4.33 and 4.55, respectively). Besides, the statistical test results in Table 6 show that both Proposed CV15 and Proposed Specific15 achieve significantly better performance than all the benchmark algorithms. The bias-variance theorem is often used to demonstrate that ensemble methods can reduce bias without a trade-off in variance [56]. Conclusions In this paper, we have introduced a novel combining-classifiers ensemble method based on the justifiable granularity concept. Instead of using a single membership value given by pointwise statistics such as the mean, maximum, minimum, or median, we applied the justifiable granularity concept to the meta-data to find the interval associated with each class prediction. This interval reflects the uncertainty in the class predictions given by the base classifiers and is a richer representation of the information in the meta-data. The numerical class memberships can then be computed from these intervals, by considering their bounds and interval length, for class label prediction. Extensive experiments were conducted using ensemble systems of ten and fifteen base classifiers, and performance comparisons with respect to classification error rate and F1 score were made with several benchmark algorithms on twenty-one UCI datasets.
Moreover, other designs of information granules, such as those in [45,46,57], could also be studied. These will be the directions of our future work.
A RUSBoosted tree method for k-complex detection using tunable Q-factor wavelet transform and multi-domain feature extraction Background K-complex detection has traditionally relied on expert clinicians, which is time-consuming and onerous. Various machine learning methods for automatic k-complex detection have been presented. However, these methods have always suffered from imbalanced datasets, which impedes the subsequent processing steps. New method In this study, an efficient method for k-complex detection using electroencephalogram (EEG)-based multi-domain feature extraction and selection coupled with a RUSBoosted tree model is presented. EEG signals are first decomposed using a tunable Q-factor wavelet transform (TQWT). Then, multi-domain features are extracted from the TQWT sub-bands, and a self-adaptive feature set is obtained through feature selection based on a consistency-based filter for the detection of k-complexes. Finally, the RUSBoosted tree model is used to perform k-complex detection. Results Experimental outcomes demonstrate the efficacy of our proposed scheme in terms of the average performance of the recall measure, AUC, and F10-score. The proposed method yields 92.41 ± 7.47%, 95.4 ± 4.32%, and 83.13 ± 8.59% for k-complex detection in Scenario 1 and achieves similar results in Scenario 2. Comparison to state-of-the-art methods The RUSBoosted tree model was compared with three other machine learning classifiers [i.e., linear discriminant analysis (LDA), logistic regression, and linear support vector machine (SVM)]. The performance based on the kappa coefficient, recall measure, and F10-score provided evidence that the proposed model surpassed the other algorithms in the detection of k-complexes, especially for the recall measure. Conclusion In summary, the RUSBoosted tree model presents promising performance in dealing with highly imbalanced data. It can be an effective tool for doctors and neurologists to diagnose and treat sleep disorders. Introduction In addition to monitoring sleep disorder diseases, sleep analysis based on the electroencephalogram (EEG) plays a critical role in people's mental and physical health (Al-Salman et al., 2021, 2022b). The k-complex, one of the most prominent transient waveforms in sleep stage 2, is widely utilized in sleep research and clinical diagnosis (Al-Salman et al., 2019b; Latreille et al., 2020). Owing to this significance, determining the k-complexes in an epoch is extremely important for sleep experts. The k-complex, first described by Loomis et al. (1938), is a transient waveform of more than ±75 µV consisting of a first negative sharp wave immediately followed by a slower positive component; it has also been reported that the associated frequency content centers on 12-14 Hz waves (Richard and Lengellé, 1998). The duration of k-complexes is between 1 and 2 s, although other studies have reported a maximum duration of between 1 and 3 s (Al-salman et al., 2018; Al-Salman et al., 2019b). In general, k-complex detection based on visual scoring by sleep specialists is regarded as the gold standard. However, it is time-consuming, subjective, and onerous (Lajnef et al., 2015). Thus, more and more researchers focus on developing automatic k-complex detection methods to speed up diagnosis and alleviate the burden on neurologists. A large number of studies on the automated detection of k-complexes have been developed, focusing on the feature extraction, feature selection, and pattern recognition stages.
Some studies address feature extraction, using, for example, temporal information (Hassan and Bhuiyan, 2016a, 2017a; Al-Salman et al., 2022a), spectral estimation (Herman et al., 2008; Hassan and Subasi, 2016), and chaotic information estimation (Peker, 2016; Al-salman et al., 2018; Al-Salman et al., 2019a; Nawaz et al., 2020). Aykut et al. employed features based on the amplitude and duration properties of the k-complex waveform, and the results, evaluated with ROC analysis, showed up to 91% success in detecting k-complexes (Erdamar et al., 2012). Hassan et al. presented a method for analyzing EEG waveforms based on spectral features computed from tunable Q-factor wavelet transform (TQWT) sub-bands, and the reported results were significantly better than existing results (Hassan and Bhuiyan, 2016b). A scheme based on TQWT and bootstrap aggregating for EEG signals has also been developed, and the results showed that it is superior in terms of sensitivity, specificity, and accuracy. Tokhmpash et al. used the TQWT method to transform EEG signals and then extracted various features from the TQWT sub-bands; the empirical results showed the high efficiency of the proposed method in analyzing EEG signals (Tokhmpash et al., 2021). TQWT has also been applied to decompose an EEG signal into various sub-bands at different levels; the findings showed that the proposed scheme, which estimates the Hjorth parameters, is efficient and appropriate for the automated identification of EEG signals (Geetika et al., 2022). Time and frequency analysis methods based on variational mode decomposition have been utilized to determine the k-complex, with a highest average accuracy of 92.29% (Yücelbaş et al., 2017). Wessam proposed an efficient method based on the fractal dimension to detect k-complexes from EEG signals, and the findings revealed that the method yields better classification results than other existing methods (Al-Salman et al., 2019b). However, to the best of our knowledge, a systematic evaluation of state-of-the-art linear and non-linear features for k-complex detection has not yet been undertaken. Hence, selecting optimal feature sets plays an essential role in a k-complex detection system. In recent years, various methods have been applied successfully in many fields to realize optimal feature subset selection (Xu et al., 2020; Jainendra et al., 2021). Moreover, pattern recognition techniques also offer great potential for analyzing EEG signals more effectively, typically based on supervised or unsupervised approaches (Hassan and Bhuiyan, 2017b; Zhang et al., 2022). Rakesh et al. put forward a fuzzy neural network for k-complex detection and achieved good results, with an accuracy of 87.65% and a sensitivity of 94.04% (Ranjan et al., 2018). Ankit et al. presented a sparse optimization method and concluded that it is promising for the practical detection of k-complexes (Parekh et al., 2015). Huy et al. proposed a hybrid-synergic machine learning method to detect k-complexes, and the results indicate that the performance of the proposed model was at least as good as that of a human expert (Vu et al., 2012). An ensemble model combining a least-squares support vector machine, k-means, and naive Bayes has been used for k-complex detection; the results demonstrate that the approach is efficient for EEG signals (Al-Salman et al., 2019b).
To build a reliable detection model, adequate volumes of k-complex and non-k-complex data are necessary. Unfortunately, the number of epochs obtained from EEG signals without k-complexes is far greater than the number with k-complexes. Most classifiers have a strong ability to predict instances of the majority class but a weak ability to predict instances belonging to the minority class; hence, classifying imbalanced data effectively is the biggest challenge in k-complex detection. In this study, to develop a procedure for k-complex detection in an epoch, a robust method for imbalanced datasets is proposed based on TQWT coupled with the RUSBoosted tree classifier. The block diagram of the proposed methodology is depicted in Figure 1. Each 30-min EEG signal was filtered with a fourth-order band-pass Butterworth filter at 0.5-30 Hz to smooth the EEG signal and remove environmental noise caused by muscle activity and eye movement. Then, the EEG signal was segmented into epochs of 0.5 s with an overlap of 0.4 s, each epoch corresponding to a signal state of k-complex or non-k-complex. Multi-domain features (time, spectral, and chaotic) were extracted from each sub-band of an epoch after TQWT decomposition. To minimize the complexity and reduce the dimensionality of the features, a feature selection method based on search-based feature selection with a consistency measure (SFS consistency) is employed before classification. For further analysis, the RUSBoosted tree algorithm is implemented to improve the recall performance on the imbalanced dataset. Figure 1: Schematic outline of the proposed computer-assisted k-complex detection scheme. Figure 2: Filtered EEG signal (the blue line is an EEG signal with a k-complex, and the red line represents an EEG signal with a non-k-complex). Materials and methods The EEG recordings The EEG dataset analyzed in this study was acquired from 10 subjects (aged 28.1 ± 9.95 years; four men and six women). All recordings were made in the sleep laboratory of a Belgian hospital (Brussels, Belgium) at a sampling frequency of 200 Hz and can be found online at https://zenodo.org/record/2650142. The waveforms of a k-complex and a non-k-complex are presented in Figure 2. The EEG recordings were visually scored by two experts following the specified recommendations (Devuyst et al., 2010). As the duration of a k-complex is about 0.5-2 s, the EEG signals were divided into segments for k-complex detection using the sliding-window technique (Siuly et al., 2011; Al-Salman et al., 2021). Based on previous empirical studies, the window size was selected as 0.5 s with an overlap of 0.4 s (Al-Salman et al., 2019c). Multi-domain features based on the analysis of the EEG signals were employed to represent k-complexes and non-k-complexes from each 0.5-s EEG segment. All analyses were carried out on the Cz-A1 channel. For the DREAMS database, only five of the 10 subjects are annotated by two experts; the rest are annotated by expert 1 only. In this study, two different evaluation scenarios were used. The first scenario considers the annotations marked by expert 1 for all subjects, and the second scenario consists of the annotations marked by expert 2 for the five subjects. Table 1 presents the number of k-complexes marked by the experts for Scenarios 1 and 2 in the DREAMS database. It is found that the number of k-complexes marked by the first expert is dramatically greater than the number marked by the second expert. Therefore, the choice of scenario has a direct influence on the results and can be used to verify the performance of the proposed method.
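A minimal sketch of the preprocessing pipeline described above (0.5-30 Hz fourth-order Butterworth filtering followed by 0.5-s windows with 0.4-s overlap), using SciPy; the function names are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200  # sampling frequency of the DREAMS recordings (Hz)

def bandpass(eeg, low=0.5, high=30.0, order=4):
    """Zero-phase fourth-order Butterworth band-pass filter."""
    b, a = butter(order, [low, high], btype="bandpass", fs=FS)
    return filtfilt(b, a, eeg)

def segment(eeg, win_s=0.5, overlap_s=0.4):
    """Sliding-window segmentation into 0.5-s epochs with 0.4-s overlap."""
    win = int(win_s * FS)
    step = int((win_s - overlap_s) * FS)
    return np.array([eeg[i:i + win] for i in range(0, len(eeg) - win + 1, step)])

epochs = segment(bandpass(np.random.randn(30 * 60 * FS)))  # toy 30-min signal
print(epochs.shape)  # ~18,000 epochs of 100 samples, one every 20 samples
```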
Tunable Q-factor wavelet transform (TQWT) The tunable Q-factor wavelet transform, proposed by Selesnick (2011), is a flexible discrete wavelet transform (DWT). Similar to the DWT, TQWT employs a two-channel filter bank, consisting of a low-pass filter with scaling parameter α and a high-pass filter with scaling parameter β, to decompose an EEG signal into sustained and transient components using adjustable Q-factors; the scaling parameters are determined by the Q-factor and the oversampling (redundancy) rate r as β = 2/(Q + 1) (1) and α = 1 − β/r (2). For further analysis, the sustained-component output of the low-pass filter is regarded as the input signal for the next two-channel filter bank, while the transient-component output of the high-pass filter of each level is taken as the output signal. A simple example of the wavelet transform with J levels is illustrated in Figure 3 (wavelet transform with J levels using a two-channel filter bank consisting of a low-pass filter and a high-pass filter). The Q-factor determines the width of the band-pass filters; TQWT achieves flexibility by tuning and adapting this parameter of the wavelet transform. The higher the Q-factor, the more effective the extraction of sustained components, while decomposition with a lower Q-factor is suitable for extracting the features of transient components. Number of decomposition levels (J): if the number of filter banks is denoted by J, an input signal is decomposed into J+1 sub-bands. Among these, J sub-bands are obtained from the high-pass filters of each level, and one comes from the low-pass filter of the final level. As the decomposition level increases, the time-domain waveform becomes wider and the number of features increases dramatically. Several considerations motivate the use of TQWT in the proposed scheme (Hassan and Bhuiyan, 2016b). First, since k-complex waves are characterized by the appearance of multifarious rhythms, TQWT can improve localization in the frequency domain by varying the Q-factor; hence, this decomposition method is suitable for spectral analysis. Second, the filters employed in TQWT are computationally efficient in the frequency domain (Selesnick, 2011). Third, EEG is a non-stationary signal and its chaotic properties differ between k-complex and non-k-complex segments; since TQWT also provides the waveform in the time domain, it is a powerful technique for deriving both time features and chaos features for EEG analysis (Fraiwan et al., 2010). These advantages make TQWT an effective tool for the analysis of EEG signals, and hence it is employed in the proposed scheme.
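The two scaling parameters can be computed directly from the Q-factor and the oversampling rate r. A small helper following the relations in (1) and (2); the example Q and r values are hypothetical:

```python
def tqwt_params(Q, r):
    """Scaling factors of the TQWT two-channel filter bank (Selesnick, 2011):
    beta = 2/(Q + 1) for the high-pass channel, alpha = 1 - beta/r for the
    low-pass channel; 0 < alpha < 1 and 0 < beta <= 1 must hold."""
    beta = 2.0 / (Q + 1.0)
    alpha = 1.0 - beta / r
    return alpha, beta

print(tqwt_params(Q=3.0, r=3.0))  # -> (0.8333..., 0.5)
```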
Feature extraction methods based on the time domain have proven to be efficient for analyzing the characteristics of EEG signals (Vidaurre et al., 2009). Although spectral features are widely used in speech and audio signal classification (Chu et al., 2009), they have also been applied to EEG signals (Hassan and Bhuiyan, 2016b); these features are typically calculated by applying a fast Fourier transform (FFT) to short-time window segments of the EEG signal, followed by further processing. Because EEG signals are somewhat chaotic in nature, chaotic features based on non-linear dynamical analysis are also highly recommended, in addition to the traditional features, to investigate the dynamic characteristics of EEG (Li et al., 2017; Nawaz et al., 2020). In the current study, 12 time-domain features, seven spectral features, and six chaotic features are extracted for further analysis, as shown in Figure 4. The feature vector is computed for each EEG sub-band of the TQWT decomposition: since the decomposed EEG signal has J+1 sub-bands, the feature vectors of the J+1 sub-bands of each epoch are concatenated into a 25 × (J+1)-dimensional feature vector.

2.4. Search-based feature selection using consistency measures

Since reducing the dimensionality of the feature set can improve performance, reduce computational cost, and enhance comprehensibility, another effective step in the k-complex detection system is to find an optimal feature subset. Search-based feature selection (SFS) was used in this study to identify the important features; the following briefly describes the method (Dash and Liu, 2003; Hernández-Pereira et al., 2016). The SFS method based on a consistency filter, one of the most effective approaches, traverses the candidate subsets to find the best one using an evaluation measure that is independent of any inductive algorithm (shown in Figure 5). The evaluation measure scores a candidate subset according to its inconsistency rate (IR): if the IR of the current candidate subset is smaller than that of the previously selected subset, the current subset is retained. Although SFS is time-consuming, it requires neither a stopping criterion nor a pre-specified threshold.
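A minimal sketch of the inconsistency rate is given below, following the consistency measure of Dash and Liu (2003): features are discretized, and for each distinct pattern over the selected features, every instance that does not belong to the pattern's majority class counts as inconsistent. The quantile-binning strategy and function names are our assumptions, not the authors' implementation.

```python
# Hedged sketch of the inconsistency rate (IR) used by the consistency filter.
from collections import Counter, defaultdict
import numpy as np

def inconsistency_rate(X: np.ndarray, y: np.ndarray, subset: list,
                       n_bins: int = 10) -> float:
    # Crude global quantile binning of the selected columns (our assumption).
    bins = np.quantile(X[:, subset], np.linspace(0, 1, n_bins + 1)[1:-1])
    Xd = np.digitize(X[:, subset], bins)
    groups = defaultdict(list)
    for pattern, label in zip(map(tuple, Xd), y):
        groups[pattern].append(label)
    # Instances outside each pattern's majority class are inconsistent.
    inconsistent = sum(len(labels) - max(Counter(labels).values())
                       for labels in groups.values())
    return inconsistent / len(y)
```

A greedy forward search could then repeatedly add the feature whose inclusion lowers the IR the most, which is one simple way to realize the search step described above.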
2.5. RUSBoosted tree model for k-complex detection

The distribution of epochs across the k-complex and non-k-complex classes is highly skewed: there are far more non-k-complex epochs than k-complex epochs. Therefore, detection on an imbalanced dataset is a major challenge for k-complex detection. The RUSBoosted tree model is an efficient way to overcome this problem: it improves the prediction performance by reducing the bias between positive and negative samples, at the expense of a slight decrease in performance on the majority class (Khoshnevis and Sankar, 2020; Jain and Ganesan, 2021; Noor et al., 2022). The present research fuses a random under-sampling (RUS) technique and the adaptive boosting (AdaBoost) algorithm with decision trees into the RUSBoosted tree model, as shown in Figure 6. First, to obtain a balanced distribution, under-sampling is applied to adjust the minority and majority class sizes of the imbalanced training dataset. Second, the AdaBoost algorithm, with its ability to reduce both bias and variance errors, is employed to tackle the imbalanced-dataset problem. Hence, the RUS technique together with AdaBoost, combined with an ensemble of decision trees, is used as the classifier for further analysis. In this study, the parameters (30 classifiers, a maximum of 20 splits per tree, and a learning rate of 0.1) were adopted for the RUSBoosted tree for the detection of k-complexes.

2.6. Performance evaluation

First, statistical hypothesis testing is performed to check whether the discriminatory capability of the features is statistically significant; features that are not statistically significant are discarded to avoid a negative influence on performance. To estimate the significance level between k-complexes and non-k-complexes, we perform a one-way analysis of variance (ANOVA); a difference is considered statistically significant if the p-value is <0.05 at a 95% confidence level. Second, to evaluate the detection ability of the proposed method, several metrics based on the confusion matrix (shown in Table 2) were used. In Table 2, TP denotes the case where both the actual and predicted states are k-complex; FN denotes the case where a true k-complex is predicted as non-k-complex; FP means the actual state is not a k-complex although the algorithm predicts one; and TN denotes the case where both the actual and predicted states are non-k-complex. To evaluate the performance of the detection algorithm, Cohen's kappa coefficient, recall, and the F-measure are computed. In addition to these metrics, the area under the ROC curve (AUC) is used to estimate the performance of a classifier. Further details about the metrics are provided in the following paragraphs.

The kappa coefficient, calculated from the confusion matrix as a measure of consistency, can also be used to measure classification accuracy. It is defined as Equation 4:

κ = (P_o − P_e) / (1 − P_e),   (4)

where P_o = (TP + TN)/N is the observed agreement, N = TP + FN + FP + TN is the total number of instances, and P_e is the chance agreement, obtained as

P_e = [(TP + FN)(TP + FP) + (FP + TN)(FN + TN)] / N².   (5)

The recall measure, also called sensitivity, reflects the proportion of actual positives that are correctly predicted. It can be expressed mathematically as Equation 6:

Recall = TP / (TP + FN).   (6)

The F-measure is the key measure for analyzing the overlap between two sets. It is defined from weighted recall and precision, where

Precision = TP / (TP + FP),   (7)

and β reflects the relative importance of the two:

F_β = (1 + β²) · Precision · Recall / (β² · Precision + Recall).   (8)

If β > 1, recall has more influence on the F-measure; 0 < β < 1 means that precision has a broader effect than recall; and β = 1 recovers the standard F-measure. In this study, β = 10 is selected.

To further illustrate the effectiveness of the features selected by the consistency-based filter, a separability analysis using the Fisher criterion was applied, obtained from Equation 9:

J_F = tr(S_w⁻¹ S_m),   (9)

where S_w and S_m represent the within-class and between-class scatter matrices, respectively, and tr(S) denotes the trace of the square matrix S.

To evaluate the performance of the proposed method, 5-fold cross-validation is used. The k-complex segments and non-k-complex segments are each divided into five groups. Each time, the training dataset consists of four k-complex groups and four non-k-complex groups, while the remaining groups serve as the test set; all groups are tested in turn. The overall performance is computed over the five iterations.
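For concreteness, the sketch below shows how such a pipeline could be assembled with off-the-shelf tools: imbalanced-learn's RUSBoostClassifier (a RUS + AdaBoost ensemble of decision trees) evaluated with the metrics above under stratified 5-fold cross-validation. The mapping of "maximum number of splits = 20" to max_leaf_nodes = 21, and the toy data, are our assumptions, not the authors' implementation.

```python
import numpy as np
from imblearn.ensemble import RUSBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import (cohen_kappa_score, recall_score,
                             fbeta_score, roc_auc_score)

X, y = np.random.randn(1000, 25 * 4), np.random.rand(1000) < 0.1  # toy imbalanced data

# 30 boosting rounds, learning rate 0.1; a tree with 20 splits has 21 leaves.
# (On older imbalanced-learn versions the keyword is base_estimator.)
clf = RUSBoostClassifier(
    estimator=DecisionTreeClassifier(max_leaf_nodes=21),
    n_estimators=30, learning_rate=0.1, random_state=0)

kappa, rec, f10, auc = [], [], [], []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf.fit(X[tr], y[tr])
    pred = clf.predict(X[te])
    kappa.append(cohen_kappa_score(y[te], pred))
    rec.append(recall_score(y[te], pred))
    f10.append(fbeta_score(y[te], pred, beta=10))
    auc.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
print({k: f"{np.mean(v):.3f}" for k, v in
       {"kappa": kappa, "recall": rec, "F10": f10, "AUC": auc}.items()})
```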
3. Results and discussion

3.1. Parameter selection for TQWT

The optimal parameters for decomposing the EEG epoch are J and Q. The detection performance (kappa and recall values) based on the aforementioned feature extraction and selection procedure was analyzed sequentially for incremental parameter values ranging from 1 to 10 in steps of one. Figures 7, 8 depict the influence of the parameters on the detection performance for the k-complex. It is observed from Figure 7 that the optimal value of J is 3, for which the best kappa and recall values are achieved. The optimal value of Q is determined in the same way: from our experimental analyses, as shown in Figure 8, the best metrics are achieved for Q = 4.

3.2. Quality evaluation for feature extraction and selection

In this section, the results for all the features computed from the various TQWT sub-bands are presented in terms of statistical significance, as shown in Table 3. The test is performed at a 95% confidence level. The features highlighted in bold in Table 3 are not significant (p > 0.05); a difference is considered statistically significant if p ≤ 0.05. The results show that the time-domain features classified k-complexes significantly better than the other features for sub-bands 1 and 2. In sub-band 3, the spectral features significantly outperformed the time and chaotic features, while the statistical performance of the time features in sub-band 4 was the worst among the three kinds of features. Based on these results, we conclude that not all sub-band features achieve good discriminatory capability for k-complex detection. Hence, it is necessary to select a subset of these features to improve the k-complex detection performance and decrease time consumption.

We investigated the AUC and the time performance for two different feature sets, namely all features and the selected features. The comparison is shown in Figure 9. The AUC based on the selected features is slightly higher than that of the full feature set, and, compared with the full feature set, the computation time for the selected feature set is dramatically lower. We also investigated the separability of the two feature sets using J_F: the larger the value of J_F, the more separable the features are. Figure 10 presents the values of J_F for the two feature sets (all features vs. selected features). The J_F based on the selected features is higher, which confirms that the selected features characterize the k-complex effectively; this is consistent with the inferences drawn from Figure 9. According to these results, and as presented in Figures 9, 10, the feature selection method is clearly effective, particularly in terms of AUC, computation time, and separability.
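The separability comparison can be reproduced in a few lines. The sketch below computes J_F = tr(S_w⁻¹ S_m) from the within-class and between-class scatter matrices; since the exact normalization used by the authors is not stated, this is one standard convention, shown for illustration.

```python
import numpy as np

def fisher_criterion(X: np.ndarray, y: np.ndarray) -> float:
    """J_F = tr(S_w^{-1} S_m): larger values mean better class separability."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    S_w, S_m = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        S_w += (Xc - mu_c).T @ (Xc - mu_c)      # within-class scatter
        diff = (mu_c - mu).reshape(-1, 1)
        S_m += len(Xc) * (diff @ diff.T)        # between-class scatter
    return float(np.trace(np.linalg.pinv(S_w) @ S_m))
```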
3.3. Performance for various classification models

For this research, we evaluated several classification methods, such as linear discriminant analysis (LDA), logistic regression, linear support vector machine (linear SVM), and the RUSBoosted tree. Figure 11A shows the receiver operating characteristic (ROC) curve for the different classification methods: a curve closer to the upper left corner represents better performance in the detection of k-complexes, and an area under the curve (AUC) of 1 indicates perfect classification. Although this comparison is for the dataset of subject 1, it should be noted that k-complex classification can be improved using the RUSBoosted tree method. Figure 11B shows a box plot of the AUC for the different pattern recognition methods: the AUC was 0.931 ± 0.085, 0.814 ± 0.166, 0.925 ± 0.127, and 0.954 ± 0.043 for LDA, logistic regression, linear SVM, and the RUSBoosted tree, respectively. According to these results, we conclude that the AUC of the RUSBoosted tree is significantly better than that of the others.

The purpose of this investigation is to establish the suitability of the RUSBoosted tree algorithm for imbalanced-dataset problems. The performance of the RUSBoosted tree algorithm is therefore compared against several traditional state-of-the-art classifiers, including LDA, logistic regression, and linear SVM. For further evaluation, Figure 12 reports the performance of these classifiers within the proposed scheme. The kappa coefficient, recall measure, AUC, and F10-score were used to evaluate the effectiveness of the proposed scheme. The proposed method achieved an average recall, AUC, and F10-score of 92.34 ± 7.06%, 95.4 ± 4.32%, and 83.59 ± 8.23%, respectively. Based on these results, the recall measure and F10-score provide evidence that the RUSBoosted tree surpasses the other algorithms in the detection of k-complexes; however, the kappa coefficient of the RUSBoosted tree (54.22 ± 4.04%) is slightly worse than that of linear discriminant analysis (59.26 ± 14.67%). In summary, the prediction results confirm superior values for the different metrics and a balanced classification performance, and indicate that the prediction algorithm based on the RUSBoosted tree model tends to outperform the traditional classifiers, especially for the minority class.

3.4. Performance comparison of the proposed method based on the ratio of segment numbers

To verify the performance of the proposed method, the execution time, recall, and F10-scores are used. Figure 13 presents the execution time of the RUSBoosted tree model and the other classifiers. For this analysis, the number of k-complex segments is fixed at 263, while the number of non-k-complex segments is increased from 1 to 10 times that number (the segments were selected randomly from the database). The time to train the classification model is taken as the execution time. According to Figure 13, the RUSBoosted tree model recorded the slowest execution time compared with the other classifiers, and the execution time increases dramatically with the number of segments. In addition, the performance was compared with the other three classifiers based on recall and F10-scores. Figure 14 shows that the performance of the proposed method increases slightly as the ratio of non-k-complex to k-complex segments grows, while the performance of the other classifiers decreases significantly. High F10 values indicate that the proposed method favors the minority class. From these results, we conclude that the proposed method is well suited to imbalanced datasets.
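The imbalance-ratio experiment can be mimicked as follows. The sketch trains the RUSBoosted tree at ratios 1 to 10 and records the fit time, recall, and F10-score; the toy Gaussian features stand in for the real EEG feature vectors, so the numbers are illustrative only.

```python
import time
import numpy as np
from imblearn.ensemble import RUSBoostClassifier
from sklearn.metrics import recall_score, fbeta_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pos = 263                                   # fixed number of k-complex segments
X_pos = rng.normal(1.0, 1.0, (n_pos, 25))
for ratio in range(1, 11):                    # non-k-complex = ratio * k-complex
    X_neg = rng.normal(0.0, 1.0, (ratio * n_pos, 25))
    X = np.vstack([X_pos, X_neg])
    y = np.r_[np.ones(n_pos), np.zeros(ratio * n_pos)]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = RUSBoostClassifier(n_estimators=30, learning_rate=0.1, random_state=0)
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)                       # training time = "execution time"
    t_fit = time.perf_counter() - t0
    pred = clf.predict(X_te)
    print(f"ratio {ratio:2d}: fit {t_fit:.2f}s, "
          f"recall {recall_score(y_te, pred):.2f}, "
          f"F10 {fbeta_score(y_te, pred, beta=10):.2f}")
```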
3.5. Comparison with existing methods based on Scenario 1

Several previously reported automatic k-complex detection methods have been evaluated on the same database discussed in Section 2.1. In Table 4, the proposed method is compared with these existing methods. Krohne et al. (2014) detected k-complexes using wavelet transformation combined with feature thresholds on the same database: pseudo-k-complexes were identified in each EEG segment, and a feature-threshold method was then used to reject false positives; a mean recall of 74% was achieved. Parekh et al. (2015) reported results of k-complex detection using a fast non-linear optimization algorithm; an average recall of 61% and a kappa of 0.54 were achieved. Another study, by Ranjan et al. (2018), used a fuzzy algorithm combined with an artificial neural network to detect k-complexes and reported an average accuracy and specificity of 87.65 and 76.2%, respectively. A fractal dimension coupled with an undirected-graph features technique was utilized by Al-Salman et al. (2019b) to detect k-complexes; an accuracy and specificity of 97 and 94.7% were reported, the highest among these methods. Oliveira et al. (2020) designed a multitaper-based k-complex detection method for EEG signals and achieved a recall of 85.1%. The proposed method outperforms the other methods in almost all performance metrics (accuracy and specificity), except the fractal dimension coupled with undirected-graph features (Al-Salman et al., 2019b); in terms of recall and kappa, the proposed method achieves the highest performance. These results demonstrate that the proposed method achieves better detection performance.

FIGURE 13. Relationship between the execution time and the ratio of segment numbers for one subject (the number of k-complex segments is fixed at 263, and the number of non-k-complex segments is a multiple, from 1 to 10, of the number of k-complex segments).

3.6. Comparison based on different scenarios

As already mentioned, several automatic k-complex detection methods have been compared with the proposed method with regard to the scenarios discussed previously, as shown in Table 5. In Scenario 1, the proposed method achieved a mean accuracy of 92.19 ± 3.9% and a mean recall of 92.41 ± 7.47%; it achieved dramatically better recall than the others (Devuyst et al., 2010; Yazdani et al., 2018; Oliveira et al., 2020), but slightly worse accuracy. A higher recall value indicates that the proposed method is able to detect most of the minority samples (true k-complexes marked by an expert). In Scenario 2, the recall and accuracy decreased (... ± 11.33%, respectively); the reason may be that the second expert marked fewer labels as k-complex compared to expert 1, which is consistent with Table 1. These results indicate that the proposed method is effective in detecting k-complexes.

4. Conclusion

This study developed a k-complex detection scheme, consisting of TQWT, multi-domain features, feature selection, and the RUSBoosted tree algorithm, to overcome a shortcoming of existing classifiers, namely the misclassification that arises when training on imbalanced data. According to the results, the proposed scheme achieved the highest recall value. The results indicate that the method could be worth using for the automatic identification of k-complexes by sleep specialists.
It has been shown that the proposed scheme is comparable to or better than state-of-the-art classifiers, and that the RUSBoosted tree model handles imbalanced classification problems quite well compared with state-of-the-art methods. In general, according to the experimental outcomes, we can conclude that the proposed scheme can relieve physicians of the burden of visually inspecting large volumes of EEG data. However, the study has several drawbacks. First, researchers still need to locate the position of the k-complex within the detected epochs. Second, the proposed scheme relies on a single channel to detect k-complexes; the interaction between brain regions, one of the important features of brain activity, is thus not fully utilized.

Data availability statement

Publicly available datasets were analyzed in this study. The data can be found at: https://zenodo.org/record/2650142.

Ethics statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. Written informed consent was not obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author contributions

YL contributed to the conception and design of the study. YL and XD performed the statistical analysis and wrote the first draft of the manuscript. Both authors contributed to the manuscript revision and read and approved the submitted version. (315020018). This study also obtained support from Shaanxi's Key Disciplines of Special Funds to Finance Projects.
7,194.6
2023-03-14T00:00:00.000
[ "Computer Science", "Medicine" ]
Towards a Privacy-Preserving Way of Vehicle Data Sharing – A Case for Blockchain Technology?

Vehicle data is a valuable source for digital services, especially with a rising degree of driving automatization. Although regulation on data protection has become stricter due to Europe's GDPR, we argue that the exchange of vehicle and driving data will massively increase. We therefore raise the question: what would be a privacy-preserving way of vehicle data exploitation? Blockchain technology could be an enabler, as it is associated with privacy-friendly concepts including transparency and trust.

1 Introduction and Scope

Introduction and Motivation

Future smart vehicles will provide advanced autonomous driving functions and will be highly connected to other vehicles, roadside infrastructure and various cloud services. The information gained through these wireless interconnections will be used by a smart vehicle to enrich the information gathered by its built-in sensors, such as cameras and radar sensors, to further increase the reliability of its autonomous driving functions. It will also assist in solving automotive research topics like the detection of driver fatigue or driver distraction. These topics will receive additional attention once autonomously driven vehicles face real-world problems on the street and have to force the driver to take over. However, the data collected within current vehicles of limited smartness can be used beyond assisting their drivers in driving. Vehicle data is valuable for third parties [1,2,3] including, e.g., vehicle manufacturers (i.e., OEMs), suppliers, and traffic managers, to name three stakeholders, although there are still many open issues connected to the exchange of vehicle usage data. One dominant challenge for vehicle and driving data exploitation is how to safeguard the privacy of the driver. Although privacy regulation has become stricter in Europe with the General Data Protection Regulation (GDPR) [4], we argue that the exchange of vehicle usage data will increase considerably in the future due to two recent developments: tech startups pushing artificial intelligence technologies, and the rising interest of the automotive industry in fostering the automated driving paradigm. Shortcomings of current vehicle data provisioning approaches are the following. Data, information, and services are mostly exchanged within proprietary closed environments, as collected vehicle usage data is usually sent directly from the smart vehicle to a single service provider (e.g., by a device connected to the OBD-II interface of the vehicle or via the driver's smartphone). As a result, a vehicle owner willing to share data with multiple service providers has to provide the data multiple times while collecting it with different devices in parallel. This is critical due to the large amount of data collected by smart vehicles (up to 4 TB of data per day are expected [5]), and because a significant portion of current service providers (e.g., Automile and Zubie) use dedicated OBD-II dongles to gather data from smart vehicles. Thus, it is currently not feasible, or at least not practical, to use several services at the same time. Finally, these closed systems certainly disrespect the vehicle owner's privacy, as they make it transparent neither how they further monetize the gathered data nor with whom they share it. They typically do not allow the end user to control what data is transferred and shared. And most of them have a lock-in effect, i.e.,
they use the vehicle data for their own purposes. Finally, their business models do not scale yet, as their user communities are still composed mostly of early adopters [1].

Contributions and Structure

Sharing data always carries the risk of violating one's privacy. So, what is a privacy-preserving way of vehicle data exploitation? Can Blockchain technology act as an enabler? Blockchain technology is currently revolutionizing the way smart contracts between parties are managed, due to its outstanding advantages, namely decentralization and transparency by design. The application of Blockchains as a solid basis for a secure data exchange platform seems promising to solve the challenge of monetizing vehicle usage data while protecting the data owner's privacy. In contrast to closed systems, a so-enabled Open Vehicle Data Platform for vehicle usage data, based on smart contracts maintained within Blockchains, would allow the user to choose which service providers can access which vehicle data for which exploitation purpose. Thus, end users can use services from various service providers at the same time while remaining in full control over the collected data, which will also be crucial for autonomous driving. Full control can be achieved by employing privacy settings for each authorized service provider. The user can decide whether to share only anonymized data (e.g., as required by traffic management systems), vehicle-specific data (e.g., for OEMs for continuous improvement), or even user-specific data (e.g., as required by insurance companies to provide flexible insurance rates in Pay-As-You-Drive (PAYD) models [6]). Such a platform will be able to support a wide range of service providers and allow different benefit/business models advantageous for both the users and the service providers. Towards proposing a concept for an Open Vehicle Data Platform, in Section 1 we review existing solutions for vehicle data sharing, highlight their strengths and weaknesses, and particularly focus on potential privacy issues. Thereafter, in Section 2, we provide related work and background on Blockchain technology in the automotive domain and for connected vehicles. We then discuss the actors and roles of a vehicle data sharing ecosystem and the underlying privacy challenge, and propose possible privacy setting schemes protecting the privacy of the involved users, followed by a concept for a Blockchain-based Open Vehicle Data Platform in Section 3. In the latter, Blockchain technology ensures a trustworthy data exchange between all involved entities and users. After describing a conceptual workflow, we discuss open issues and related aspects required to realize the proposed data sharing platform, and conclude the paper with a discussion and outlook in Section 4.

Blockchain Technology (in automotive)

Blockchains were first introduced as the underlying technology of Bitcoin in 2008 [7]. In this initial form, single transactions are used to describe a cash flow from one entity to another. Every new transaction is distributed to the entire Blockchain system; in a subsequent step, a predefined number of these transactions is compiled into a block, and this block is then stored on the Blockchain. The latter can be seen as a distributed database in which blocks are immutably chained to each other. Immutability at the block and transaction level is ensured by using cryptographic hash functions and digital signatures.
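To make the chaining idea concrete, here is a minimal, illustrative Python sketch of hash-linked blocks. It is a toy model (no consensus mechanism, no digital signatures) and not any production blockchain implementation.

```python
import hashlib, json, time

def block_hash(block: dict) -> str:
    """SHA-256 over the canonical JSON encoding of the block contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    chain.append({
        "index": len(chain),
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    })

def verify(chain: list) -> bool:
    """Any tampering with an earlier block breaks every later prev_hash link."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, [{"from": "A", "to": "B", "amount": 5}])
append_block(chain, [{"from": "B", "to": "C", "amount": 2}])
print(verify(chain))          # True
chain[0]["transactions"][0]["amount"] = 500
print(verify(chain))          # False: the immutability check fails
```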
Every entity within the Blockchain system can easily verify a transaction as well as a block without requiring any trusted party within the system. Newer versions of Blockchain allow, besides the exchange of simple transactions, also the creation of smart contracts. The latter can be seen as executable "if-then" conditions which are stored on the Blockchain and can, e.g., be used to trigger a cash flow by an event (e.g., transfer the flat rent to the landlord on the 1st day of each new month). Beyond such simple examples, smart contracts also allow describing more complex relations between companies, governmental bodies, and so on. Thus, Blockchains and especially smart contracts can potentially be used to solve certain open issues in the automotive industry due to their capability to preserve privacy; in particular w.r.t. long-term research topics like the detection of driver attention/fatigue, and current topics like utilizing vehicles as distributed comprehensive environmental sensors, thereby connecting vehicles more and more to each other (V2V) as well as to the surrounding infrastructure (V2I). As a result, Blockchain technology has raised enormous attention in research, academia and industry. Various projects and initiatives covering different industrial domains were started in the last months with the goal of identifying real business opportunities for the use of Blockchain in future products, or even of developing concrete (distributed) applications where the use of Blockchain technology can be beneficial; the automotive industry, too, has identified potential areas for the use of Blockchains. Recently, the automotive manufacturers BMW, GM, Ford and Renault started the Mobility Open Blockchain Initiative (MOBI) together with other industrial and academic partners such as Bosch, Blockchain at Berkeley, Hyperledger, Fetch.ai, IBM and IOTA [8]. Other vehicle manufacturers are also evaluating Blockchains or are already working on concrete projects: in 2017, Daimler started a project where Blockchain technology is used to manage financial transactions [9]. Furthermore, the automotive supplier ZF teamed up with IBM and UBS to work on a Blockchain-based automotive platform called Car eWallet, with the goal of paving the way for autonomous vehicles by allowing automatic payments and providing other convenience features [10]. Hence, Blockchain has definitely gained attention in the automotive industry. However, concrete ideas, products and services are needed to show that Blockchain is more than a hyped technology and actually allows the development of new business cases.

Connected Vehicles and Data Exploitation

Future vehicles will communicate with each other as well as with the surrounding road infrastructure to collect valuable information about road conditions and to sense current traffic situations (e.g., very relevant in traffic intersection scenarios). Furthermore, vehicles will increasingly be connected to the Internet to provide a wide range of convenience services to the users, to gather the latest traffic and map information or the current city traffic strategy, or even to report an accident (i.e., eCall). This Internet connection could of course also be used to transfer environmental data collected by the vehicle itself (e.g., camera, radar, or lidar data) to the cloud. Intel recently released a statement saying that future (self-driving) vehicles will collect up to 4 TB of data each day [5].
A wide range of different service providers (not restricted to automotive) would be interested in using the collected data in various ways. Sharing the collected data could and should also be beneficial for the owner/driver of the vehicle (see Section 3.1) and, on the downside, will raise serious privacy issues, as the exchanged information could be used, e.g., to track the user's location or analyze the user's behavior (see also Section 3.2). Several tech startups such as Automile, Dash, and Zendrive, as well as large initiatives driven by vehicle manufacturers such as AutoMat (coordinated by Volkswagen), have started initiatives with the goal of collecting and utilizing data from single vehicles up to entire fleets, following different purposes [1]: i) provide specific services in order to generate a benefit for the driver or the vehicle/fleet owner in return for sharing data; ii) create value by monetizing the collected data from a mass of vehicles to third parties, which in turn use it as input for algorithms; iii) further improve the business offerings of service providers and develop new services. Furthermore, in times of a shift of the automotive industry towards digitalization, of managing different SAE levels of autonomous driving on the road simultaneously, and of the Internet of Things, where sensors are increasingly connected to the Internet, the automotive industry still tries to solve many long-known phenomena. These phenomena include, for example, the detection of driver distraction, fatigue and trust, or vehicle security and safety, which will increasingly be addressed in the cloud by feeding algorithms with sensitive and privacy-relevant data from vehicle usage.

Data ownership of vehicle sensor data seems to be unclear from a legal perspective. The driver, vehicle owner, passengers, and the vehicle manufacturer may all claim their right to certain data. In the AutoMat project, coordinated by Volkswagen, it is argued that, as is usual in other domains, e.g. the music business, "the copyright is distributed proportionally among the members of the value chain" [11]. This copyright distribution would give vehicle manufacturers the right to use the data a driver produces without charge, and would thus place vehicle manufacturers in the profitable data platform provider role (as they can easily integrate a data interface into their cars). However, from a driver's, vehicle owner's, or passenger's perspective, copyright should not be distributed, as there would not be any data without them driving the vehicle. This is common in many domains: digital camera manufacturers, for example, do not hold copyright on the photos produced, and a competitive market with open data platforms will force innovative solutions and offer more benefits to the data owner to attract data provision.

A Vehicle Data Sharing Ecosystem

A series of stakeholders, including vehicle developers, vehicle manufacturers, insurers, and even smart cities, could benefit considerably from an open privacy-preserving vehicle data sharing platform and thus participate in a vehicle data sharing ecosystem. Fig. 1 sketches such a vehicle data sharing ecosystem and highlights the connections between the different stakeholders. The figure illustrates the stakeholders and the advantages for their businesses (based on shared vehicle data), as well as the advantages for vehicle owners (using the services the stakeholders provide based on their shared data).
Thereby, different connection types and privacy levels are envisaged, as different stakeholders are interested in different aspects of the data collected by connected vehicles. As indicated in Fig. 1, certain service providers such as city planners or map providers are not interested in who is driving (i.e., they do not need driver-specific information) or in the specific type of vehicle (i.e., they do not need vehicle-specific information such as brand, color, or model); these services can be satisfied by providing anonymized vehicle usage data. Other (automotive) services targeting the vehicle development lifecycle (e.g., predictive maintenance or wear-out of vehicle components) will only require vehicle-specific data, whereas yet other services will mainly be interested in user-specific information (i.e., who is or was driving). The proposed Open Vehicle Data Platform addresses the fact that different services require different kinds of data, and allows specifying which components of the collected data are shared to enable a service. Privacy is thereby especially addressed, as a connected vehicle will not necessarily have to share an entire dataset with a service provider, but rather only the data which is really needed by the service provider to provide a specific service. In a simplified model of a vehicle data ecosystem, four types of data sharing can be distinguished: sharing anonymous data, driver-specific data, vehicle-specific data, or a combination of them.

Fig. 1. Vehicle usage data can be used for various services and by different entities, bringing advantages to both the vehicle owner/user and the service provider / data consumer.

From a more abstract point of view, a vehicle data sharing ecosystem can have several types of actors linked by value flows, as indicated in the e3value model in Fig. 2. For instance, a driver can share driving and vehicle data with a gateway provider, who then forwards this data to a data platform provider. In return, the driver may receive money but will probably have to mount a vehicle data gateway device in his vehicle. A service provider may use driving data from the data market/platform to establish a preventive maintenance service for drivers. While drivers may pay service providers a fee for consuming this service, the data market receives another fee from the service provider for providing the technical data that is the basis for this service. Consequently, the ecosystem has mutual dependencies and thus allows scenarios where, e.g., a driver uses an attractive service offered for free, because an organizational consumer (in current market scenarios usually without the knowledge of the driver) pays the service provider for development and service provision in the background, in order to get the data or access to a valuable service based on this data.

Fig. 2. Actors and value flows (e3value model) of a vehicle data sharing ecosystem.

The Privacy Challenge for Data Sharing

As discussed before, service providers will monetize data collected by connected vehicles and should therefore reward the drivers providing the data with certain benefits. If the exchange of data between the connected vehicle and the service provider is insecure [12] (or the service provider itself is compromised or acts maliciously), privacy issues ranging from tracking the user to stealing sensitive information can arise.
Hence, security and privacy must be addressed when designing a vehicle usage data platform and, as a general rule, a service provider should only be allowed to access the data collected by a connected vehicle that is relevant for providing a specific service. A driver may exhibit driving behavior which could be interpreted negatively and might not be willing to share the generated driving data with others, as this could imply legal, social or ethical consequences. For instance, an aggressive driving style might cause social consequences (if shared with friends while benchmarking) or even legal ones (if captured by the police). Drivers who become aware of this may not want to contribute to any data sharing platform at all if their shared vehicle data could have negative consequences for them. This is also reflected in current studies and surveys in which users are asked about trust and privacy w.r.t. connected vehicles: in one of these studies, Walter et al. [13] detail user concerns regarding connected vehicles and highlight the need for a privacy-aware data sharing mechanism. Defining a privacy configuration mechanism w.r.t. usability and transparency opens up different options. One approach is a distinction between vehicle-specific and driver-specific data, where one can opt to share both of them (either anonymized or not), just one, or none. Another approach would be four easily understandable levels of decreasing privacy: i) don't share, where simply no data is shared at all; ii) private, where data is provided, e.g., to calculate some basic individual statistics, but cannot be used for anything else; iii) anonymized for public usage, where data can be used as in the private level and additionally is provided to the public in anonymized form; and iv) public, where all data is provided to the public. However, this approach would require awareness from drivers, and service providers would have to adopt the concept; hence it limits possibilities, perhaps opens legal loopholes, and ultimately lacks transparency about which specific data a service has access to. Therefore, we argue that it is feasible to adopt the approach of Android smartphone applications, which clusters access to certain data into topics (e.g., an app needs access to one's contacts and images). The level of detail is a decisive factor for such clusters: emission values can be clustered under a broad topic named vehicle sensor data or treated as an individual emission-values category, while quite granular categories would require a basic technical understanding from every user; a sketch of such a clustered permission model follows below. The authors still see potential for improvement, as this solution risks presenting too much information, comparable to terms and conditions no one really reads carefully.
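One way to make such a clustered permission model concrete is sketched below: data topics and privacy levels as Python enums, with a per-service permission set. The topic names follow the clusters proposed in this paper's outlook; everything else (class names, the numeric ordering of levels) is our illustration, not part of the paper.

```python
from dataclasses import dataclass, field
from enum import Enum

class DataTopic(Enum):
    """Data clusters proposed in the paper's outlook section."""
    EMISSION = "emission data"
    VEHICLE = "vehicle data"
    ENVIRONMENT = "environment data"
    TRAFFIC = "traffic data"
    DRIVER = "driver data"
    RIDE = "ride data"
    OTHER = "other data"

class PrivacyLevel(Enum):
    DONT_SHARE = 0   # no data shared at all
    PRIVATE = 1      # only for the owner's individual statistics
    ANONYMIZED = 2   # additionally provided to the public, anonymized
    PUBLIC = 3       # all data provided to the public

@dataclass
class ServicePermission:
    """Per-service privacy settings chosen by the vehicle owner."""
    service_provider: str
    settings: dict = field(default_factory=dict)  # DataTopic -> PrivacyLevel

    def allows(self, topic: DataTopic, required: PrivacyLevel) -> bool:
        granted = self.settings.get(topic, PrivacyLevel.DONT_SHARE)
        return granted.value >= required.value

perm = ServicePermission("MapProvider",
                         {DataTopic.TRAFFIC: PrivacyLevel.ANONYMIZED})
print(perm.allows(DataTopic.TRAFFIC, PrivacyLevel.ANONYMIZED))  # True
print(perm.allows(DataTopic.DRIVER, PrivacyLevel.PRIVATE))      # False
```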
A Concept for a Blockchain-based Open Vehicle Data Platform

The concept provided in this section sketches a privacy-preserving Open Vehicle Data Platform. Instead of going into detail and arguing for certain tools and architectures, we rather convey our idea by describing the workflow. A vehicle is capable of acquiring a lot of valuable data, and the driver of the connected vehicle shall be able to decide if and how this data is shared with service providers, as discussed earlier. In the proposed concept, as indicated in Fig. 3, smart contracts based on Blockchain technology are used to specify whether a service is allowed to access data from a certain vehicle and also which kind of data will be shared.

Once an agreement between the connected vehicle and the service provider (i.e., a smart contract) is signed, Blockchain technology is exploited i) to ensure that the smart contract cannot be tampered with, and ii) to make the smart contract available to so-called Brokers. A Broker provides online storage where data collected by connected vehicles is stored securely, and it is responsible for handling the access of a specific service to data stored on its online storage according to the existing smart contracts. Furthermore, the Broker maintains secure data connections between its online storage and the connected vehicles as well as the service providers by using suitable protection mechanisms (e.g., TLS). In the proposed concept, several Brokers take over these tasks, thereby also allowing connected vehicles to switch between different Brokers or even to store data at different locations. The Blockchain fulfills two essential tasks. First, it provides tamper-proof storage for smart contracts as well as other transactions; second, it provides a way to ensure the authenticity of data collected by a connected vehicle and stored on an online storage, as the hash of a collected dataset is integrated into a transaction that is then stored on the Blockchain. Such a transaction can also be seen as a trigger informing service providers about the latest available dataset. Please note that storing data directly on the Blockchain is not advisable from a technological point of view. Also note that existing contracts on the Blockchain can simply be revoked or changed by filing a new contract between the connected vehicle and the concerned service provider.

Fig. 3. Data exchange between origin (vehicle) and target (service providers) is managed by a broker using Blockchain technology for smart contracts.

The proposed concept relies on two different entities stored on the Blockchain, as sketched in the example below: i) smart contracts, describing which data is shared with a certain service provider and specifying the corresponding reward; each contract contains information about the Broker used to store the collected data and the timespan in which a certain service is allowed to access the collected data, and is signed by the connected vehicle (or its owner) and the service provider before being stored on the Blockchain; ii) dataset transactions, containing the hash of a dataset stored on the online storage of a Broker; every transaction is signed by the connected vehicle (or its owner), and also by the Broker once the dataset has been successfully transferred to (and verified on) its online storage. The proposed concept is able to securely interconnect connected vehicles and service providers in a privacy-preserving way, by utilizing the Blockchain as a tamper-proof, decentralized database, and by using dedicated Brokers providing secure online storage and handling access control w.r.t. the stored data.
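The two on-chain entities could be modeled as follows. This is a hypothetical, simplified sketch: the field names and the signing scheme are our assumptions, not a concrete implementation of the proposed platform.

```python
import hashlib
from dataclasses import dataclass

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

@dataclass
class SmartContract:
    """On-chain agreement between a vehicle (owner) and a service provider."""
    vehicle_id: str
    service_provider: str
    shared_topics: tuple          # e.g. ("traffic data",)
    reward: str                   # compensation granted to the data owner
    broker: str                   # Broker holding the off-chain dataset
    valid_from: str
    valid_until: str
    owner_signature: str = ""     # both parties sign before on-chain storage
    provider_signature: str = ""

@dataclass
class DatasetTransaction:
    """Anchors an off-chain dataset on the Blockchain via its hash."""
    vehicle_id: str
    broker: str
    dataset_hash: str             # sha256 of the encrypted dataset
    owner_signature: str = ""
    broker_signature: str = ""    # added after the Broker verified the hash

dataset = b"...encrypted vehicle usage data..."
tx = DatasetTransaction("WVW-123", "broker-A", sha256(dataset))
assert tx.dataset_hash == sha256(dataset)  # Broker-side integrity check
```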
In the following, we summarize the seven steps required to share data between a connected vehicle and a service provider, and use this example to highlight the benefits of the proposed vehicle data sharing platform:

1. Initially, the owner of a connected vehicle wants to use a certain service and gets in contact with the responsible service provider. In this step, the user is informed about the type of data the service provider requires to provide the specific service.
2. If the user agrees to these terms, a smart contract specifying the relation between the connected vehicle, its owner, and the service provider is created and signed by the vehicle owner (representing the connected vehicle) and the service provider.
3. Once the smart contract is finalized, it is stored on the Blockchain.
4. While being used, the connected vehicle continuously collects valuable data, which is divided into datasets (e.g., after a predefined time or once a certain amount of data has been collected) and sent encrypted to the online storage of the Broker. Each transferred dataset is accompanied by a dataset transaction containing the hash of the dataset as well as the digital signature of the connected vehicle (or its owner).
5. The Broker can thus verify that the dataset was not altered in transit, and is itself kept from changing the dataset, as this would invalidate the digital signature already included in the dataset transaction. Once the received dataset is verified, the Broker adds its signature (thus completing the transaction) and broadcasts it on the Blockchain network.
6. Service providers can monitor the Blockchain and are directly notified about the latest available dataset by looking for relevant dataset transactions. When such a transaction is found, the service provider requests the dataset by establishing a connection with the Broker.
7. The Broker then looks for a suitable smart contract on the Blockchain and provides access to the data as specified in the smart contract, or declines the request if no smart contract is found or if it has been revoked.

Conclusion, Discussion and Outlook

This paper aimed to launch the discussion on how Blockchain technology may help to establish an open vehicle data sharing platform respecting the privacy of both the vehicle owner and the vehicle driver. Smart contracts are introduced as a means to fully digitize the data sharing relationship between a consumer (e.g., a driver who provides his data in order to use services) and a service provider (e.g., a provider of a preventive maintenance service). They describe what kind of data will be provided, by whom, and for what data exploitation purpose. While these smart contracts are stored on the Blockchain to increase the trust between the stakeholders of the vehicle data sharing ecosystem, the shared data itself is not stored on the Blockchain but, for instance, on a separate data platform and data market. However, a series of issues and research topics remain open and will be targeted in future work. There are certain prerequisites vehicles would need for the provided concept: for example, a standardized vehicle data interface across manufacturers, through which in principle all vehicle data can be provided externally (stored on an SD card or a hard drive if used for private purposes, or sent to online destinations), would ease data acquisition. Only data which is marked to be stored or sent somewhere should be captured; all other data should be deleted or continuously overwritten. In order to participate, users need to be able to authenticate themselves to the vehicle and the Broker (e.g., to use their privacy settings in every vehicle they use), so they need to register and have an identity. Using Blockchain technology ensures a privacy-preserving way to securely share the data from the vehicle to the service provider.
If a service provider gets access to one's data, he is not allowed to resell it unless this is explicitly permitted in the contract. In practice, however, this cannot be prevented with the presented concept; thus, privacy cannot be fully ensured. As mentioned in Section 3.2, how to cluster data into useful groups and at which granularity is a topic for future research. An initial version could be as follows:

- Emission data
- Vehicle data (e.g. base weight, number of passengers, year of manufacture, type, brand)
- Environment data (e.g. road topography, outside temperature, rain)
- Traffic data (e.g. detected entities around the vehicle, including humans and vehicles, and information about the street's throughput rate)
- Driver data (e.g. driver ID, music channel, mood, fatigue level, driving score, heart rate)
- Ride data (e.g. GPS position, inside temperature, start datetime, destination)
- Other data
6,315.4
2018-08-18T00:00:00.000
[ "Computer Science" ]
Pion form factor and charge radius from Lattice QCD at physical point

We present our results on the electromagnetic form factor of the pion over a wide range of $Q^2$ using lattice QCD simulations with Wilson-clover valence quarks and HISQ sea quarks. We study the form factor at the physical point with a lattice spacing $a=0.076$ fm. To study lattice spacing and quark mass effects, we also present results for a 300 MeV pion at two different lattice spacings, $a=0.04$ and 0.06 fm. The lattice calculations at the physical quark mass appear to agree with the experimental results. Through fits to the form factor, we estimate the charge radius of the pion at the physical pion mass to be $\langle r_{\pi}^2 \rangle=0.42(2)~{\rm fm}^2$.

I. INTRODUCTION

The pion is one of the most prominent strongly-interacting particles besides the nucleon, since it is a Goldstone boson of QCD. For this reason, it is important to study the pion's internal structure and find out whether there is a connection between its internal structure and its Goldstone-boson nature. This issue is particularly relevant for understanding the origin of mass generation in QCD; see, e.g., the discussions in Refs. [1,2]. Knowledge of the internal structure of the pion is much more limited than that of the nucleon. On the partonic level, the parton distribution function (PDF) of the pion has been studied through global analyses of Drell-Yan production in pion-nucleon collisions and of tagged deep inelastic scattering (DIS); for recent analyses see Refs. [3,4]. Recently, there have been many efforts in lattice QCD to study the pion PDF [5-10], which have used the quasi-PDF in Large Momentum Effective Theory (LaMET) [11,12], the pseudo-PDF [13,14], and current-current correlator [15-17] (also referred to as "good lattice cross-section") approaches; see Refs. [18-21] for recent reviews. Lattice calculations of the lowest moments of the pion PDF [22-27] are also available and can be used as additional constraints in global analyses.

The form factor, defined via

$\langle \pi(P_2) | J^\mu | \pi(P_1) \rangle = (P_1 + P_2)^\mu\, F_\pi(Q^2)$,   (1)

with $J^\mu$ being the electromagnetic current and $Q^2 = -(P_2 - P_1)^2$, provides a different insight into pion structure, namely the charge distribution. It can, in principle, be measured in electron-pion scattering. Generalized parton distributions (GPDs) combine the information contained in PDFs and form factors and provide a three-dimensional image of a hadron. In the case of the nucleon, the study of GPDs is the subject of large experimental and theoretical efforts (see, e.g., Ref. [28] for a recent review). Experimental study of the pion GPD is far more challenging and will only be possible at the Electron-Ion Collider (EIC), if at all. Fortunately, GPDs, including the pion GPDs, can be studied on the lattice using LaMET [29-32].

Experimentally, the pion form factor was measured by scattering pions off atomic electrons at Fermilab [33,34] and CERN [35,36]. This allowed a determination of the pion form factor for momentum transfers $Q^2$ up to 0.253 GeV$^2$ [33-36]. For larger $Q^2$, one has to determine the pion form factor from the electroproduction of charged pions off nucleons. The corresponding experiments have been performed at Cornell [37-39], DESY [40,41], and JLab [42-46]. These determinations, however, are model-dependent. The most recent determination of the pion form factor, up to $Q^2$ of 2.45 GeV$^2$, was carried out by the F$_\pi$ collaboration using data from both DESY and JLab [46].
Experiments at the future EIC facility will allow us to probe even higher $Q^2$, up to 30 GeV$^2$, and possibly to see the partonic structure in an exclusive elastic process and make contact with the asymptotic large-$Q^2$ perturbative behavior [47]. In the timelike region, the pion form factor can be determined by analyzing the $e^+ e^- \to \pi^+ \pi^-$ process [48] (see also references therein); this analysis also constrains the form factor in the spacelike region. Lattice QCD calculations allow one to obtain the pion form factor from first principles, i.e., without any model dependence, up to relatively large $Q^2$; they therefore provide an important cross-check for the experimental determinations. The first lattice calculations of the pion form factor date back to the late 80s and were performed in the quenched approximation [49,50]. More recently, lattice calculations of the pion form factor have been performed with two flavors ($N_f = 2$) of dynamical quarks [51-55], with a physical-mass strange quark and two light-quark flavors ($N_f = 2+1$) [56-62], as well as with a dynamical charm quark, a strange quark, and two flavors of light quarks with nearly-physical masses ($N_f = 2+1+1$) [63]. Most of the lattice studies have focused on the small-$Q^2$ behavior of the pion form factor and the extraction of the pion charge radius. The pion charge radius is very sensitive to the quark mass: chiral perturbation theory predicts a logarithmic divergence of the pion charge radius as the quark mass goes to zero [64]. Therefore, one has to work at the physical quark mass, or have calculations in an appropriate range of quark masses to perform chiral extrapolations. Furthermore, studies so far have been performed for lattice spacings $a > 0.09$ fm. Constrained by analyticity and unitarity, the charge radius is correlated with the phase of the form factor in the timelike region. It was proposed in Ref. [65] that high-precision determinations of the pion form factor and the charge radius have the potential to shed light on the discrepancy in the hadronic vacuum polarization (HVP) derived from $e^+ e^- \to$ hadrons cross-sections and from lattice calculations [66].

The aim of this paper is to study the pion form factor over a wide range of $Q^2$. Therefore, we perform calculations at small lattice spacings, namely $a = 0.04$ fm and 0.06 fm, with a valence pion mass of about 300 MeV. Furthermore, to study the quark-mass effect, we also perform calculations at the physical pion mass, though at a somewhat larger lattice spacing, $a = 0.076$ fm. Unlike previous studies, we also perform calculations for a highly boosted pion in order to extend them in the future to the pion GPD.

II. LATTICE SETUP

In this study, we use the Wilson-clover action with hypercubic (HYP) [67] link smearing on (2+1)-flavor $L_t \times L_s^3$ lattice ensembles generated by the HotQCD collaboration [68,69] with the highly-improved staggered quark (HISQ) sea action. For the clover coefficient we use the tree-level tadpole-improved value $c_{sw} = u_0^{-3/4}$, with $u_0$ being the HYP-smeared plaquette expectation value. This setup is the same as the one used by us to study the valence parton distribution of the pion [9,10]. As in Refs. [9,10], we use two lattice spacings, $a = 0.04$ fm and $a = 0.06$ fm, and a valence pion mass of 300 MeV. The lightest pion mass for these gauge configurations is $m_\pi^{sea} = 160$ MeV, and the lattice spacings were fixed with the $r_1$ scale [68] using the value $r_1 = 0.3106(18)$ fm [70].
In addition, we performed calculations at a lattice spacing of 0.076 fm and a valence pion mass of 140 MeV, using gauge configurations corresponding to the lightest pion mass of $m_\pi^{sea} = 140$ MeV [69]. The lattice spacing was set by the kaon decay constant, $f_K$ [69]. The lattice ensembles used in this study and the corresponding parameters are summarized in Table I. Due to the HISQ action, the taste splitting in the pion sector is small for lattice spacings $a \leq 0.076$ fm: for $a = 0.076$ fm, the root-mean-square pion mass is only 15% higher than the lightest pion mass, while the heaviest pion mass is only 25% above the lightest pion mass [69]. In what follows, for the $a = 0.076$ fm ensemble we will not distinguish between the sea and the valence pion mass, and will refer to it as the $m_\pi = 140$ MeV ensemble, or the ensemble with physical pion mass. The effects of partial quenching persist at finite lattice spacing but go away in the continuum limit.

To obtain the form factor we calculate the pion two-point and three-point functions. We consider two-point functions defined as

$C_{2pt}^{ss'}(\mathbf{P}, t) = \langle \pi_s(\mathbf{P}, t)\, \pi_{s'}^{\dagger}(\mathbf{P}, 0) \rangle$,   (2)

where $\pi_s(\mathbf{P}, t)$ are either smeared or point sources, $s = S, P$, with spatial momentum $\mathbf{P} = \frac{2\pi}{a L_s} \cdot (n_x, n_y, n_z)$. As in the previous studies [9,10], we used boosted Gaussian sources in Coulomb gauge with boost along the z-direction, $\mathbf{k}_z = \frac{2\pi}{a L_s} \cdot (0, 0, j_z)$. The radius of the Gaussian sources, $r_G$, is also given in Table I. The three-point function is defined in Eq. (3), with the current insertion

$O_{\gamma_t} \propto \bar{u} \gamma_t u - \bar{d} \gamma_t d$   (4)

being the isovector component of the electric charge operator. Note that the isosinglet component of the electric charge vanishes between pion states. The initial momentum in the above expression is $\mathbf{P}_i = \frac{2\pi}{a L_s} \cdot (0, 0, n_z)$, while the final momentum is $\mathbf{P}_f = \mathbf{P} = \mathbf{P}_i + \mathbf{q}$. The values of the momenta used in this study, as well as the corresponding boost parameter $j_z$, are summarized in Table I. We calculated the three-point functions for three values of the source-sink separation $t_s$ for the two coarser lattices; for the finest lattice we used four source-sink separations. The source-sink separations used in our study are also listed in Table I. The calculations of the two- and three-point functions were performed on GPUs with the QUDA multigrid algorithm [71] used for the Wilson-Dirac operator inversions to obtain the quark propagators. We used multiple sources per configuration together with the All-Mode-Averaging (AMA) technique [72] to increase the statistics. The stopping criterion for AMA was set to $10^{-10}$ and $10^{-4}$ for the exact and sloppy inversions, respectively. Since the signal deteriorates with increasing momentum, we use different numbers of sources and gauge configurations for different momenta; these are given in the last two columns of Table I for each value of the momentum.

TABLE I. Lattice ensembles and parameters used in this study: valence pion mass $m_\pi^{val}$ (GeV), $c_{sw}$, Gaussian source radius $r_G$ (fm), source-sink separations $t_s/a$, momenta $n_z$ and $n_i$ ($i = x, y$), boost parameter $j_z$, and the number of configurations and (exact, sloppy) sources.

For the study of the form factor, it is convenient to use the Breit frame, where $|\mathbf{P}_i| = |\mathbf{P}_f|$. Using the Breit frame is essential when studying the GPD within LaMET [29-32]; therefore, we also calculated the pion form factor in the Breit frame. The parameters of this setup are summarized in Table II.

III. TWO-POINT FUNCTION ANALYSIS

Since the source-sink separations used in this study are not very large, it is important to quantify the contributions of the excited states when extracting pion matrix elements.
This in turn requires a detailed study of the pion two-point functions. For the a = 0.04 fm and 0.06 fm lattices and m_π^val = 300 MeV, the pion two-point functions have been studied for different momenta along the z-direction in Refs. [9,10]. Furthermore, this analysis was very recently extended to include momenta also along the x- and y-directions for a = 0.04 fm [73]. We have extended this analysis to a = 0.076 fm and the physical pion mass. The pion two-point function in Eq. (2) has a spectral decomposition of the form (Eq. (5)) C_2pt(t) = Σ_n |A_n|^2 [e^(−E_n t) + e^(−E_n (aL_t − t))], where E_(n+1) > E_n, with E_0 being the energy of the pion ground state. A_n is the overlap factor ⟨Ω|π_s|n⟩ between the state n and the state created by the operator π_s from the vacuum state |Ω⟩. Thanks to the Gaussian smearing, the excited-state contribution is suppressed. We therefore truncate Eq. (5) at N_state = 3 and fit the data in a range t ∈ [t_min, aL_t/2]. In the left panels of Fig. 1, we show the extracted E_0 for three different momenta. As one can see, the ground-state energies E_0 reach a plateau when t_min ≳ 10a, 5a and 2a for the 1-state, 2-state and 3-state fits, respectively. The horizontal lines in the plots are computed from the dispersion relation E(P) = √(P^2 + m_π^2). Here the value of m_π was obtained by considering the pion masses from the fits with t_min ∈ [10a, 20a], and then fitting these results to a constant. The fit to a constant is justified because there is no statistically significant t_min dependence of the pion mass. The ground-state energies for different momenta agree with the horizontal lines for sufficiently large t_min, i.e. they follow the dispersion relation. Thus, for the determination of the next energy level, we can fix the ground-state energy E_0 from the dispersion relation and perform a 3-state fit. Interestingly, as shown in the right panels of Fig. 1, we can also observe plateaus for E_1 when t_min > 5a. The energy of the first excited state also follows the dispersion relation E_1(P) = √(P^2 + m^2) with m = 1.3 GeV. This could imply that the first excited state is a single-particle state, namely the first radial excitation of the pion, π(1300) [73]. We cannot rule out, however, the possibility that it is a multi-pion state within the large errors. Since the first excited-state energy E_1 does not reach a plateau for t_min < 5a, we conclude that for t/a < 5 the contribution of higher excited states to the two-point function is significant. Therefore, we need to consider 3-state fits for these t values. To perform a 3-state fit, we fix E_0 to the dispersion relation and put a prior on E_1 using the best estimates from the SS and smeared-point (SP) correlators [10] together with the errors from the 2-state fit. This way we get the second excited-state energy, E_2, which does not depend on t_min within the statistical errors. However, the value of E_2 is very large, about 3 GeV. This implies that E_2 does not actually refer to a single state but rather to a tower of many higher excited states. The situation is similar for the other two ensembles with 300 MeV pions [10]. We now understand that a 2-state spectral model can describe our two-point functions well when t_min ≳ 5a, while a 3-state model can describe t_min ≳ 2a. This will be important to keep in mind when analyzing the three-point function and pion matrix elements in the next section. [TABLE II (garbled rows removed): two sets of measurements in the Breit frame on the two heavy-pion ensembles, listing lattice sizes, t_s/a values, momentum components and numbers of configurations and (exact, sloppy) sources. Using notation similar to Table I, the initial pion state with transverse momentum P_i^⊥ = 2π n_(p_i)/(L_s a) has the same energy as the final state with momentum P_f = P_i + q.] 
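The truncated spectral fits described in this section are straightforward to prototype. The sketch below, a minimal illustration and not the analysis code used in the paper, fits a 2-state model including the thermal (wrap-around) images to synthetic correlator data; the temporal extent, data values and fit window are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

Lt = 64  # temporal extent in lattice units (placeholder)

def c2pt_2state(t, A0, E0, A1, E1):
    """Truncated spectral decomposition with thermal images:
    C(t) = sum_n A_n^2 [exp(-E_n t) + exp(-E_n (Lt - t))]."""
    return (A0**2 * (np.exp(-E0 * t) + np.exp(-E0 * (Lt - t)))
            + A1**2 * (np.exp(-E1 * t) + np.exp(-E1 * (Lt - t))))

# synthetic data standing in for a measured correlator
t = np.arange(2, Lt // 2 + 1)
rng = np.random.default_rng(0)
truth = c2pt_2state(t, 1.0, 0.30, 0.8, 0.75)
err = 0.01 * truth
data = truth + rng.normal(0.0, err)

# fit in the window [t_min, Lt/2]; vary t_min to look for a plateau in E0
tmin = 5
sel = t >= tmin
popt, pcov = curve_fit(c2pt_2state, t[sel], data[sel],
                       p0=[1.0, 0.3, 0.5, 0.8], sigma=err[sel],
                       absolute_sigma=True)
print("E0 =", popt[1], "+/-", np.sqrt(pcov[1, 1]))
```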
[Figure 1 caption (residue): the lines are computed from the dispersion relation E(P) = √(P^2 + E(P = 0)^2), with E(P = 0) equal to 0.14 GeV for E_0 and 1.3 GeV for E_1; as can be observed, E_0 and E_1 reach plateaus for large enough t_min.] To summarize this section, in Fig. 2 we show the dispersion relation obtained from the above analysis. We also extended the analysis for a = 0.06 fm [10] by including additional momenta with non-zero components along the x- and y-directions. The corresponding results are also shown in Fig. 2 and are consistent with the dispersion relation; this is likely helped by the boosted Gaussian sources, which have poor overlap with the scattering states. IV. EXTRACTION OF BARE MATRIX ELEMENTS OF THE PION GROUND STATE To obtain the bare pion form factor we consider the standard ratio R_fi(τ, t_s) of the three-point and two-point pion correlation functions [74,75] (Eq. (6)). This ratio gives the bare pion form factor in the limit of large time separations. As explained in Sec. II, we calculated the three-point functions with P_i along the ẑ direction and multiple values of the momentum transfer q = P_f − P_i for each P_i. Thus, values of q with the same magnitude of the transverse momentum transfer should be equivalent; in other words, the three-point function data should exhibit transverse symmetry. We find that our numerical results for R_fi(τ, t_s) with the same |n_x^q| and |n_y^q| are indeed consistent within errors. Therefore, in the following analysis we average the three-point function data with the same magnitude of the transverse momentum transfer. Since the temporal extent of our lattices is not large, it is important to consider thermal-state contaminations, also called wrap-around effects, caused by the periodic boundary condition in time [10]. To remove the wrap-around effects in the two-point function we replaced C_2pt(t) by C_2pt(t) − A_0 e^(−E_0(aL_t−t)), using the best estimates of A_0 and E_0 from the two-point function analysis. To understand wrap-around effects in the three-point function we consider the spectral decomposition of C_3pt entering Eq. (6) (Eq. (7)), with states labeled m, n, k = Ω, 0, 1, ..., where 0 denotes the pion ground state. In general, terms with non-zero E_m will be highly suppressed by e^(−(aL_t−t_s)E_m) (we assume E_Ω = 0). Therefore, in most studies such terms are neglected. However, for the P = 0 case the factor e^(−(aL_t−t_s)E_m(P=0)) ~ e^(−aL_t m_π) is not very small. We have e^(−aL_t m_π) ∼ 0.03, 0.003 and 0.02 for the a = 0.076, 0.06 and 0.04 fm lattices, respectively. On the other hand, for non-zero momenta the terms proportional to e^(−(aL_t−t_s)E_m) are smaller than 0.003 and can be neglected. Therefore, for the a = 0.04 fm and 0.076 fm calculations we only consider non-zero momenta and, in what follows, limit the sum over the index m in Eq. (7) to the vacuum state. We need, however, to consider the wrap-around effects when dealing with the renormalization, as discussed in the next section. In this work, we use a multi-state fit to extract the bare matrix elements of the ground state, ⟨P_f|O_γt|P_i⟩ ≡ ⟨0, P_f|O_γt|P_i, 0⟩, by inserting the spectral decomposition of the two-point function, Eq. (5), and of the three-point function, Eq. (7) with m = Ω, with the sum over n truncated to N_state terms. Furthermore, we take the best estimates of A_n and E_n from the two-point function analysis and put them into Eq. (6). 
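Before any fitting, the wrap-around subtraction described above amounts to one line of array arithmetic. The fragment below is a simplified stand-in with placeholder arrays: it subtracts the leading thermal image and forms a plain three-point/two-point ratio; the full ratio of Eq. (6), with its additional square-root factors of two-point functions, is not reproduced here.

```python
import numpy as np

def subtract_wraparound(c2pt, A0sq, E0, Lt):
    """Remove the leading thermal image:
    C(t) -> C(t) - A0sq * exp(-E0 * (Lt - t)),
    with A0sq the squared ground-state overlap from the two-point fit."""
    t = np.arange(len(c2pt))
    return c2pt - A0sq * np.exp(-E0 * (Lt - t))

def simple_ratio(c3pt_tau, c2pt_Pf, ts):
    """Leading piece of the standard ratio: C3pt(tau, ts) / C2pt(Pf, ts).
    The extra two-point factors of the full Eq. (6) are omitted."""
    return c3pt_tau / c2pt_Pf[ts]

# toy usage with fabricated numbers
Lt = 64
c2 = np.exp(-0.3 * np.arange(Lt)) + np.exp(-0.3 * (Lt - np.arange(Lt)))
c2_corr = subtract_wraparound(c2, A0sq=1.0, E0=0.3, Lt=Lt)
print(simple_ratio(np.full(9, 0.5), c2_corr, ts=10))
```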
In the following, we will refer to this multi-state fitting procedure as Fit(N_state, n_sk), in which N_state is the number of states in the corresponding two-point function analysis and n_sk labels how many τ points are skipped on the two sides of t_s. We consider N_state = 2 and N_state = 3, which have four and nine fit parameters, respectively. We perform the multi-state fit using the bootstrap method with time separations t_s = 6a, 8a, 10a. The data with t_s = 20a and n_(p_i) = (0, 0, 1) are used only to cross-check our analysis. Since the ratio defined in Eq. (6) is a derived quantity, not defined on a single gauge configuration, we used uncorrelated fits. The statistical correlation between the different data points is taken into account through the bootstrap procedure. In Fig. 3, we show examples of the ratio R_fi(τ, t_s) as well as the 2-state and 3-state fit results. As one can see, for large momenta with large statistical errors the reconstructed curves go through the data points well, and the 2-state and 3-state fit results are consistent with each other. However, this is not the case for smaller momenta, where the data are more precise. There, a 3-state fit is required to describe the ratio data with χ^2/dof < 1, while the 2-state fit results in χ^2/dof noticeably larger than 1. Thus, for the following analysis we will take the 3-state fit results as the central values and use the corresponding statistical errors. However, even when using the 3-state fit there is no guarantee that we are free from excited-state contamination. Therefore, we take the difference between the 2-state and 3-state fit results as the systematic error in the following analysis. It can also be observed that the data points at t_s = 20a show a plateau around t_s/2 within the errors and are consistent with the 3-state fit results, which supports our estimate of the bare matrix elements. In App. B, we discuss the plateau fit results using the t_s = 20a data. [FIG. 4 caption (residue): the forward matrix elements h_B(P_i, P_i); the P_i^z dependence can be described by h_B(P_i, P_i) = h_B(P_i = 0, P_i = 0) + r (a P_i^z)^2, shown as the line.] V. THE PION FORM FACTORS To obtain the form factor from the bare form factor determined in the previous section, it needs to be multiplied by the vector-current renormalization factor, Z_V. The simplest way to obtain Z_V is to calculate the forward matrix element h_B(P_i, P_i) = ⟨0, P_i|O|P_i, 0⟩ = Z_V^(−1). However, one needs to keep in mind the wrap-around effect discussed in the previous section. The other issue is the cutoff dependence of h_B(P_i, P_i) at large values of P_i. In Fig. 4, we show h_B(P_i, P_i) for a = 0.076 fm as a function of P_i. In the absence of discretization effects, h_B(P_i, P_i) should be independent of P_i, since after renormalization it gives the charge of the pion. In other words, Z_V should not depend on the momentum of the external state. Following Ref. [10], we model the discretization effects using the form h_B(P_i, P_i) = h_B(P_i = 0, P_i = 0) + r (a P_i^z)^2. As one can see from Fig. 4, this form describes the data quite well, except for P_i = 0. The anomalously large value of h_B(P_i, P_i) at P_i = 0 is due to the wrap-around effects discussed in the previous section. This means that h_B(P_i, P_i) is contaminated by a small contribution proportional to the factor e^(−aL_t m_π) mentioned above. This contribution is also proportional to matrix elements involving two or more pion states with the appropriate quantum numbers. Constraining such matrix elements is difficult in practice. 
However, under some physically well-motivated assumptions it is possible to estimate the corresponding contributions and remove them from h_B(P_i, P_i) [10]. Therefore, we follow the procedure explained in Appendix A of Ref. [10] to remove this contribution from the matrix element. The corrected result for h_B(P_i = 0, P_i = 0) is shown as the blue point in Fig. 4 and is not very different from the result obtained by the fit. Thus, we understand the discretization effects in the forward matrix element h_B(P_i, P_i). We also calculated Z_V for a = 0.076 fm using the RI-MOM scheme and obtained Z_V = 0.946(12), which agrees within errors with the results for h_B(P_i = 0, P_i = 0) shown in Fig. 4. [FIG. 5 caption (residue): renormalized pion form factor for the m_π = 140 MeV ensemble (blue points), compared with the experimental data from CERN (red points) [36] and the Fπ collaboration (green points) [46]. The purple bands are the dispersive-analysis results of the experimental data from Ref. [48], which also included form factors in the timelike region. Our fit results for the a = 0.076 fm data are shown as the blue bands, in which the filled band is the z-expansion fit and the dashed band is the monopole fit. The errors in this plot include the systematic errors.] From Fig. 4 we also see that the discretization errors are smaller than 1% for P_i^z < 1 GeV, and less than 2% for P_i^z < 1.6 GeV. Since the discretization effects as functions of P_i^z will be similar for the off-forward matrix element, it is convenient to obtain the renormalized pion form factor by simply dividing h_B(P_f, P_i) by h_B(P_i, P_i). Then we have F_π(Q^2 = 0) = 1 by construction, and the discretization errors for large P_i^z are removed. We may still have discretization errors proportional to (aQ)^2. Assuming that these discretization errors are similar to the (aP_i^z)^2 discretization errors, we can neglect them, because other sources of error for the form factors are significantly larger in the considered Q^2 range, as we will see below. We comment further on the cutoff dependence of the form factor in App. A. In Fig. 5, we show the renormalized pion form factor obtained for the m_π = 140 MeV ensemble, compared to the experimental data from CERN [36] as well as the results from the Fπ collaboration [46]. The purple bands are the dispersive-analysis results of the experimental data from Ref. [48], which also include form factors in the timelike region. We see good agreement between the lattice results and the experimental data within the estimated error bars at low Q^2. It is expected that at low Q^2 the pion form factor can be described well by a simple monopole Ansatz motivated by the Vector Meson Dominance (VMD) model [76], F_π(Q^2) = 1/(1 + Q^2/M^2) (Eq. (8)). The monopole mass M should be close to the ρ meson mass. Therefore, in Fig. 5 we show the inverse of the pion form factor, 1/F_π(Q^2), as a function of Q^2. We see that in the studied range the inverse form factor can be roughly described by a linear function up to Q^2 = 0.4 GeV^2 within the errors, as expected from the monopole form. The monopole fit of the lattice data (dashed band in Fig. 5), extended to higher Q^2, also agrees with the pion form factor obtained by the Fπ collaboration [46], possibly indicating that the monopole form may work in an extended range of Q^2 within the current precision. At very low Q^2, the pion form factor can be characterized in terms of the pion charge radius, ⟨r_π^2⟩ = −6 dF_π(Q^2)/dQ^2 at Q^2 = 0 (Eq. (9)). As mentioned in the introduction, the pion charge radius is very sensitive to the quark mass, and this is clearly seen in the lattice calculations. 
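The monopole comparison described above is easy to reproduce: since 1/F_π(Q^2) = 1 + Q^2/M^2 is linear in Q^2, the monopole mass follows from a weighted linear fit of the inverse form factor with the intercept fixed to one. A minimal sketch with fabricated placeholder data (the numbers are not the paper's):

```python
import numpy as np

# placeholder lattice data: Q^2 [GeV^2], F_pi, and its error
Q2 = np.array([0.05, 0.10, 0.20, 0.30, 0.40])
F  = np.array([0.905, 0.840, 0.735, 0.660, 0.600])
dF = np.array([0.008, 0.008, 0.010, 0.012, 0.014])

invF  = 1.0 / F
dinvF = dF / F**2                 # error propagation for 1/F

# weighted least squares of 1/F = 1 + s * Q^2, slope s = 1/M^2
w = 1.0 / dinvF**2
s = np.sum(w * Q2 * (invF - 1.0)) / np.sum(w * Q2**2)
M = 1.0 / np.sqrt(s)
r2_fm2 = 6.0 * s * 0.1973**2      # <r^2> = 6/M^2 (Eq. (10)), GeV^-2 -> fm^2
print(f"monopole mass M = {M:.3f} GeV, <r^2> = {r2_fm2:.3f} fm^2")
```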
In fact, it appears to be challenging to obtain the correct pion charge radius from the lattice results [51-63]. Lattice calculations at unphysical quark masses lead to a smaller pion charge radius than the experimental results. If the monopole form (8) could describe the pion form factor for all Q^2, the pion charge radius would be related to the monopole mass as ⟨r_π^2⟩ = 6/M^2 (Eq. (10)). It is convenient to represent the form factors in terms of the effective charge radius defined as [51] r_eff^2(Q^2) = (6/Q^2) [1/F_π(Q^2) − 1] (Eq. (11)). In Fig. 6 we show the effective radius for the a = 0.076 fm ensemble as well as for the two finer ensembles with m_π^val = 300 MeV. We see from the figure that r_eff^2 is roughly constant as a function of Q^2 for all three lattice spacings. For the smallest lattice spacing, a = 0.04 fm, the results for the effective radius are Q^2-independent for Q^2 as high as 1.4 GeV^2. This is consistent with earlier findings [51]. We also clearly see the quark-mass dependence of r_eff^2: the effective radius is smaller for the heavier pion mass, as expected. Comparing the results at a = 0.06 fm and a = 0.04 fm, we see no clear lattice-spacing dependence of r_eff^2. Therefore, we conclude that for a = 0.06 fm the discretization errors of the pion form factor are smaller than the estimated lattice errors in the range of Q^2 studied here. Finally, for the two finer lattices we also show the results from the calculations using the Breit frame, which agree with the non-Breit-frame results. While the monopole Ansatz seems to describe the pion form factor well and was used to obtain the pion charge radius in the past (see, e.g., Ref. [51]), there is no strong theoretical reason why it should describe the pion form factor. Therefore, one has to consider an alternative, more flexible parameterization of the pion form factor. An alternative way to fit the form factors is the model-independent method called the z-expansion [77]. Here the form factor is written as F_π(Q^2) = Σ_(k=0)^(k_max) a_k z^k, with z = [√(t_cut − t) − √(t_cut − t_0)] / [√(t_cut − t) + √(t_cut − t_0)], where t = −Q^2, the a_k are the fit parameters subject to the constraint F_π(Q^2 = 0) = 1, and t_cut = 4m_π^2 is the two-pion production threshold. Furthermore, t_0 is chosen to be the optimal value t_0^opt(Q^2_max) = t_cut (1 − √(1 + Q^2_max/t_cut)) that minimizes the maximum value of |z|, with Q^2_max the maximum Q^2 used in the fit. In the timelike region near the two-pion threshold, the leading singularity of the form factor should be proportional to (4m_π^2 − t)^(3/2) due to the P-wave nature of ππ scattering [48,78,79], which leads to the additional constraint Σ_(k=1)^(k_max) (−1)^k k a_k = 0. We use the AIC model-selection rule to determine k_max, which gives 2 for the a = 0.06 fm data and 3 for the a = 0.04 and 0.076 fm data, for the Q^2 under consideration. The z-expansion results are also shown in Fig. 5 and appear to overlap with the monopole fit, though at larger Q^2 they have larger errors. We also show the fits with the z-expansion in Fig. 6. From this figure we see that this fit also works well for the valence pion mass of 300 MeV and naturally reproduces the weak Q^2 dependence of the effective radii. To better understand the quark-mass dependence of the pion form factor, as well as to facilitate the comparison with the experimental results, in Fig. 7 we show all the results for the pion form factor in terms of the effective radius r_eff(Q^2). We see that the effective radius obtained for the physical pion mass is clearly larger than the one obtained for m_π^val = 300 MeV and is much closer to the CERN data. 
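Because the z-expansion is linear in the coefficients, the two constraints (F_π(0) = 1 and the P-wave sum rule) can be eliminated analytically and the fit reduced to an ordinary linear least-squares problem. The sketch below does exactly that for k_max = 3 and then extracts the charge radius of Eq. (9) by a finite difference; the data arrays are placeholders, not the paper's, and the error analysis (bootstrap, systematics) is omitted.

```python
import numpy as np

def zmap(Q2, tcut, t0):
    """Conformal variable z(t = -Q^2) of the z-expansion."""
    t = -np.asarray(Q2, dtype=float)
    a, b = np.sqrt(tcut - t), np.sqrt(tcut - t0)
    return (a - b) / (a + b)

def fit_zexp(Q2, F, dF, mpi, kmax=3):
    """F = sum_k a_k z^k with F(0) = 1 and sum_{k>=1} (-1)^k k a_k = 0.
    The constraints fix a0 and a1 in terms of the higher coefficients:
      a1 = sum_{k>=2} (-1)^k k a_k,  a0 = 1 - a1*z0 - sum_{k>=2} a_k z0^k,
    so  F(z) - 1 = sum_{k>=2} a_k [(-1)^k k (z - z0) + z^k - z0^k]."""
    tcut = 4.0 * mpi**2
    t0 = tcut * (1.0 - np.sqrt(1.0 + Q2.max() / tcut))
    z, z0 = zmap(Q2, tcut, t0), zmap(0.0, tcut, t0)
    ks = np.arange(2, kmax + 1)
    B = ((-1.0)**ks * ks * (z[:, None] - z0)
         + z[:, None]**ks - z0**ks) / dF[:, None]
    a_free, *_ = np.linalg.lstsq(B, (F - 1.0) / dF, rcond=None)
    a1 = np.sum((-1.0)**ks * ks * a_free)
    a0 = 1.0 - a1 * z0 - np.sum(a_free * z0**ks)
    coeffs = np.concatenate(([a0, a1], a_free))
    return lambda Q2v: sum(c * zmap(Q2v, tcut, t0)**k
                           for k, c in enumerate(coeffs))

# placeholder data (GeV units); physical pion mass
Q2 = np.array([0.05, 0.10, 0.20, 0.30, 0.40])
F  = np.array([0.905, 0.840, 0.735, 0.660, 0.600])
dF = np.array([0.008, 0.008, 0.010, 0.012, 0.014])
Fz = fit_zexp(Q2, F, dF, mpi=0.140, kmax=3)
eps = 1e-5
r2 = -6.0 * (Fz(eps) - Fz(0.0)) / eps * 0.1973**2  # Eq. (9), in fm^2
print(f"<r^2> = {r2:.3f} fm^2")
```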
The fits of r_eff for m_π^val = 300 MeV at the two lattice spacings also agree within errors. While the individual lattice data and the CERN data appear to agree within errors, we also see from the figure a tendency for the CERN data to lie higher than the lattice data. This leads to a slight difference in the pion charge radius, as discussed below. The pion charge radius can be derived from the z-expansion fit results using Eq. (9); the values are summarized in Table III for the three lattice spacings used in this work. We also discuss the radius obtained from the monopole fit for comparison in App. C. As expected, the calculations for the heavier quark mass give a smaller pion charge radius. Since the z-expansion provides a model-independent way to obtain the pion charge radius, for our final estimate at the physical point we take the result from the z-expansion fit (Table III), where we added the statistical and systematic errors (the latter defined by the difference between the results from the 2-state and 3-state fits of the matrix elements) in quadrature. This result is consistent with the pion charge radius quoted by the Particle Data Group (PDG), ⟨r_π^2⟩_PDG = 0.434(5) fm^2 [80], which is averaged from determinations from t-channel πe → πe scattering data [34,36,81] and s-channel e+e− → π+π− data sets [48,82]. The HPQCD determination, which uses the HISQ action in both the sea and the valence sectors of (2+1+1)-flavor QCD, is ⟨r_π^2⟩ = 0.403(18)(6) fm^2 [63]. The most precise lattice determination of the pion charge radius in 2+1 flavor QCD, using the overlap action in the valence sector and the domain-wall action in the sea sector, gives ⟨r_π^2⟩ = 0.436(5)(12) fm^2 [62]. The 2+1 flavor domain-wall calculation gives ⟨r_π^2⟩ = 0.434(20)(13) fm^2 [61]. Finally, the other 2+1 flavor lattice determinations of the pion charge radius have significantly larger errors [59,60]. We summarize the comparison in Fig. 8. VI. CONCLUSIONS In this paper we studied the pion form factor in 2+1 flavor lattice QCD using three lattice spacings, a = 0.076, 0.06 and 0.04 fm. The calculations on the coarsest lattice were performed with the physical values of the quark masses, while for the two finer lattices the valence pion mass was 300 MeV. We found that the pion form factor is very sensitive to the quark mass, as expected. We showed that lattice discretization effects are quite small for lattice spacings smaller than 0.06 fm. For the physical quark masses, our lattice results for the pion form factor agree with the experimental determinations. Unlike other lattice studies, we also considered highly boosted pions in the initial state using momentum-boosted Gaussian sources. In addition, we performed calculations in the Breit frame. We demonstrated that calculations of the pion form factor performed at different pion momenta, as well as in the Breit frame, give consistent results. This is very important for extending the calculations to pion GPDs. An important outcome of our analysis is that the monopole Ansatz can describe the pion form factor over a large range of Q^2, up to Q^2 = 1.4 GeV^2. In the future it will be important to extend the calculations to even higher momentum transfer, given the experimental efforts at JLab and the EIC. To do this, we should use boosted sources that also depend on the value of Q^2; at present, the momentum boost was optimized only according to the pion momentum in the initial state. 
From the low-Q^2 dependence of the pion form factor we determined the pion charge radius, which is one sigma lower than the experimental result. We speculated whether this is due to the effect of partial quenching. To fully resolve this issue, calculations at smaller lattice spacing with the physical value of the pion mass are needed. Appendix A. As shown in Fig. 4, there are 2% discretization effects in Z_V^(−1)(P_i) = h_B(P_i, P_i). We chose to divide h_B(P_f, P_i) by h_B(P_i, P_i) so that the renormalized pion form factors would reduce such effects. To estimate the impact of the discretization errors on the form factors as well as on the pion charge radius, we can instead renormalize the bare form factors h_B(P_f, P_i) by a constant Z_V^(−1), such as Z_V^(−1)(0.25 GeV) of the a = 0.076 fm ensemble. The effective radius for the a = 0.076 fm ensemble in this case is shown in Fig. 9, and we estimate the charge radius from the monopole fit and the z-expansion fit as 0.406(6)(25) fm^2 and 0.427(10)(22) fm^2, respectively, which shift by 2% but are consistent with the estimates in Table III. Appendix B. It was observed in Sec. IV that the ratio R_fi(τ, t_s) at t_s = 20a shows a plateau around t_s/2 which is also consistent with the results from the Fit(3,2) method, implying the smallness of the excited-state contamination in this region. Therefore, it is reasonable to perform a one-state fit, namely a plateau fit, to extract the bare matrix elements. We denote this method by Plateau(τ_min, τ_max); it fits R_fi(τ, t_s = 20a) for τ ∈ [τ_min, τ_max] to a constant. The fit results from Plateau(τ_min, τ_max) are shown in Fig. 10 as the blue bands, where the multi-state fit results are also shown for comparison. Clearly, the plateau fit shows good agreement with the 3-state fit results. In Fig. 11, we show the distribution of the difference between the plateau fit and the multi-state fit using bootstrap samples. In the main text, we took the difference between the 2-state and 3-state fits as the systematic error due to excited-state contamination. It can be seen that this estimate is larger than the difference between the plateau fit and the 3-state fit, and should thus give a sufficiently conservative total error. We also determined the pion form factor from the plateau fits for t_s = 20a. The corresponding results, in terms of the effective radius, are shown in Fig. 12. Once again, consistent results between Plateau(τ_min, τ_max) and Fit(3,2) are observed. Appendix C: Model dependence of the radius extraction In this work, we used the z-expansion Ansatz to obtain the charge radius from the pion form factors, shown in Table III. For comparison, in Table IV we also show the radius obtained from the monopole fit, whose statistical errors are often smaller, but which has larger systematic errors compared to the z-expansion. Both fits produce good χ^2/dof; for the a = 0.076 fm ensemble, for example, we get χ^2/dof = 0.56 for the monopole fit and χ^2/dof = 0.51 for the z-expansion fit. Within the estimated errors the two fit forms give results that are consistent, but only marginally so. In Fig. 13, we show the effective radius (cf. Eq. (11)) calculated from the z-expansion fit (blue band) as well as the monopole fit (red band). Clearly, the z-expansion fit is more flexible, so the effective radius is a function of Q^2 rather than a constant. At Q^2 = 0, where the charge radius is defined, the result from the z-expansion fit (⟨r^2⟩_Z) is higher than that from the monopole fit (⟨r^2⟩_M). We show the distribution of ⟨r^2⟩_M^nst3 − ⟨r^2⟩_Z^nst3 from bootstrap samples in Fig. 14, where the N-state fit is denoted by nstN. 
The central value of this distribution is 0.02 fm^2.
9,102.8
2021-02-11T00:00:00.000
[ "Physics", "Education" ]
Some new inequalities for convex functions via Riemann-Liouville fractional integrals Fractional analysis has evolved considerably over the last decades and has become popular in many technical and scientific fields. Many integral operators that enable integration of fractional order have been introduced, each with its own properties, such as the semigroup property or singularity issues. In this paper we first obtain a new kernel, and then some new integral inequalities valid for integrals of fractional order by using the Riemann-Liouville fractional integral. To do this, we use some well-known inequalities such as Hölder's inequality and the power-mean inequality. Our results generalize some inequalities existing in the literature. Introduction It is a well-known fact that inequalities play an important role in inequality theory, linear programming, extremum problems, optimization, error estimates and game theory. Over the years, only integrals of integer order were taken into account when deriving new results about integral inequalities. In recent years, however, fractional integral operators have been considered by many scientists (see [1]-[12] and the references therein). There are some inequalities in the literature that have accelerated studies on integral inequalities. In the following, the Hermite-Hadamard inequality, one of the most famous and practical inequalities in the literature, is given: Theorem 1. Let f, defined from an interval I (a nonempty subset of R) to R, be a convex function on I and let m, n ∈ I with m < n. Then the following double inequality holds: f((m + n)/2) ≤ (1/(n − m)) ∫_m^n f(x) dx ≤ (f(m) + f(n))/2. We now recall the Riemann-Liouville fractional integration operators (see [6]), which allow one to integrate functions to fractional orders. J^α_(m+) f and J^α_(n−) f, which are called the left-sided and right-sided Riemann-Liouville integrals of order α > 0 with 0 ≤ m ≤ x ≤ n, are defined by J^α_(m+) f(x) = (1/Γ(α)) ∫_m^x (x − t)^(α−1) f(t) dt, x > m, and J^α_(n−) f(x) = (1/Γ(α)) ∫_x^n (t − x)^(α−1) f(t) dt, x < n. The results below have been put forward inspired by the following kernel obtained in [9]. Lemma 2. Let f : I ⊆ R → R be a differentiable mapping on I° and m, n ∈ I with m < n. We now first give a new lemma involving the Riemann-Liouville fractional integral operator, and then obtain new inequalities for convex functions. Results via Riemann-Liouville Fractional Integrals Lemma 3. Let f : I ⊆ R → R be a differentiable mapping on I° and m, n ∈ I with m < n, where Γ(·) is the gamma function. Proof. Integrating by parts and changing the variables of integration gives the first identity. On the other hand, in a similar way, again changing the variables of integration, one obtains the second. By multiplying (4) by (1 − ξ) and (5) by ξ, summing them side by side and multiplying the resulting equality by (n − m)/2, we get the desired result. Theorem 4. Let f be as in Lemma 3 with |f′| convex; then an upper bound of the corresponding fractional-integral identity holds, where Γ(·) is the gamma function. Proof. By using Lemma 3 and the properties of the absolute value, and then using the convexity of |f′|, the remaining integrals can be evaluated by simple calculations; by using the necessary coefficients in (6), the proof is completed. Theorem 5. Let f : I ⊆ R → R be a differentiable mapping on I° and m, n ∈ I with m < n. If |f′|^q is convex, where q ≥ 1 and Γ(·) is the gamma function, an analogous bound holds. Proof. By using Lemma 3 and the properties of the absolute value, and then applying the power-mean inequality, the claim follows: taking into account the convexity of |f′|^q and making the necessary computations completes the proof. Conclusions A new lemma was proved in this study. Using this lemma, new fractional-type inequalities were obtained. 
New theorems for different types of convex functions can be obtained by using Lemma 3, and thus new upper bounds can be derived. Various applications of these inequalities can be explored. Also, Lemma 2 can be generalized and new integral inequalities can be obtained through different fractional integral operators.
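The Riemann-Liouville operators recalled in the Introduction are easy to evaluate numerically, which is a convenient way to sanity-check inequalities of this kind on concrete convex functions. Below is a minimal sketch (mine, not from the paper) that uses SciPy's algebraic-weight quadrature to handle the weakly singular kernel and verifies a known closed form.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def J_left(f, alpha, m, x):
    """Left-sided Riemann-Liouville integral
    J^alpha_{m+} f(x) = (1/Gamma(alpha)) * int_m^x (x - t)^(alpha-1) f(t) dt.
    quad's 'alg' weight handles w(t) = (t - m)^0 * (x - t)^(alpha - 1)."""
    val, _ = quad(f, m, x, weight='alg', wvar=(0.0, alpha - 1.0))
    return val / gamma(alpha)

def J_right(f, alpha, n, x):
    """Right-sided integral
    J^alpha_{n-} f(x) = (1/Gamma(alpha)) * int_x^n (t - x)^(alpha-1) f(t) dt."""
    val, _ = quad(f, x, n, weight='alg', wvar=(alpha - 1.0, 0.0))
    return val / gamma(alpha)

# check against the closed form J^alpha_{0+} t = t^(alpha+1) / Gamma(alpha+2)
alpha, x = 0.5, 2.0
print(J_left(lambda t: t, alpha, 0.0, x), x**(alpha + 1) / gamma(alpha + 2))
```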
954.8
2020-06-30T00:00:00.000
[ "Mathematics" ]
Analytical technique for simplification of the encoder–decoder circuit for a perfect five-qubit error correction Simpler encoding and decoding networks are necessary for more reliable quantum error-correcting codes (QECCs). The simplification of the encoder–decoder circuit for a perfect five-qubit QECC can be derived analytically if the QECC is converted from its equivalent one-way entanglement purification protocol. In this work, the analytical method to simplify the encoder–decoder circuit is introduced and a circuit that is as simple as the existing simplest circuits is presented as an example. The encoder–decoder circuit presented here involves nine single- and two-qubit unitary operations, only six of which are controlled-NOT gates. A five-qubit QECC can be derived so that it requires a simpler network for both encoding and decoding than the original one reported by Laflamme et al, following what Bennett et al suggested, i.e., using a Monte Carlo search program for deriving the QECC. In realistic situations, reducing the number of two-qubit gates needed in the encoder-decoder circuit is significantly important for reliable five-qubit QECCs, because two-qubit operations could be the more difficult ones to implement in a physical apparatus [10]. This work is thus motivated to derive five-qubit, single-error corrections which can be performed using the least number of two-qubit operations in their encoder-decoder networks. The QECC presented as an example herein is derived analytically from the restricted 1-EPP proposed by Bennett et al [5], and its encoder-decoder network contains only six controlled-NOT (CNOT) gates and three single-qubit operations. The restricted 1-EPP therefore is depicted first in section 2. In section 3, we describe the systematic method for deriving the 1-EPP in detail. A concrete example for the simplest quantum gate array will then be given to show the capacity of the present method. In section 4, we present the coding circuit which is converted directly from the 1-EPP and compare its efficiency with those of several existing encoder-decoder circuits. A conclusion is given in section 5. The 5-EPR-pair single-error-correcting code Suppose there exists a finite block-size 1-EPP which distills one good pair of spins in a specific Bell state from a block of five pairs, where no more than one of the five pairs is subjected to noise. When this 1-EPP is combined with a teleportation protocol, two parties, Alice and Bob, can transmit quantum states reliably from one to the other. The combination of the 1-EPP and the teleportation protocol is therefore equivalent to a QECC. The 1-EPP considered herein is schematically depicted in figure 1. Suppose Alice is the encoder, Bob the decoder, and the Bell state Φ+ = (|00⟩ + |11⟩)/√2 is the good state to be purified. Alice and Bob are supposed to be provided with five pairs of spins in the state Φ+ by a quantum source (QS). However, they actually share five Bell states in which generic errors have or have not occurred on at most one Bell state, due to the presence of noise N_B in the quantum channel via which the pairs are transmitted. The noise models are assumed to be one-sided [5] and can cause the good Bell state Φ+ to become one of the incorrect Bell states Ψ+, Φ−, Ψ− (Eq. (1)). The good Bell state Φ+ can become one of these erroneous Bell states if it is subjected to either a phase error (Φ+ → Φ−), an amplitude error (Φ+ → Ψ+), or both (Φ+ → Ψ−) [2,11]. When performing the 1-EPP, Alice and Bob have a total of 16 error syndromes to deal with. The collection of error syndromes includes the case that none of the five pairs has been subjected to errors and the 15 cases in which one of the five pairs has been subjected to one of the three types of error. The strategy of Alice and Bob is to perform a sequence of unilateral and bilateral unitary operations (as shown in figure 1, U_1 and U_2, performed by Alice and Bob, respectively) to transform the collection of the 16 error syndromes into another collection that can provide information about the errors subjected by their particles. [Figure 1 legend (residue): the channels for the 1st pair and for the 2nd-5th pairs of entangled qubits.] Suppose the state of the first pair in the block is to be recovered. 
After performing the sequence of their operations (U_1 and U_2, respectively), Alice and Bob should then perform local measurements on their respective halves of the second to fifth pairs. Alice sends her result via classical channels to Bob, who then performs the Pauli operation U_3 to recover the original state of the first pair, conditionally on both Alice's and his own results. The ultimate requirement on these results of the final measurement is that each and every one of them should be distinguishable from the others. In other words, there should be 16 distinct measurement outcomes obtained from the aforementioned transformation of the error syndromes. The main issue now is that the sequence of unilateral and bilateral unitary operations performed by the two parties to transform the error syndromes should be well designed so that the requirement just mentioned can be fulfilled. To arrange the sequence of operations, basic concepts of linear algebra are used. The four Bell states Φ± and Ψ± are first labelled by two classical bits, namely Φ+ = 00, Ψ+ = 01, Φ− = 10, Ψ− = 11 (Eq. (2)). The right, low-order or amplitude bit identifies the Φ/Ψ property of the Bell state, while the left, high-order or phase bit identifies the +/− property. Note that the combined result of the local measurements obtained by Alice and Bob on a Bell state is revealed by the Bell state's low or amplitude bit. In the representation of the high-low bits, each error syndrome is thus expressed as a ten-bit codeword; e.g., the error syndrome Φ+Ψ−Φ+Φ+Φ+ is written as 00 11 00 00 00. The codewords of the error syndromes, denoted by e_r^(i), i = 0, 1, ..., 15, are listed in table 1. The effect of the sequence of unilateral and bilateral unitary operations performed by Alice and Bob is to map the codewords e_r^(i) onto another collection of ten-bit codewords w^(i). If both codewords, e_r^(i) and w^(i), are written as column vectors in the ten-dimensional Boolean-valued (∈ {0, 1}) space, then the mapping e_r^(i) → w^(i) can be simply expressed by a matrix equation, w^(i) = M e_r^(i) (Eq. (3)), in accordance with relations (4) and (5). [Table 1 caption: the correspondence among the error syndrome e_r^(i) (E_r^(i)), the codeword w^(i) (W^(i)), the measurement result v^(i) and the Pauli operation U_3^(i) controlled by the measurement result in the restricted 1-EPP (five-qubit QECC) applying the encoder-decoder circuit shown in figure 3 (figure 4).] In the language of linear algebra, the action of the sequence of unilateral and bilateral unitary operations that accounts for the mapping e_r^(i) → w^(i) is to perform a sequence of elementary row operations on the 10 × 10 identity matrix 1, reducing it to the matrix M. 
In this spirit, Bennett et al [5] undertook a Monte Carlo numerical search program to find suitable solutions for the matrix M and their corresponding encoder-decoder networks. Basically, the approach implemented by Bennett et al is a tedious numerical method of trial and error, performing the transformation 1 → M subject to a 'forward' sequence of local operations. In this work, we will present an analytical method for creating the M implemented in the present QECC. The present method will be described in detail in section 3. Theory The unilateral and bilateral unitary operations performed in the 1-EPP are in fact their own inverses, so if the sequence of operations is run in the reverse order, the inverse transformation M → 1 is accomplished. In the spirit of inverse transformation, this allows us to derive all appropriate versions of M and the corresponding encoder-decoder networks in an analytical way. More importantly, for a derived M, rearranging the sequence of row operations in the same inverse transformation M → 1 helps in constructing its simplest encoder-decoder network. An elementary row operation corresponds to a basic unilateral or bilateral unitary operation. In the present protocol, Alice and Bob are confined to performing only three basic unitary operations, because these operations are necessary and sufficient for the elementary row operations needed to achieve the mapping M → 1, and vice versa. These basic operations are: (i) a bilateral CNOT (BXOR), which performs the bit change (x_S, y_S)(x_T, y_T) → (x_S ⊕ x_T, y_S)(x_T, y_S ⊕ y_T), where the subscripts S and T denote the source and target pairs, respectively; (ii) a bilateral π/2-rotation B_y, which performs (x, y) → (y, x); and (iii) the composite operation σ_x B_x, which performs (x, y) → (x, x ⊕ y). The unitary Pauli operation σ_x performs a π-rotation of Alice's or Bob's spin about the x-axis, while the bilateral operation B_x (B_y) performs a π/2-rotation of both Alice's and Bob's spins about the x (y)-axis. The unilateral operations are defined as those performed by Alice or Bob but not both. The bilateral operations are represented by a tensor product of one part of Bob and the same part of Alice. Note that the bilateral CNOT is performed such that the source qubits of Alice and Bob belong to a common pair, and the target qubits belong to another common pair. The information obtained through local measurements and one-way communication can only deduce the low bit of a Bell pair, and the original state of the first Bell pair can only be recovered from the low-bit information. Then, for a successful 1-EPP, or its equivalent QECC, each and every measurement result v^(i) is required to be distinguishable from the others, so the collection of v^(i) in fact should contain all elements of the four-dimensional Boolean-valued space. To perform the aforementioned inverse transformation M → 1, the codewords of the measurement results are first arranged according to relations (7), and the matrix M can be assumed in a general form (Eq. (8)) whose entries are Boolean unknowns a_1, ..., a_10, b_1, ..., b_10, and so on. It should be noted that the arrangement of the results of measurements shown in this matrix is only one of the possible choices. By performing a sequence of row operations corresponding to the basic unitary operations, the assumed matrix M (8) can actually be reduced to one of the alternatives akin to the identity matrix 1, and a suitable encoder-decoder network is constructed accordingly. 
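Because every basic operation corresponds to an elementary row operation over GF(2), checking whether a candidate Boolean matrix can be reduced to the identity is just Gaussian elimination with XOR as addition. The stand-alone sketch below illustrates that linear-algebra structure; it uses generic row swaps and additions rather than the protocol's specific three-operation set, and the 10 × 10 matrix is random, not the M of the paper.

```python
import numpy as np

def gf2_reduce(M):
    """Row-reduce a Boolean matrix over GF(2); returns (rank, reduced matrix).
    Row additions are XORs, mirroring BXOR-type bilateral operations."""
    A = M.copy() % 2
    n, m = A.shape
    row = 0
    for col in range(m):
        piv = next((r for r in range(row, n) if A[r, col]), None)
        if piv is None:
            continue
        A[[row, piv]] = A[[piv, row]]   # row swap
        for r in range(n):
            if r != row and A[r, col]:
                A[r] ^= A[row]          # row addition = XOR
        row += 1
    return row, A

rng = np.random.default_rng(1)
M = rng.integers(0, 2, size=(10, 10), dtype=np.int64)
rank, _ = gf2_reduce(M)
print("invertible over GF(2):", rank == 10)
```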
The alternatives akin to the identity 1 are those obtained by (1) permuting the column vectors within one of the five sets of two column vectors (x^(2k−1) and x^(2k), k = 1, 2, ..., 5), or (2) adding one column to the other within each of the groups, or (3) performing both actions. When the derivation of M is done, the alternative akin to 1 is converted back to the identity 1 by suitably rearranging its columns, and the derived M is adjusted via the same column changes, in order to conform to equation (3). The procedure of reducing the matrix M to the alternative akin to the identity 1 is similar to the Gauss-Jordan elimination method for solving systems of linear equations. During the procedure of row operations, all the unknowns appearing in the assumed matrix M (8) are given or solved according to the structure of the alternative akin to 1. Details of the derivation can be found in [12]. A systematic scenario example There are many solutions for the assumed M which are all suitable for the 1-EPP; however, only one of them, denoted M_1 (Eq. (10)), has been adjusted and presented here. Let us show the systematic scenario for accomplishing the transformation M_1 → 1 by one of the simplest networks. The matrix M_1 can be rephrased in block form (Eq. (11a)), where the matrix elements m_αβ denote 2 × 2 matrices. The next step of our method is a procedure of elementary row operations on the matrix M_1 (10) subject to a suitable sequence of the basic operations. When the assumed matrix M_1 is transformed into the identity matrix 1 under the series of row operations, the unknowns a_r, b_r, ..., f_r are solved stepwise in accordance with the structure of 1. It is easy to show that a sequence of row operations can perform the transformation on two Bell states α and β in a group enumerated by γ (Eq. (13)), provided that det(m_αγ) = 1 and det(m_βγ) = 0. Here I denotes the 2 × 2 identity matrix. For example, the consecutive transformation can be accomplished if the operation B_y is first performed on Bell state β, then a σ_x B_x is performed on Bell state α, followed by a BXOR performed on both states, with Bell state α as the source and Bell state β as the target. It can be found in what follows that the unknowns assumed in the matrix M_1 either will be given based on the requirement for the transformation described in (13), or will be determined according to the unique structure of the identity matrix 1. In the first stage of row operations, we are confined to performing a transformation of the matrix M_1 (11a) such that m_44 → I and m_4k, m_k4 → 0, for k = 1, 2, 3 and 5, according to the structure of 1. Let det(m_44) = 1 and det(m_14) = ··· = det(m_54) = 0, which imply, among other relations, a_7 b_8 ⊕ a_8 b_7 = 0. Clearly, there are in total 640 solutions for the unknowns appearing in (10) to be considered at this stage (ten for the condition a_7 b_8 ⊕ a_8 b_7 = 0, two for each of the six arbitrary Boolean-valued unknowns, and thus 10 × 2^6 = 640 solutions in total). To illustrate the simplest way of creating the Boolean functions, however, only one among these 640 cases is considered. 
Let us consider one particular case. Then, by performing the operations shown in figure 2(a), we have the transformation M_1 → M_1′, in which we have chosen a particular setting for the unknowns. Let us proceed to apply the second series of operations, as depicted in figure 2(b), to perform the transformations m_22 → I and m_2k, m_k2 → 0, for k = 1, 3 and 5. Finally, if the resulting matrix is transformed through an additional two BXOR operations and one σ_x B_x operation, as shown in figure 2(c), it results in the identity matrix 1. In this stage, we have set the rest of the unknowns to one of the alternatives: a_1 = 1, a_2 = 0, a_5 = 1, a_6 = 0, a_9 = 0 and a_10 = 0. The whole sequence of basic operations, shown in figure 3, is obtained by combining the three subsequences shown in figures 2(a)-(c). It transforms the matrix M_1 into the identity matrix 1. This network is the simplest one, since it involves only six BXORs. Performed by this network, the correspondence between the error syndromes e_r^(i) and the combined measurement results v^(i) is also listed in table 1. Referring to table 1, or the matrix M_1, when Bob obtains the measurement result v^(2) (= 0110), for example, he knows the pair to be purified is in the state Ψ+ (= 01) and thus simply performs the Pauli operation U_3^(2) = σ_x to recover it to the good state Φ+. The encoder-decoder circuit for a perfect five-qubit error correction The 1-EPP depicted above can be directly converted to a five-qubit QECC whose encoder-decoder circuit has the same configuration as the one shown in figure 4. However, in the language of the QECC, the classical high-low or phase-amplitude bits used to code the Bell states in the 1-EPP are now used to code operators belonging to the Pauli group, namely, I = 00, σ_x = 01, σ_z = 10, σ_y = 11. When acting on a single qubit, the Pauli operator produces either no error (by I), a bit-flip error (by σ_x), a phase-flip error (by σ_z), or a bit-phase-flip error (by σ_y). Therefore, such a code is convenient because the codewords e_r^(i) are now replaced by E_r^(i), which represent the 16 error syndromes described by five-Pauli-operator tensor products. Furthermore, the transformation described by the matrix equation (3) is now replaced by the similarity transformation of operators W^(i) = U E_r^(i) U†, where U (U†) represents the sequence of the basic operations performed in the decoder (encoder) circuit. Clearly, both the encoder and decoder circuits have exactly the same quantum gate arrangement, but they should be run in opposite orders. In order to perform the transformation mentioned above, this time the single-qubit Hadamard transformation H = H† = (σ_x + σ_z)/√2 is used to perform the bit change H(x, y)H† → (y, x), the single-qubit transformation Q = Q† = (σ_y + σ_z)/√2 is used to perform Q(x, y)Q† → (x, x ⊕ y), and the two-qubit CNOT gate is used to perform (CNOT)(x_S, y_S)(x_T, y_T)(CNOT)† → (x_S ⊕ x_T, y_S)(x_T, y_S ⊕ y_T). That is, in the five-qubit QECC to be presented, the basic single- and two-qubit operations needed are H, Q and CNOT. For the present five-qubit QECC, the correspondence between the codewords W^(i) and E_r^(i) is exactly the same as that between the derived matrix M_1 and the identity 1. The QECC is performed as follows. If a state |φ⟩ = α|0⟩ + β|1⟩ is to be protected in a quantum computation, it is first accompanied by four extra qubits in the state |0⟩. 
Then the five-qubit state |φ |0 |0 |0 |0 is encoded by the performance of U + . After the encoded state is subjected to E (i) r , the erroneous state then is decoded by the implementation of U. The resulting state turns out to be where U (i) 3 is the single-qubit Pauli operation acting on the first qubit and is dependent on the measurement result on the four extra qubits. When the extra qubits are measured in Institute of Physics ⌽ DEUTSCHE PHYSIKALISCHE GESELLSCHAFT Table 2. Three efficiency criteria and the corresponding costs for four circuits have been presented. Circuit 1 is given by Bennett et al (figure 18 in [5]) and is unoptimized. The optimized circuit of Bennett et al denoted by Circuit 2, mentioned in [5], consists of six two-qubit controlled-NOT gates only. Since the number of laser pulses depends on the detailed structure of the circuit, it is not shown here for laking the detailed information. Circuit 3 is the simplification of the coding circuit of Laflamme et al proposed by Braunstein and Smolin (figure 1 in [8]). One can find that the original circuit of Laflamme et al ( figure 1 in [4]) is more complicated and requires 41 laser pulses. Circuit 4 denotes the simpest circuit which has been found by computer search (figure 3 in [8]) and by the systematic method presented in this work. Conclusion This work has presented a rather simple encoder-decoder circuit to perform the five-qubit, singleerror correction protocol. The QECC derived herein is converted directly from the restricted 1-EPP depicted above, so a major part of this work is dedicated to the depiction of the 1-EPP. The present encoder-decoder circuit is the simplest one corresponding to the derived matrix M 1 given in (20), which is derived via an analytical approach [12]. This analytical approach, as shown, can help in deriving not only the suitable matrix M for the five-qubit QECC but also the simplest version of encoder-decoder network corresponding to the derived matrix. However, many possible matrices M suitable for the QECC remain to be discovered analytically and, thus, so many candidates of encoder-decoder circuit that require only six CNOTs. The simplest network that is even simpler than the present one and the Braunstein and Smolin circuit [8] might not be found from these candidates. However, a more convincing proof which could be a numerical approach based on the analytical approach introduced in [12] is required in future work.
4,984
2004-10-01T00:00:00.000
[ "Computer Science", "Physics" ]
Seismicity supports the theory of incipient rifting in the western Ionian sea, central Mediterranean The present work focuses on earthquake locations and seismogenic stress in the eastern offshore of Sicily, a sector of the central Mediterranean region where the available geophysical information is not yet good enough for proper geodynamic modeling. I have applied to an updated seismic database of the study area a Bayesian non-linear hypocenter location method already proven to be more effective than linear methods when the recording network geometry is poor, as in the present case. Then, I have selected from the literature and official catalogs the local earthquake focal mechanisms computed by waveform inversion, and inverted them for stress tensor orientations. The results confirm the main finding of the previous investigations, i.e. that NW-trending convergence between Africa and Eurasia is a main source of tectonic stress in this area; however, they also furnish evidence of additional tectonic factors locally acting together with convergence. In particular, extensional dynamics are detected inside the convergence-related compressional domain: these are characterized by a minimum compressive stress oriented SW-NE (perpendicular to convergence) and can be related to the rifting process (opening SW-NE) detected by previous investigators at the southwestern edge of the Ionian subduction slab. The findings of the present study may also help to answer several open questions left by previous investigators. Earthquake space distributions Figure 3a shows the earthquakes of local magnitude over 2.5 occurring between 1985 and 2018 at depths less than 70 km in the Calabro-Sicilian region, according to the Italian national seismic database (http://istituto.ingv.it/lingv/archivi-e-banche-dati/). Hypocenter locations of the used database are estimated with a 1-D velocity structure for all of Italy, which is a reasonable approach for bulletin activity. I use these locations here (i) to furnish an introductory overall view of seismicity in the study region and (ii) as starting data for more accurate locations to be performed in the sector of greatest interest in the present study (shadowed internal rectangle). I have focused my attention on the western Ionian because, as explained in the previous Sections, the geophysical knowledge in this offshore sector is still relatively poor and its exploration can be decisive for answering several open geodynamic questions in the region. I have relocated the hypocenters of earthquakes of the shadowed sector of Figure 3a, after integrating the P- and S-wave readings of the Italian database with those available from the databases of local seismic networks operating in Sicily and Calabria during the study period. For hypocenter relocations, I have selected the events for which a minimum of 12 P+S arrival times were available, and used the Bayesian non-linear location algorithm named Bayloc [Presti et al., 2004, 2008]. As is well known from the literature [Lomax et al., 1998, 2000; Lomax and Michelini, 2001; Husen and Smith, 2004; Presti et al., 2004, 2008; Lippitsch et al., 2005; among others], non-linear probabilistic location methods furnish more accurate estimates of hypocenter locations and relative errors compared to linearized methods when the network geometry is not optimal: this is the situation of our offshore study sector (Figure 3b). 
Starting from seismic phase arrival times at the recording stations, Bayloc computes for an individual earthquake a probability cloud marking the hypocenter location uncertainty. Then, Bayloc estimates the spatial distribution of probability relative to a set of earthquakes by summing the probability densities of the individual events. This method has been shown to aid the detection of seismogenic structures through better hypocenter locations and more accurate estimation of location errors compared to linearized methods [Presti et al., 2008]. Bayloc's locations have been performed in a 3D velocity structure estimated for the study region by Orecchio et al. (paper in preparation), who applied the method used by Orecchio et al. [2011] and previously proposed by Waldhauser et al. [1998, 2002]. This method is based on: (i) integration of different types of velocity data available from the literature (seismic profiles, earthquake tomography, surface wave inversion, Moho depth maps, etc.); (ii) LET inversion where this is allowed by the available P- and S-wave arrival data. Velocity data from the literature were taken from Kennett et al. [1995], Tesauro et al. [2008], Orecchio et al. [2011, and references therein], Neri et al. [2012], and Scarfì et al. [2018]. The 3D velocity structure used here for hypocenter locations covers the depth range 0-300 km in the whole area concerned by seismic rays travelling from hypocenters of the study sector to the recording stations (Figure 3b). Epicenter distributions relative to different hypocentral depth ranges, and a SW-NE vertical section of hypocenters computed by Bayloc for earthquakes occurring during 1985-2018 in the study area, are displayed in Figure 4. Seismogenic stress inversion With the purpose of analyzing seismic faulting styles and related tectonic stress in the area of interest of the present study, I have collected from the literature and international catalogs the waveform-inversion focal mechanisms of the earthquakes occurring in the area of Figure 4. I have limited the selection to seismic events of magnitude over 2.5 occurring in the period 1977-2018 at depths less than 70 km. The map of these focal mechanisms is shown in Figure 5, and the list of their parameters is furnished in Table 1. In the dataset of Figure 5 and Table 1, the focal mechanisms computed by the CAP method [Li et al., 2007; D'Amico et al., 2010; Presti et al., 2013; Orecchio et al., 2014; Polonia et al., 2016; Totaro et al., 2016] are affected by errors of the order of 8-10 degrees. The literature and the bibliographic sources of the other focal mechanisms of Figure 5 and Table 1 indicate that these are typically characterized by fault-parameter errors of the order of 10-15 degrees [see, e.g., Helffrich, 1997; Frohlich and Davis, 1999; Pondrelli et al., 2006; Hjörleifsdóttir and Ekstrom, 2010]. Thus, the fault-parameter errors of the solutions displayed in Figure 5 are, in general, smaller than those of focal mechanisms computed by inversion of P-onset polarities in areas of critical network geometry like ours [D'Amico et al., 2011; Presti et al., 2013; Scarfì et al., 2013; Musumeci et al., 2014]. The overall level of uncertainty of the focal mechanisms of the dataset of Figure 5 makes it suitable for application of the method by Gephart and Forsyth [1984] for calculating seismogenic stress directions in the study region (Figure 5a-d). This method searches for the stress tensor showing the best agreement with the available focal mechanisms (FMs). 
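The density-summing step that Bayloc performs can be illustrated with a toy calculation: each event contributes a normalized location PDF on a common grid (here an isotropic Gaussian stands in for the true, generally irregular, Bayesian cloud), and the stack for a set of events is the sum of those densities. This is a schematic of the idea only, not the Bayloc algorithm; grid, epicenters and uncertainties are all hypothetical.

```python
import numpy as np

# common 2-D grid (km), standing in for the 3-D location volume
x = np.linspace(0.0, 100.0, 201)
y = np.linspace(0.0, 100.0, 201)
X, Y = np.meshgrid(x, y)

def event_pdf(x0, y0, sigma):
    """Toy per-event location PDF: isotropic Gaussian, normalized on the grid."""
    p = np.exp(-((X - x0)**2 + (Y - y0)**2) / (2.0 * sigma**2))
    return p / p.sum()

# hypothetical epicenters with location uncertainties (km)
events = [(40.0, 55.0, 4.0), (43.0, 52.0, 6.0), (70.0, 20.0, 3.0)]
stack = sum(event_pdf(*ev) for ev in events)

# a ridge or peak in the stacked density may flag an alignment of
# hypocenters, i.e. a candidate seismogenic structure
iy, ix = np.unravel_index(np.argmax(stack), stack.shape)
print(f"density peak near x = {x[ix]:.1f} km, y = {y[iy]:.1f} km")
```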
In this method, four stress parameters are calculated: three of them define the orientations of the main stress axes; the other is a measure of relative stress magnitudes, R = (σ2 − σ1)/(σ3 − σ1), where σ1, σ2 and σ3 are the values of the maximum, intermediate and minimum compressive stresses, respectively. In order to quantify discrepancies between the stress tensor and the observations (FMs), a misfit variable is introduced: for a given stress model, the misfit of a single focal mechanism is defined as the minimum rotation about any arbitrary axis that brings one of the nodal planes, with its slip direction and sense of slip, into an orientation consistent with the stress model. Searching through all orientations in space by a grid technique operating in the whole space of stress parameters, the minimum sum of the misfits of all available FMs is found. The confidence limits of the solution are computed by a statistical procedure described in the papers by Parker and McNutt [1980] and Gephart and Forsyth [1984]. The size of the average misfit corresponding to the best stress model provides a guide as to how well the assumption of stress homogeneity is fulfilled [Michael, 1987]. In the light of results from a series of tests carried out by Wyss et al. [1992] and Gillard et al. [1996] to identify the relationship between FM uncertainties and average misfit in the case of uniform stress, I will make the following assumptions. I assume that the condition of homogeneous stress distribution is fulfilled if the misfit, F, is smaller than 6°, and that it is not fulfilled if F > 9°. In the range 6° < F < 9°, the solution is considered acceptable, but may reflect some heterogeneity. The advantage of using Gephart and Forsyth's [1984] method instead of other, more recent stress inversion methods [such as, for example, Arnold and Townend, 2007; Vavrycuk, 2014; Karagianni et al., 2015] is that the former is more conservative concerning the relative orientation of the seismogenic stress and the seismic dislocation surface. In this connection, caution is appropriate in the present study because I make no assumption concerning the date of formation of the faults which generated the earthquakes of the dataset. The tectonic stress orientation may have changed since the formation of the fault producing a given earthquake; therefore, I must consider a relatively wide range of possible angles between the presently acting stress and the fault surface. This more conservative approach is better represented by the method of Gephart and Forsyth [1984]. The effectiveness of this method in several conditions, also in comparison with more recent methods, is well documented in the literature [see Hardebeck and Hauksson, 2001; Maury et al., 2013; Karagianni et al., 2015; among others]. I report in Figure 6 the results of stress inversions of synthetic focal mechanisms, used to assess the resolution of the procedure. [Figure 6 caption fragment: same sector as plot (c), depth range 30-70 km; see Table 2 for the numerical values of the stress inversion results.] Table 2. Stress tensor inversion of earthquake focal mechanisms performed for the earthquake sets indicated in Figure 5 and described in the text. N is the number of earthquakes (= focal mechanisms) belonging to the inversion set. Plots (c) to (e) display the stress inversion results obtained for the dataset of plot (b) by using the stress inversion methods of Gephart and Forsyth [1984], Vavrycuk [2014] and Arnold and Townend [2007], respectively. Discussion In the most recent analysis of regional seismicity and stress fields, Totaro et al. 
Discussion

In the most recent analysis of regional seismicity and stress fields, Totaro et al. [2016] proposed a geodynamic scheme highlighting extensional processes of the Apennine-Maghrebian chain occurring inside the overall compressional domain due to Africa-Eurasia convergence (see, in particular, Totaro et al. [2016, among others]). Figure 5c and Table 2 (set c) display the results of the stress inversion performed in the area of incipient rifting proposed by Polonia et al. [2016, 2017], approximately located between the Alfeo-Etna and Ionian Fault Systems. In this case, the F-value drops to 6.7°, which is considerably smaller than the values of the larger datasets of Figures 5a and 5b, but still larger than the value of 6° assumed as the approximate upper bound for stress homogeneity (see the previous Section). Therefore, stress is moderately heterogeneous in the area between the Alfeo-Etna and Ionian Fault Systems. Here, the best model of stress coming from the inversion is characterized by a NW-trending, horizontal σ1 matching well with the direction of convergence of Africa and Eurasia in this part of the plate margin. However, the 95% confidence limits of the stress orientations reveal that the σ1 orientation is practically unconstrained from horizontal NW-SE to vertical. This may suggest that some extensional process opening SW-NE acts together with NW-trending plate convergence in this sector. A close analogy can be noted between the stress confidence limits of Figure 5c (real earthquakes) and those obtained by inversion of synthetic focal mechanisms (Figure 6c). This suggests that the focal mechanisms available between the Alfeo-Etna and Ionian Fault Systems (Figure 5c) reflect the combined action of (i) a compressional stress related to NW-trending plate convergence and (ii) an extensional stress with SW-NE opening direction, which can plausibly be related to the rifting process hypothesized by Polonia et al. [2017]. I have reported in Figure 7 the individual misfits of the earthquakes of Figure 5c with respect to the corresponding stress solution, as a function of focal depth. The best model of stress in 5c is compressional with SE-NW σ1, which recalls the regional stress related to Africa-Eurasia convergence. According to the plot of Figure 7, plate convergence seems to be the only geodynamic process active at depths between 30 and 45 km, approximately. In fact, at these depths, nearly all the individual misfits are lower than 10° (the average error of focal mechanisms in the dataset). Conversely, the individual misfits of the earthquakes shallower than 30 km (exceeding 10° in half of the cases; Figure 7) suggest some degree of stress heterogeneity in the depth range 0-30 km. Based on the information furnished by the plot of Figure 7, I have performed a stress inversion run excluding from the dataset of Figure 5c all the earthquakes shallower than 30 km. The results of this additional run are shown in Figure 5d and Table 2, set d. The value of F (3.3°) and the small confidence limits of the stress orientations indicate that stress is homogeneous and well constrained at depths between 30 and 45 km in the sector of Figure 5d. In this framework, the best model of stress, characterized by a NW-SE sub-horizontal σ1, indicates that the action of plate convergence is dominant in this depth range in the area of incipient rifting identified by Polonia et al. [2017]. According to the results of Figure 5c-d and to the plot of Figure 7, the extensional processes associated with rifting appear to be confined to the upper ca. 30 km in this area.
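The depth screening described in this paragraph can be emulated with a few lines of code. The sketch below, using hypothetical depths and misfits in place of the real Figure 7 data, splits the events at 30 km, reports the fraction exceeding the ~10° average focal-mechanism error in each depth group, and selects the deeper subset for a second inversion run:

```python
import numpy as np

# Hypothetical focal depths (km) and individual misfits (degrees).
depths = np.array([12, 18, 25, 28, 33, 36, 40, 42, 44, 22, 35, 38])
misfits = np.array([14.0, 9.5, 16.2, 11.8, 4.1, 6.3, 3.9, 5.0, 7.2, 12.5, 4.8, 5.5])

THRESH = 10.0  # approximate average focal-mechanism error in the dataset
shallow = depths < 30.0

for name, mask in [("0-30 km", shallow), ("30-45 km", ~shallow)]:
    frac = np.mean(misfits[mask] > THRESH)
    print(f"{name}: {mask.sum()} events, {frac:.0%} with misfit > {THRESH} deg")

# A second inversion run would then use only the deeper subset:
deep_subset = np.where(~shallow)[0]
print("indices retained for the additional inversion run:", deep_subset)
```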
Unfortunately, the low number of focal mechanisms available above the 30 km depth level (only 10) does not allow a separate stress inversion, and this operation has to be postponed until additional data become available. It is also worth mentioning that the analysis of the individual misfits of the earthquakes of Figure 5c as a function of magnitude (not reported graphically for conciseness) shows that all the events of magnitude over 4.0 (maximum magnitude 4.5) can be imputed to the convergence-related regional stress (misfits less than 10°). On the other hand, the weaker events (magnitude less than 4.0) show in several cases misfits larger than 10°, which means that they in part reflect stress heterogeneities due to more localized processes, such as rifting. In this case, too, the relatively low number of available focal mechanisms does not allow stress inversion of proper subsets partitioned according to earthquake magnitude. I look forward to the future availability of additional data in the study area to explore in greater detail the local spatial variations of stress and the contributions of the different tectonic factors. On the other hand, I highlight the new contribution to the knowledge of regional geodynamic processes given by the present study in comparison with the most recent stress inversion analyses carried out in the same area [Totaro et al., 2016]. In fact, although the primary seismogenic role attributed to plate convergence by Totaro et al. [2016] is confirmed here, the use of a more conservative stress inversion method allows me to better detect the heterogeneity of stress in the study area (see, e.g., Figure 5c). In particular, signatures of a rifting-related extensional component have here been detected within the compressional domain produced by plate convergence (Figure 8). At depths between 30 and 45 km, nearly all the individual misfits fall below 10° (the approximate average error of focal mechanisms in the dataset), indicating that plate convergence is likely the only geodynamic process active at these depth levels, whereas a certain degree of stress heterogeneity is revealed by the individual misfits at depths shallower than 30 km.

Conclusion

Earthquake relocations and stress inversion of focal mechanisms in the still poorly resolved geodynamic domain of the western Ionian lead to the following conclusions: (i) the main finding of previous investigations indicating the primary tectonic action of Africa-Eurasia convergence in this part of the Mediterranean region [Montone et al., 2012; Montone and Mariucci, 2016; Totaro et al., 2016] is confirmed; (ii) the level and style of stress heterogeneity detected in the study area furnish, however, evidence of additional tectonic factors acting together with convergence. In this regard, an additional factor can be recognized in the rifting process at the southwestern edge of the subducting slab recently hypothesized by Polonia et al. [2017]. SW-NE opening in the NW-trending belt comprised between the Alfeo-Etna and Ionian Fault Systems would add an extensional stress component to the convergence-related compression offshore of eastern Sicily: this is indeed observed in the plot of stress orientations obtained by inversion of focal mechanisms (Figure 5c).
Also, the hypocenter relocations show that the dislocation processes between the subducting slab (located northeast of the Ionian Fault System) and the adjacent lithosphere (to the southwest) are distributed over a relatively wide zone, probably because the subduction kinematics are slow and do not mimic fast STEP dynamics [Gallais et al., 2013; Orecchio et al., 2014]. This wide zone located between the Alfeo-Etna and Ionian Fault Systems (Figures 3d and 4d) corresponds to the NW-trending, highly fractured rifting zone where serpentinite diapirs rise from depth [Polonia et al., 2017]. The stress inversion results of Figure 5c-d and the analysis of the earthquake individual misfits in Figure 7 suggest that the seismic effects of rifting are confined to the upper 30 km of the area between the Alfeo-Etna and Ionian Fault Systems (a final sketch of the results in their geodynamic context is given in Figure 8c).

Data and sharing resources

Data used in the present study were collected from the databases of the Istituto Nazionale di Geofisica e Vulcanologia (http://istituto.ingv.it/it/archivi-e-banche-dati) and from the catalogs and bibliographic sources indicated in detail in the article.
4,004
2019-12-19T00:00:00.000
[ "Geology" ]
An Experimental Study of Chemical Desorption for Phosphine in Interstellar Ice

Phosphine (PH3), an important molecule for the chemistry of phosphorus (P)-bearing species in the interstellar medium (ISM), is considered to form primarily on interstellar grains. However, no report exists on the processes of PH3 formation on grains. Here, we experimentally studied the reactions of hydrogen (H) atoms and PH3 molecules on compact amorphous solid water, with a particular focus on the chemical desorption of PH3 at 10-30 K. After exposure to H atoms for 120 minutes, up to 50% of the solid PH3 was lost from the icy surface. On the basis of experiments using deuterium atoms, it was concluded that the loss of PH3 resulted from chemical desorption through the reactions PH3 + H → PH2 + H2 and/or PH2 + H → PH3. The effective desorption cross-section was ∼5 × 10⁻¹⁷ cm², which is three times larger than that of hydrogen sulfide measured under similar experimental conditions. The present results suggest that the formation of PH3, and possibly PH2 and PH, followed by their desorption from icy grains, may contribute to the formation of PN and PO in the gas phase, and thus may play a role in the P chemistry of the ISM.

Introduction

Phosphorus (P) chemistry in the interstellar medium (ISM) has attracted increasing attention from the astrochemical community. This is because phosphorus is essential for life on Earth, as a constituent of species such as nucleic acids and phospholipids, and P-bearing molecules have been detected in the ISM as PO and PN in recent decades (e.g., Turner et al. 1990; Agúndez et al. 2014; Fontani et al. 2016; Rivilla et al. 2020). Previous astronomical observations indicated the depletion of P-bearing species in the gas phase of the dense and cold ISM by more than two orders of magnitude relative to the cosmic abundance of P (the missing phosphorus problem; Turner et al. 1990; Fontani et al. 2016; Lefloch et al. 2016). Such significant depletion suggests that the bulk of P-bearing species is locked on interstellar grains. In meteorites, phosphorus is identified in inorganic minerals such as schreibersite and hydroxyapatite (Fuchs 1969; Pasek & Lauretta 2005) and in alkyl phosphonic acids (Cooper et al. 1992). However, to date no infrared observations of solid-state P-bearing species exist for any interstellar source; hence, there is little information currently available about P chemistry on interstellar grains.

Modeling studies predict that phosphine (PH3) is an important P-bearing species for the surface chemistry on interstellar grains (Charnley & Millar 1994; Aota & Aikawa 2012; Chantzos et al. 2020). The formation of PH3 is not easy in the gas phase of the ISM (Millar 1991). Accordingly, PH3 has been proposed to form via the following reactions on grains (Chantzos et al. 2020; Rivilla et al. 2020):

P + H → PH, (1)
PH + H → PH2, (2)
PH2 + H → PH3. (3)

Because reactions (1)-(3) are barrierless, they can effectively proceed on grains, even at temperatures as low as 10 K where hydrogen (H) atoms can diffuse and encounter other species on the grain surface (Hama & Watanabe 2013). However, neither PH3 nor other P-bearing species have been observed in the solid state, as noted earlier. Turner et al. (2015, 2018, 2019) experimentally studied the conversion of solid PH3 into other chemical species, such as diphosphane (P2H4), phosphoric acid, and methylphosphonic acid, by energetic processes applied to PH3-containing ices at low temperatures. In addition to its conversion into other species, PH3 may also be lost from grains by desorption without decomposition.
Because thermal desorption takes place at roughly 60 K (Turner et al. 2015), PH3 may desorb during the warming-up phase toward star formation even when formed at lower temperatures. Non-thermal desorption can, in principle, occur even below the thermal desorption temperature and can be divided into two types of processes: desorption by energetic processes such as photon and ion bombardment, and non-energetic chemical (or reactive) desorption. The former utilizes photon or ion energy to cause desorption and has been studied extensively in experiments involving various molecules such as water, carbon monoxide, and methanol (Öberg et al. 2009; Fayolle et al. 2011; Bertin et al. 2016; Cruz-Diaz et al. 2016). The latter, i.e., chemical desorption, has also been studied experimentally for various species (Dulieu et al. 2013; He et al. 2017; Chuang et al. 2018). Because chemical desorption has the strong advantage of not requiring any external energy, it can occur even in the dense and cold regions of the ISM, where external photons cannot penetrate. However, unlike energetic desorption processes, it is not easy to evaluate the efficiency of chemical desorption experimentally because of technical difficulties, giving rise to significant uncertainty in the quantification of its efficiency. Recently, we succeeded in quantifying the efficiency of chemical desorption using Fourier-transform infrared (FTIR) spectroscopy (Oba et al. 2018). This method applies only to specific reaction systems, i.e., those where the initial reactant is the same as the final product. We estimated the chemical desorption efficiency of hydrogen sulfide (H2S) via the following successive H-abstraction and H-addition reactions:

H2S + H → HS + H2, (4)
HS + H → H2S, (5)

where 60% of the initial solid H2S was lost by chemical desorption, with an effective desorption cross-section of 2 × 10⁻¹⁷ cm² (Oba et al. 2018, 2019). In the case of PH3, the interaction with H atoms on grain surfaces may result in a scheme similar to that of H2S:

PH3 + H → PH2 + H2. (6)

The produced PH2 radical will further react with an additional H atom to again yield PH3 via reaction (3). Because reaction (6) is exothermic with a moderate activation barrier of about 14 kJ mol⁻¹ (Yu et al. 1999), it is expected that reaction (6) can proceed via quantum tunneling at low temperatures. If reactions (6) and (3) proceed, they may induce the chemical desorption of PH3 and/or PH2, as in the case of H2S. In this Letter, we present experimental results on the interactions of solid PH3 with H atoms on icy surfaces at low temperatures to study the possible loss of PH3 from icy surfaces by chemical desorption. The obtained results will be helpful for interpreting the abundance of PH3 toward various astronomical sources and will provide a better understanding of P chemistry in the ISM.

Experiments

All experiments were performed using the Apparatus for Surface Reaction in Astrophysics (ASURA) system, which is described in previous studies (Watanabe et al. 2006; Nagaoka et al. 2007). In brief, ASURA comprises a stainless-steel vacuum chamber with a base pressure of 10⁻¹⁰ Torr, multiple turbomolecular pumps, an aluminum (Al) reaction substrate attached to a closed-cycle helium cryostat, a quadrupole mass spectrometer (QMS), and an FTIR spectrometer with an incident angle of 83° from the surface normal.
The Al substrate was covered with amorphous silicate (Mg2SiO4) with a thickness of 10-30 nm, prepared by magnetron sputtering of sintered polycrystalline Mg2SiO4. The surface temperature was controlled between 5 and 300 K. The chemical processes of PH3 were surveyed on compact amorphous solid water (c-ASW), which was prepared by vapor deposition of H2O via a capillary plate, at an incident angle of 45°, onto the substrate maintained at 110 K. The thickness of the c-ASW was estimated to be approximately 30 monolayers (ML; 1 ML = 1 × 10¹⁵ molecules cm⁻²). The substrate was then cooled to the reaction temperature (10-30 K). We anticipated the silicate surface to be fully covered with c-ASW under our experimental conditions, so that all chemical reactions of PH3 took place on the surface of the c-ASW. Gaseous PH3 was produced by the reaction of calcium phosphide (Ca3P2, 97%; Mitsuwa Chemicals Co., Ltd) with H2O (Ca3P2 + 6H2O → 2PH3 + 3Ca(OH)2) in a separate vacuum chamber, followed by cryogenic purification to remove by-products of the reaction such as P2H4 and molecular hydrogen (H2) (Huang et al. 1977). The purified PH3 gas was introduced through the same capillary plate onto the c-ASW layer at different temperatures (10, 20, and 30 K) to produce solid PH3 with a thickness of approximately 0.5 ML, which was estimated using the peak area of the P-H stretching band at 2320 cm⁻¹ (Francia & Nixon 1973) with an absorption coefficient of 7.0 × 10⁻¹⁸ cm molecule⁻¹ (Turner et al. 2015). The deposition rate of PH3 was 1 ML minute⁻¹. The H atoms were generated through the dissociation of H2 in a microwave-discharged plasma in a Pyrex tube, and were cooled to 100 K before reaching the substrate by multiple collisions with the inner wall of an Al tube kept at 100 K (Nagaoka et al. 2007). Using the method of Oba et al. (2014), the flux of H atoms was estimated as 1.1 × 10¹³ atoms cm⁻² s⁻¹. The deposited solid PH3 was exposed to H atoms at each temperature for 120 minutes. Molecules on the surface were analyzed in situ using FTIR spectroscopy in the spectral range of 4000-700 cm⁻¹ at a resolution of 2 cm⁻¹. Reactants and products desorbed from the substrate were also monitored by the QMS via the temperature-programmed desorption (TPD) method with a ramping rate of 4 K minute⁻¹.
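For illustration, coverages of this kind are commonly obtained by converting the integrated absorbance of the 2320 cm⁻¹ band into a column density through the band strength of 7.0 × 10⁻¹⁸ cm molecule⁻¹. The sketch below assumes a transmission-style base-10 absorbance spectrum and a synthetic Gaussian band; it is a generic version of the procedure, not necessarily the authors' exact pipeline (e.g., any correction for the 83° grazing-incidence geometry is omitted):

```python
import numpy as np

BAND_STRENGTH = 7.0e-18   # cm molecule^-1, P-H stretch (Turner et al. 2015)
ML = 1.0e15               # molecules cm^-2, definition of one monolayer

def column_density(wavenumbers, absorbance):
    """Column density from a base-10 absorbance band (transmission-style):
    N = ln(10) * integral(A dnu) / A'   [molecules cm^-2]."""
    integrated = np.trapz(absorbance, wavenumbers)  # absorbance * cm^-1
    return np.log(10.0) * abs(integrated) / BAND_STRENGTH

# Hypothetical band: Gaussian centered at 2320 cm^-1, for illustration only.
nu = np.linspace(2250.0, 2400.0, 500)
band = 1.5e-4 * np.exp(-0.5 * ((nu - 2320.0) / 8.0) ** 2)

N = column_density(nu, band)
print(f"N = {N:.2e} molecules cm^-2 = {N / ML:.2f} ML")  # ~1 ML here
```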
Loss of PH3 by Reactions with H Atoms on Compact Amorphous Solid Water at Different Temperatures

Figure 1(a) shows the FTIR spectrum of solid PH3 (0.5 ML) on c-ASW at 10 K; the inset shows an enlarged spectrum at 2250-2400 cm⁻¹. Solid PH3 has mainly three bands in this region: 2320 cm⁻¹ for the stretching band (highlighted in the inset) and 1107 and 985 cm⁻¹ for the bending bands (Francia & Nixon 1973); however, the latter two peaks were not observed in Figure 1(a) due to their low absorption coefficients (Turner et al. 2015). Further deposition of PH3 gas did not show any other peaks attributable to possible contaminants such as P2H4 (2294 and 1061 cm⁻¹; Turner et al. 2015), indicating that the purity of the PH3 gas was sufficiently high for the present experiment. Figure 1(b) displays variations in the difference spectra of PH3, focusing on the P-H stretching region, after exposure to H atoms for up to 120 minutes at 10 K, in addition to the initial spectrum for comparison. The intensity of the P-H stretching band was reduced with exposure to H atoms. In contrast, when solid PH3 was exposed to H2 molecules only under the same experimental conditions, the decrease of PH3 was negligible. In addition, PH4 cannot form by the addition of H to PH3, because this reaction is endothermic (Howell & Olsen 1976). Hence, the decrease in PH3 should result from desorption triggered by reaction (6). PH2 radicals formed by reaction (6) do not readily react with H2O or H2 at 10-30 K, owing to the endothermicity of each reaction. The formation of P2H4 would be possible if two PH2 radicals were present close together on the icy surface; however, under the present experimental conditions, where the surface coverage of PH3 is much less than unity, its formation is unlikely. In fact, no traces of other P-bearing species were detected in the FTIR spectra.

The loss of PH3 was also confirmed by the TPD-QMS experiment. Figure 2 shows the TPD spectra of PH3 with and without exposure to H atoms at 10 K for 120 minutes. A single desorption peak was observed at roughly 68 K in both TPD spectra, which can be attributed to PH3 desorbing from the surface of the c-ASW. The TPD-QMS measurements showed that the loss of PH3 was 50% at 10 K, in agreement with the estimation from the FTIR spectra (see Section 3.2). No desorption of other P-bearing species such as P2H4 (m/z = 66) was observed during the TPD-QMS measurements. These results indicate that PH3 was not converted into other P-bearing species but was lost from the substrate following the interaction with H atoms.

To test the hypothesis that the loss of PH3 occurred because of chemical desorption through reactions (6) and/or (3), we carried out an additional experiment in which solid PH3 interacted with D atoms on c-ASW maintained at 10 K. We anticipated that the following successive H-abstraction and D-addition reactions would proceed on the substrate, resulting in the formation of singly deuterated phosphine (PH2D):

PH3 + D → PH2 + HD, (7)
PH2 + D → PH2D. (8)

Figure 3 shows the FTIR spectra after exposure of PH3 to D atoms for up to 120 minutes at 10 K. The P-H stretching band at 2320 cm⁻¹ was reduced with atom exposure time and, simultaneously, a new peak appeared at 1686 cm⁻¹, representing the P-D stretching band of deuterated phosphine (Francia & Nixon 1973). In contrast, the P-D stretching band did not appear when the PH3 solid was exposed to D2 molecules only. These results indicate that PH2D formed via reactions (7) and (8). Further deuteration would be possible; however, investigating the process in more detail is beyond the scope of the present study. We therefore concluded that, in the reaction of PH3 with H atoms, reactions (6) and (3) proceeded, resulting in the loss of PH3 from the surface of c-ASW by chemical desorption via reactions (6) and/or (3).

Figure 4 shows variations in the relative abundance of PH3 after exposure to H atoms at 10, 20, and 30 K as a function of atom exposure time. The loss of PH3 was also observed at 20 and 30 K: roughly 50%, 40%, and 30% of the initial solid PH3 was lost after exposure to H atoms for 120 minutes at 10, 20, and 30 K, respectively. The observed temperature dependence of the desorption fraction can be explained as described in our previous paper (Oba et al. 2019). In brief, at higher temperatures such as 30 K, where most H atoms desorb immediately from the surface, the reaction causing chemical desorption will mainly take place at deeper adsorption sites.
By contrast, solid PH3 adsorbed at shallower sites will be less likely to react with H atoms, which suppresses the desorption of PH3 at higher temperatures.

Quantification of the PH3 Chemical Desorption

By simply assuming that the chemical desorption proceeds as a single process (as is done, e.g., for photodesorption), the variations in the relative abundance of PH3 (Figure 4) can be fitted to a single exponential decay curve defined by

Δ[PH3]t / Δ[PH3]0 = 1 − A [1 − exp(−σjt)], (9)

where Δ[PH3]0 and Δ[PH3]t represent the abundance of PH3 at time = 0 and t, respectively, A is the saturation value of the desorption fraction of PH3, σ is the effective cross-section of chemical desorption in cm², and j represents the flux of H atoms (1.1 × 10¹³ atoms cm⁻² s⁻¹). By fitting the plots in Figure 4 to Equation (9), we obtained the effective desorption cross-section at each temperature: (5.3 ± 0.5) × 10⁻¹⁷ cm² at 10 K, (5.6 ± 0.5) × 10⁻¹⁷ cm² at 20 K, and (5.4 ± 0.5) × 10⁻¹⁷ cm² at 30 K. Note that the obtained effective cross-section can be considered a lower limit of the actual cross-section, because most of the impinging H atoms will be consumed by H-H recombination before interacting with PH3. The obtained effective cross-section appears to show little dependence on temperature, as in the case of the chemical desorption of H2S (Oba et al. 2019). However, the values of the effective cross-section derived from Equation (9) are likely to be underestimated, particularly at higher temperatures, where H atoms contribute less to surface reactions (as explained earlier in this section). Hence, the actual efficiency of chemical desorption per reactive event should increase with temperature. Unfortunately, the desorption efficiency of each reaction (i.e., reactions (3) and (6)) could not be determined under the present experimental conditions; note that some modeling studies have assumed that no desorption occurs in the case of two-product reactions (i.e., reaction (6)) (Garrod et al. 2007). Further studies are thus necessary to elucidate the chemical desorption of PH3.

The obtained effective desorption cross-section (5.3 × 10⁻¹⁷ cm²) is larger by a factor of 3 than that of H2S obtained under similar experimental conditions (1.6 × 10⁻¹⁷ cm²; Oba et al. 2019), despite the thermodynamic parameters of the two reaction systems being similar to one another (Table 1). According to Garrod et al. (2007), the desorption probability and fraction are constrained by multiple parameters including the heat of reaction; hence, there may be other parameters besides the heat of reaction that cause this large difference. Recent theoretical studies have focused on new factors such as energy dissipation upon chemical reaction and the adsorption state of molecules on surfaces, extending the discussion of the efficiency of chemical desorption in more detail (Fredon et al. 2017; Korchagina et al. 2017; Fredon & Cuppen 2018; Kayanuma et al. 2019; Pantaleone et al. 2020). Elucidating why PH3 is more effectively desorbed than H2S is beyond the scope of the present study; we believe a combination of experiments and computational calculations is necessary to fully understand chemical desorption in dense clouds.
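As a sketch of how the effective cross-section is extracted from such data, the following fits Equation (9) with SciPy. The exposure times and relative abundances below are hypothetical stand-ins for the 10 K measurements of Figure 4; the flux j is the value quoted above:

```python
import numpy as np
from scipy.optimize import curve_fit

J = 1.1e13  # H-atom flux, atoms cm^-2 s^-1 (value quoted in the text)

def decay(t, A, sigma):
    """Equation (9): relative PH3 abundance vs. exposure time t (s)."""
    return 1.0 - A * (1.0 - np.exp(-sigma * J * t))

# Hypothetical data mimicking the 10 K run: exposure times (min -> s) and
# relative abundances decaying toward ~50% loss.
t = np.array([0, 10, 20, 40, 60, 90, 120]) * 60.0
y = np.array([1.00, 0.81, 0.70, 0.58, 0.53, 0.51, 0.50])

popt, pcov = curve_fit(decay, t, y, p0=[0.5, 5.0e-17])
A_fit, sigma_fit = popt
perr = np.sqrt(np.diag(pcov))
print(f"A     = {A_fit:.2f} +/- {perr[0]:.2f}")
print(f"sigma = {sigma_fit:.2e} +/- {perr[1]:.2e} cm^2")
```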
Astrophysical Implications

PH3 has been astronomically observed in the circumstellar envelope of the carbon star IRC +10216 (e.g., Agúndez et al. 2014). However, neither PH3 nor the other P-hydrides (PH and PH2) have ever been detected in the dense and cold regions of the ISM, where only PO and PN have been found as P-bearing species to date (Fontani et al. 2016; Lefloch et al. 2016). Modeling studies of phosphorus chemistry in star-forming regions predict that PH3 could be a major molecular reservoir of P-bearing species in ice mantles (Charnley & Millar 1994; Aota & Aikawa 2012; Chantzos et al. 2020). In relatively diffuse clouds (<5 mag), solid PH3 is thought to desorb mainly through photodesorption by interstellar UV photons (Chantzos et al. 2020). In a protostellar envelope, mapping observations of PN and PO suggest that PH3 would be released from grains into the gas phase by protostellar shocks. However, little is known about the desorption of PH3 from grains in quiescent, dense molecular clouds without prominent star-formation activity. Our experimental study of chemical desorption suggests the presence of PH3 gas (and possibly PH2) in the dense, cold, and quiescent regions of molecular clouds, if reactions (1)-(3) take place on grains. Following the method of Oba et al. (2019), we estimate the desorption efficiency of PH3 per incident H atom to be 2.3% based on the present results. If PH3 desorbed via reaction (3) only, the desorption efficiency would be doubled. This is a lower limit on the desorption efficiency per reactive event, since a considerable fraction of the incident H atoms will be consumed by recombination into H2. Astrochemical simulations involving the grain-surface chemistry of P-bearing molecules assume 1% for the desorption efficiency per reactive event (Chantzos et al. 2020). That efficiency is much lower than the value estimated in the present study and thus needs to be updated. In addition, the implementation of the surface H-abstraction reaction of PH3 in the modeling can further enhance its chemical desorption. Future astrochemical simulations with an updated chemical desorption efficiency and surface reaction network of P-bearing species, along with astronomical observations of PH3 toward quiescent dense cores such as TMC-1, are required for a comprehensive understanding of the missing phosphorus problem in star-forming regions.

After desorption, PH3 may be destroyed by a series of processes such as photolysis and H-abstraction (e.g., reaction (6) in the gas phase), followed by the formation of PN and PO via the following reactions:

PH + N → PN + H, (10)
PH + O → PO + H. (11)

Because PH is an important precursor of PN and PO in the gas phase (Charnley & Millar 1994), the formation of PH is also a key process that requires in-depth understanding in relation to P-bearing chemistry in the ISM. In addition to PH3 and PH2, PH could also be desorbed from grains by chemical desorption during reaction (1) in dense, cold clouds. Note that the formation of PH from PH3 via photolysis may not be efficient in such regions, where the ultraviolet field is very weak (Prasad & Tarafdar 1983). In addition, H-abstraction from PH3, which requires sufficient energy to overcome a moderate activation barrier (e.g., reaction (6)), will not proceed effectively by quantum tunneling in the gas phase at the typical temperatures of dense clouds (∼10 K). Hence, chemical desorption of PH from grains upon reaction (1) may represent the most promising pathway to the possible presence of PH in the gas phase of dense, cold clouds.
The formed PH can potentially be further used for the formation of PN and PO through reactions (10) and (11), respectively, even in such cold regions. Unfortunately, we could not estimate the efficiency of chemical desorption by reaction (1) in the present study. However, if PH can desorb upon formation with an efficiency comparable with or higher than that of PH3 via reaction (3), the aforementioned hypothesis becomes plausible. Because the present study provides promising results on the presence of P-hydrides as gas and/or solid in various astronomical sources, additional observations of P-hydrides are anticipated in the future.
5,019
2020-08-03T00:00:00.000
[ "Environmental Science", "Chemistry", "Physics" ]
Light‐Induced Pulsed EPR Dipolar Spectroscopy on a Paradigmatic Hemeprotein

Abstract Light-induced pulsed EPR dipolar spectroscopic methods allow the determination of nanometer distances between paramagnetic sites. Here we employ orthogonal spin labels, a chromophore triplet state and a stable radical, to carry out distance measurements in singly nitroxide-labeled human neuroglobin. We demonstrate that Zn-substitution of neuroglobin, to populate the Zn(II) protoporphyrin IX triplet state, makes it possible to perform light-induced pulsed dipolar experiments on hemeproteins, extending the use of light-induced dipolar spectroscopy to this large class of metalloproteins. The versatility of the method is ensured by the employment of different techniques: relaxation-induced dipolar modulation enhancement (RIDME) is applied for the first time to the photoexcited triplet state. In addition, an alternative pulse scheme for laser-induced magnetic dipole (LaserIMD) spectroscopy, based on the refocused-echo detection sequence, is proposed for accurate zero-time determination and reliable distance analysis.

Electron paramagnetic resonance (EPR) pulsed dipolar spectroscopy (PDS) is an important biophysical technique for studying complex biological assemblies. [1-3] PDS comprises a series of pulsed EPR techniques that allow the measurement, via the dipolar electron-electron coupling between two paramagnetic species, of distances and distance distributions. Structural information in the range between 1.6 and 8 nm is obtained with high precision and reliability, while the limit of 16 nm is reached under full deuteration of the sample and solvent. [4-6] Among the PDS techniques, double electron-electron resonance (DEER), also known as pulsed electron double resonance (PELDOR), is the most frequently used due to its robustness. [7,8] Other EPR techniques for measuring electron-electron dipolar couplings include double-quantum coherence (DQC) [9] and relaxation-induced dipolar modulation enhancement (RIDME). [10,11] Conventionally, PDS measurements are performed between two nitroxide spin labels, attached to proteins by site-directed spin labelling (SDSL) of a cysteine residue or of a genetically encoded non-native amino acid. [12-14] The most commonly used spin label is (1-oxyl-2,2,5,5-tetramethylpyrroline-3-methyl)-methanethiosulfonate (MTSSL), which specifically reacts with the thiol group of cysteine residues. [15] Triarylmethyl (trityl) radicals are emerging as carbon-centered spin labels with interesting spectroscopic properties, [16-18] while, among metal-based tags, Gd(III) has proven to be an attractive alternative to radicals for PDS applications at high field. [19] Recently, Cu(II) and high-spin Mn(II) tags have also been successfully employed. [20-23] The search for alternative spin labels is an active area of research. [24] One important new development is the demonstration that the triplet state of porphyrin chromophores can be exploited to determine inter-spin distances. The first work in this area was conducted on a peptide-based molecular ruler containing a nitroxide probe and a porphyrin moiety. [25,26] The large electron spin polarization of the photoexcited triplet state [27,28] and the consequently high sensitivity of the experiment furthermore allowed the light-induced PDS methodology to be applied to a photosynthetic protein containing an endogenous carotenoid triplet-state probe. [29]
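For orientation, the distance information in all of the PDS experiments discussed here derives from the point-dipole relation between the inter-spin distance and the dipolar coupling: for two g ≈ 2 electron spins, the perpendicular dipolar frequency is ν⊥ ≈ 52.04 MHz/(r/nm)³. A minimal sketch of the conversion (the example distance is arbitrary, chosen to match the ZnG19 result reported later):

```python
D_CONST = 52.04  # MHz nm^3, point-dipole constant for two g ~ 2 electron spins

def dipolar_frequency(r_nm: float) -> float:
    """Perpendicular dipolar frequency (MHz) for inter-spin distance r (nm)."""
    return D_CONST / r_nm**3

def distance_from_frequency(nu_mhz: float) -> float:
    """Invert the point-dipole relation: distance (nm) from frequency (MHz)."""
    return (D_CONST / nu_mhz) ** (1.0 / 3.0)

r = 2.4  # nm, e.g., the most-probable ZnPP-nitroxide distance found for ZnG19
nu = dipolar_frequency(r)
print(f"r = {r} nm -> nu_perp = {nu:.2f} MHz")
print(f"back-converted: {distance_from_frequency(nu):.2f} nm")
```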
The dipolar measurements were performed with light-induced DEER (LiDEER), [25,26] a variation of the conventional 4-pulse DEER sequence in which a laser pulse is used to generate the triplet state before the application of dichromatic microwave pulses: the detection frequency is resonant with the photo-induced porphyrin triplet and the pump is resonant with the stable nitroxide radical. In the meantime, a new technique, laser-induced magnetic dipole (LaserIMD) spectroscopy, based on optical switching of the dipole-dipole coupling, was proposed as an alternative for triplet-nitroxide dipolar spectroscopy on the same porphyrin-based model system. [30] A comparison between the two techniques was carried out both at X-band and at Q-band. [31-33] It was found that the relative signal-to-noise ratio of the two techniques depends strongly on the degree of excitation that can be achieved by the pump pulse used in LiDEER, the laser excitation, the relative relaxation times of the two species being investigated, and the inter-spin distance range that needs to be probed. LaserIMD and LiDEER can therefore be seen as complementary to one another.

Intrinsic paramagnetic centers in biomolecules are ideal spin probes for PDS applications. They are usually fixed rigidly within their parent biomolecule, resulting in very accurate and narrow inter-spin distance distributions. In parallel, combining a nitroxide and an endogenous probe in an orthogonal labelling approach has proven to be very effective, since the spectroscopically non-identical labels can be addressed selectively during the PDS experiment. [34] Traditionally, research on native paramagnetic probes has focused on metal-based centers involving Cu(II), low-spin Fe(III), and iron-sulfur and manganese clusters. [35-38] Recently, it has been shown that the RIDME experiment is better suited than DEER for distance measurements between spin-active moieties with different spin-lattice relaxation times, or for species with very broad spectra such as metal ions like low-spin Fe(III). [39] Many biological macromolecules, photosynthetic proteins first and foremost, but also proteins belonging to other classes, such as hemeproteins and flavoproteins, contain a photoactive cofactor which, in principle, can be exploited as an endogenous paramagnetic center. In a first attempt, the hemeprotein cytochrome c, spin-labelled with MTSSL at the free cysteine position, was investigated in order to demonstrate that LaserIMD could be employed for distance measurements between the endogenous prosthetic group and a nitroxide label. [30] However, no triplet state was observed by EPR spectroscopy, as expected for a low-spin ferric heme.

In this work, human neuroglobin was chosen as a benchmark hemeprotein to demonstrate the feasibility of the dipolar spectroscopy experiment between a triplet state, photogenerated on the porphyrin-derivative group, and a nitroxide probe attached to one of the native cysteines of the protein via SDSL. Human neuroglobin is a good model system in this respect because both a high-resolution X-ray structure [40] and DEER data [41,42] are available. On the same protein, M. Ezhevskaya et al. [41] reported DEER measurements exploiting the low-spin Fe(III) ion of the heme group as an endogenous probe. Here, we replaced the heme cofactor with Zn(II) protoporphyrin IX (ZnPP) [43] in order to introduce a photo-generated triplet-state spin label. Following the nomenclature of M. Ezhevskaya et al.,
the mutant G19 of neuroglobin has been prepared (see the Supporting Information for details). The mutant after substitution of the heme cofactor and SDSL with the MTSSL probe is referred to as ZnG19 (see Figure 1). In parallel, an alternative pulse scheme for LaserIMD, based on the refocused-echo detection sequence (ReLaserIMD), is proposed in this work in order to ensure accurate zero-time determination and a more reliable distance analysis. The versatility of the light-induced dipolar methodology is proven by extending its applicability to this important class of proteins and by employing different PDS techniques. In addition to LiDEER and the novel 4PLaserIMD variant, light-induced RIDME (LiRIDME) is applied for the first time.

Optimization of the pulse sequences is crucial to broaden the scope of light-induced PDS. For this purpose we employed an α-helix peptide, used in previous studies, [25,26,30] labeled with a tetraphenylporphyrin moiety and with the unnatural amino acid TOAC (4-amino-1-oxyl-2,2,6,6-tetramethylpiperidine-4-carboxylic acid). The chemical structure of the model peptide is shown in Figure 1.

In order to analyze the dipolar oscillations accurately and relate this information to an inter-spin distance distribution, it is fundamental to pinpoint the zero-time of the experiment precisely. The correct determination of the zero-time is particularly important for short inter-spin distances, which give rise to high-frequency dipolar oscillations. The absence of symmetry in the complete LaserIMD trace (see the Supporting Information) does not allow the symmetry-based procedure for zero-time determination proposed by Hintze et al. [30] to be used in the analysis of our experimental data. For this reason, in a technique we dub ReLaserIMD (Figure 2, right), we employ the same principle as in the 4-pulse DEER scheme, in which a refocused-echo detection sequence is utilized, to yield a symmetric zero-time. [8] The performance of LaserIMD and ReLaserIMD for the model peptide (Figure 1(a)) is compared in Figure 2.

Figure 1. (a) Model peptide; [20] the distance between the center of the tetraphenylporphyrin and the N-O midpoint is indicated. (b) Structure of human neuroglobin (PDB: 4MPM); [40] the distance between the center of the ZnPP and the average position of the MTSSL rotamers, computed with the software MMM (Multiscale Modelling of Macromolecules), [44] is indicated. Details are reported in the Supporting Information.

In order to demonstrate convincingly that the ReLaserIMD sequence allows an accurate determination of the zero-time, the distance analysis of both experimental traces was performed by repeating the Tikhonov regularization procedure, implemented in DeerAnalysis, [45] for a selected set of zero-times. In the LaserIMD trace, several points could be picked as potential zero-times in the zone where the change of slope between the baseline and the drop of the first modulation occurs, so the procedure is not free from bias. In ReLaserIMD, instead, the candidate zero-times can reasonably be restricted, based on the symmetry of the first modulation, to a much smaller range at its top. This important parameter affects the output of the distance analysis: the different distributions obtained from LaserIMD have their maxima spread over a range of distances of about 0.1 nm, whereas this interval is limited to about 0.01 nm for ReLaserIMD. Furthermore, spurious peaks appear in the distance distribution plot in the case of the standard LaserIMD experiment.
For the LaserIMD data set, the result which gave the closest agreement with the ReLaserIMD result was obtained by selecting a zero-time in a region where the drop of the first modulation has already started (yellow lines in the left panels of Figure 2). This indicates that the experimental zero-time does not occur when the light flash coincides with the start of the first microwave pulse but rather at some time after this, the exact value of which will depend on the lengths of the microwave and laser pulses. Thus, while the LaserIMD experiment is free from experimental dead-time due to pulse overlap, [30] there is still a shift in the zero-time, which could be considered a zero-time artefact, arising from the finite length of the pulses.

Next, the ReLaserIMD sequence was employed, together with LiDEER, to study the dipolar interaction between the triplet state of ZnPP and the nitroxide radical in ZnG19 and to prove the feasibility of the light-induced PDS experiment on hemeproteins. Additionally, for the first time, the LiRIDME sequence, in the five-pulse dead-time-free version, is applied to a triplet probe, providing evidence that the longitudinal relaxation properties of the triplet state can be favourable for the application of this technique. The pulse sequences are reported in Figure 3 alongside the corresponding experimental time traces and distance distributions. The ReLaserIMD data set is of good quality, characterized by a modulation depth of 18% and a signal-to-noise ratio S/N ≈ 49. This allows more than two well-resolved periods of the dipolar modulation to be observed (Figure 3, violet trace). By comparison, the LiDEER experiment gives a very poor result, with a high level of noise and a low modulation depth (see the Supporting Information). Each of the two methods has its own specific factors influencing the value of the modulation depth, as previously discussed: it depends on the excitation efficiency of the pump pulse for LiDEER, and on the laser excitation and quantum yield for (Re)LaserIMD. [31,32]

RIDME has previously been shown to be more sensitive than DEER for measuring inter-spin interactions between paramagnetic species with different longitudinal (T1) relaxation times and in the presence of broad EPR spectra. [39] To this end, LiRIDME (see Figure 3 for the pulse sequence), detecting on the nitroxide signal and allowing the broad triplet species to relax, was also measured. This setup was favourable as the nitroxide T1 is longer than the triplet-state relaxation time/lifetime. The relaxation and kinetic behaviour (at 20 K, as for the PDS experiments) was characterized in detail and is reported in the Supporting Information. The LiRIDME time trace features a modulation depth of 11% and S/N ≈ 18 (Figure 3, azure trace). The overtones present in the data set, seen as a faster oscillation particularly evident in the first modulation period, originate from Δms > 1 transitions of the triplet state and have been taken into account in the analysis of the distance distributions. [46] Distance analysis, together with the validation procedure, was performed for all data sets recorded on the ZnG19 protein using DeerAnalysis [45] or, in the case of the LiRIDME data sets, OvertoneAnalysis, [46] and the same most-probable distance (2.4 nm) and similar distance distributions were obtained in all cases.
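The zero-time analysis and the Tikhonov-based distance analysis described above can be emulated with a compact, self-contained sketch. The code below is a bare-bones stand-in for DeerAnalysis: it assumes the background has already been removed (form factor only), uses a second-derivative regularizer and a crude non-negativity constraint, and only illustrates how the position of the distribution maximum responds to an error in the zero-time:

```python
import numpy as np

def dipolar_kernel(t_us, r_nm, n_theta=201):
    """Powder-averaged point-dipole kernel K[t, r] (form factor part only)."""
    nu = 52.04 / r_nm**3                                   # MHz
    cos_t = np.linspace(0.0, 1.0, n_theta)
    w = 2.0 * np.pi * nu[None, :] * (1.0 - 3.0 * cos_t[:, None] ** 2)
    phase = w[None, :, :] * t_us[:, None, None]            # (T, theta, R)
    return np.cos(phase).mean(axis=1)                      # average over theta

def tikhonov(K, S, alpha):
    """Solve min ||K p - S||^2 + alpha^2 ||L p||^2, L = 2nd derivative."""
    L = np.diff(np.eye(K.shape[1]), 2, axis=0)
    p = np.linalg.solve(K.T @ K + alpha**2 * L.T @ L, K.T @ S)
    return np.clip(p, 0.0, None)   # crude non-negativity

r = np.linspace(1.5, 4.0, 120)     # nm grid
t = np.linspace(0.0, 2.0, 200)     # us grid
K = dipolar_kernel(t, r)

# Synthetic trace for a Gaussian distribution centered at 2.4 nm, plus noise.
p_true = np.exp(-0.5 * ((r - 2.4) / 0.08) ** 2)
S = K @ (p_true / p_true.sum())
S += 0.01 * np.random.default_rng(0).normal(size=t.size)

# Re-analyze with deliberately wrong zero-times and track the maximum.
for shift_ns in (-40, -20, 0, 20, 40):
    K_shift = dipolar_kernel(t + 1e-3 * shift_ns, r)
    p = tikhonov(K_shift, S, alpha=0.1)
    print(f"zero-time error {shift_ns:+d} ns -> r_max = {r[np.argmax(p)]:.2f} nm")
```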
The excellent agreement of the experimental results with the distance predicted by the MMM [44] analysis based on the X-ray structural data of the protein (see Figure 1) demonstrates that the triplet state, photo-generated on the prosthetic group after the ZnPP-substitution protocol, can be successfully exploited to determine accurate inter-spin distances in hemeproteins (the experimental conditions and the parameters of the data analysis are reported in the Supporting Information). Moreover, the availability of diverse pulse schemes that can be applied to systems containing photoexcited triplet states allows one to select, case by case, the technique that warrants the best performance in terms of S/N.

The performance of the three different PDS sequences can be rationalized in terms of the relaxation behaviour of the triplet state and the nitroxide probe. The relative phase memory times of the stable radical and the triplet state make either LiDEER or (Re)LaserIMD the most suited experiment in terms of S/N. While LiDEER uses the triplet signal for detection and thus depends on the transverse relaxation time of the triplet, (Re)LaserIMD, using the stable radical for observation, is influenced by the phase memory time of this species. This is the reason why, in the specific case of neuroglobin, where the phase memory time of the ZnPP triplet state is only of the order of 500 ns, the use of LiDEER is almost precluded, despite the favourable spin polarization of the triplet (see Figure S2 in the Supporting Information). [47] LiRIDME works favourably when the two species under investigation have different longitudinal relaxation times and the slower-relaxing species is used for detection. [39] The longitudinal relaxation time/lifetime of the ZnPP triplet state is shorter than that of the nitroxide, similar to the relationship found between metal centers and nitroxides, leading to a satisfactory performance of the LiRIDME technique on the neuroglobin sample. However, it should be noted that when a high-spin paramagnetic center is used as the fast-relaxing species, as is the case for the triplet state, overtones of the dipolar frequencies are present. This makes the modulations in the dipolar trace corresponding to the fundamental dipolar frequency less clearly distinguishable, as higher-frequency overtone contributions are also present, and the distance analysis must take these overtone contributions into account.

In conclusion, in this work we demonstrate that an accurate determination of distance distributions can be achieved using the triplet state of ZnPP coupled to a nitroxide spin label in human neuroglobin. This is the first time that the feasibility of the dipolar experiment has been demonstrated for a paradigmatic protein belonging to the class of the hemeproteins, making clear use of the photoexcited triplet state. Our results have proven that LiRIDME can provide reliable information on the distance between nitroxides and triplet-state chromophores in a similar fashion to LaserIMD. Both single-frequency techniques become advantageous compared to LiDEER when the chromophore in the triplet state is characterized by short relaxation times. Light-induced PDS techniques should be seen as complementary to PDS techniques using stable radical spin centers. In particular, they are likely to be important for applications in spin systems which contain multiple spins, as they enable a spin label to be switched on or off.
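The qualitative reasoning of this paragraph can be condensed into a toy decision helper. The heuristic below merely encodes the rules stated in the text (the detection species needs the longer phase memory time; LiRIDME profits when the photoexcited species relaxes faster longitudinally); the numerical inputs in the example are hypothetical values echoing the neuroglobin case:

```python
def suggest_pds_techniques(tm_triplet_ns, tm_radical_ns,
                           t1_triplet_us, t1_radical_us):
    """Toy selection heuristic for light-induced PDS, following the text.

    LiDEER detects on the triplet, so it needs a long triplet Tm;
    (Re)LaserIMD detects on the stable radical, so it needs a long radical Tm;
    LiRIDME is favorable when T1(triplet) << T1(radical), radical detection.
    """
    picks = ["LiDEER" if tm_triplet_ns >= tm_radical_ns else "(Re)LaserIMD"]
    if t1_triplet_us < t1_radical_us:
        picks.append("LiRIDME")
    return picks

# Hypothetical neuroglobin-like inputs: triplet Tm ~ 500 ns, radical Tm longer;
# triplet T1/lifetime much shorter than the nitroxide T1 at 20 K.
print(suggest_pds_techniques(tm_triplet_ns=500, tm_radical_ns=3000,
                             t1_triplet_us=50, t1_radical_us=5000))
# -> ['(Re)LaserIMD', 'LiRIDME']
```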
The proof that it is possible to substitute the iron heme, which is spin-active in its ground state, with ZnPP, which has a spin-inactive ground state, in order to perform light-induced PDS experiments is also a valuable result. An important prerequisite for broadening the scope of triplet spin labels in biological macromolecules is the availability of different light-induced PDS techniques and the optimization of such pulse sequences, for example ReLaserIMD. The different techniques complement each other and, depending on the nature of the triplet spin label, can be used interchangeably, thereby taking advantage of the specific properties of the stable radicals and triplet states present in a particular system and allowing the best-performing technique to be selected in each case.

Figure 3. PDS data measured on ZnG19: (a) LiRIDME and ReLaserIMD pulse schemes, (b) form factors (grey) and best fits to the LiRIDME (azure) and the ReLaserIMD (violet) data, and (c) the corresponding distance distributions. The distance analyses have been performed with DeerAnalysis for ReLaserIMD and with OvertoneAnalysis for LiRIDME (with a 50% contribution of the second-harmonic overtone). The error bars have been obtained using the validation procedure implemented in both software packages, varying the starting point of the background fitting between 300 and 500 ns and adding 50% of the original noise. The experimental conditions are reported in the Supporting Information.

Experimental Section

The pulsed EPR measurements were carried out at Q-band (34 GHz) on a Bruker ELEXSYS E580 spectrometer using a Bruker TII resonator. The experiments were performed at 20 K on glassy frozen solutions of ZnG19 (~400 μM in deuterated Tris-HCl buffer + 66% deuterated glycerol) and of the model peptide (~100 μM in 98% d-methanol, 2% D2O). All further experimental details are given in the Supporting Information.
4,015.2
2019-03-21T00:00:00.000
[ "Chemistry", "Physics" ]
A Study on the Design and Pricing of Adolescent Mental Health Insurance Products Based on Adjusted Rates

In recent years, with the frequent occurrence of adolescent suicide, the topic of adolescent mental health has attracted special attention, and insurance products against this risk need to be studied. In this paper, based on an investigation of students and parents in five junior and senior high schools in Shandong Province, we design an adolescent mental health insurance product that meets market demand, together with an additional insurance to protect against risks induced by sudden accidents. A premium pricing system based on mental health status (MHSBI) is constructed with adjusted rates, and the resulting total premiums are in line with applicants' willingness to pay. This paper addresses the deficiencies of insurance research on adolescent mental health and provides theoretical support for the introduction of such insurance products.

Introduction

The recent incident of Hu Xinyu, a high school student in Jiangxi who hanged himself, touched the heartstrings of millions of netizens and once again aroused public concern about the mental health of adolescents [1]. The Chinese National Mental Health Development Report (2019-2020), released by the Institute of Psychology of the Chinese Academy of Sciences, shows that the detection rate of major depression among adolescents is 7.4%, and adolescent psychological problems show a trend toward younger ages [2]. In July 2021, the General Office of the Ministry of Education issued the Notice on Strengthening the Management of Students' Mental Health, clearly proposing "to do a good job of mental health assessment" [3]; it is urgent to safeguard the mental health of adolescents with the efforts of all sectors of society.

According to our survey, the Chinese insurance industry has formed two main product types in the market, personal health insurance and property insurance; adolescent mental health insurance products are still a blue-ocean market. Considering that current domestic psychological counseling fees are chaotic and an industry standard system has not yet been formed [4], it is important to design adolescent mental health insurance to reduce the risk of families bearing high psychological treatment costs and to provide diversified insurance services that enhance policyholders' awareness and prevention of mental illness [5].

Investigation implementation

This paper conducted an in-depth questionnaire survey of students and their parents in five junior and senior high schools, including Gaozhuang Central Middle School in Laiwu District, Jinan City, Shandong Province, and the Experimental Middle School, Fengcheng High School, and Gaotang County Second Middle School in Liaocheng City, Shandong Province. Among them, 713 questionnaires were collected from the investigation of adolescents, of which 710 were valid, an effective rate of 99.58%. A total of 564 questionnaires were collected from the survey of parents, of which 559 were valid, an effective rate of 99.11%.

Investigation results and analysis

The survey used SPSS 10.0 software to clean the questionnaire data, and used descriptive statistics, correlation coefficients and variance analysis for the statistical analysis. According to the investigation results, one quarter of adolescents have poor psychological self-awareness, and their treatment of psychological discomfort is limited to self-healing with low effectiveness. Meanwhile, parents pay insufficient attention to the mental health of adolescents.
Mental health education in schools also fails to achieve full coverage. As for parents' views on mental health insurance, parents have a strong desire for adolescent mental health insurance and expect to pay about 270 RMB for it.

Design inspiration

In terms of premium setting, based on parents' willingness to take out mental health insurance and their expected payment, the basis of the premium design is set at 250-300 RMB. In terms of insurance service content, based on the research results the insurance is designed around "daily psychological care and aftercare psychological treatment" [6], and home-school-enterprise cooperation is considered to jointly support the insurance and create value for adolescents.

Applicable objects

The World Health Organization defines adolescents as those aged 10-19 years [7]. Since the prevalence of mental illness is significantly higher among junior and senior high school students than at other stages, 13 to 18 years old is chosen as the applicable age range. In addition, the insured should be assessed as mentally healthy according to the MMHI-60 scale.

Coverage

Considering that, in addition to illnesses, there are many major risk events that may induce mental health disorders, the insurance is divided into a primary insurance and an additional insurance, covering medical expenses and unexpected-event losses respectively, each for one year.

Pricing of adolescent mental health insurance product

Based on the actuarial principles of non-life insurance, following the principle of matching insurance companies' underwriting costs, risk taking and insurance rates, and introducing the rate-adjustment pricing idea of UBI [8] (usage-based insurance, e.g., commercial auto insurance based on driving behavior), the MHSBI (mental health status based insurance) pricing model is used for adolescent mental health insurance. Premium components: total premium = adolescent mental health medical premium + adolescent accident premium.

Determination of MHSBI premium

Youth mental health medical premium = base premium × rate adjustment factor. The formula is P = P0 × k, where P is the youth mental health medical premium, P0 is the base premium, and k is the rate adjustment factor.

Base premium

The base premium is the amount of insurance multiplied by the base premium rate, i.e. P0 = S × R0, where P0 represents the base premium, S represents the amount of insurance, and R0 represents the base premium rate.

(1) Amount of insurance. The insurance amount consists of the medical benefit M and the service value V, i.e. S = M + V. The medical benefit can be divided into two parts, M = C1 + C2, where C1 is the cost of psychological testing and C2 is the cost of treating psychological problems. The cost of psychological testing is determined by looking up the package costs of psychological counseling for adolescents offered by Qingdao psychological counseling institutions on the Meituan app and taking the average value as the cost of a single test, i.e. C1 = 570 RMB. For the cost of treating psychological problems, a total of 1666 cases with complete information on the first visit to a hospital's child and adolescent psychological clinic [9] between 2001 and 2010 were selected by searching the literature, and the cases aged 12 to 18 were classified diagnostically according to the International Classification of Diseases standard ICD-10 [10]. Then, according to the severity of the mental illness, all cases were divided into three categories: severe, moderate and mild; 6.9% were classified as severe, 34.2% as moderate and 58.9% as mild.
The cost of treating psychological problems is calculated as C2 = Σi pi ci, i.e. the sum over the three categories of the product of the probability pi of each disease category and its average treatment cost ci; the result is 15,329.77 RMB. The medical benefit is then M = 15,899.77 RMB. The special services provided by this product mainly include real-time psychological assessment, two psychological counseling sessions, a hospital green channel and school lectures during the one-year insurance period. The cost of the two psychological counseling sessions is estimated at roughly 1,000 RMB and the cost of the other services at 200 RMB per person, so V = 1,200 RMB in total. The total insurance amount is therefore S = 17,099.77 RMB.

(2) Determination of the benchmark premium rate. The benchmark premium rate is determined by the benchmark pure premium rate r and the additional cost rate a, i.e. R0 = r / (1 − a). The benchmark pure premium rate can be calculated from the formula r = p × q, where p is the prevalence of psychological problems among adolescents and q is the proportion of adolescents with psychological problems who seek medical treatment. From the results of the national child and adolescent mental disorders epidemiological survey report for 2021 and data from the related literature, the values of p and q can be obtained, giving a benchmark pure premium rate of r = 0.0148. The additional cost rate is uniformly a = 0.35 during the pilot period. The calculated benchmark premium rate is therefore R0 = 0.0228.

Determination of rate adjustment factor

The rate adjustment factor is the product of the no-claims preference factor, the independent underwriting coefficient and the independent channel coefficient, i.e. k = k1 × k2 × k3. The no-claims preference factor k1 is risk-graded according to the historical number of the teenager's insurance claims, with a floating range of [0.8, 2.0]: k1 is set to 0.8 if no previous claims have been incurred, to 1.0 in the case of one previous claim, to 1.5 in the case of two previous claims, and to 2.0 for more than two prior claims. The independent underwriting coefficient k2 is determined by the analytic hierarchy process, which is used to assess the psychological state of the insured; the psychological scale employed is the Chinese Middle School Students' Mental Health Scale (MMHI-60). The weights of the indexes determined by the analytic hierarchy process are shown in Table 1. Each index of the insured is scored with 1, 2 or 3 points; the higher the score, the better the situation. The comprehensive score of the insured is the sum of the products of each index score and its weight. The mental health status of the applicant is determined from the comprehensive score, which in turn determines the independent underwriting coefficient; Table 2 shows the correspondence between the comprehensive score and the independent underwriting coefficient. The independent channel coefficient k3 depends on the sales channel: if the insured buys through the school as a group, k3 takes a lower (discounted) value; if the insured purchases individually, k3 takes a higher value.
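To make the MHSBI pricing chain above concrete, the following sketch strings the reconstructed formulas together. The underwriting and channel coefficients used here are hypothetical example values, since the coefficient tables (Tables 1 and 2) are not reproduced in the text:

```python
# MHSBI medical-premium chain (sketch; k2 and k3 are illustrative values only).

C1 = 570.0          # RMB, cost of a single psychological test
C2 = 15_329.77      # RMB, expected treatment cost over the severity categories
V = 1_200.0         # RMB, value of the bundled services

r = 0.0148          # benchmark pure premium rate, r = p * q
a = 0.35            # additional cost rate during the pilot period

k1 = 0.8            # no-claims preference factor (no previous claims)
k2 = 1.0            # independent underwriting coefficient (hypothetical)
k3 = 0.9            # independent channel coefficient (hypothetical group value)

S = C1 + C2 + V               # amount of insurance, S = M + V -> 17,099.77
R0 = r / (1.0 - a)            # benchmark premium rate -> ~0.0228
P0 = S * R0                   # base premium -> ~389 RMB
P = P0 * (k1 * k2 * k3)       # medical premium -> ~280 RMB with these values

print(f"S  = {S:,.2f} RMB")
print(f"R0 = {R0:.4f}")
print(f"P0 = {P0:,.2f} RMB")
print(f"P  = {P:,.2f} RMB")
```

With these illustrative coefficients the medical premium lands inside the 250-300 RMB design range derived from the survey; the accident rider premium (65.25 RMB, determined below) is then added to obtain the total.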
Determination of the adolescent accident premium The additional insurance covers psychological problems caused by major accidents of teenagers. Its premium can be priced by reference to other one-year children's accident insurance products on the market. For this product, the one-year quotations of children's accident insurance with similar insurance liability issued by several insurance companies are taken as a reference, and their average value, 65.25 yuan, is used as the premium of the additional insurance. In summary, the total premium is determined as shown in Table 3. Conclusions Based on a survey of students and parents, this paper designs the content of adolescent mental health insurance products. Combining the clinic data of Xiao Lijun et al. from 2001 to 2010, a mental-health-status-based premium pricing system (MHSBI) with adjustable rates is constructed to determine the total premium for each category.
An improved denoising method for eye blink detection using automotive millimeter wave radar With the development of radar technology, automotive millimeter wave radar is widely applied in fields including the internet of vehicles, Artificial Intelligence (AI)-based autonomous driving, health monitoring, etc. Eye blink, as one of the most common human activities, can effectively reflect a person's consciousness and fatigue. Contact-based eye blink detection often leads to an uncomfortable experience, and camera-based eye blink detection has privacy issues. As an alternative, non-contact eye blink detection based on automotive millimeter wave radar resolves the aforementioned issues and has received much attention. This paper proposes an eye blink detection method using frequency modulated continuous wave radar. Firstly, the position of the person's head is estimated by carrying out a fast Fourier transform on the intermediate frequency signal, and the signals of the range bins at the head are extracted. Then, the complete ensemble empirical mode decomposition with adaptive noise algorithm is applied to decompose the eye signals into a series of intrinsic mode functions (IMFs), and singular value decomposition is adopted to constrain the selection and reconstruction of the useful IMFs related to the eye blink signal. Finally, the short-time Fourier transform and the cell-averaging constant false alarm rate detector are applied to detect the eye blink behavior. Experiments are carried out to validate the effectiveness of the proposed eye blink detection method. Millimeter wave radar can meet such diverse application requirements, so as to realize the corresponding functions [7,8]. And with the increasing demand for health monitoring [9,10], physiological signs detection based on automotive millimeter wave radar has received a lot of attention. Physiological signals, such as the heartbeat, breathing and blink signals, can reflect the fatigue, attention, stress, or consciousness level of a person [11,12]. Since the eye blink motion is one of the most natural and frequent human activities, eye blink detection is an effective way to measure fatigue and concentration. Hence, eye blink detection has been researched widely [13,14]. The Electro-Oculogram [15], a common blink detection method, mainly relies on contact devices. The eye blink motion is detected by attaching electrodes to the human skin near the eyes to measure potential changes between the electrodes. However, attaching electrodes to the skin causes abrasion, leading to an uncomfortable user experience. On the other hand, non-contact eye blink detection usually relies on vision devices [16,17]. It applies a camera to capture image sequences that contain the eye blink motion and achieves eye blink detection using computer vision technology. AL-Gawwam et al. [17] use facial feature trackers to localize the contours of the eyes and eyelids. They measure the distance between the eyelids to obtain the opening state of the eyes, and a rapid change of this distance is detected as an eye blink. Although vision-based eye blink detection improves the user's natural experience, the high cost, light sensitivity and privacy issues must be addressed. As an alternative, non-contact eye blink detection based on radar resolves the aforementioned issues. Doppler sensors are widely applied for eye blink detection [18-20]. Specifically, Tamba et al.
[18] apply a Doppler sensor, setting thresholds on the blink width and height for each person to reduce the influence of individual differences. Kim [19] adopts principal component analysis to distinguish conscious from unconscious eye blinks using a 5.8 GHz Doppler sensor radar. Yamamoto et al. [20] estimate the eye blink duration by analyzing the eyelids' closing and opening behavior on spectrograms. Compared to Doppler sensors, the frequency modulated continuous wave (FMCW) radar in the millimeter frequency range has significant advantages in the fields of target detection [21], vital signs detection [22], driver behavior detection [23], hand gesture recognition [24], and so on. Cardillo et al. [25] apply a 120 GHz FMCW radar to realize head motion and eye blink detection. However, the authors only use the range information of the eye blink motion; the Doppler information is not exploited. Therefore, in this paper, we focus on non-contact eye blink detection and propose an eye blink detection method based on complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) combined with singular value decomposition (SVD) denoising using FMCW radar. The main contributions of this paper are summarized as follows. Firstly, the intermediate frequency (IF) signal is obtained by mixing and filtering the eye blink data. Then, the person's head position is estimated by performing a fast Fourier transform (FFT) on the IF signal, and the signals in the corresponding position interval are extracted. Secondly, we propose an eye blink signal reconstruction method combining the CEEMDAN algorithm with SVD. The extracted signal is decomposed into a series of intrinsic mode functions (IMFs) by the CEEMDAN algorithm, and SVD is then applied to constrain the selection and reconstruction of the useful IMFs for eye blink signal reconstruction. Thirdly, the short-time Fourier transform (STFT) is performed on the reconstructed eye blink signal, and eye blink detection is realized by the cell-averaging constant false alarm rate (CA-CFAR) detector. Finally, we carry out a series of experiments to verify the effectiveness of the proposed eye blink detection method. Experimental results show that the proposed method can successfully detect the eye blink motion. The rest of this paper is organized as follows. In Sect. 2, the principle of FMCW radar is described. In Sect. 3, the proposed eye blink detection method is introduced in detail. The experimental results are analyzed and discussed in Sect. 4. The conclusion is drawn in Sect. 5. FMCW radar principle In this section, the principle of FMCW radar is described (shown in Fig. 1). The radar system mainly contains several parts: the signal source, transmitting antenna (TX), receiving antenna (RX), mixer, low-pass filter (LPF), ADC sampler (A/D), etc. The signal source is responsible for generating the FMCW signal. The TX and RX are responsible for the transmission and reception of the signal. The received echo signal and the transmitted signal are mixed by the mixer and passed through the LPF to obtain the IF signal. Finally, the IF signal is sampled by the A/D sampler for further processing. In this paper, we adopt sawtooth modulation and the transmitter transmits a sawtooth-modulated microwave (shown in Fig.
2) [23]. Here f_c is the carrier frequency, f_T(τ) = S·τ is the instantaneous frequency of the transmitted signal within a chirp period, S = B/T_c is the slope of the chirp signal, B is the maximum bandwidth of the signal, T_c is the pulse width of the chirp signal, and A_T is the amplitude of the transmitted signal. Let R be the range from the FMCW radar to the person's head. The received echo signal is a delayed copy of the transmitted signal with amplitude A_R, where t_d = 2R/c is the delay from transmission to reception, c is the speed of light, f_R(t) = S·(t − t_d) + f_d, and f_d is the Doppler shift. The received signal S_R(t) and the transmitted signal S_T(t) are sent to the mixer and passed through an LPF; the resulting IF signal has frequency f_IF = S·t_d. Specifically, the eye blink signal is contained in the IF signal. Therefore, to detect the eye blink, the IF signal needs to be processed and analyzed. Methods In this section, we present the proposed eye blink detection method in detail. The flowchart of the proposed method is shown in Fig. 3. Firstly, the range FFT is performed on the IF signal to determine the position interval of the human head, and the signals in this position interval are extracted. Then, the extracted signal is decomposed into a series of IMFs by the CEEMDAN algorithm. Next, SVD is applied to each IMF to constrain the selection and reconstruction of the useful IMFs and reconstruct the eye blink signal. Finally, the STFT is performed on the reconstructed eye blink signal and CA-CFAR is applied to detect the eye blink. Signal extraction We place the FMCW radar directly in front of the person's head and eyes, so the received signals mainly include the eye blink signal, signals from other facial parts, and noise. Therefore, we need to obtain all signals located at the distance R in order to extract the eye blink signal, and we first need to estimate the position of the head. According to t_d = 2R/c and f_IF = S·t_d, the relationship between R and f_IF is R = c·f_IF/(2S). Therefore, to obtain the range information of the head, it is necessary to analyze the IF signal and estimate the frequency f_IF. Firstly, the IF signal is sampled by the A/D converter and the FFT is performed on the sampling points of each chirp to obtain the range information of the head [24], where M is the number of transmitted chirps, N is the number of sampling points per chirp, p = 0, 1, ..., N − 1 indexes the frequency bins, T_s is the sampling time and T_c = N·T_s. Since f_IF ≫ f_d, the FFT magnitude |S_1| reaches its maximum at the bin p with f_IF ≈ p/(N·T_s), and the range of the head then follows from R = c·f_IF/(2S). The signals located at the estimated range are extracted as the signal x(t), which includes the eye blink signal, the signals from other facial parts and noise, where t = m·T_c, m = 0, 1, ..., M − 1, and p = n_1 is the bin at which |S_1| reaches its maximum. In addition, considering the sampling of the FMCW radar signal, the maximum measuring range can be expressed as R_max = c·f_s/(2S), where f_s = 1/T_s is the sampling frequency; with the FMCW radar parameters adopted in the experiments, the maximum measuring range is 8.55 m.
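As an illustration of this signal-extraction step, here is a minimal sketch, assuming the sampled IF data are arranged as a (chirps × samples-per-chirp) array; the function and variable names are ours, not from a radar SDK.

```python
# Range-FFT head localization: FFT each chirp, average the magnitude
# profiles, pick the strongest range bin, and convert it to a range via
# f_IF = p * fs / N and R = c * f_IF / (2S).
import numpy as np

def locate_head_bin(if_data: np.ndarray, fs: float, slope: float):
    n = if_data.shape[1]
    range_fft = np.fft.fft(if_data, axis=1)            # FFT over fast time
    profile = np.abs(range_fft[:, : n // 2]).mean(axis=0)
    p = int(np.argmax(profile))                        # strongest range bin
    f_if = p * fs / n                                  # beat frequency estimate
    return p, 3e8 * f_if / (2 * slope)                 # (bin index, range in m)

# Example with the parameters used in the experiments (B = 4 GHz, Tc = 114 us):
slope = 4e9 / 114e-6
demo = np.random.default_rng(0).standard_normal((255, 200))  # placeholder IF data
print(locate_head_bin(demo, 2e6, slope))
```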
CEEMDAN algorithm Due to the interference of the face and the environmental noise, the eye blink signal is too weak to be detected directly. Therefore, it is necessary to remove the interference and the noise to enhance the eye blink signal. The empirical mode decomposition (EMD) algorithm [26] is usually applied to denoise weak vital signals. EMD decomposes the raw signal into several IMFs and realizes denoising by removing the noisy IMFs [26]. However, the EMD algorithm usually suffers from modal aliasing. Towards this end, the ensemble empirical mode decomposition (EEMD) algorithm [27] adds different white noises to the raw signal, performs EMD several times on the noisy signals, and averages the IMFs of the multiple EMD runs to obtain the final IMFs. Furthermore, SVD is often used in conjunction with EEMD to constrain the selection and reconstruction of the useful IMFs [28]. However, the EEMD algorithm cannot completely eliminate the influence of the white noise on the decomposition results. The CEEMDAN algorithm [29] adds a finite number of adaptive white noises at each stage of the EMD decomposition process, which effectively solves the problems of EEMD. Therefore, we apply the CEEMDAN algorithm to decompose the extracted signal for denoising. White noise n_g(t), g = 1, 2, ..., G, with a standard normal distribution is added to the extracted signal x(t), and EMD is applied to each noise-added signal to obtain the first components IMF_1^g(t). The first IMF is the ensemble average IMF_1(t) = (1/G) Σ_{g=1}^{G} IMF_1^g(t), and the first remaining component is r_1(t) = x(t) − IMF_1(t). Then, white noise is again added to the remaining component r_1(t) to obtain IMF_2(t). This step is repeated until the remaining component r_J(t) is a monotonic function, so that x(t) = Σ_{j=1}^{J} IMF_j(t) + r_J(t), and the denoised signal is reconstructed from the J_u useful IMFs as x̂(t) = Σ_{j=1}^{J_u} IMF_j(t). Eye blink signal reconstruction using SVD After the CEEMDAN decomposition, SVD [28] is adopted to constrain the selection and reconstruction of the IMFs. By applying SVD, the IMFs related to the eye blink are selected and reconstructed, so as to obtain the eye blink signal from the extracted signal. In addition, the interference is removed in the selection process of the IMFs, and each selected IMF is denoised during the reconstruction process. The extracted signal is first decomposed by the CEEMDAN algorithm into several IMFs, each with N temporal samples. For each IMF_j, a Hankel matrix H_j = T_H(IMF_j) of size m × n is constructed, where m = ⌊N/2⌋ + 1, n = N − m + 1, T_H is the Hankel matrix construction operator, and ⌊·⌋ denotes the floor operation. Applying the SVD to H_j gives H_j = U Σ V^T, where U and V are orthogonal matrices, Σ = diag(σ_1, σ_2, ..., σ_r), and r and σ_i are the rank and the singular values of H_j, respectively. The singular values represent the signal coherence in each IMF: a larger singular value corresponds to an effective signal with better coherence. Therefore, the difference in signal coherence is used as the criterion to select useful IMFs for reconstructing the eye blink signal. The normalized singular spectrum energy p_j of the j-th IMF is defined from the singular spectrum energy E_{H_j} = Σ_{i=1}^{r} σ_i², and, based on energy probability theory [28], the energy probability of the singular spectrum q_j is derived from the p_j. Since the singular spectrum energy of the useful IMFs differs from that of the noisy IMFs [28], it can be used as the criterion for selecting the useful IMFs.
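The selection criterion can be made concrete with a short sketch. The CEEMDAN step itself is assumed to come from an external implementation (e.g., the PyEMD package; this is our assumption, not something the paper prescribes); the Hankel/SVD part follows the construction above, with the below-average-energy selection rule taken from the experiment section.

```python
# SVD-based IMF selection: build an m x (N - m + 1) Hankel matrix per IMF
# with m = floor(N/2) + 1, compute its singular spectrum energy
# E_j = sum(sigma_i^2), normalize to an energy probability q_j, and keep
# the IMFs whose q_j is below the average (the strong near-DC facial
# clutter concentrates the energy in a few IMFs).
import numpy as np
from scipy.linalg import hankel, svd

def hankel_singular_values(imf: np.ndarray) -> np.ndarray:
    m = len(imf) // 2 + 1
    return svd(hankel(imf[:m], imf[m - 1:]), compute_uv=False)

def select_useful_imfs(imfs):
    energies = np.array([np.sum(hankel_singular_values(f) ** 2) for f in imfs])
    q = energies / energies.sum()          # energy probability per IMF
    return [f for f, qj in zip(imfs, q) if qj < q.mean()]
```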
Moreover, due to the influence of white noise, the selected IMFs may retain noise with the same frequency content as the useful signals. In general, the r singular values are sorted in descending order and the k largest singular values are retained to reconstruct the Hankel matrix, i.e. Σ_k = diag(σ_1, σ_2, ..., σ_k), a matrix of rank k. The new Hankel matrix is then reconstructed as Ĥ_j = U Σ_k V^T = T_k(H_j), where T_k represents the reconstruction operator of the new Hankel matrix Ĥ_j. Therefore, the final reconstructed eye blink signal after denoising can be written as s(t) = Σ_{j=1}^{J_u} T_H^{−1}(Ĥ_j), where T_H^{−1} is the inverse of the Hankel matrix construction operator. Furthermore, the process of selecting and reconstructing the IMFs using SVD is summarized in Table 2. Input: the IMFs M_j, j = 1, 2, ..., J. Process: 1. For each IMF, construct the Hankel matrix H_j and apply the SVD, H_j = U Σ V^T with Σ = diag(σ_1, σ_2, ..., σ_r). 2. From Σ, calculate E_{H_j}, p(j) and q(j) for each IMF. 3. According to q(j), select the J_u useful IMFs to be reconstructed. 4. Retain the k largest singular values, Σ_k = diag(σ_1, σ_2, ..., σ_k), and reconstruct the new Hankel matrix Ĥ_j = U Σ_k V^T = T_k(H_j). 5. Obtain the reconstructed IMFs. It can be seen from Table 2 that the SVD is applied to all IMFs, the useful IMFs related to the eye blink are selected through the energy probability of the singular spectrum q(j), and the eye blink signal is finally reconstructed. Eye blink detection by STFT combined with CA-CFAR Although the eye blink signal has been reconstructed, it is still difficult to detect the eye blink in the time domain. Fortunately, time-frequency analysis is usually performed to achieve eye blink detection [30-32]. In this paper, the STFT is applied to the reconstructed eye blink signal s(t), STFT(t, f) = ∫ s(τ) h(τ − t) e^{−j2πfτ} dτ, where h(τ − t) is the window function. After performing the STFT, the time-frequency spectrum is obtained. The Doppler feature of the eye blink can be observed on the time-frequency spectrum, but this alone does not realize eye blink detection. Therefore, to achieve eye blink detection, the CA-CFAR algorithm [33] is applied to the time-frequency spectrum. For the CA-CFAR detector, noise samples are extracted from both the leading and lagging cells around the cell under test (CUT). The noise power can be estimated as [34] P_n = (1/I) Σ_{i=1}^{I} y_i, where P_n is the estimated noise power, I is the number of training cells and y_i is the sample in each training cell. The detection threshold is then given by T = α·P_n, where T represents the detection threshold and α is a scaling factor. We then compute the power result for each CUT; if the result for a CUT exceeds the threshold T, this CUT is considered to contain an eye blink motion [35].
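A minimal CA-CFAR sketch over one slice of the time-frequency spectrum follows; it adds guard cells around the CUT (a common refinement the paper does not spell out), and the parameter values are illustrative.

```python
# 1-D cell-averaging CFAR: estimate the noise power P_n as the mean of the
# training cells on both sides of the cell under test, and declare a
# detection when the CUT exceeds T = alpha * P_n.
import numpy as np

def ca_cfar(x: np.ndarray, train: int = 8, guard: int = 2, alpha: float = 5.0):
    hits = []
    for cut in range(train + guard, len(x) - train - guard):
        lead = x[cut - guard - train: cut - guard]
        lag = x[cut + guard + 1: cut + guard + train + 1]
        p_n = np.mean(np.concatenate([lead, lag]))   # estimated noise power
        if x[cut] > alpha * p_n:                     # threshold T = alpha * P_n
            hits.append(cut)
    return hits
```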
Experimental results and discussion In this section, we first introduce the FMCW radar and the radar parameters used in the experiments, and then carry out a series of experiments to verify the effectiveness of the proposed eye blink detection method. Experimental specification The experimental scene is shown in Fig. 4, where the person's head and eyes are located at a distance R (R ≈ 20 cm) directly in front of the FMCW radar. The radar adopted in the experiment is a Texas Instruments AWR1642, shown as the red device in Fig. 4; it has two transmitting and four receiving antennas. The green equipment behind the AWR1642 is a DCA1000, used for data acquisition. The central frequency of the FMCW radar is 77 GHz and the bandwidth is 4 GHz. The sampling frequency of the A/D sampler is set to 2 MHz. The pulse width of each chirp is set to 114 microseconds (µs) and the duration of one frame is set to 100 milliseconds (ms). We use 30 frames to collect the eye blink data, so the collection time is 3 seconds (s). Moreover, each frame contains 255 chirps and each chirp contains 200 sampling points. The parameters of the FMCW radar used in the experiments are listed in Table 3.
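These parameters can be sanity-checked directly; the small computation below reproduces the 8.55 m maximum range quoted earlier (complex baseband sampling is assumed).

```python
# Derived quantities for the Table 3 parameters: chirp slope S = B / Tc,
# range resolution c / (2B), and maximum measuring range c * fs / (2S).
c = 3e8
B, Tc, fs = 4e9, 114e-6, 2e6
S = B / Tc
print(f"slope            = {S:.3e} Hz/s")
print(f"range resolution = {100 * c / (2 * B):.2f} cm")  # 3.75 cm
print(f"max range        = {c * fs / (2 * S):.2f} m")    # 8.55 m
```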
Experimental results and discussion During the eye blink data collection stage, the head of the person is located stably in front of the FMCW radar, and the eyes look at the antennas of the radar. In the experiment, the IF data of eye blinks over 3 s are collected. Then, by performing the FFT on the IF signal, the range between the head and the FMCW radar is estimated; the estimated range is shown in Fig. 5. It can be seen from Fig. 5 that the head is approximately 20 cm in front of the radar and remains stable, which is consistent with the experimental setup. Since the signals at the range bin with the strongest spectrum contain the eye blink motion, they are extracted for eye blink detection. In order to reduce the time complexity of the CEEMDAN decomposition, the extracted signal is averaged over every five consecutive chirps. It should be noted that the duration of one chirp is 114 µs, so the duration of five chirps is 570 µs, over which the signals remain relatively stable. Then, the processed signal is decomposed by the CEEMDAN algorithm. The decomposition result is shown in Fig. 6: the IF signal is decomposed into seven IMFs and one residual signal, with the IMFs sorted from high frequency to low frequency. Subsequently, the Hankel matrix of each IMF is constructed and the SVD is applied to obtain the corresponding singular values, shown in Fig. 7; the energy probability of the singular spectrum of each IMF is shown in Fig. 8. It can be observed from Figs. 7 and 8 that the singular values and the singular spectrum energy probabilities of IMF5 to IMF7 are larger than those of the other IMFs, and that the computed frequencies of IMF5 to IMF7 are close to 0 Hz. This is because the low-frequency signals reflected from the face are intense, while the eye blink is so weak that the singular spectrum energy probability caused by the eye blink motion is small. Therefore, we remove IMF5 to IMF7 to suppress the interference from the other facial parts, and select IMF1 to IMF4 as the useful IMFs to reconstruct the eye blink signal. In practice, the IMFs are selected for reconstruction with the rule that the energy probability of the singular spectrum is less than a threshold, where the threshold is set to the average of the energy probabilities of the singular spectrum over all IMFs. In addition, note that in the reconstruction process the k largest singular values are retained for each useful IMF, so the selected IMFs are denoised during reconstruction. Furthermore, the reconstructed eye blink signal, the IF signal and the useful IMFs are shown in Fig. 9. The blue line represents the IF signal, the red line represents the reconstructed eye blink signal, and the yellow line represents the useful IMFs, i.e. the summation of IMF1 to IMF4. It can be seen from Fig. 9 that the IF signal has two obvious amplitude changes, located at about 40 chirps and 1060 chirps, respectively. An amplitude change may be caused by the hardware device, by noise or by the eye blink motion. By comparing the IF signal and the useful IMFs, it can be seen that the CEEMDAN algorithm effectively denoises the signal by removing the noisy IMFs. However, the two amplitude changes are still obvious, which means that the useful IMFs still contain noise. Compared with the useful IMFs in Fig. 9, the amplitude change at about 1060 chirps is more clearly visible in the reconstructed eye blink signal, while the amplitude change at about 40 chirps is weakened. This is because the noise contained in each selected IMF is further removed during the reconstruction of the selected IMFs, which enhances the eye blink signal. Therefore, the amplitude change at about 40 chirps is likely due to the hardware device and noise, while the amplitude change at about 1060 chirps is likely caused by the eye blink motion. Then, the STFT is performed on the reconstructed eye blink signal to obtain the time-frequency spectrum. We denote the result of the STFT applied to the IF signal by IF + STFT; CEEMDAN + STFT denotes the result of the STFT applied to the useful signal (IMF1 to IMF4) reconstructed by the CEEMDAN algorithm; and CEEMDAN + SVD + STFT denotes the result of the STFT applied to the eye blink signal reconstructed by the CEEMDAN algorithm with SVD. To validate the effectiveness of the proposed algorithm, we compare the time-frequency spectra for one eye blink and for two eye blinks within 3 seconds; the results are shown in Figs. 10 and 11, respectively. It can be observed from Figs. 10a and 11a that the strong static interference at zero frequency from the face makes the eye blink motion difficult to observe. Since the CEEMDAN algorithm can remove the noise and interference of the signal, the strong static interference from the face is eliminated and the eye blink signal can be reconstructed. Therefore, the eye blink motion is easy to detect in the time-frequency spectrum of CEEMDAN + STFT, as shown in Figs. 10b and 11b. Although the zero-frequency components are effectively removed, the noise contained in each IMF is not eliminated, which is why the CEEMDAN algorithm combined with SVD is proposed to reconstruct the eye blink signal and remove the noise in each selected IMF. Indeed, Figs. 10c and 11c show that the interference is effectively eliminated and the eye blink motion can be observed clearly. Finally, CA-CFAR is performed on the time-frequency map to realize the blink detection. The detection results are shown in Figs. 10d and 11d: one blink motion in 3 s can be clearly observed in Fig. 10d, and two blink motions in 3 s in Fig. 11d. The experimental results are consistent with the actual situation, which validates the effectiveness of the proposed method. Conclusion In this paper, we proposed an eye blink detection method using a 77 GHz FMCW radar. Firstly, the FFT was performed on the IF signal to obtain the position of the head. The signals located at this position were extracted, and the extracted signal was averaged over every five consecutive chirps. Then, the CEEMDAN algorithm was applied to decompose the averaged signal into several IMFs, and the eye blink signal was reconstructed by using SVD to constrain the selection and reconstruction of the useful IMFs.
Finally, the eye blink detection was realized by performing STFT and CA-CFAR on the reconstructed eye blink signal. Furthermore, the experimental results proved the effectiveness of the proposed eye blink detection method. In the future, we will combine the eye blink detection with machine learning to achieve eye blink classification or fatigue detection.
Analysis of Chemical Compound Content and Magnetic Properties of Iron Sand in Rondo Woing Village, East Manggarai - The mined sand was extracted using the Methanol Soap Bathed (MSB) method. Analysis of the samples using XRF and VSM (Vibrating Sample Magnetometer) showed that the most dominant compounds were silica (SiO2), iron oxide (Fe2O3), quicklime (CaO), and alumina (Al2O3). The Vibrating Sample Magnetometer (VSM) results show that the Rondo Woing, East Manggarai iron sand has soft magnetic properties. The iron sand sample with code 001 has Hc, Mr, and Ms of 324.83 Oe, 0.31 emu/gram, and 1.61 emu/gram, respectively. The iron sand sample with code 002 has Hc, Mr, and Ms of 319.91 Oe, 0.31 emu/gram, and 1.6 emu/gram, respectively. The analysis shows four dominant elements: Fe is the most dominant element, occurring in Fe2O3 compounds at 32.3%; Al occurs in Al2O3 compounds at 13%; Si occurs in SiO2 compounds at 31.1%; and Ca occurs in CaO compounds at 17.7%, with Ti in TiO2 compounds at around 2%. The iron sand at Rondo Woing therefore shows magnetic material properties. INTRODUCTION Manggarai Timur Regency, established at the end of 2007, is a result of the division of Manggarai Regency. Manggarai Timur Regency stretches from the north, bordering the Flores Sea, to the south, bordered by the Sawu Sea; in the east and west, it is bordered by Ngada Regency and Manggarai Regency, respectively. In 2021, three subdistricts of Manggarai Timur Regency were divided: Kota Komba, Sambi Rampas, and Lamba Leda. The additional sub-districts are Kota Komba Utara, Congkar, and Lamba Leda Utara, resulting in 12 sub-districts in Manggarai Timur Regency. The agricultural sector remains the main field of employment, absorbing the most workforce in Manggarai Timur Regency (BPS, 2022) (Ruth & Gustina, 2022). The topography of East Manggarai Regency is mountainous, with sandy and clayey soil structures. However, the use of sand as a building material in the area is still limited. Iron sand from East Manggarai Regency can be utilized as a mixture for building materials because the sand in the area is dark in color. Iron sand is a type of sand that contains iron (magnetite). It is often found along the coast, appearing shiny and black, with a darker color indicating a higher iron mineral content. Iron sand is formed through weathering, surface water, and wave transformation of its original rock, which is basaltic to andesitic. Iron sand contains the main mineral magnetite (iron oxide) associated with titanomagnetite and small amounts of hematite, accompanied by impurity minerals such as quartz, pyroxene, biotite, and rutile. Other common impurities found in iron sand are phosphorus and sulfur. It is relatively easy to test whether a sand is iron sand: bring a magnet close to the sand, and if iron minerals are attracted to the magnet, the sand is confirmed to be iron sand. Iron sand is formed through the weathering process, surface water, and wave action on the original iron-mineral-bearing rock; it then accumulates and is washed by ocean waves. Iron sand is used in the cement industry and can be developed as a raw material for steel production, depending on the available technology. The prospect of iron sand in Indonesia has been explored and even exploited for utilization.
Iron sand mining is widely conducted along the west coast of Sumatra, the south coast of Java and Bali, and the north coast of Papua. The majority of iron sand reserves are scattered in the coastal waters of Indonesia, from the west coast of Sumatra, the south coast of Java and Bali, the coast of Sulawesi, and the coast of East Nusa Tenggara (NTT), to the coast of Papua. The total reserves are 173,810,612 tons of iron ore and 25,412,652.62 tons of metal. However, utilization is not yet optimal: PT. Krakatau Steel and PT. Krakatau Posco produce only 24,000 to 36,000 tons of steel plates per year, while the shipbuilding industry requires 900,000 tons of steel plates annually. To meet its need for steel-plate raw material in the form of sponge iron with Fe ≥ 60%, PT. Krakatau Steel still imports from abroad; indeed, PT. Krakatau Steel imported 3,500,000 tons of iron ore pellets per year from Sweden, Chile, and Brazil before and during the 2000s. This condition hinders the competitiveness of the national steel industry against foreign steel industries, due to the import duties imposed on raw materials. There is an opportunity to establish raw material companies for steel production, since currently there are only two such companies in Indonesia. This situation has prompted sponge iron production with a capacity adjusted to the installed capacity. Research on sponge iron production using Cipatujah iron sand as a raw material has yielded sponge iron with an iron content as high as Fe ≥ 60.44%. It can be used to meet the raw material needs of PT. Krakatau Steel in steel production, as PT. Krakatau Steel has claimed that local sponge iron products contain less than 60% Fe. This can drive self-sufficiency in raw materials for the steel industry, which will in turn impact the self-sufficiency of the defense industry. However, the government should also implement protection measures and prioritize domestic raw materials for national steel production. One approach could be for the government to establish a state-owned national steel industry that fosters a consortium of raw material (sponge iron) suppliers to ensure the quality and continuous supply of sponge iron (Aritonang et al., 2019). With its extensive coastline, Indonesia possesses total iron sand resources of 4,280 million tons and reserves of 750 million tons, with a magnetization degree of 65% and an Fe content reaching 45%, as reported by the Ministry of Energy and Mineral Resources in 2018 (Firjatullah et al., 2022). The utilization of iron sand as a natural resource is still not optimal, even though Indonesia's geological position at the convergence of three tectonic plates (the Indo-Australian, Eurasian, and Pacific Plates) creates a complex tectonic setting that inherently supports the potential for rich mineral deposits (Fitri, 2016). Besides its high economic value, iron sand benefits the industry and mining sector. Some of the benefits of iron sand are as follows. Firstly, support for the steel industry: iron sand, which contains iron together with impurities, can be separated to obtain pure iron. Once the impurity-free iron is obtained, it can be directly processed into steel. The implementation of appropriate iron sand processing technology is expected to supply raw materials for the national steel industry, thus realizing the self-sufficiency of the national steel industry.
Steel is a vital raw material in various industries, and many everyday tools are made predominantly of steel. Secondly, raw material for cement production: iron sand mining in coastal areas of Indonesia is carried out to obtain iron sand, which is then processed into cement and concrete. Cement and concrete made with iron sand have better compressive and tensile strength than other materials. Thirdly, the production of antibacterial materials: another benefit of iron sand is its use as a raw material for antibacterial products. Materials derived from iron sand can be used to create products that offer protection against bacteria, such as antiseptic and antibacterial soaps. Iron sand in the medical field has significant economic value for industries and society. Fourthly, enhancement of concrete compressive and tensile strength: iron sand is used to enhance concrete's compressive and tensile strength in the concrete industry. Using 80% iron sand by weight of the total sand gives a maximum compressive strength of 42.65 MPa and increases the compressive strength capability by 28.41%. Furthermore, additional iron sand can increase the compressive strength by 3.07 MPa and the splitting tensile strength by 4.84%. Concrete plays a crucial role in supporting the loads applied to structures. Its superior compressive strength and ease of procurement make it a continuing solution for infrastructure challenges. However, the large-scale use of materials can deplete natural resources if not properly managed. Iron sand with magnesium (Mg) content can be used as a substitute for fine aggregates in concrete mixtures. This content improves the bond between cement and coarse aggregates and enhances the quality of concrete, such as its compressive strength, tensile strength, and modulus of elasticity (Aji, 2014). Research conducted by Hilman (2014) states that Sungai Opak sand contains a mixture of non-metallic particles such as quartz, calcite, feldspar, amphibole, pyroxene, biotite, and tourmaline, which are constituents found in iron sand. The main iron content in the sand deposit consists of titanomagnetite minerals, with fine sand grains with diameters between 0.074 and 0.075 mm (fine grains) and 3-5 mm (coarse grains). Iron sand also contains oxide compounds whose characteristics make them suitable as a substitute for fine aggregate in producing high-quality concrete (Aji, 2014). The iron (Fe) content in iron sand is widely used as a raw material for steel production. Additionally, iron sand contains magnetic minerals such as magnetite (Fe3O4), hematite (α-Fe2O3), and maghemite (γ-Fe2O3), which can be applied in various fields (Juharni, 2016); (Losa, 2013); (Widianto & Fauji, 2018). Magnetite (Fe3O4) can be applied in magnetic recording media, high-density digital recording disks, magnetic fluids, data storage, MRI, drug delivery systems, SPR biosensors, microwave devices, and magnetic sensing (Ghandoor et al., 2012). Magnetite (Fe3O4) nanoparticles can be obtained from synthetic or natural materials. The consideration for using natural materials such as iron or black sands is the abundance of existing, under-utilized natural resources and the lower production costs (Indrayana, 2019). The exploration of iron sand as nanoparticles is still limited compared to its exploration as a raw material, which poses a challenge for research in the field (Widianto & Fauji, 2018).
Iron sand contains the main mineral, magnetite (iron oxide), associated with titanomagnetite and small amounts of hematite, accompanied by impurity minerals such as quartz, pyroxene, biotite, and rutile. Other impurities commonly found in iron sand are phosphorus and sulfur. Iron sand is grey to black and very fine-grained, with sizes ranging from 75 to 150 microns, a density of 2-5 g/cm3, a specific gravity (SG) of 2.99-4.23 g/cm3, and a magnetization degree (MD) of 6.4-27.16%. These minerals have a Mohs hardness of 5-6.5 (Adi, 2018). Iron sand containing magnetite as the main mineral is characterized by magnetite grains that are always bonded to other magnetite grains, forming a chain-like structure. The mineral grains have an isometric crystal system, so iron sand (magnetite) tends to be rounded or subrounded in shape. This research aims to analyze the chemical composition of iron sand and investigate its magnetic properties. RESEARCH METHODS The variables in this research are as follows: the independent variable is the sand sample, the control variables are the extraction methods (Methanol Soap Bathed and hydrochloric acid), and the dependent variables are the chemical compound content and the magnetic properties. For the sample analysis, the sand taken from Rondo Woing was first cleaned with water to separate it from impurities such as soil (Sari & Manurung, 2019). The purification process using the Methanol Soap Bathed (MSB) method aims to increase the purity of the obtained sand. It involves preparing a container filled with sand and water mixed with detergent, stirring until soap bubbles appear, and then adding methanol to the solution. The iron sand is then extracted using a permanent magnet. In this research, the preparation of the East Manggarai iron sand samples from Rondo Woing involves grinding the sample with a mortar and sieving it with an 80-mesh sieve. The sample is then extracted using the Methanol Soap Bathed (MSB) method to separate impurities from the Rondo Woing sand and obtain pure sand. Afterward, the sample is washed with hydrochloric acid (HCl): the extracted sample is soaked in hydrochloric acid solution for 2 hours to remove salt impurities and then washed with distilled water until the pH reaches 7, since the sample is acidic. The sample is dried, and the analysis begins. The washed sample, previously treated with hydrochloric acid, is dried at 100°C to remove moisture. The next step involves the separation (extraction) of magnetic materials from the sand using a permanent magnet. The magnetic sand is analyzed using X-Ray Fluorescence (XRF) equipment to determine the mineral content of the Rondo Woing iron sand. The magnetic analysis is conducted using the VSM method to determine the magnetic properties, and the VSM characterization yields a hysteresis curve. RESULTS AND DISCUSSION Iron sand is a material formed from metal mines through the transportation and sedimentation of sand materials containing iron (Tellu et al., 2020). The collected samples were filtered using an 80-mesh sieve to ensure a uniform particle size. Each sample was weighed at 100 grams in separate containers. The Methanol Soap Bathed (MSB) method was used to clean the sand of dust and other deposits. The MSB method is a faster and more cost-effective extraction method that maintains the purity of magnetic minerals without damaging the magnetic grains (Rifai et al., 2010).
Each sample was washed using 4 grams of soap powder and stirred until clean. After that, it was left for 5-10 minutes, and the water was drained. Next, 200 ml of methanol at concentrations of 2M and 4M was added to the respective samples, stirred with a spatula, and dried in an oven at 100°C for 2 hours to reduce the moisture content. In the next step, the samples were washed using 200 ml of 2M HCl in three containers, with magnetic stirring for 2 hours. This washing made the samples acidic, so they were washed with distilled water until reaching a pH of 6-7. The samples were then dried in an oven at 100°C for 2 hours. Each sample was weighed at 7 grams for characterization using XRF and VSM. The chemical composition of the Rondo Woing iron sand was determined using XRF, as shown in Table 1. In general, the composition of the iron sand in Rondo Woing shows economic value for the cement industry due to its Fe content of approximately 32.3%. A study conducted by Aji (2014) on the mining and processing of iron sand in the Kulonprogo Regency area describes a mining project with a large investment, ranging from IDR 5.4 to 6 trillion, covering a Contract of Work area of 2,987.79 hectares. The total iron sand resources in Kulonprogo amount to 605 million tons, with an Fe content of approximately 10.8%, requiring a workforce of 2,100 people. This is also supported by research findings that the composition of economically valuable iron sand for the cement and steel industries consists of minerals such as magnetite (Fe3O4), ilmenite (FeTiO3), hematite (Fe2O3), and limonite (Moon et al., 2006; Dipatunggoro, 2012), mixed with non-metallic mineral grains such as quartz, calcite, feldspar, amphibole, pyroxene, biotite, and tourmaline (Aji, 2014). The washing process with Methanol Soap Bathed (MSB) and the particle size of the sand grains affect the composition of the iron sand. The influencing factors are the sample preparation with a specific particle size, namely 80 mesh, and the washing with MSB, since this extraction method is faster and more cost-effective and can maintain the purity of magnetic minerals without damaging the existing magnetic grains. The chemical compounds Fe2O3 and SiO2 have higher percentage concentrations than the other components, owing to the concentration of methanol used during the extraction process. The purpose of using methanol in this research is to break the bond between magnetic and non-magnetic materials. To determine the magnetic properties of the Rondo Woing iron sand, a Vibrating Sample Magnetometer (VSM) is needed. The VSM is one of the instruments used to study the magnetic properties of materials, which vary with temperature and with the measurement angle or the anisotropy of the material (Tebriani, 2019). This study discusses the results of analyzing the Rondo Woing iron sand using the VSM. The VSM analysis produces a curve known as the hysteresis curve, shown in Figures 1 and 2. In determining the magnetic properties from the hysteresis curve, several important parameters need to be considered, including the saturation magnetization (Ms), the coercivity (Hc), and the remanent magnetization (Mr); a sketch of how these parameters can be read off measured data is given below.
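A minimal sketch of reading these parameters off a measured branch of the loop follows; the synthetic curve is ours, tuned only to give values of the same order as those reported for the Rondo Woing samples.

```python
# Read Ms, Mr and Hc from one (monotonic) branch of a hysteresis loop:
# Ms is the largest |M|, Mr is |M| at H = 0, and Hc is |H| at M = 0.
import numpy as np

def hysteresis_parameters(H: np.ndarray, M: np.ndarray):
    Ms = np.max(np.abs(M))
    iH = np.argsort(H)
    Mr = abs(np.interp(0.0, H[iH], M[iH]))   # remanent magnetization
    iM = np.argsort(M)
    Hc = abs(np.interp(0.0, M[iM], H[iM]))   # coercive field
    return Ms, Mr, Hc

H = np.linspace(-5000, 5000, 1001)           # field in Oe (synthetic)
M = 1.6 * np.tanh((H + 320) / 800)           # magnetization in emu/g (synthetic)
print(hysteresis_parameters(H, M))           # Ms ~ 1.6 emu/g, Hc ~ 320 Oe
```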
The value of magnetic saturation, also known as saturation magnetization, indicates the ability of the particles to maintain their magnetic domain alignment when subjected to an external magnetic field (Tebriani, 2019). From Figures 1 and 2, based on the shape of the change of magnetic moment with the applied external field, the magnetite extracted with methanol shows symmetric hysteresis curves when subjected to an external magnetic field, resulting in a very narrow hysteresis loop. A narrow hysteresis loop indicates that the sand material is easily magnetized, i.e. requires low energy, which places it in the category of soft magnetic materials with ferromagnetic behavior. From the graph, it can be seen that the iron sand from Rondo Woing exhibits ferromagnetic properties. The magnetic properties were tested on two samples with different treatments, namely a 2M Methanol Soap Bathed solution for sample 1 and a 4M Methanol Soap Bathed solution for sample 2. The magnetic domains of the iron sand initially sum to zero net magnetization. When a current is applied, a magnetic field appears and the magnetic domains within the iron sand become aligned. This process, called magnetization, continues until all magnetic domains of the iron sand align with the external magnetic field, at the saturation point (Ms). Therefore, the higher the value of Ms, the stronger the magnetic properties of the iron sand. The same applies to Mr: when the electric current is turned off, the magnetic domains in the iron sand do not automatically return to zero; there is residual magnetism known as Mr, or remanence. The larger the value of Mr, the stronger the magnetic properties of the iron sand. The same applies to the coercivity (Hc): a demagnetization process is carried out to demagnetize the iron sand and return it to a non-magnetized state (zero magnetization). This process involves applying an opposite field until the magnetization returns to zero, which yields the coercivity value. The results of this research are expected to provide information about the mineral resources in the form of iron sand in the Rondo Woing area, which can impact the economic value and the empowerment of the community in processing natural resources. However, the impacts of iron sand mining should also be considered. There are various responses from local communities regarding iron sand mining, ranging from pro-mining to anti-mining. For example, iron sand mining in the Binangun District has caused conflicts among the community due to differing views on this mining activity. Mining proponents argue that iron sand mining can improve the economic conditions of the community and the local government. Opponents argue that iron sand mining has more negative impacts than positive ones, such as environmental damage and the displacement of the community's agricultural land. CONCLUSION In the Rondo Woing, Manggarai Timur sand sample, the Fe element content increases with methanol at 2M and 4M concentrations. The analysis of the iron sand reveals four dominant elements: Fe, the most dominant, in the form of Fe2O3 with a concentration of 32.3%; Al, in the form of Al2O3, with 13%; Si, in the form of SiO2, with 31.1%; and Ca, in the form of CaO, with 17.7%. Additionally, Ti is present in TiO2 at approximately 2%.
This indicates that the iron sand in Rondo Woing exhibits magnetic properties and holds economic value. The analysis using a Vibrating Sample Magnetometer (VSM) and the resulting hysteresis curve shows that the Rondo Woing sand sample possesses soft magnetic characteristics, i.e., it is ferromagnetic.
Extreme gaps between eigenvalues of Wigner matrices This paper proves universality of the distribution of the smallest and largest gaps between bulk eigenvalues of generalized Wigner matrices, from the symmetric and Hermitian classes. The assumptions on the distribution of the matrix elements are a subexponential decay and some smoothness of the density. The proof relies on the Erdős-Schlein-Yau dynamic approach. We exhibit a new observable that satisfies a stochastic advection equation and reduces local relaxation of the Dyson Brownian motion to a maximum principle. 1.1 Extreme statistics in random matrix theory. The study of extreme spacings in random spectra was initially limited to integrable models. Vinson [40] showed that the smallest gap between eigenvalues of the N × N Circular Unitary Ensemble, multiplied by N^{4/3}, has limiting distribution function e^{−x³} as the size N increases. In his thesis, similar results for the smallest gap between eigenvalues of a generalization of the Gaussian Unitary Ensemble were obtained. With a different method, Soshnikov [38] computed the distribution of the smallest gap for general translation-invariant determinantal point processes in large boxes: properly rescaled, the smallest gap converges, with the same limiting distribution function e^{−x³}. Vinson also gave heuristics suggesting that the largest gap between eigenvalues in the bulk should be of order √(log N)/N, with Poissonian fluctuations around this limit, a problem popularized by Diaconis [13]. Ben Arous and the author addressed this problem in [2] concerning the first-order asymptotics for the maximum gap, and described the limiting process of small gaps, for the CUE and GUE. These results were extended by Figalli and Guionnet to some invariant multimatrix Hermitian ensembles [24]. The convergence in distribution of the largest gap was recently solved by Feng and Wei, also for the CUE and GUE [23]. Finally, Feng and Wei also investigated the smallest gaps beyond the determinantal case, characterizing their asymptotics for the circular β ensembles [21]. For the Gaussian Orthogonal Ensemble, they proved that the smallest gap rescaled by N^{3/2} converges with limiting distribution function e^{−x²} [22]. The intuition for all the results above is (i) the Poissonian ansatz, namely that the eigenvalue gaps are asymptotically independent, and (ii) weak convergence of the spacings with a good convergence rate, so that the finite-N gap density asymptotics at 0⁺ and ∞ are close to the limiting Gaudin density asymptotics. The above limit theorems and heuristic picture hold beyond invariant ensembles. Indeed, the gap universality for Wigner matrices by Erdős and Yau [17] extends to submicroscopic scales. We informally state it as follows (see Theorem 1.2 for details, in particular the smoothness assumption). Theorem. Let λ_1 < ··· < λ_N be the eigenvalues of a symmetric Wigner matrix with entries satisfying a weak smoothness assumption. Then there exists c > 0 such that the smallest gap in the bulk, rescaled by N^{3/2}, converges in distribution with limiting distribution function e^{−cx²}. The same result holds for the Hermitian class, with rescaling N^{4/3} and limiting distribution function of the form e^{−cx³}. Our work also applies to universality of the largest gaps (see Theorem 1.4), under similar assumptions. Does the above theorem require our slight smoothness hypothesis on the matrix entries? For the largest gaps, which are essentially on the microscopic scale 1/N, this assumption is unnecessary, as shown by Landon, Lopatto and Marcinek in the simultaneous work [30].
The scale of the smallest gaps is much harder to access: the current best lower bound on the separation of eigenvalues for Wigner matrices with atomic distribution is N^{−2+o(1)}, by Nguyen, Tao and Vu [34] (see also [33] for the case of sparse matrices). Motivations for extreme eigenvalue gap statistics include the relaxation time of diagonalization algorithms [2,12], conjectures in analytic number theory (e.g. the extreme gaps between zeros of the Riemann zeta function [2,10]), conjectures in algorithmic number theory (the Poisson ansatz for large gaps suggests the complexity of an algorithm to detect square-free numbers [4]), and quantum chaos in the complementary Poissonian regime [3]. Another motivation for extreme value statistics in random matrix theory emerged after the work of Fyodorov, Hiary and Keating [25]: the maximum of the characteristic polynomial of random matrices predicts the scale and fluctuations of the maximum of the Riemann zeta function on typical intervals of the critical line. Recent progress on their conjecture verified the size of the maximal maximum of the characteristic polynomial for integrable random matrices [1,11,29,36]. We expect that the observable (1.10) will also help in understanding universality for such extreme statistics. Indeed, it was an important tool in the recent proof of fluctuations of determinants of Wigner matrices [7]. 1.2 Main results. We will use the notation ⟦1, N⟧ = {1, ..., N}. In this work, we consider the following class of random matrices. Definition 1.1. A generalized Wigner matrix H = H(N) is a Hermitian or symmetric N × N matrix whose upper-triangular elements H_ij = H̄_ji, i ≤ j, are independent random variables with mean zero and variances σ²_ij = E(|H_ij|²) that satisfy the following two conditions: (i) Normalization: Σ_i σ²_ij = 1 for any j ∈ ⟦1, N⟧. (ii) Non-degeneracy: σ²_ij ∼ N^{−1} for all i, j ∈ ⟦1, N⟧. In the Hermitian case, we assume Var Re(H_ij) ∼ Var Im(H_ij) and independence of Re(H_ij), Im(H_ij). We also suppose for convenience (this could be replaced by a large finite moment assumption) that the matrix elements √N H_ij satisfy the following tail estimate: there exists a constant c > 0 such that for any i, j, N and x > 0 we have the subexponential bound (1.1): P(|√N H_ij| > x) ≤ c^{−1} e^{−x^c}. In some of the following results, we additionally assume non-atomicity of the matrix entries. A sequence (H)_N of random matrices is said to be smooth on scale η if √N H_ij has density e^{−V}, where V = V_{N,i,j} satisfies the following condition uniformly in N, i, j: for any k ≥ 0 there exists C > 0 such that (1.2) holds. Finally, we define the point process χ^(N) of small gaps and their positions, where β = 1 for the generalized Wigner symmetric ensemble and β = 2 for the Hermitian one. The following theorem generalizes (and relies on comparison with) the GUE and GOE cases [2,23]. Theorem 1.2 (Small gaps process). Let (H_N) be generalized Wigner matrices satisfying (1.1). Let κ > 0. (i) Symmetric class. Assume (H_N) is smooth on scale η = N^{−1/4+ε} for some fixed ε > 0, in the sense of (1.2). The point process χ^(N) converges as N → ∞ to a Poisson point process χ with an explicit intensity on any measurable sets A ⊂ R₊ and I ⊂ (−2 + κ, 2 − κ). (ii) Hermitian class. Assume (H_N) is smooth on scale N^{−1/3+ε} for some fixed ε > 0. The point process χ^(N) converges to a Poisson point process χ with the analogous intensity. Let t_1^{(N)} ≤ ··· ≤ t_{N−1}^{(N)} be the reordered gaps between eigenvalues. We abbreviate by t_k the k-th smallest gap and by t_{N−k} the k-th largest. Corollary 1.3 (Smallest gaps).
Assume (H_N)_N is as in Theorem 1.2, let κ > 0 and consider a non-empty interval I ⊂ (−2 + κ, 2 − κ). There are at least two ways to understand the above scaling of the smallest spacings, of order N^{−3/2} for β = 1 and N^{−4/3} for β = 2 (a back-of-the-envelope version is written out below). First, in the Gaussian integrable case, the eigenvalue interaction ∏_{i<j} |λ_i − λ_j|^β suggests P(N(λ_{i+1} − λ_i) < x) ∼ x^{β+1} uniformly in small x and i, so that decorrelation of the spacings would give N(Nℓ)^{β+1} ∼ 1 for the scale ℓ of the smallest gap. Second, the resolvent method gives Wegner estimates for Wigner matrices with smooth entries [16]. For example, [6, Corollary B.2] shows P(N(λ_{i+1} − λ_i) < x) ≤ C N^ε x². A union bound on these level repulsion estimates provides a lower bound on the smallest gaps which matches our order. For the largest gaps, Gumbel fluctuations are expected, with heuristics also relying on decoupling and on the asymptotics e^{−cx²} for the upper tail distribution of N(λ_{i+1} − λ_i). However, for the integrable Gaussian ensembles these facts have been established only for β = 2, thanks to the determinantal structure. We therefore only state the following theorem for the Hermitian class; it proceeds by comparison with the GUE case from [22]. Should the analogue for the GOE become known, the corresponding universality would follow. As in [22], we denote S(I) = inf_{x∈I} √(4 − x²) and rescale the k-th largest gaps accordingly, writing τ*_k for the rescaled k-th largest gap. Theorem 1.4 (Largest gaps). For any interval J, the distribution of τ*_k(H) on J converges to that of τ*_k(GUE). Moreover, the rate of convergence is bounded as d_TV(τ*_k(H), τ*_k(GUE)) ≤ N^c/(Nη²) for any c > 0. As explained in Subsection 1.3, the previous theorems rely on a short and quantitative proof of relaxation of the Dyson Brownian motion. This therefore also gives new results for typical fluctuations, illustrated below with the Gaussian fluctuations of eigenvalues. Theorem 1.5 (Eigenvalue fluctuations close to the edge). Let (H_N) be generalized Wigner matrices satisfying (1.1) and let γ_i be defined through (2.5). Consider the fluctuation X_i of λ_i around γ_i, normalized with the constant c = (3/2)^{1/3} π β^{1/2}, where β = 1 for the symmetric class and β = 2 for the Hermitian one. Fix δ ∈ (0, 1). Then for any sequence i = i_N → ∞ with i_N ≤ N^δ, we have X_i → N(0, 1) in distribution. Let m ≥ 1 and k_1 < ··· < k_m satisfy the analogous conditions; then (X_{k_1}, ..., X_{k_m}) converges to a Gaussian vector with an explicit covariance matrix. These anomalous small Gaussian fluctuations were first shown in [27] for the GUE and in [35] for the GOE; our proof proceeds by comparison with these results. Fluctuations of eigenvalues around their typical locations are known in the bulk of the spectrum for Wigner matrices [7,31]. Theorem 1.5 extends to any δ ∈ (0, 1) a previous result from [5], which was limited to δ < 1/4, and therefore completes the proof of eigenvalue fluctuations anywhere in the spectrum. More generally, the proof sketch below explains edge statistics for general observables of eigenvalues with indices in ⟦1, N^{1−ε}⟧, i.e. almost up to the bulk. As another example, for any fixed ε > 0 and diverging index i ≤ N^{1−ε}, the properly rescaled gap λ_{i+1} − λ_i converges to the Gaudin distribution, a result proved in [5] for i < N^{1/4}. Sketch of the proof. In this paper the small and large constants c, C do not depend on N but may vary from line to line. We denote κ(z) = inf(|z − 2|, |z + 2|), and ϕ = ϕ(C₀) denotes a subpolynomial error parameter, for some fixed large enough C₀ > 0. Finally, we restrict the following outline and the full proof to the symmetric class, the Hermitian one requiring only changes in notation. As already mentioned, our work proceeds by interpolation with the integrable models, following the general method from [15].
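Before turning to this dynamic approach, the smallest-gap scaling heuristic mentioned above can be written out; this is a back-of-the-envelope computation in our notation (ℓ denotes the scale of the smallest gap), not a statement from the paper.

```latex
% Poissonian ansatz: ~N roughly independent gaps, each with
% P(N(\lambda_{i+1}-\lambda_i) < x) \sim x^{\beta+1} for small x.
% The smallest gap \ell then solves N (N\ell)^{\beta+1} \sim 1, i.e.
\[
  \ell \sim N^{-\frac{\beta+2}{\beta+1}}
      = \begin{cases}
          N^{-3/2}, & \beta = 1 \ (\text{symmetric class}),\\
          N^{-4/3}, & \beta = 2 \ (\text{Hermitian class}).
        \end{cases}
\]
```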
This dynamic approach requires (i) a priori bounds on the eigenvalue locations, (ii) local relaxation of the eigenvalue dynamics after a short time, and (iii) a density argument based on the matrix structure, to show that eigenvalue statistics have not changed after the short-time dynamics. In this work, the necessary estimate for (i) is the rigidity from [20]. Concerning the density argument (iii), for Theorem 1.5 we follow the Lindeberg exchange method [39] for Green's functions [19]. For Theorems 1.2 and 1.4, (iii) is obtained through the inverse heat flow from [15] (this is where smoothness is required). Our contribution is about (ii), for which we give a short and quantitative proof. The Dyson Brownian motion dynamics are defined as follows. Let B be an N × N matrix such that B_ij (i < j) and B_ii/√2 are independent standard Brownian motions, and B_ij = B_ji. Consider the matrix Ornstein-Uhlenbeck process dH_t = N^{-1/2} dB_t − (1/2) H_t dt. If λ_1(0) < · · · < λ_N(0), the eigenvalues λ(t) of H_t are given by the strong solution of the system of stochastic differential equations (1.4), where the β_k's are some Brownian motions distributed as the B_kk's. The coupling method introduced in [6] proceeds as follows. Consider y(t), the solution of the same SDE (1.4) with another initial condition y(0) = {y_1(0) < · · · < y_N(0)}, the spectrum of a GOE matrix. Then the differences δ_k(t) = e^{t/2}(λ_k(t) − y_k(t)) satisfy a long-range parabolic differential equation. Smoothing of this equation for bulk indices means that for t large enough the differences homogenize. Such estimates were proved in [17,32], with a weak error term (N^{-1-ε} with some non-explicit ε > 0). In this work, we obtain the essentially optimal (up to subpolynomial orders) estimate (1.5). This gives the relaxation step (ii) for the smallest gaps. The proof for the large gaps proceeds identically and only requires t ≫ 1/N. More importantly than the error estimate (1.5), its short proof reduces Hölder regularity to a simple maximum principle, and it also applies to edge universality. In detail, for any ν ∈ [0, 1], let the initial condition interpolate between the Wigner and GOE initial conditions, as in [32], and define u^(ν) accordingly. From now on we fix ν ∈ (0, 1) and omit it from the notation. Let f be the observable defined in (1.10). The above function is the main idea in this work. A key observation is that the quadratic singularities from the denominator in (1.9) disappear when combined with the Dyson Brownian motion evolution itself, so that the time evolution of f has no shocks. This is reminiscent of a similar argument in [8, Lemma 6.2], for a different observable. More precisely, f follows dynamics close to the advection equation, which suggests the approximation (1.13). This estimate holds with a small error term (see Proposition 3.3) because there are no eigenvalue shocks in the equation guiding f, contrary to (1.8). The approximation (1.13) has two applications. First application: relaxation at the edge, through the modified observable f̃ from (1.14). Edge universality follows from the shape of the characteristics (1.12), which take points around the edge further away from the bulk. More precisely, we choose z = z_0 = E + iη with E ∈ [−2, 0] and η > 0. A straightforward calculation based on the rigidity estimates from [20] then bounds the relevant terms. In particular, integrating the above equation in 0 ≤ ν ≤ 1, we obtain that local edge relaxation is proved for any t > ϕ^C N^{-1/3}, with an optimal error term. Such quantitative bounds can be extended similarly to other spectral regions. Second application: relaxation in the bulk. For relaxation in the bulk, we directly work with f instead of f̃.
Fix time scales t_1 < t_0, a length scale r with t_1 < r < t_0, and a bulk index k_0. We are interested in evaluating u_i(t) for some t_0 ≤ t ≤ t_0 + t_1 and |i − k_0| ≤ Nr. Assume that for any such t the maximum value of u_t occurs at some index k = k(t) with |k − k_0| ≤ Nr (this is generally wrong, but the conclusion will remain valid thanks to a finite speed of propagation estimate from [8]). We follow the maximum principle as in the analysis of the eigenvector moment flow from [8]: for any η > 0 to be chosen, denoting z = x_k(t) + iη, we obtain a differential inequality for the maximum. In the bulk of the spectrum, (1.13) holds with the good error term ϕ^C/(Nη) (see Proposition 3.3), so that the previous equation behaves similarly to its advective approximation. If r is small enough, Im f_0(z_t) ≈ Im f_0((γ_{k_0} + iη)_{t_0}) (remember z a priori depends on k). Denote m = Im f_0((γ_{k_0} + iη)_{t_0})/(N Im m_sc(γ_{k_0} + i0^+)). The above equation therefore behaves in such a way that for any η much smaller than t_1 we obtain max_{|i−k_0| ≤ Nr} (u_i(t_1) − m) = O(ϕ^C/(N^2 η)). The same estimate holds for the minimum, so that max_{|i−k_0| ≤ Nr} |u_i(t_1) − m| = O(ϕ^C/(N^2 t_1)), and in particular the desired bound follows. The estimate (1.5) follows by integration over ν ∈ (0, 1). Acknowledgement. The idea developed in this paper benefited from discussions with students of graduate classes at the Courant Institute in 2015 and 2018, the Saint-Flour summer school in 2016 and the IHES summer school in 2017. The author also thanks the organizers and participants of the workshop [41] where the questions of universality of extreme gaps, and rate of convergence in universality, were raised. This work is partially supported by the Poincaré chair and the NSF grant DMS-1812114. Our branch choice will always be Im √(z^2 − 4) > 0 for Im(z) > 0, above and in (1.12). More generally than (1.7), consider x(t), the strong solution of the general β-dynamics, where the B_k's are standard Brownian motions and x(0) is still given by (1.6); β = 1 (resp. β = 2) corresponds to the spectral dynamics with equilibrium measure GOE (resp. GUE). We still define u^(ν) accordingly. Then the function (1.10) satisfies the following dynamics. Lemma 2.1. For any Im z ≠ 0, we have an explicit evolution equation. Proof. It is a simple application of Itô's formula; we omit the time index. Applying Itô's formula twice and combining the resulting terms with (II), all singularities disappear and we obtain the claimed dynamics. To estimate f_t or f̃_t (see (1.14)), we first need some estimates on the characteristics (z_t)_{t ≥ 0} from (1.12), and on the initial values f_0, f̃_0. For this, we define the curve S. Lemma 2.2. For 0 < t < 1 and z = z_0 satisfying η = Im z > 0, we have the estimates below. Proof. Without loss of generality assume that Re z > 0. Let w = z − 2. We have (z^2 − 4)^{1/2} ∼ 2w^{1/2}, so that the first estimates follow. On S, we always have b(z) ∼ κ(z) and a(z) ∼ η, so the last estimate follows immediately. We now define the typical eigenvalue locations and the set A of good trajectories such that rigidity holds, where k̂ = min(k, N + 1 − k). The following important a priori estimates were proved in [20], for fixed t and ν = 0 or 1. The extension in these parameters is straightforward: by time discretization in t and ν first, then by Weyl's inequality to bound increments in small time intervals, and by the fact that |u_k^(ν)(t)| ≤ ‖u^(ν)(0)‖_∞ to bound increments in small ν-intervals. Lemma 2.3. There exists a fixed C_0 > 0 (remember ϕ = ϕ(C_0)) large enough such that the following holds. For any D > 0, there exists N_0(D) such that for any N > N_0 the rigidity bounds hold. Moreover, we have the following estimates on the initial condition f_0, f̃_0.
Lemma 2.4. For any λ ∈ A and z = E + iη ∈ R, we have Im f_0(z) ≤ Cϕ^{1/2} if η > max(E − 2, −E − 2), and Im f_0(z) ≤ Cϕ^{1/2} η/κ(z) otherwise. The same upper bound (naturally) holds for f̃_0. Proof. The rigidity estimate on A easily implies bounds on each term of the sum, and the estimates follow. Note that we used z ∈ R to justify the approximation of eigenvalues by their typical locations: in R, the imaginary part of z is always greater than the eigenvalue fluctuation scale. Finally, the following is an elementary calculation. We write z_t = r(z, t), for r given by the right-hand side of (1.12). Relaxation at the edge. For the following important estimate (Proposition 2.6) towards edge universality, remember the notation (1.14). Proof. For any 1 ≤ ℓ, m ≤ N^{10}, we define t_ℓ = ℓN^{-ε-10} and grid points z^{(m)}. We also define the stopping times (with respect to F_t = σ(B_k(s), 0 ≤ s ≤ t, 1 ≤ k ≤ N)). We will prove that for any D > 0 there exists N_0(ε, D) such that for any N ≥ N_0(ε, D), the estimate (2.6) holds. We first explain why the above equation implies the expected result, by a grid argument in t and z. On the one hand, we have a set inclusion: indeed, for any given z and t, choose t_ℓ, z^{(m)} such that t_ℓ ≤ t < t_{ℓ+1} and |z − z^{(m)}| < 5N^{-10}. Then |f̃_t(z) − f̃_t(z^{(m)})| < N^{-2}, say, as follows directly from the definition of f̃_t and the estimate |v_k(s)| ≤ ϕN^{-2/3} (obtained by the maximum principle). Moreover, under the event ∩_k A_{ℓ,m,k}, we have |f̃_{t_ℓ}(z^{(m)}) − f̃_t(z^{(m)})| < N^{-2}, as follows easily from (2.2). On the other hand, for some fixed universal c > 0 and arbitrarily small a > 0, for any martingale M we have a maximal-inequality bound (see e.g. [37, Appendix B.6]), together with the corresponding deterministic estimate. We now prove (2.6). We abbreviate t = t_ℓ, z = z^{(m)} for some 1 ≤ ℓ, m ≤ N^{10}. Let g_s(z) = f̃_s(z_{t−s}). From Lemmas 2.2 and 2.4, the initial value is under control, so that we only need to bound the increment of g. Using Lemmas 2.1 and 2.5, Itô's formula gives the drift contribution, where we used Lemma 2.2, κ(E) = κ(z) = b(z) on S, and the relevant bounds for s < t ∧ τ. Finally, we want to bound sup_{0 ≤ u ≤ t} |M_u|, where M is the martingale part. For some fixed universal c > 0 and arbitrary a > 0, the corresponding maximal bound holds with overwhelming probability. Let k_j = jϕ^2 and I_j = ⟦k_j, k_{j+1}⟧ ∩ ⟦1, N⟧, 0 ≤ j ≤ N/ϕ^2. Then (2.9) holds. For each 0 ≤ j ≤ N/ϕ^2, pick m = m_j such that |z^{(m)} − γ_{k_j}| < N^{-9}. First, as v_k(s) ≥ 0 for any k and s, for s ≤ t ∧ τ we have Σ_{k ∈ I_j} v_k(s) ≤ η_m Im f̃_s(z^{(m)}). To estimate Im f̃_s(z^{(m)}), introduce ℓ such that t_ℓ ≤ s < t_{ℓ+1}. On the event ∩_k A_{ℓ,m,k}, we have |f̃_{s∧τ}(z^{(m)}) − f̃_{t_ℓ∧τ}(z^{(m)})| < N^{-2}, as seen easily from (2.2). We therefore proved the desired bound, and in particular the same estimate holds for sup_{k ∈ I_j} v_k(s). Lemma A.1 allows us to bound the supremum over k ∈ I_j, where for the last inequality we evaluate the corresponding deterministic integral in Lemma A.2. We now state quantitative relaxation of the dynamics at the edge. Remember that λ and y satisfy the same equation (1.7), with respective initial conditions a generalized Wigner and a GOE spectrum. Theorem 2.7. Let ε, ε′ > 0 be fixed small constants. For any D > 0 there exists N_0 such that for any N ≥ N_0 the following holds. Proof. Assume first that k ∈ ⟦ϕ^5, N^{1−ε}⟧, and define the rescaled difference accordingly. Note that κ(γ_k)^{1/2} ∼ (k/N)^{1/3}. Therefore, by Lemma 2.3 and Proposition 2.6, the required bound follows. In particular, for any k ∈ ⟦ϕ^5, N^{1−ε}⟧ and t ∈ [0, N^{−ε′}], we have the stated estimate. Note that all our estimates have been uniform in ν ∈ (0, 1), so that the above equation holds for any N greater than some N_0 independent of ν.
The above equation easily implies that for any p ≥ 1, E(|v_k(t)|^{2p}) satisfies the analogous bound; Markov's inequality then concludes the proof when k ∈ ⟦ϕ^5, N^{1−ε}⟧. For k ∈ ⟦1, ϕ^5⟧, repeating the same reasoning with z = z_0 = γ_{k_0} + iη yields the same estimate up to a deteriorated ϕ^{10} exponent, say. 2.3 Proof of Theorem 1.5. For a test function F, we rely on [27,35], so that we only need to prove (2.10) for any diverging k ∈ ⟦1, N^{1−ε}⟧. From Theorem 2.7, for t > (k/N)^{1/3} ϕ^{11}, (2.10) holds for any Gaussian divisible ensemble of the type H_t = e^{-t/2} H_0 + (1 − e^{-t})^{1/2} U, where H_0 is any initial generalized Wigner matrix and U is an independent standard GOE matrix. We now construct a generalized Wigner matrix H_0 such that the first three moments of H_t match exactly those of the target matrix H and the differences between the fourth moments of the two ensembles are less than N^{-c} for some positive c. The existence of such an initial random variable is given, for example, by [18, Lemma 3.4]. By the following Theorem 2.8, we have E_{H_t} F(X_k) = E_H F(X_k) + o(1). The previous two equations conclude our proof of (2.10), and therefore of Theorem 1.5 (the proof in the multidimensional case is analogous). The following theorem is a slight extension of the Green's function comparison theorem from [19] (see for example [8, Theorem 5.2] for an analogous statement for eigenvectors). Compared to [19], we include the following minor modifications: (1) we state it for energies in the entire spectrum; (2) we allow the test function to be N-dependent. Theorem 2.8 can be proved exactly as in [19], so we do not repeat it. Note that at the edge, the four-moment matching can be replaced by two moments. For our applications, this improvement is not necessary. Assume the moments match for all 1 ≤ i ≤ j ≤ N and 1 ≤ k ≤ 3, and assume also that there is an a > 0 such that the fourth moments are close. Then there is ε > 0 depending on a such that for any integer k, any choice of indices 1 ≤ j_1, ..., j_k ≤ N and smooth bounded Θ : R^k → R, the comparison estimate holds. 3 Relaxation from a maximum principle. 3.1 Result. The main result of this section is the following. Again, remember that λ and y satisfy the same equation (1.7), with respective initial conditions a generalized Wigner and a GOE spectrum. We denote by δ̄_k(t) the corresponding locally averaged difference. Theorem 3.1. Let α, δ > 0 be fixed, arbitrarily small, and N^{-1+δ} < t < N^{-δ}. For any fixed (small) ε > 0 and (large) D > 0, there exists N_0 such that for any N ≥ N_0 and any k ∈ ⟦αN, (1 − α)N⟧ we have the stated bound. Corollary 3.2. Let α, δ > 0 be fixed, arbitrarily small, and N^{-1+δ} < t < N^{-δ}. Then for any fixed (small) ε > 0 and (large) D > 0, for large enough N, for any k ∈ ⟦αN, (1 − α)N⟧ we have the corresponding coupling estimate. Proof. Decompose the difference into three terms. From Theorem 3.1, the first two terms do not exceed N^ε/(N^2 t) with probability 1 − N^{-D}. For the third term, we have the bound (3.1). As u_k(0) = y_k(0) − λ_k(0), using Lemma 2.3 we obtain that the main contribution from (3.1) is of the same order with overwhelming probability, where we used that γ_k is in the bulk. This concludes the proof. 3.2 Approximation along characteristics. Proposition 2.6 gave some a priori bounds on f̃_t(z), especially useful for universality at the edge of the spectrum. The following estimate, in the bulk, goes further by justifying (1.13) in the bulk of the spectrum. Proposition 3.3. Consider the dynamics (2.2) for some fixed β > 0. Let ε, κ > 0 be fixed (small) constants. Then for any D > 0 there exists N_0(ε, D) such that for any N ≥ N_0(ε, D) the approximation holds. Proof. We strictly follow the proof of Proposition 2.6.
Actually, the only differences are (i) the observable, now f instead of f̃ (but the equations are the same), and (ii) simplifications, as the subtle edge estimates are no longer required for our bulk estimates. Details are left to the reader. 3.3 Localized maximum principle. The maximum principle was used in random matrix theory for the relaxation of eigenvector statistics along the Dyson Brownian motion dynamics in [8,9]. We follow the method of these works, with the following analogue of [9, Proposition 3.7], which we will use iteratively. We first need to introduce a few notations analogous to [9]. Let k_0 be a fixed index in the bulk, and let ψ = N^ω be an error parameter, where ω > 0 is small and fixed. We also define the local averages ū accordingly. Proof. The proof proceeds as for [9, Proposition 3.7], the only substantial difference being the a priori estimate in the approximation by short-range dynamics: we have the following analogue (3.6) of [9, Lemma 3.5]. Let c_{jk} = 1/(N(x_j − x_k)^2) and write B = S + L, splitting into short-range and long-range parts. Denote by U_S(s, t) the semigroup associated with S from time s to time t, i.e. ∂_t U_S(s, t) = S(t) U_S(s, t) for any s ≤ t, and U_S(s, s) = Id. The notation U_B(s, t) is analogous. Then, for large enough N, the following approximation holds with overwhelming probability, for any t/2 < u < v < t and |k − k_0| small enough. Compared to [9, Lemma 3.5], note that the above estimate is simpler because it involves only one particle, but it contains the extra term d/(Nt) + |u − v|/(Nt) due to the error between the local means ū_k(s) and ū_{k_0}(v). The proof of (3.6) is the same as for [9, Lemma 3.5], with Proposition 3.3 as the main tool. With the a priori estimate (3.6), the rest of the proof is identical to that of [9, Proposition 3.7]. 3.4 Proof of Theorem 3.1. The quantitative relaxation of Dyson's Brownian motion in the bulk will follow from the proposition below, itself obtained by iterating the maximum principle from the previous subsection. Proposition 3.5. Let α, δ > 0 be fixed, arbitrarily small, and N^{-1+δ} < t < N^{-δ}. For any fixed (small) ε > 0 and (large) D > 0, there exists N_0 such that for any N ≥ N_0 and any k ∈ ⟦αN, (1 − α)N⟧ the stated probability bound holds. Proof. Our time horizon parameter t is fixed, and we consider the scales t/ψ^{10} and d = t/ψ. Remember that the definition (3.3) depends on a fixed bulk index k_0. From Proposition 3.4, for any m ≥ 1 we have a contractive estimate. Iterating this equation up to m_0 = min{m : t/2^m < ψ^{100}/N} concludes the proof, as the centre k_0 is arbitrary. From Proposition 3.5, the proof of Theorem 3.1 is straightforward: we evaluate the 2p-th moment of δ_k(t) − δ̄_k(t) and use Markov's inequality as in the proof of Theorem 2.7. 4 Extreme gaps. 4.1 Reverse heat flow. We first state a quantitative analogue of [15, Proposition 4.1]. Its proof is essentially the same as in [15]. In the following, dγ denotes the standard Gaussian measure, which is reversible for the Ornstein-Uhlenbeck dynamics with generator A = (1/2)∂_{xx} − (x/2)∂_x. Lemma 4.1. Let 0 < 2a < b < 1. Assume e^{-V} is a centered probability density, with V smooth on scale η = N^{-a} in the sense of (1.2) and ∫_{[−x,x]^c} e^{-V} ≤ θ^{-1} e^{-x^θ} for some θ > 0. Denote u = d(e^{-V})/dγ. Let t = N^{-b}. Then for any D > 0 there exist C > 0 and a probability density g_t with respect to γ such that (i) ∫ |e^{tA} g_t − u| dγ ≤ CN^{-D}; (ii) g_t dγ is centered, has the same variance as u dγ, and satisfies ∫_{[−x,x]^c} g_t dγ ≤ θ^{-1} e^{-x^θ} for some θ > 0. Still using (1.2), we easily have ∫ t^k |A^k u| dγ ≤ C^k t^k η^{-2k}. We assume H is smooth on scale η.
By the reverse heat flow (Lemma 4.1) combined with Corollary 3.2, there exists a generalized Wigner matrix H̃ such that, if H̃_t denotes its evolution under the Dyson Brownian motion dynamics with initial condition H̃, the total variation distance between H̃_t and H is of order N^{-D} for any D, provided t ≤ N^{-ε} η^2. In particular, the total variation distance between their spectra is also at most N^{-D}. On the other hand, for such t, from Corollary 3.2 the gaps between eigenvalues of H̃_t can all be coupled with some GUE gaps up to an error of at most N^ε/(N^2 t). The total variation distance between the bulk spectra of H and of the GUE is therefore at most N^ε/(N^2 t). The choice t = N^{-ε} η^2 concludes the proof.
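As a quick empirical companion to the smallest-gap scaling discussed after Corollary 1.3, the following minimal Python sketch samples GUE matrices and rescales the smallest bulk spacing by N^{4/3}; the matrix sizes, sample counts and bulk window are illustrative choices, not taken from the paper.

```python
import numpy as np

def gue_eigenvalues(n, rng):
    """Sample eigenvalues of an n x n GUE matrix, normalized so that the
    spectrum concentrates on [-2, 2] (entry variances approximately 1/n)."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    h = (a + a.conj().T) / (2 * np.sqrt(n))
    return np.linalg.eigvalsh(h)

rng = np.random.default_rng(1)
for n in (200, 400, 800):
    smallest = []
    for _ in range(50):
        lam = np.sort(gue_eigenvalues(n, rng))
        bulk = lam[(lam > -1.0) & (lam < 1.0)]  # stay away from the spectral edges
        smallest.append(np.diff(bulk).min())
    # For beta = 2 the smallest gap lives on scale N^{-4/3}, so the rescaled
    # quantity below should stay of order one as n grows.
    print(n, np.mean(smallest) * n ** (4 / 3))
```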
7,831.6
2018-12-26T00:00:00.000
[ "Mathematics" ]
VOLTAGE AND POWER LOSSES CONTROL USING DISTRIBUTED GENERATION AND COMPUTATIONAL INTELLIGENCE The paper analyzes the possibility of reducing active power losses in a power system, constrained by regulated voltage levels, by implementing appropriate distributed generation capacity. The objectives of this paper were achieved by developing hybrid methods based on an artificial neural network and a genetic algorithm. Methods have been developed to determine the impact of different distributed generation powers on all terminals in the observed system. The method that uses an artificial neural network and a genetic algorithm is applicable to radial distribution networks, while the method using load flow and a genetic algorithm is applicable to doubly-fed distribution networks. For comparison purposes, an additional method was developed that uses neural networks for the decision-making process. Data for training the neural network were obtained by power flow calculation in the DIgSILENT PowerFactory software on a part of the Croatian distribution network. The same software was used as an analytical tool for checking the correctness of the solutions obtained by optimization. Introduction Distributed generation (DG) has caused changes in the distribution network management paradigm. Increased DG presence has made the distribution network active, with all the new technical challenges that lay the foundations for smart-grid development. Consumers cease to have a dominant influence on the current-voltage conditions, while DG production has an increasingly important impact on distribution network performance. If the local guidelines for DG implementation are taken into account, DG can improve distribution network performance through loss reduction, transmission and distribution congestion alleviation, or improvements in network reliability and power quality [1]. Each distribution network operating condition has its own challenges that must be overcome in order to determine the DG impact in the observed distribution system. System assessment can be performed if technical assumptions and measurements are available, but usually a power flow analysis needs to be performed. Periodic DG production, such as that from some types of renewable energy power plants, can create additional problems and challenges for the distribution system operator, which underlines the need for an advanced power distribution system management solution. In order to achieve such a solution, it is crucial to develop precise mathematical optimization algorithms that can be effectively implemented in the distribution network management system. According to [2], an automated distribution network, which is a prerequisite for the smart grid, must contain a fast and accurate solution for power flow and current-voltage condition control. Respecting the listed demands, artificial neural networks (ANNs) are suggested as a universal solution because of their significant ability to solve nonlinear problems in a short period of time and with acceptable precision. In order to achieve the full benefit of an ANN, it needs to be well organized and well built: sensitive enough to perform real-time optimization of the distribution network, yet robust enough to behave in a traceable manner.
The learning and adaptation characteristics of ANNs are recognized to have great potential in control systems, because they give them the ability to approximate nonlinear functions, suit parallel and distributed processing, and model multivariable systems naturally [3]. Since an ANN is based on human experience and on functional links between input and output variables, it can be used in various learning mechanisms and self-organization concepts: pattern recognition, trend determination, forecasting, function fitting, etc. An ANN can be designed, trained and fine-tuned for the purpose of control parameter assessment, which can then be used for power loss minimization by DG implementation. This goal can be achieved, as presented in this paper, by using an ANN as an alternative to the loss calculation method presented in [4]. The optimization process implies the determination of the optimal DG size and location using one of the many optimization methods. Tan et al. [5] review some of the most popular optimization methods and promote the applicability of computational intelligence in distributed generation optimization and implementation. El-Ela et al. [6] and Yang et al. [7] presented successful genetic algorithm (GA) usage in different types of optimization problems in the power system. This paper proposes an optimization method based on the joint operation of an ANN and a GA (ANN-GA) when possible, or iterative power flow calculations in combination with a GA (PF-GA) when necessary. The necessity of power flow calculations manifests itself in the inability of the ANN to perform quickly and accurately for specific operational scenarios. Although more computationally and time demanding than the ANN-GA method, the PF-GA method remains acceptable in terms of the performance requirements for planning and operation of the power system. The authors also evaluate another method consisting of one ANN responsible for loss estimation and another, control ANN developed for the decision-making process (ANN-cANN). Overall, the ANN shows good behaviour and great robustness, along with satisfactory solutions if provided with quality training data, and can be used in combination with a GA for dynamic determination of DG size and location in some power system operating conditions. Optimization of power flow and voltage levels The optimization problem is usually presented as a system of an objective function and constraint equations [8]: minimize f(x) subject to g(x, u) = 0 and h(x, u) ≤ 0, where the vector u represents the control variables, x represents the state variables, and the scalar f(x) is the objective function representing the optimization problem. The constraints of the observed problem are given by the system of equations g(x, u) and inequalities h(x, u). The objective function of optimal power flow is primarily given by the minimization of active power losses, which can be achieved by adjusting voltage levels in generator nodes within predetermined limits. Objective function The main objective function can be described as min f = P_losses, where P_losses are the losses of active power in the observed system. Constraints The objective function of active power loss minimization is not suitable without technical constraints and a correct formulation. Active power constraints Active power constraints are given by the nodal power balance expression [8] (see the reconstruction below), where: n − number of nodes (terminals) in the network; P_Gi − active power production in the i-th node; P_ti − active power consumption in the i-th node; θ_ij − angle of the mutual admittance Y_ij of nodes i and j; G_ij − mutual conductance of nodes i and j; B_ij − mutual susceptance of nodes i and j; G_ii − own conductance of node i; B_ii − own susceptance of node i.
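The balance expression itself did not survive extraction; the following is a plausible reconstruction using the standard AC power-flow form and the notation listed above. This is an assumption about the exact form used in [8], not a verbatim quote:

```latex
P_{Gi} - P_{ti} = V_i \sum_{j=1}^{n} V_j \left( G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij} \right),
\qquad i = 1,\dots,n
```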
Reactive power constraints Reactive power constraints are given by the analogous balance expression, where: n − number of nodes in the network; Q_Gi − reactive power production in the i-th node; Q_ti − reactive power consumption in the i-th node. Voltage level constraints Voltage level constraints are given by the expression V_i,min ≤ V_i ≤ V_i,max for every node i. Power production constraints for generator nodes The power production constraint for a generator node arises from the generator capability curve and technical operational limits, and can be described by the expression P_Gi−min ≤ P_Gi ≤ P_Gi−max, where: P_Gi−min, P_Gi−max − power production limits in the i-th node; N_pv − number of PV nodes; N_0 − DG node. During the optimization process, voltage levels and the risk of loss of stability have to be taken into account along with the main goal, the reduction of power losses, since an objective function formed with the purpose of active power loss reduction only could provide a technically unsustainable solution, without providing a sufficient amount of reactive power reserves in case one or more elements fail in the observed system. Artificial neural network design and implementation The aforementioned optimization problem calls for the development of a convenient solution. The complexity and nonlinear interdependencies of the observed optimization problem make it difficult to provide a fast and correct solution using classical (exact) optimization techniques such as linear programming, the interior point method or mixed integer programming [3]. Instead of exact techniques, new methods for solving complex nonlinear problems are introduced, using ANNs and other computational intelligence methods. Different types of ANN will perform differently and provide very distinct solutions, so determining the typology and structure is crucial for proper ANN behaviour. Feedforward neural networks represent the most common type of ANN, but occasionally the observed mathematical problem demands the use of a radial basis function (RBF) network or a Kohonen self-organizing network. Also, some uncommon types, such as the bi-directional RNN, the recurrent neural network (RNN) or the stochastic neural network, could be used, usually assisted by another computational intelligence method. When there is an easy way to generate a significant amount of input and target examples, and when there is a clear solution for a seemingly complex problem which can be described by a flow chart, a back-propagation (BP) ANN can be used. The calculation of losses is such a problem, and an ANN can successfully replace the calculation process. An ANN consists of neuron layers which can be organized as required by the observed problem. A BP ANN has an obligatory input layer, a mandatory output layer, and at least one hidden layer which comprises the largest number of neurons. The number of hidden layers is theoretically unbounded, but usually one to four layers are adequate to solve any kind of complex problem [9]. Each layer has to be fully connected to the adjacent layer by every neuron, as shown in Fig. 1; a minimal sketch of such a network is given below. Connections between neurons can include weight factors which determine their behaviour.
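To make the layer structure concrete, here is a minimal Python sketch of the forward pass of a fully connected feed-forward network, using the 4-15-1 topology described later in the paper; the logistic activation and random initialization are illustrative assumptions, not the authors' exact choices.

```python
import numpy as np

def forward(x, weights, biases):
    """Forward pass of a fully connected feed-forward ANN: each hidden layer
    applies an affine map followed by a sigmoid; the output layer is linear."""
    a = x
    for w, b in zip(weights[:-1], biases[:-1]):
        a = 1.0 / (1.0 + np.exp(-(w @ a + b)))  # hidden layers: logistic activation
    return weights[-1] @ a + biases[-1]          # output layer: linear

rng = np.random.default_rng(0)
sizes = [4, 15, 1]  # four inputs, one hidden layer of 15 neurons, one output
weights = [rng.normal(scale=0.5, size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
# Hypothetical scaled inputs: DG power, injected current, LV voltage, MV voltage.
y = forward(np.array([0.7, 0.05, 1.0, 1.01]), weights, biases)
```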
The relationship between the input and output values of a multi-layer ANN can be represented as in [10]; the learning process is a variant of the Delta Rule. Correct design and successful testing in the training process enable the ANN to provide reasonable outputs for every new set of inputs. The layers and neurons in that case behave as equation variables, and the connections between them represent nonlinear interdependencies. Usually the precision of the ANN outputs increases as the training data increase. For the purpose of this paper, the ANN is designed with the aim of substituting the analytical approach to loss calculation presented in papers [4,11,12], and is trained with simulation data as described later in this paper. Acharya et al. [4] presented an analytical formulation based on the exact loss formula, and Wang et al. [11] and Gözel et al. [12] presented a loss sensitivity formulation which enabled power loss minimization by an analytical method without the use of complex calculations involving the admittance matrix, the inverse of the admittance matrix or the Jacobian matrix. The ANN design and implementation in this paper continues this line of research, but instead of the presented analytical approach, the development passes to the computational intelligence approach. Once designed and trained correctly, the ANN has sufficient capability to substitute the analytical approach, thus relieving computing requirements and reducing execution time. Neural network training As mentioned before, more training data increases the ANN implementation success rate. A sufficient quantity of training data is determined by the ANN behaviour; when the ANN outputs are in accordance with the calculation outputs, the desired behaviour may be declared achieved. The input training data for the purpose of this paper consist of: DG active power production varying from 0 kW, representing no production, to 1 MW, representing full production, in 10 kW steps; the injected current from the corresponding DG production, given in kA; and the voltage levels on the low-voltage and medium-voltage sides when the DG is active, given in per-unit (p.u.) values. The target training data for the ANN learning are the total feeder losses for each observed scenario. The designed ANN has four input neurons and one output neuron, connected through one hidden layer consisting of 15 neurons. The training of the ANN is performed by the Levenberg-Marquardt algorithm for nonlinear least squares problems; a hedged sketch of such a training loop is given below. The calculations necessary for training data generation (power losses, voltages and currents) are performed using the DIgSILENT PowerFactory software. The results of the observed case are introduced into tables. Power loss evaluation and identification is necessary due to the lifetime impact on the equipment included and due to the economic operation of the power system [7]. The ANN training process performance is shown in Fig. 2. The total installed peak power in the observed system is 2,59 MVA with an average power factor of 0,9. A peak-loaded network with a load diversity factor of one, defined as the worst-case scenario, is considered as the operating condition studied for the purpose of this paper. Training data for the ANN simulation and the performance evaluation of the proposed ANN-GA method were generated for time-independent loads and time-independent generation, of equal values in every observed scenario.
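The Levenberg-Marquardt training step can be sketched with scipy's least-squares solver; this is a minimal, hedged illustration in which a synthetic training set merely stands in for the DIgSILENT-generated data, and the flat parameter packing is an implementation assumption.

```python
import numpy as np
from scipy.optimize import least_squares

def unpack(p):
    """Split a flat parameter vector into the 4-15-1 weights and biases."""
    w1 = p[:60].reshape(15, 4); b1 = p[60:75]
    w2 = p[75:90].reshape(1, 15); b2 = p[90:91]
    return w1, b1, w2, b2

def net(p, x):
    w1, b1, w2, b2 = unpack(p)
    h = 1.0 / (1.0 + np.exp(-(x @ w1.T + b1)))   # hidden layer, logistic
    return (h @ w2.T + b2).ravel()               # linear output: estimated losses

rng = np.random.default_rng(1)
x_train = rng.uniform(size=(200, 4))             # placeholder for (DG power, current, V_lv, V_mv)
y_train = 0.1 + 0.5 * (x_train[:, 0] - 0.4) ** 2 # placeholder loss curve with a minimum
residuals = lambda p: net(p, x_train) - y_train
p0 = 0.1 * rng.standard_normal(91)
fit = least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt
```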
DG types differ by their energy source and time-dependent production [13]. For the purpose of this research, the DG is modelled as a PQ node, with a power factor of cos φ = 1, and a power that can vary within technical limitations from 100 kW to 1350 kW. The reason for such modelling is a real type of synchronous Stamford generator widely used in DG applications in Croatia, with a nominal power of 1350 kW and a speed of 1500 min⁻¹. Figure 3 Voltage drop in the fully loaded distribution network Under normal operating conditions the observed distribution network does not have fully loaded terminals and is never doubly-fed, but it is important to notice what happens to the voltage values in that possible operating scenario. One possible solution for increasing voltage values is the planning of adequate distributed generation on a convenient terminal. In this case, continuous electric power production would be an adequate type, as a stable source the network operator could rely on. The ANN results, in correspondence with the DIgSILENT PowerFactory calculation results, are shown in Fig. 5. The calculation results provided by the DIgSILENT PowerFactory power flow calculation are considered the actual operating values, since the aforementioned software has proven its reliability and precision. The ANN is first tested on one terminal, randomly selected for DG implementation. The performance of the ANN, in terms of result correspondence, is acceptably good; the comparison of the loss results given by DIgSILENT PowerFactory and by the ANN after proper training shows that the ANN manages to determine the valid value of power losses, thus successfully replacing the expressions proposed by [4,11,12]. Voltage levels are respected in such a way that the DG power is limited to those values that do not cause the voltage levels to be exceeded. The DG power limitations are obtained by a previous power flow calculation. The results generated by the ANN generally match the results provided by the DIgSILENT PowerFactory calculations, except in the case of 400 kW DG production, where a significant difference in the results is evident. This is a consequence of the insufficient sensitivity of the ANN to nonlinear changes and of the specific distribution network topology and operation. After fine-tuning, the ANN loss estimation performs appropriately when applied to radial, single-fed networks, but sometimes returns false results when applied to doubly-fed distribution networks. Since most rural areas of the Croatian distribution network are single-fed, the ANN-GA method may be usable in real-life conditions. For doubly-fed networks, the PF-GA method is developed, which consumes more computing power and execution time but provides an environment free of the risk of erroneous results. Finding the optimal solution by DG implementation is conducted through appropriate node and DG size assessment. This could be done analytically, by comparing the results and finding the lowest values, or by using optimization algorithms. In this paper, a GA is used for optimization purposes.
Optimal solution finding Once the ANN is designed and successfully tested, the optimal solution needs to be found. The optimization process can be described as a decision-making process with the goal of finding the globally best solution. In order for that process to perform correctly, the optimization algorithm has to be well designed. Increasing research efforts have been directed at applying various types of decision-making ANNs to optimization problems. Opinions about ANN performance vary, from considering the ANN highly effective for unstructured decision-making to emphasized reservations towards ANN decision-making, with other methods proposed instead. An ANN for decision-making and optimal solution finding is not primarily used in this paper, although the authors developed and used one with the aim of testing its performance and comparing the results. Singh et al. [14] successfully developed a GA for the optimum allocation of distributed generation based on technical and economic constraints. GAs were also successfully used in noteworthy papers by Biswas et al. [1], El-Ela et al. [6], López-Lezama et al. [15] and Harrison et al. [16]. In this paper the GA is used in a specific manner, partially different from previous authors, since the starting population is created with the same active power for every individual in the population, where individuals differ by the connected terminal. This approach is in line with the usage of the particle swarm optimization (PSO) technique and provides a good basis for future research. The arrangement of the population and the individual coding is shown in Fig. 4 [9]. Another approach to GA coding could employ each terminal as a population, where the individuals of that population would be represented by different active powers. In both coding approaches the result will be the same: the individual that best meets the fitness function will be named the best individual. The coding of the DG has to be a fixed-length bit string in order for the GA to function properly [17]. Each position in a string is presumed to represent a particular feature of an individual in a population, DG power and location in this case. Feature evaluation is determined by the values stored in particular positions in the coding [9]. An advanced approach to problem coding and formulation, such as tree encoding, could be applied if the results are not satisfactory. For the purpose of this paper, binary encoding was performed and satisfactory results were obtained; a minimal sketch of such an encoding and GA loop is given below. As mentioned, the ANN-cANN method was developed for comparison purposes, in order to evaluate the control ANN performance in the decision-making process. The algorithm of the ANN-cANN method is shown in Fig. 6. The results obtained from the additional cANN were compared with the results obtained by the GA. Inefficient and sometimes improper behaviour of the cANN was concluded, mostly because of pinning to local minima. Therefore, the approach that uses the ANN for loss estimation, thereby bypassing the analytical approach, and the GA for optimization shows better implementation and usefulness. Immediately after drawing the conclusion that the ANN-cANN method does not produce the desired effect, the said method was abandoned and replaced with the ANN-GA and PF-GA methods, respectively.
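A minimal Python sketch of the binary-encoded GA described above: each individual is a fixed-length bit string encoding a terminal index and a DG power level, and the fitness stands in for the ANN or power-flow loss evaluation. The bit widths, operator choices and synthetic loss surrogate are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(2)
N_TERMINALS, POWER_BITS, TERM_BITS = 16, 7, 4   # 7 bits -> 0..127 steps of 10 kW
L = TERM_BITS + POWER_BITS                      # fixed-length bit string

def decode(bits):
    term = int("".join(map(str, bits[:TERM_BITS])), 2) % N_TERMINALS
    power_kw = 10 * int("".join(map(str, bits[TERM_BITS:])), 2)  # 10 kW resolution
    return term, power_kw

def fitness(bits):
    """Negative estimated losses; a synthetic surrogate standing in for the
    ANN estimate or the power-flow calculation."""
    term, p = decode(bits)
    return -((p - 700) ** 2 * 1e-4 + abs(term - 8))

pop = rng.integers(0, 2, size=(40, L))
for _ in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]              # truncation selection
    kids = parents[rng.integers(0, 20, size=40)].copy()
    mates = parents[rng.integers(0, 20, size=40)]
    cx = rng.integers(1, L, size=40)                     # one-point crossover
    for i in range(40):
        kids[i, cx[i]:] = mates[i, cx[i]:]
    kids[rng.random(size=kids.shape) < 0.02] ^= 1        # bit-flip mutation
    pop = kids
print(decode(max(pop, key=fitness)))                     # best (terminal, power)
```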
The ANN-GA and PF-GA methods are almost identical, with the significant difference being how the impact of distributed generation is evaluated. In the ANN-GA method, the ANN is used for loss estimation instead of loss calculation, and the GA is used for optimization purposes. The PF-GA method differs from the aforementioned in the calculation of losses: in the PF-GA method, the values for losses are obtained by power flow calculations instead of ANN estimation. The proposed method's algorithm is shown in Fig. 7. Power flow calculations are performed in MATLAB using the MATPOWER package. After conducting all simulations and the implemented calculations using the ANN-GA method on the observed radial distribution network, the optimal power of distributed generation proved to be 700 kW on terminal 8, located in the middle of the distribution feeder. A control analysis by the analytical approach was established using the DIgSILENT PowerFactory software, which led to the best suitable solution: 800 kW DG on terminal 8. The difference in the results is a consequence of ANN imperfection, so for future work a more accurate ANN will be developed. When a 700 kW DG is placed on terminal 8, the voltage characteristic is improved, as shown in Fig. 8. Figure 8 Voltage levels with 800 kW DG on Terminal 8 In order to evaluate the third, PF-GA, method, the same optimization process was conducted. In the case of the PF-GA method, the optimization result is similar to the expected known best result. With a view to further verifying the proposed methods, the authors tested the ANN-GA and PF-GA algorithms on multiple network models. The MATPOWER package was used for the power flow calculations and for the purpose of modelling multiple power systems. In the case of PF-GA usage it is not necessary to perform a separate constraint satisfaction step, since the power flow calculation is based on the constraint equations. In every given case both the ANN-GA and the PF-GA method took an acceptably short execution time with regard to the short-term management of the distribution network, so it can be concluded that the proposed methods could prove useful for future challenges in the power system. Conclusion Distributed generation (DG) is increasingly represented in the electrical distribution network, so the influence of DG needs to be properly evaluated and rated. Misjudging the effects of DG influence could be hazardous for the power system. A fast and correct solution for evaluating the DG influence on the distribution network can be provided using artificial neural networks (ANNs), because of their ability to solve nonlinear mathematical problems quickly and with great precision. A back-propagation ANN is designed for estimating the active power losses in the electric power system, thus replacing the analytical approach. The ANN is trained with power flow calculation results provided by the DIgSILENT PowerFactory software. The optimization process was conducted using a genetic algorithm (GA). The algorithm comprising an ANN for loss estimation and a GA for optimization (ANN-GA) proved useful in radial distribution networks. For doubly-fed networks, an additional algorithm consisting of MATPOWER power flow calculations and a GA for optimization (PF-GA) was developed. In order to investigate the possibility of using an ANN in the decision-making process, an additional method was developed consisting of an ANN for loss estimation and a control ANN for decision-making (ANN-cANN), but it did not show consistency and traceability of results. The optimization results were checked for correctness analytically with the DIgSILENT PowerFactory software.
The ANN-GA and PF-GA methods were tested on multiple distribution network models, and in every observed scenario both methods proved useful regarding execution time and optimization accuracy, thereby indicating the direction for developing power system management solutions. Figure 1 Structure of the artificial neural network [3]. Learning, the next step in ANN design, evaluates the efficacy of the ANN creation. Learning starts with determining the relationships between layers and neurons by error assessment. The error represents the difference between the target examples and the actual ANN outputs. Based on the errors, the weight factors and biases are changed accordingly. Weight factors are changed according to the training data and based on the expression in [10], with: η − learning rate; t_pj − j-th component of the p-th target output; o_pj − j-th component of the p-th computed output; i_pi − i-th component of the p-th input pattern; δ_pj − error between target and computed output. Figure 2 Performance of ANN training (validation performance 9,571e-05 at epoch 4; mean squared error over 6 epochs). The distribution network observed and modelled is based on a part of the Croatian grid company's distribution network. The nominal voltages of the observed network are 35(20) kV and 0,4 kV, and it consists of 48 nodes, 23 transformers and 25 different low-voltage loads. The observed distribution network is connected to the parent network on two sides, two major junctions in reality, but under usual operating conditions it is never doubly-fed due to the operator's technical conditions. If the network is fully loaded, the voltage values drop under 0,85 p.u., as shown in Fig. 3. Figure 6 Algorithm for finding the best solution by the ANN-cANN method. A separate ANN for determining the best suitable terminal and DG size for voltage value regulation and active power loss minimization is designed and set as a control ANN for the primary ANN, the one estimating the active power losses. The cANN is based on pattern recognition in a pattern vector. The problem identified in the 400 kW case is solved and the ANN behaviour is improved by managing the weight factors of the neuron connections, the biases between neurons, and the number of hidden neurons or hidden layers. Not all of the mentioned activities are necessary; the number of neurons could be reduced, or some neurons could be designated as unreachable by increasing weight factors. Table 1 Results of simulation in DIgSILENT and by ANN
5,262.6
2016-08-16T00:00:00.000
[ "Computer Science", "Engineering" ]
Improved bridge constructs for stochastic differential equations We consider the task of generating discrete-time realisations of a nonlinear multivariate diffusion process satisfying an Itô stochastic differential equation, conditional on an observation taken at a fixed future time-point. Such realisations are typically termed diffusion bridges. Since, in general, no closed form expression exists for the transition densities of the process of interest, a widely adopted solution works with the Euler–Maruyama approximation, by replacing the intractable transition densities with Gaussian approximations. However, the density of the conditioned discrete-time process remains intractable, necessitating the use of computationally intensive methods such as Markov chain Monte Carlo. Designing an efficient proposal mechanism which can be applied to a noisy and partially observed system that exhibits nonlinear dynamics is a challenging problem, and is the focus of this paper. By partitioning the process into two parts, one that accounts for nonlinear dynamics in a deterministic way, and another as a residual stochastic process, we develop a class of novel constructs that bridge the residual process via a linear approximation. In addition, we adapt a recently proposed construct to a partial and noisy observation regime. We compare the performance of each new construct with a number of existing approaches, using three applications. Introduction Diffusion processes satisfying stochastic differential equations (SDEs) provide a flexible class of models for describing many continuous-time physical processes. Some application areas and indicative references include finance, e.g. Kalogeropoulos et al. (2010) and Stramer et al. (2010), reaction networks, e.g. Fuchs (2013) and Golightly et al. (2015), and population dynamics, e.g. Heydari et al. (2014). Fitting such models to data observed at discrete times can be problematic since the transition densities of the diffusion process are likely to be intractable. A review of inferential methods for diffusions can be found in Fuchs (2013). A widely adopted solution is to approximate the unavailable transition densities either analytically (Aït-Sahalia, 2002, 2008) or numerically (Pedersen, 1995; Elerian et al., 2001; Eraker, 2001; Roberts and Stramer, 2001). Within the Bayesian paradigm, the numerical approach can be seen as a data augmentation problem. The simplest implementation augments low-frequency data by introducing intermediate time-points between observation times. An Euler-Maruyama scheme is then applied by approximating the transition densities over the induced discretisation as Gaussian. Computationally intensive algorithms such as Markov chain Monte Carlo (MCMC) are then used to integrate over the uncertainty associated with the missing data. The key challenges of designing such an MCMC scheme include overcoming dependence between the parameters and missing data (first highlighted as a problem by Roberts and Stramer (2001)) and overcoming dependence between successive values of the missing data. Dealing with the latter requires repeatedly generating realisations, known as diffusion bridges, from an approximation of the conditioned process. Methods built upon exact simulation, that avoid use of the Euler-Maruyama approximation and the associated discretisation error, have been proposed by Beskos et al. (2006) (see also Beskos et al.
(2009)). However, these exact methods are limited to diffusions which can be transformed to have unit diffusion coefficient, known as reducible diffusions. Designing bridge constructs for irreducible, multivariate diffusions is a challenging problem and has received much attention in recent literature. The simplest approach (see e.g. Pedersen (1995)) is based on the forward dynamics of the diffusion process and generates a bridge by sampling iteratively from the Euler-Maruyama approximation of the unconditioned SDE. This myopic approach induces a discontinuity at the observation time (as the discretisation gets finer) and is well known to lead to low Metropolis-Hastings acceptance rates. The modified diffusion bridge (MDB) construct of Durham and Gallant (2002) (see also extensions to the partial and noisy observation case in Golightly and Wilkinson (2008)) pushes the bridge process towards the observation in a linear way and provides the optimal sampling method when the drift and diffusion coefficients of the SDE are constant (Stramer and Yan, 2006). However, this construct is less effective when the process exhibits nonlinear dynamics. Several approaches have been proposed to overcome this problem. For example, Lindström (2012) (see also Fearnhead (2008) for a similar approach) combines the Pedersen and MDB approaches, with a tuning parameter governing the precise dynamics of the resulting sampler. Del Moral and Murray (2014) (see also Lin et al. (2010)) use a sequential Monte Carlo scheme to generate realisations according to the forward dynamics, pushing the resulting trajectories towards the observation using a sequence of reweighting steps. Schauer et al. (2016) combine the ideas of Delyon and Hu (2006) and Clark (1990) to obtain a bridge based on the addition of a guiding term to the drift of the process under consideration. The guiding term is derived using a tractable approximation of the target process. Contributions and organisation of the paper Our contribution is the development of a novel class of bridge constructs that are computationally and statistically efficient, simple to implement, and can be applied in scenarios where only partial and noisy measurements of the system are available. Essentially, the process is partitioned into two parts, one that accounts for nonlinear dynamics in a deterministic way, and another as a residual stochastic process. A bridge construct is obtained for the target process by applying the MDB sampler of Durham and Gallant (2002) to the end-point conditioned residual process. We consider two implementations of this approach. Firstly, we use the bridge introduced by Whitaker et al. (2015) that constructs the residual process by subtracting the solution of an ordinary differential equation (ODE) system based on the drift from the target process. Secondly, we recognise that the intractable SDE governing the residual process can be approximated by a tractable process. We therefore extend the first approach by additionally subtracting the expectation of the approximate residual process and bridging the remainder with the MDB sampler. In addition, we adapt the guided proposal proposed by Schauer et al. (2016) to a partial and noisy observation regime. We evaluate the performance of each bridge construct (as well as the constructs proposed by Durham and Gallant (2002) and Lindström (2012)) using three examples: a simple birth-death model, a Lotka-Volterra system and a model of aphid growth.
The remainder of this article is organised as follows. Section 2 provides a brief introduction to the problem of sampling conditioned SDEs and examines two previously proposed approaches. In Section 3 we describe a novel class of bridge constructs and adapt an existing approach to a more general observation regime. Applications are considered in Section 4 and a discussion is provided in Section 5. Sampling conditioned SDEs Consider a continuous-time d-dimensional Itô process {X_t, t ≥ 0} governed by the SDE, parameterised by θ = (θ_1, ..., θ_p), of the form dX_t = α(X_t) dt + √β(X_t) dW_t. (1) Here, α is a d-vector of drift functions, the diffusion matrix β is a d × d positive definite matrix with a square root representation √β such that √β (√β)′ = β, and W_t is a d-vector of (uncorrelated) standard Brownian motion processes. We assume that α and β are sufficiently regular so that the SDE has a weak non-explosive solution (Øksendal, 2003). For tractability, we make the same assumption as Golightly and Wilkinson (2008), Golightly and Wilkinson (2011), Picchini (2014) and Lu et al. (2015), among others, that the process is observed at t = T according to Y_T = F′X_T + ε_T, ε_T ∼ N(0, Σ). (2) Here, Y_T is a d_o-vector, F is a constant d × d_o matrix and ε_T is a random d_o-vector, for some d_o ≤ d. This flexible setup allows for observing only a subset of components. For simplicity we also assume that the process is known exactly at t = 0. This is the case when a diffusion process is observed completely and without error. In the case of partial and/or noisy observations, typically the initial position is an unknown parameter in an MCMC scheme and a new bridge is created at each iteration conditional on the current parameter values, so in terms of the bridge, the initial position is effectively known. The complication of multiple partial and/or noisy observations is discussed in Section 5. Our aim is to generate discrete-time realisations of X_t conditional on x_0 and y_T. To this end, we partition [0, T] as 0 = τ_0 < τ_1 < · · · < τ_m = T, giving m intervals of equal length Δτ = T/m. Since, in general, the form of the SDE in (1) will not permit an analytic solution, we work with the Euler-Maruyama approximation, which gives the change in the process over a small interval of length Δτ as a Gaussian random vector. Specifically, we have that X_{τ_{k+1}} = X_{τ_k} + α(X_{τ_k}) Δτ + √β(X_{τ_k}) ΔW_{τ_k}, where ΔW_{τ_k} ∼ N(0, Δτ I_d) and I_d is the d × d identity matrix. The continuous-time conditioned process is then approximated by the discrete-time skeleton bridge, with the latent values x_(0,T] = (x_{τ_1}, ..., x_{τ_m} = x_T) having the (posterior) density π(x_(0,T] | x_0, y_T, θ, Σ) ∝ π(y_T | x_T, Σ) Π_{k=0}^{m−1} π(x_{τ_{k+1}} | x_{τ_k}, θ), (3) where π(x_{τ_{k+1}} | x_{τ_k}, θ) is the transition density under the Euler-Maruyama approximation, π(y_T | x_T, Σ) = N(y_T; F′x_T, Σ), and N(·; m, V) denotes the multivariate Gaussian density with mean vector m and variance matrix V. In the special case where x_T is known (so that y_T = x_T and F = I_d), the latent values x_(0,T) = (x_{τ_1}, ..., x_{τ_{m−1}}) have the density π(x_(0,T) | x_0, x_T, θ) ∝ Π_{k=0}^{m−1} π(x_{τ_{k+1}} | x_{τ_k}, θ). (4) For nonlinear forms of the drift and diffusion coefficients, the products in (3) and (4) will be intractable, and samples can be generated via computationally intensive algorithms such as Markov chain Monte Carlo or importance sampling. We focus on the former, but note that in either case the efficiency of the algorithm will depend on the proposal mechanism used to generate the bridge.
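A minimal Python sketch of the Euler-Maruyama recursion just described, for a generic drift alpha and diffusion matrix beta supplied by the user; the Lotka-Volterra-type example rates at the end are illustrative, not the paper's exact settings.

```python
import numpy as np

def euler_maruyama(x0, alpha, beta, T, m, rng):
    """Simulate a skeleton path on 0 = tau_0 < ... < tau_m = T: each increment
    is Gaussian with mean alpha(x) * dt and variance beta(x) * dt."""
    d = len(x0)
    dt = T / m
    path = np.empty((m + 1, d))
    path[0] = x0
    for k in range(m):
        x = path[k]
        dw = rng.normal(scale=np.sqrt(dt), size=d)
        path[k + 1] = x + alpha(x) * dt + np.linalg.cholesky(beta(x)) @ dw
    return path

# Illustrative two-dimensional Lotka-Volterra-type diffusion.
th = (0.5, 0.0025, 0.3)
alpha = lambda x: np.array([th[0]*x[0] - th[1]*x[0]*x[1],
                            th[1]*x[0]*x[1] - th[2]*x[1]])
beta = lambda x: np.array([[th[0]*x[0] + th[1]*x[0]*x[1], -th[1]*x[0]*x[1]],
                           [-th[1]*x[0]*x[1], th[1]*x[0]*x[1] + th[2]*x[1]]])
path = euler_maruyama(np.array([100.0, 100.0]), alpha, beta, T=1.0, m=100,
                      rng=np.random.default_rng(0))
```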
A common approach to constructing an efficient proposal is to factorise the target in (3) as π(x_(0,T] | x_0, y_T, θ, Σ) = Π_{k=0}^{m−1} π(x_{τ_{k+1}} | x_{τ_k}, y_T, θ, Σ). (5) The density in (4) can be factorised in a similar manner. This suggests seeking proposal densities of the form q(x_{τ_{k+1}} | x_{τ_k}, y_T, θ, Σ) which aim to approximate the intractable constituent densities in (5). In what follows, we consider some existing approaches for generating bridges via approximation of π(x_{τ_{k+1}} | x_{τ_k}, y_T, θ, Σ), before outlining our contribution. For each bridge, the proposal densities take the Gaussian form q(x_{τ_{k+1}} | x_{τ_k}, y_T, θ, Σ) = N(x_{τ_{k+1}}; x_{τ_k} + μ(x_{τ_k}) Δτ, Ψ(x_{τ_k}) Δτ), (6) and our focus is on the choice of μ(·) and Ψ(·). For simplicity, and where possible, we drop the parameters θ and Σ from the notation, as they remain fixed throughout. Myopic simulation Ignoring the information in the observation y_T and simply applying the Euler-Maruyama approximation over each interval of length Δτ leads to a proposal density of the form given by (6) with μ_EM(x_{τ_k}) = α(x_{τ_k}) and Ψ_EM(x_{τ_k}) = β(x_{τ_k}). Sampling iteratively according to (6) for k = 0, 1, ..., m − 1 gives a proposed bridge, which we denote by x*_(0,T]. The Metropolis-Hastings (MH) acceptance probability for a move from x_(0,T] to x*_(0,T] is then the usual ratio of target and proposal densities. This strategy is likely to work well provided that the observation y_T is not particularly informative, that is, when the measurement error dominates the intrinsic stochasticity of the process. However, as Σ is reduced, the MH acceptance rate decreases. A related approach can be found in Pedersen (1995), where it is assumed that x_T is known. In this case, a move from x_(0,T) to x*_(0,T) is accepted with a probability which tends to 0 as m → ∞ (or equivalently, Δτ → 0). Modified diffusion bridge For known x_T, Durham and Gallant (2002) derive a linear Gaussian approximation of π(x_{τ_{k+1}} | x_{τ_k}, x_T), leading to a sampler known as the modified diffusion bridge (MDB). Extensions to the partial and noisy observation regime are considered in Golightly and Wilkinson (2008). In brief, the joint distribution of X_{τ_{k+1}} and Y_T (conditional on x_{τ_k}) is approximated by a Gaussian, from which the constituents μ_MDB and Ψ_MDB in (7) and (8) follow by conditioning. In the case of no measurement error and observation of all components (so that x_T is known), (7) and (8) become μ_MDB(x_{τ_k}) = (x_T − x_{τ_k})/(T − τ_k) and Ψ_MDB(x_{τ_k}) = ((T − τ_{k+1})/(T − τ_k)) β(x_{τ_k}). Connection with continuous-time conditioned processes Consider the case of no measurement error and full observation of all components. The SDE satisfied by the conditioned process {X_t, t ∈ [0, T]} takes the form (9), where the drift is α*(x_t) = α(x_t) + β(x_t) ∇_{x_t} log p(x_T | x_t). (10) See, for example, Chapter IV.39 of Rogers and Williams (2000) for a derivation. Note that p(x_T | x_t) denotes the (intractable) transition density of the unconditioned process defined in (1). Approximating α(X_t) and β(X_t) in (1) by the constants α(x_T) and β(x_T) yields a process for which p(x_T | x_t) is tractable. The corresponding conditioned process satisfies dX_t = ((x_T − X_t)/(T − t)) dt + √β(X_t) dW_t. (11) Use of (11) as a proposal process has been justified by Delyon and Hu (2006) (see also Stramer and Yan (2006), Marchand (2011) and Papaspiliopoulos et al. (2013)), who show that the distribution of the target process (conditional on x_T) is absolutely continuous with respect to the distribution of the solution to (11). As discussed by Papaspiliopoulos et al.
(2013), it is impossible to simulate exact (discrete-time) realisations of (11) unless β(·) is constant. They also note that performing a local linearisation of (11) according to Shoji and Ozaki (1998) (see also Shoji (2011)) gives a tractable process whose transition density is precisely that of the modified diffusion bridge discussed in the previous section. Plainly, taking the Euler-Maruyama approximation of (11) yields the MDB construct, albeit without the time-dependent multiplier of β(x_{τ_k}) in the variance. As observed by Durham and Gallant (2002), and discussed in Papaspiliopoulos and Roberts (2012) and Papaspiliopoulos et al. (2013), the inclusion of the time-dependent multiplier can lead to improved empirical performance. Unfortunately, the MDB is only efficient when the drift of (1) is approximately constant. When this is not the case, so that realisations of the SDE started from the same point exhibit strong and similar nonlinearity over the inter-observation time, the modified diffusion bridge is likely to be unsatisfactory. Lindström bridge A bridge construct that combines the myopic sampler with the MDB is proposed in Lindström (2012), for the special case of known x_T. Extending the sampler to the observation scenario in (2) is straightforward. Whereas the MDB approximates the variance of X_T | x_{τ_{k+1}} using a single Euler-Maruyama time-step, the Lindström bridge adds the term C(Δ_{k+1})^2, the squared bias of that approximation, where C is an unknown matrix. By assuming that the squared bias is a fraction γ of the variance over an interval of length Δτ, a heuristic choice of C, denoted C_Heur, is obtained, with γ > 0. This particular choice of C_Heur ensures that Var(Y_T | x_{τ_k}) is a positive definite matrix. The joint distribution of X_{τ_{k+1}} and Y_T (conditional on x_{τ_k}) is then approximated by a Gaussian, giving the constituents in (12) and (13). In the case of no measurement error and observation of all components, (12) and (13) simplify, involving a weight w_k^γ. The Lindström bridge can therefore be seen as a convex combination of the MDB and myopic samplers, with γ = 0 giving the MDB and γ = ∞ giving the myopic approach. In practice, Lindström (2012) suggests that γ ∈ [0.01, 1], given that these values have proved successful in simulation experiments. Note also that for a fixed γ, if T − τ_{k+1} ≫ Δτ then w_k^γ ≈ 0 and the myopic sampler dominates. However, as τ_{k+1} approaches T, w_k^γ approaches 1 and the LB is dominated by the MDB. Whilst the LB attempts to account for nonlinear dynamics by combining the MDB with the myopic approach, having to specify a model-dependent tuning parameter is unsatisfactory, since different choices of γ will lead to different properties of the proposed bridges. Moreover, the link between the regularised sampler and the continuous-time conditioned process is unclear. Improved bridge constructs In this section we describe a novel class of bridge constructs that require no tuning parameters, are simple to implement (even when only a subset of components are observed with Gaussian noise) and can account for nonlinear dynamics driven by the drift. In addition, we discuss the recently proposed bridging strategy of Schauer et al. (2016) and describe an implementation method for the case of partial observation with additive Gaussian measurement error.
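For concreteness, here is a minimal sketch of the MDB sampler of Section 2.2 in the fully observed, error-free case, using the linear pull and the time-dependent variance multiplier discussed above; the function name and interface are illustrative.

```python
import numpy as np

def mdb_bridge(x0, xT, beta, T, m, rng):
    """Draw one modified-diffusion-bridge skeleton conditioned on x(0) = x0
    and x(T) = xT: the proposal mean pushes linearly towards xT and the
    variance carries the multiplier (T - tau_{k+1}) / (T - tau_k)."""
    d = len(x0)
    dt = T / m
    path = np.empty((m + 1, d))
    path[0], path[m] = x0, xT
    for k in range(m - 1):
        tau_k = k * dt
        x = path[k]
        mu = (xT - x) / (T - tau_k)                      # linear pull towards xT
        psi = (T - tau_k - dt) / (T - tau_k) * beta(x)   # time-dependent multiplier
        path[k + 1] = x + mu * dt + np.linalg.cholesky(psi * dt) @ rng.standard_normal(d)
    return path
```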
Bridges based on residual processes

Suppose that X_t is partitioned as X_t = ζ_t + R_t, where {ζ_t, t ≥ 0} is a deterministic process satisfying dζ_t/dt = f(ζ_t) and {R_t, t ≥ 0} is a residual stochastic process, satisfying

dR_t = {α(ζ_t + R_t) − f(ζ_t)}dt + β^{1/2}(ζ_t + R_t)dW_t.   (14)

We then aim to choose ζ_t (and therefore f(·)) to adequately account for nonlinear dynamics (so that the drift in (14) is approximately constant), and construct the MDB of Section 2.2 for the residual stochastic process rather than the target process itself. Suitable choices of ζ_t and f(·) can be found in Sections 3.1.1 and 3.1.2. It should be clear from the discussion in Section 2.2 that for known x_T, the MDB approximates the density of R_{τ_{k+1}} | r_{τ_k}, r_T by

N(r_{τ_{k+1}}; r_{τ_k} + {(r_T − r_{τ_k})/(T − τ_k)}∆τ, {(T − τ_{k+1})/(T − τ_k)}β(ζ_{τ_k} + r_{τ_k})∆τ).   (15)

In this case, the connection between (15) and the intractable continuous-time conditioned residual process can be established by following the arguments of Section 2.2.1. Approximating the drift and diffusion matrix in (14) by the constants α(x_T) − f(ζ_T) and β(x_T) gives a process with a tractable transition density. The corresponding conditioned process then satisfies

dR_t = {(r_T − R_t)/(T − t)}dt + β^{1/2}(ζ_t + R_t)dW_t.   (16)

The density in (15) is then obtained by a local linearisation of (16).

It remains for us to choose ζ_t to balance the accuracy and computational efficiency of the resulting construct. We explore two possible choices in the remainder of this section.

Subtracting the drift

In the simplest approach to account for dynamics based on the drift, we take ζ_t = η_t and f(·) = α(·), where

dη_t/dt = α(η_t),   η_0 = x_0,   (17)

so that

dR_t = {α(η_t + R_t) − α(η_t)}dt + β^{1/2}(η_t + R_t)dW_t.   (18)

The MDB can be constructed for the residual process by approximating the joint distribution of R_{τ_{k+1}} and Y_T − F′η_T (conditional on r_{τ_k}), where Y_T − F′η_T can be seen as a partial and noisy observation of R_T, since Y_T − F′η_T = F′R_T + ε_T. As in Section 2.2, we obtain the (approximate) joint Gaussian distribution, labelled (19), with mean based on the residual drift α_k − α_k^η, where α_k^η = α(η_{τ_k}) and α_k, β_k and ∆_k are as defined in Section 2.2. Note that the mean in (19) uses the tangent α_k^η at (τ_k, η_{τ_k}) to approximate dη_t/dt over time intervals of length ∆τ and ∆_k. Since η_{τ_{k+1}} will be available either exactly from the solution of (17) or from the output of a (stiff) ODE solver, we propose to approximate dη_t/dt via the chord between (τ_k, η_{τ_k}) and (τ_{k+1}, η_{τ_{k+1}}), that is, by

δ_k^η = (η_{τ_{k+1}} − η_{τ_k})/∆τ.   (20)

Replacing α_k^η in (19) with δ_k^η, conditioning on y_T − F′η_T and using the partition X_t = η_t + R_t gives a proposal density for X_{τ_{k+1}} of the form (6); we refer to the resulting construct as RB. Note that in the case of known x_T, the mean and variance simplify analogously to (7) and (8).
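A minimal sketch of the RB construct for a scalar SDE with known x_T (illustrative Python under our own naming; the ODE (17) is solved numerically and the MDB (15) is applied to the residual):

```python
import numpy as np
from scipy.integrate import solve_ivp

def residual_bridge(x0, xT, T, m, alpha, beta, rng):
    """Residual bridge (drift subtraction only), scalar SDE, known x_T.

    eta solves (17); the MDB is applied to r_t = x_t - eta_t.
    alpha and beta must accept numpy arrays (solve_ivp passes arrays).
    """
    dt = T / m
    taus = np.linspace(0.0, T, m + 1)
    eta = solve_ivp(lambda t, e: alpha(e), (0.0, T), [x0], t_eval=taus).y[0]
    rT = xT - eta[-1]            # residual end-point
    r = np.zeros(m + 1)          # r_0 = x_0 - eta_0 = 0
    for k in range(m):
        mu = (rT - r[k]) / (T - taus[k])
        psi = beta(eta[k] + r[k]) * max(T - taus[k] - dt, 0.0) / (T - taus[k])
        r[k + 1] = r[k] + mu * dt + np.sqrt(psi * dt) * rng.standard_normal()
    return eta + r               # reassemble x_t = eta_t + r_t
```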
Further subtraction using the linear noise approximation

Whilst the solution of the SDE governing the residual stochastic process in (18) is unavailable in closed form, a tractable approximation can be obtained. Therefore, in situations where η_t fails to adequately capture the target process dynamics, we propose to further subtract an approximation of the conditional expectation ρ_t = E(R_t | r_0, y_T), which we denote by ρ̃_t = E(R̃_t | r_0, y_T). Here, {R̃_t, t ∈ [0, T]} is obtained through the linear noise approximation (LNA) of (18). The LNA can be derived in a number of more or less formal ways (see e.g. Kurtz (1970), van Kampen (2001) and Fearnhead et al. (2014)). Here, we give a brief exposition of the LNA and refer the reader to Fearnhead et al. (2014) and the references therein for a complete derivation. By Taylor expanding α(X_t) and β(X_t) about η_t (the solution of (17)), truncating the expansion of α at the first two terms and taking only the first term of the expansion of β, we obtain

dR̃_t = H(η_t)R̃_t dt + β^{1/2}(η_t)dW_t,

where H(η_t) is the Jacobian matrix with (i, j)th element (H(η_t))_{i,j} = ∂α_i(η_t)/∂η_{j,t}. It should be clear from the truncations used in the Taylor expansions of the drift and diffusion coefficients that the key assumption underpinning the LNA is that the stochastic term β(X_t) is "small". Now, for a fixed initial condition R̃_0 = r̃_0, it is straightforward to show that

R̃_t | R̃_0 = r̃_0 ∼ N(P_t r̃_0, P_t ψ_t P_t′),   (21)

where P_t and ψ_t satisfy the ODE system

dP_t/dt = H(η_t)P_t,   P_0 = I_d,   (22)
dψ_t/dt = P_t^{−1}β(η_t)(P_t^{−1})′,   ψ_0 = 0.   (23)

The joint distribution of R̃_t and Y_T − F′η_T is then Gaussian, with Cov(R̃_t, Y_T − F′η_T) = P_t ψ_t P_T′ F. Conditioning further on y_T − F′η_T and noting that r̃_0 = r_0 = 0 gives

ρ̃_t = E(R̃_t | r_0, y_T) = P_t ψ_t P_T′ F (F′P_T ψ_T P_T′ F + Σ)^{−1}(y_T − F′η_T).

Having obtained an explicit, closed-form (subject to the solution of (17), (22) and (23)) approximation of the expected conditioned residual process, we adopt the partition X_t = η_t + ρ̃_t + R_t^−, where {R_t^−, t ∈ [0, T]} is the residual stochastic process resulting from the additional decomposition of X_t. Although the SDE satisfied by R_t^− will be intractable, the joint distribution of R^−_{τ_{k+1}} and Y_T − F′(η_T + ρ̃_T) can be approximated (conditional on r^−_{τ_k}) by a Gaussian analogous to (19), where again we use the chord approximation, now based on η_t + ρ̃_t. We refer to the resulting construct as RB−. Note that in the case of known x_T, the mean and variance again simplify analogously to (7) and (8).

Guided proposals

For known x_T, van der Meulen and Schauer (2015) (see also Schauer et al. (2016)) derive a bridge construct which they term a guided proposal (GP). They take the SDE satisfied by the conditioned process {X_t, t ∈ [0, T]} in (9) and (10), and replace the intractable transition density p(x_T | x_t) in the drift with the transition density of a tractable approximating process, typically a linear SDE of the form

dX̃_t = {B(t)X̃_t + b(t)}dt + σ^{1/2}(t)dW_t.   (26)

The guided proposal can be extended to the Gaussian additive noise regime in (2) by noting that in this case, the drift in (10) becomes

α(x_t) + β(x_t)∇_{x_t} log p(y_T | x_t).   (27)

Given a tractable approximation of p(y_T | x_t), the Euler-Maruyama approximation of (9) can be applied over the discretisation of [0, T] to give a proposal density of the form (6), with µ_GP(x_{τ_k}) given by (27) evaluated at x_{τ_k} and Ψ_GP(x_{τ_k}) = β(x_{τ_k}). We will approximate p(y_T | x_t) using the LNA. Using the partition X̃_t = η_t + R̃_t and combining the transition density of R̃_t in (21) with the observation regime defined in (2) gives

p(y_T | x_t) ≈ N(y_T; F′{η_T + P_{T|t}(x_t − η_t)}, F′P_{T|t}ψ_{T|t}P_{T|t}′F + Σ),

where P_{T|t} and ψ_{T|t} are found by integrating the ODE system in (22) and (23) from t to T with P_{t|t} = I_d and ψ_{t|t} = 0. Hence the drift (27) becomes

α(x_t) + β(x_t)P_{T|t}′F(F′P_{T|t}ψ_{T|t}P_{T|t}′F + Σ)^{−1}{y_T − F′(η_T + P_{T|t}(x_t − η_t))}.   (28)

Note that a computationally efficient implementation of this approach is obtained by using the identities P_{T|t} = P_T P_t^{−1} and ψ_{T|t} = P_t(ψ_T − ψ_t)P_t′. Hence, the LNA ODEs in (17), (22) and (23) need only be integrated once over the interval [0, T].
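Assuming the ODE system (22)-(23) takes the standard LNA form reconstructed above, the three systems can be integrated jointly in one pass over [0, T]; a sketch (illustrative Python, our own function names):

```python
import numpy as np
from scipy.integrate import solve_ivp

def integrate_lna(alpha, beta, jac, x0, T, d):
    """Jointly integrate (17), (22), (23): d eta/dt = alpha(eta),
    dP/dt = H(eta) P with P_0 = I_d, d psi/dt = P^{-1} beta(eta) P^{-T}.

    alpha(eta): length-d array; beta(eta): d x d matrix;
    jac(eta): d x d Jacobian H(eta). Returns eta_T, P_T, psi_T.
    """
    def rhs(t, y):
        eta = y[:d]
        P = y[d:d + d * d].reshape(d, d)
        Pinv = np.linalg.inv(P)
        deta = alpha(eta)
        dP = jac(eta) @ P
        dpsi = Pinv @ beta(eta) @ Pinv.T
        return np.concatenate([deta, dP.ravel(), dpsi.ravel()])

    y0 = np.concatenate([np.asarray(x0, float), np.eye(d).ravel(), np.zeros(d * d)])
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-8, atol=1e-8)
    yT = sol.y[:, -1]
    return yT[:d], yT[d:d + d * d].reshape(d, d), yT[d + d * d:].reshape(d, d)
```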
Unfortunately, we find that this approach does not work well in practice, unless the total measurement error tr(Σ) is large relative to the infinitesimal variance β(·). Note that the variance of Y_T | x_t under the LNA is a function of the deterministic process η_t. If η_t and x_t diverge as t is increased, the guiding term in (28) will result in an over or under dispersed proposal mechanism (relative to the target conditioned process) at times close to T. The problem is exacerbated in the case of no measurement error, where the discrepancy between x_t and η_t can result in a singularity in the guiding term in (28) at time T. The problem with this naive approach (henceforth referred to as GP-N) can be alleviated by integrating the ODE system given by (17), (22) and (23) for each interval [τ_k, T], k = 0, 1, ..., m − 1, with η_{τ_k} = x_{τ_k}; we denote the resulting construct by GP. In this case, the drift (27) is again given by (28), but with the LNA re-initialised at each τ_k. In the special case that x_T is known, the guiding term simplifies accordingly (with F = I_d and Σ = 0).

The limiting form of the acceptance rate in this case can be found in Schauer et al. (2016), who also remark that a key requirement for absolute continuity of the target and proposal process is that σ(T) = β(x_T). For the LNA, we have σ(t) = β(η_t). Again, we note that the naive implementation of the guided proposal (GP-N) will not meet this condition in general (when x_T is known). Ensuring that σ(t) → β(x_T) as t → T by integrating (17), (22) and (23) for each τ_k is likely to be time consuming, unless the LNA ODE system is tractable. In the case of exact observations, a computationally less demanding approach is obtained in van der Meulen and Schauer (2015) by taking the transition density of (26) with B(t) = 0 and σ(t) = β(x_T) to construct the guided proposal. Setting b(t) = α(η_t) leads to a proposal density for the simplified guided proposal (GP-S) of the form (6) with Ψ*_{GP-S}(x_{τ_k}) = β(x_{τ_k}) and

µ*_{GP-S}(x_{τ_k}) = α(x_{τ_k}) + β(x_{τ_k})β^{−1}(x_T){x_T − x_{τ_k} − (η_T − η_{τ_k})}/(T − τ_k).

Further (example-dependent) methods for constructing guided proposals in the case of known x_T can be found in van der Meulen and Schauer (2015).

Use of the MDB variance

Using the Euler-Maruyama approximation of (9) gives the variance of X_{τ_{k+1}} | x_{τ_k}, y_T in the guided proposal process as Ψ_GP(x_{τ_k})∆τ = β(x_{τ_k})∆τ. In Section 4 we investigate the effect of using the variance (8) of the modified diffusion bridge construct by taking Ψ_GP(x_{τ_k}) = Ψ_MDB(x_{τ_k}). Although, in this case, deriving the limiting form of the acceptance rate under the resulting proposal is problematic, we observe a worthwhile increase in empirical performance. In the case of known x_T, use of the MDB variance in place of β(x_{τ_k})∆τ comes at almost no additional computational cost. We denote this construct GP-MDB.

Computational considerations

For the observation regime in (2), all bridge constructs (with the exception of the myopic approach) require the inversion of a d_o × d_o matrix at each intermediate time τ_k, k = 1, 2, ..., m − 1, and for each skeleton bridge required. For known x_T, the proposal densities associated with each construct simplify. In this case, only the LNA-based residual bridge and guided proposal require the inversion of a d × d matrix at each intermediate time. The Lindström bridge and modified diffusion bridge have roughly the same computational cost. The bridges based on residual processes incur an additional computational cost of having to solve a system of either d (when subtracting η_t) or order d² (when further subtracting ρ_t) coupled ODEs. However, we note that for known x_0, the ODE system need only be solved once, irrespective of the number of skeleton bridges required. This is also true of the naive and simplified guided proposals. However, we note that in the case of known x_T, the guided proposal requires solving order d² ODEs over each interval [τ_k, T], k = 0, 1, ..., m − 1 for each simulated skeleton bridge, in order to maintain reasonable statistical efficiency (as measured by, for example, the estimated acceptance rate of a Metropolis-Hastings independence sampler).

Applications

We now compare the accuracy and efficiency of the bridging methods discussed in the previous sections, by using them to make proposals inside a Metropolis-Hastings independence sampler. We consider three examples: a simple birth-death model in which the ODEs governing the LNA are tractable, a Lotka-Volterra system in which the use of numerical solvers is required, and a model of aphid growth inspired by real data taken from Matis et al. (2008). Generating discrete-time realisations from the SDE model of aphid growth is particularly challenging due to nonlinear dynamics, and an observation regime in which only one component is observed and is subject to additive Gaussian noise.
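All of the constructs above reduce the partial/noisy observation case to a single linear-Gaussian conditioning step. A self-contained sketch (illustrative Python; variable names ours) of that step, together with the aphid-style observation regime used later:

```python
import numpy as np

def condition_gaussian(mu_x, mu_y, Sxx, Sxy, Syy, y):
    """Mean and covariance of X | Y = y for jointly Gaussian (X, Y)."""
    K = Sxy @ np.linalg.inv(Syy)                  # gain matrix
    return mu_x + K @ (y - mu_y), Sxx - K @ Sxy.T

# Observation regime (2) with only the first of two components observed,
# subject to additive N(0, sigma^2) noise: F = (1, 0)' and Sigma = sigma^2.
F = np.array([[1.0], [0.0]])
sigma = 10.0
Sigma = np.array([[sigma ** 2]])
```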
In what follows, all results are based on 100K iterations of a Metropolis-Hastings independence sampler targeting either (3) or (4), depending on the observation regime. We measure the statistical efficiency of each bridge via their empirical acceptance probability. R code for the implementation of the M-H scheme can be found at https://github.com/gawhitaker/bridges-apps. The bridge constructs used in each example, together with their relative computational cost, can be found in Table 1. Note that in contrast to Lindström (2012), we found that γ ∈ [0.001, 0.3] was required in order to find a near-optimal γ. Where LB is used, we only present results for the value of γ that maximised empirical performance.

Birth-death

Since the ODE system governing the LNA is tractable for this example, there is little difference in CPU cost between the bridges (see Table 1). Therefore, we use statistical efficiency (as measured by empirical Metropolis-Hastings acceptance probability) as a proxy for overall efficiency of each bridge, with higher probabilities preferred.

Figure 1 shows empirical acceptance probabilities against the number of sub-intervals m for each bridge and each x_T. Figures 2 and 3 compare 95% credible regions of the proposal under various bridging strategies with the true conditioned process (obtained from the output of the Metropolis-Hastings independence sampler). It is clear from the figures that as T is increased, the MDB fails to adequately account for the nonlinear behaviour of the conditioned process. Indeed, in terms of empirical acceptance rate, MDB is outperformed by all other bridges for T = 2. As m is increased so that the discretisation gets finer, the acceptance rates under all bridges (with the exception of GP-N) stay roughly constant. For GP-N, the acceptance rates decrease with m when x_T is either the 5% or 95% quantile of X_T | X_0 = 50. In this case, the variance associated with the approximate transition density either overestimates (when x_T is the 5% quantile) or underestimates (when x_T is the 95% quantile) the true variance at the end-point. For example, when x_T is the 95% quantile, this results (see Figure 3) in a 'tapering in' of the proposal relative to the true conditioned process. GP-S, GP and LB give similar performance, although we note that GP-S and LB perform particularly poorly when x_T is the 5% quantile. Moreover, LB requires the specification of a tuning parameter γ and we found that the acceptance rate was fairly sensitive to the choice of γ. In all scenarios, RB, RB− and GP-MDB comprehensively outperform all other bridge constructs. When x_T is the median of X_T | X_0 = 50, we see that RB and RB− (red and blue lines in Figure 1) give near identical performance, with η_t adequately accounting for the observed nonlinear dynamics. In terms of statistical efficiency, GP-MDB outperforms both RB and RB− in all scenarios, although the relative difference is small.

[Figure 1: Birth-death model. Empirical acceptance probability against m with T = 1 (first row) and T = 2 (second row). The results are based on 100K iterations of a Metropolis-Hastings independence sampler. Black: MDB. Brown: LB. Red: RB. Blue: RB−. Grey: GP-N. Green: GP-S. Purple: GP. Pink: GP-MDB.]
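The experimental loop is a plain independence sampler whose empirical acceptance rate is the efficiency measure used above; a sketch (illustrative Python; the paper's own implementation is the R code linked above):

```python
import numpy as np

def mh_independence(n_iters, propose, log_weight, rng):
    """Independence sampler over skeleton bridges.

    propose() draws a bridge from q; log_weight(x) = log pi(x) - log q(x).
    Returns the last bridge and the empirical acceptance probability.
    """
    x = propose()
    lw = log_weight(x)
    accepted = 0
    for _ in range(n_iters):
        x_star = propose()
        lw_star = log_weight(x_star)
        if np.log(rng.uniform()) < lw_star - lw:  # MH accept/reject step
            x, lw = x_star, lw_star
            accepted += 1
    return x, accepted / n_iters
```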
Lotka-Volterra

In this example we consider a Lotka-Volterra model of predator-prey dynamics. We denote the system state at time t by X_t = (X_{1,t}, X_{2,t})′, ordered as prey, predators. The mass-action SDE representation of the system dynamics takes the form of (1) with

α(X_t) = (θ₁X_{1,t} − θ₂X_{1,t}X_{2,t}, θ₂X_{1,t}X_{2,t} − θ₃X_{2,t})′,
β(X_t) = (θ₁X_{1,t} + θ₂X_{1,t}X_{2,t}, −θ₂X_{1,t}X_{2,t}; −θ₂X_{1,t}X_{2,t}, θ₂X_{1,t}X_{2,t} + θ₃X_{2,t}).   (30)

The components of θ = (θ₁, θ₂, θ₃)′ can be interpreted as prey reproduction rate, prey death and predator reproduction rate, and predator death. Note that the ODE system ((17), (22) and (23)) governing the linear noise approximation of (30) is intractable and we therefore use the R package lsoda to numerically solve the system when necessary.

Table 2: Lotka-Volterra model. Quantiles of X_T | X_0 = (71, 79)′ found by repeatedly simulating from the Euler-Maruyama approximation of (30) with θ = (0.5, 0.0025, 0.3)′.

              T = 1             T = 2             T = 3             T = 4
x_{T,(50)}    (96.82, 71.93)    (133.35, 70.75)   (182.64, 77.36)   (242.08, 97.23)
x_{T,(95)}    (112.13, 81.58)   (162.28, 84.63)   (228.82, 97.12)   (308.58, 128.76)

We fixed the discretisation by taking m = 50, but note no appreciable difference in results for finer discretisations (e.g. m = 1000). As in the previous example, GP-N and GP-S perform relatively poorly; therefore, in what follows we omit these bridges from the results. Note that we include MDB for reference. Figure 4 shows empirical acceptance probabilities against T for each bridge and each x_T. Figure 5 compares 95% credible regions of the proposal under various bridging strategies with the true conditioned process (obtained from the output of the Metropolis-Hastings independence sampler). Unsurprisingly, as T is increased, MDB fails to adequately account for the nonlinear behaviour of the conditioned process. LB offers a modest improvement (except when x_T = x_{T,(5)}) but is generally outperformed by the other bridge constructs. We found that as T was increased, LB required larger values of γ, reflecting the need for more weight to be placed on the myopic component of the construct. As for the previous example, unless x_T is the median of X_T | x_0, RB is comprehensively outperformed by RB− (see Figure 5 for the effect of increasing T on RB and RB−). However, we see that the acceptance probabilities are decreasing in T for both constructs. As noted by Fearnhead et al. (2014), the LNA can become poor as T increases, with the implication here being that the approximation of the expected residual (as used in RB−) degrades with T.
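The drift and diffusion in (30) (as reconstructed above under the standard mass-action/chemical Langevin assumptions) translate directly to code; an illustrative Python sketch with the θ values used in Table 2:

```python
import numpy as np

theta = np.array([0.5, 0.0025, 0.3])  # values used in Table 2

def lv_alpha(x, th=theta):
    """Drift of the Lotka-Volterra SDE (30): prey x[0], predators x[1]."""
    return np.array([th[0] * x[0] - th[1] * x[0] * x[1],
                     th[1] * x[0] * x[1] - th[2] * x[1]])

def lv_beta(x, th=theta):
    """Diffusion matrix of (30), built from hazards and stoichiometry."""
    h = np.array([th[0] * x[0], th[1] * x[0] * x[1], th[2] * x[1]])  # reaction hazards
    S = np.array([[1.0, -1.0, 0.0],
                  [0.0, 1.0, -1.0]])                                 # stoichiometry matrix
    return S @ np.diag(h) @ S.T
```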
We note that the estimated acceptance probabilities are roughly constant for GP and (to a lesser extent) GP-MDB, and in terms of statistical efficiency for a fixed number of iterations, GP-MDB should be preferred over all other algorithms considered in this article. However, the difference in estimated acceptance probabilities between GP-MDB and RB− is fairly small, even when T = 4 (e.g. 0.857 vs 0.577 when x_T = x_{T,(5)} and 0.834 vs 0.606 when x_T = x_{T,(50)}). We also note that a Metropolis-Hastings scheme that uses RB or RB− is some 30 times faster than a scheme with GP or GP-MDB, since the latter require solving the LNA ODE system for each sub-interval [τ_k, T] to maintain reasonable statistical efficiency for a given m. Therefore, we further compare RB, RB−, GP and GP-MDB by computing the minimum effective sample size (ESS) at time T/2 (where the minimum is over each component of X_{T/2}) divided by CPU cost (in seconds). We denote this measure of overall efficiency by ESS/s. When x_T = x_{T,(5)} and T = 1, ESS/s scales roughly as 1 : 3 : 56 : 83 for GP : GP-MDB : RB : RB−. When T = 4, ESS/s scales roughly as 1 : 3 : 1 : 17. Hence, for this example, RB− is to be preferred in terms of overall efficiency, although the relative difference between RB− and GP-MDB appears to decrease as T is increased, consistent with the behaviour of the empirical acceptance rates observed in Figure 4.
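A common estimator of the ESS used in such comparisons truncates the autocorrelation sum at the first non-positive term; a sketch (illustrative Python, not the paper's implementation):

```python
import numpy as np

def ess(samples):
    """Effective sample size: n / (1 + 2 * sum of positive-lag autocorrelations)."""
    x = np.asarray(samples, float)
    n = len(x)
    x = x - x.mean()
    # autocorrelation at lags 0..n-1, each normalised by (n - lag) * variance
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    tau = 1.0
    for rho in acf[1:]:
        if rho <= 0:  # truncate at the first non-positive autocorrelation
            break
        tau += 2.0 * rho
    return n / tau
```

The minimum of this quantity over the components of X_{T/2}, divided by CPU seconds, gives the ESS/s measure reported above.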
Aphid growth

Matis et al. (2008) describe a stochastic model for aphid dynamics in terms of population size (N_t) and cumulative population size (C_t). The diffusion approximation of their model is given by an SDE of the form (1), which we label (31), where the components of θ = (θ₁, θ₂)′ characterise the birth and death rates respectively. Matis et al. (2008) also provide a dataset consisting of cotton aphid counts recorded at times t = 0, 1.14, 2.29, 3.57 and 4.57 weeks, and collected for 27 different treatment-block combinations. The analysis of these data via a stochastic differential mixed-effects model driven by (31) is the focus of Whitaker et al. (2015).

[Figure 4: Lotka-Volterra model. Empirical acceptance probability against T (T = 1, 2, 3, 4) for x_T = x_{T,(5)}, x_{T,(50)} and x_{T,(95)}.]

Table 3: Aphid growth model. Quantiles of Y_{3.57} | X_{2.29} = (347.55, 398.94)′ found by repeatedly simulating from the Euler-Maruyama approximation of (31) with θ = (1.45, 0.0009)′, and corrupting N_{3.57} with additive N(0, σ²) noise.

Driven by the real data of Matis et al. (2008) and to illustrate the proposed methodology in a challenging partial observation scenario, we assume that X_T cannot be measured exactly. Rather, we observe Y_T = F′X_T + ε_T, ε_T | Σ ∼ N(0, Σ), where Σ = σ² and F = (1, 0)′, so that only a noisy observation of N_T is possible, and C_T is not observed at all. We consider a single treatment-block combination and consider the dynamics of the process over an observation time interval [2.29, 3.57], over which nonlinear dynamics are typically observed. We fix θ and x_{2.29} at their marginal posterior means found by Whitaker et al. (2015), that is, at θ = (1.45, 0.0009)′ and x_{2.29} = (347.55, 398.94)′. We generate various end-point conditioned scenarios by taking y_{3.57} to be either the 5%, 50% or 95% quantile of Y_{3.57} | X_{2.29} = (347.55, 398.94)′, σ. To investigate the effect of measurement error, we further take σ ∈ {5, 10, 50}. The resulting quantiles are shown in Table 3. As with the previous example, the ODE system governing the linear noise approximation of (31) is intractable and we again use the lsoda package to numerically solve the system when necessary.

Figure 6 shows empirical acceptance probabilities against σ for EM, RB, RB−, GP and GP-MDB. Figure 7 compares 95% credible regions for a selection of bridges with the true conditioned process (obtained from the output of the independence sampler). All results are based on m = 50 (but note that no discernible difference in output was obtained for finer discretisations). As illustrated by both figures, the myopic sampler (EM) performs poorly (in terms of statistical efficiency, as measured by empirical acceptance probability) when the measurement error variance is relatively small (σ = 5). For σ = 50, the performance of EM is comparable with the other bridge constructs. In fact, as σ increases, the bridge constructs coincide with the Euler-Maruyama approximation of the target process. The gain in statistical performance of RB− over RB is clear. Likewise, GP-MDB outperforms GP, although the difference is very small for σ = 50, and again we note that as σ increases, the variance under GP-MDB, Ψ_MDB(x_{τ_k}), approaches the Euler-Maruyama variance, as used in GP.

The relative computational cost of each scheme can be found in Table 1. EM is particularly cheap to implement, given the simple form of the construct and the M-H acceptance probability. However, this approach cannot be recommended in this example for σ < 10, due to its dire statistical efficiency. The computational cost of RB, RB−, GP and GP-MDB is roughly the same, since for the guided proposals, we found that a naive implementation that only solves the LNA ODEs once gave no appreciable difference in empirical acceptance probability compared with repeatedly solving the ODE system for each sub-interval [τ_k, T] (as is required in the case of no measurement error). Consequently, in this example, GP-MDB outperforms RB− in terms of overall efficiency.
Discussion

We have presented a novel class of bridge constructs that are both computationally and statistically efficient, and can be readily applied in situations where only noisy and partial observation of the process is possible. Our approach is straightforward to implement and is based on a partition of the process into a deterministic part that accounts for forward dynamics, and a residual stochastic process. The intractable end-point conditioned residual SDE is approximated using the modified diffusion bridge of Durham and Gallant (2002). Using three examples, we have investigated the empirical performance of two variants of the residual bridge. The first constructs the residual SDE by subtraction of a deterministic process based on the drift governing the target process (denoted RB). The second variant further subtracts the linear noise approximation (LNA) of the expected conditioned residual process (denoted RB−). Our examples included a scenario in which the LNA system is tractable, and another where the system must be solved numerically. An example that considers partial and noisy observation of the process at a future time was also presented.

[Figure 6: Aphid growth model. Empirical acceptance probability against σ (σ = 5, 10, 50) for y_{3.57} = y_{3.57,(5)}, y_{3.57,(50)} and y_{3.57,(95)}.]

Choice of residual bridge

We find that for all examples considered, the residual bridge that further subtracts the LNA mean results in improved statistical efficiency (over the simple implementation based on drift subtraction only) at the expense of having to solve a larger ODE system consisting of order d² equations (as opposed to just d when using the simpler variant). For a known initial time-point x_0, the ODE system need only be solved once, irrespective of the number of skeleton bridges required. Taking the Lotka-Volterra diffusion (described in Section 4.2) as an example, overall efficiency (as measured by minimum effective sample size per second, ESS/s, at time T/2) of RB− is 1.5 times that of RB when T = 1 and x_T is either the 5% or 95% quantile of X_T | x_0. This factor increases to 17 when T = 4. However, for unknown x_0, as would typically be the case when performing parameter inference, the ODE solution will be required for each skeleton bridge, and the difference in computational cost between the two approaches is likely to be important, especially as the dimension of the state space increases. For the Lotka-Volterra example, the computational cost of solving the ODE system for each bridge scales as 1 : 2.8 for RB : RB−. Therefore, the relative difference in ESS/s would reduce to a factor of roughly 0.5 when T = 1 (so that RB would be preferred) and 6 when T = 4. We therefore anticipate that in problems where x_0 is unknown, the simple residual bridge is to be preferred, unless the ODE system governing the LNA is tractable, or the dimension d of X_t is relatively small, say d < 5.

Residual bridge or guided proposal?
We have compared the performance of our approach to several existing bridge constructs (adapting where necessary to the case of noisy and partial observation). These include the modified diffusion bridge (Durham and Gallant, 2002), the Lindström bridge (Lindström, 2012) and the guided proposal (Schauer et al., 2016). Our implementation of the latter uses the LNA to guide the proposal. We find that a further modification that replaces the Euler-Maruyama variance with the MDB variance gives a particularly effective bridge, outperforming all others considered here in terms of statistical efficiency. We find that for fixed x_0 and noisy observation of x_T, an efficient implementation of the guided proposal is possible, where the ODE system governing the LNA need only be solved once. In this case, the guided proposal outperforms both implementations of the residual bridge in terms of overall efficiency. However, we found that in the case of no measurement error (so that x_T is known exactly), the guided proposal required that the ODEs governing the LNA be re-integrated for each intermediate time-point and for each skeleton bridge required. Unless the ODE system can be solved analytically, we find that when combining statistical and computational efficiency, the guided proposal is outperformed by both implementations of the residual bridge.

Extensions

Our work can be extended in a number of ways. For example, it may be possible to improve the statistical performance of the residual bridges by replacing the Euler-Maruyama approximation of the variance of Y_T | X_0 with that obtained under the LNA. This approach could also be combined with the Lindström sampler to avoid specification of a tuning parameter. Deriving the limiting (as ∆τ → 0) forms of the Metropolis-Hastings acceptance rates associated with the residual bridges would be problematic due to the time dependent terms entering the variance of the constructs. Nevertheless, this merits further research. Interest also lies in the comparison of the bridge constructs for SDEs that exhibit multimodal behaviour, although we anticipate that further modification of the constructs will be required to efficiently deal with such a scenario.

Table 1: Example and bridge specific relative CPU cost for 100K iterations of a Metropolis-Hastings independence sampler. Due to well known poor performance in the case of known x_T, EM is not implemented for the first two examples. Likewise, due to poor performance, we omit results based on GP-N and GP-S in the second example, and results based on MDB and LB in the final example.
11,152.4
2015-09-30T00:00:00.000
[ "Mathematics", "Computer Science" ]
Certain Admissible Classes of Multivalent Functions

In such a case we write f(z) ≺ F(z). If F is univalent in U, then f(z) ≺ F(z) if and only if f(0) = F(0) and f(U) ⊂ F(U) (see [1–3]; see also several recent works [4–8] dealing with various properties and applications of the principle of differential subordination and the principle of differential superordination). We denote by F the set of all functions q that are analytic and injective on Ū \ E(q), where E(q) = {ζ ∈ ∂U : lim_{z→ζ} q(z) = ∞}.

Introduction

Let H(U) be the class of functions analytic in the open unit disk U = {z ∈ C : |z| < 1}. Denote by H[a, p] the subclass of H(U) consisting of functions of the form f(z) = a + a_p z^p + a_{p+1} z^{p+1} + ⋯, with a ∈ C. Also let A(p) be the class of all analytic and p-valent functions of the form

f(z) = z^p + Σ_{k=p+1}^∞ a_k z^k   (p ∈ N = {1, 2, 3, ...}; z ∈ U).

Let f and g be members of the function class H(U). The function f(z) is said to be subordinate to g(z), or the function g(z) is said to be superordinate to f(z), if there exists a function w(z), analytic in U with w(0) = 0 and |w(z)| < 1 (z ∈ U), such that f(z) = g(w(z)). We further let the subclass of F for which q(0) = a be denoted by F(a), and write F(1) = F₁. In order to prove our results, we will make use of the following classes of admissible functions.

In this paper, we determine the sufficient conditions for certain admissible classes of multivalent functions so that a sandwich-type subordination of the form φ₁(z) ≺ ⋯ ≺ φ₂(z) holds, where λ > 0 and φ₁ and φ₂ are given univalent functions in U. In addition, we derive several differential sandwich-type results. A similar problem for analytic functions involving certain operators was studied by Aghalary et al. [9], Ali et al. [10], Aouf et al. [11], Kim and Srivastava [12], and other authors (see [13–15]). In particular, unlike the earlier investigation by Aouf and Seoudy [16], we have not used any operators in our present investigation. Nevertheless, for the benefit of the targeted readers of our paper, in addition to the oft-cited paper [11], we have included several further citations of recent works (see, e.g., [17–21]) in which various families of linear operators were applied in conjunction with the principle of differential subordination and the principle of differential superordination for the study of analytic or meromorphic multivalent functions.

A Set of Subordination Results

Unless otherwise mentioned, we assume throughout this paper that p ∈ N, λ > 0, z ∈ U, and all power functions are tacitly assumed to denote their principal values. The admissibility condition is required to hold whenever z ∈ U, ζ ∈ ∂U \ E(q), and m ≧ 1. The proof is completed if it can be shown that the admissibility condition for the class Φ[Ω, q, p, λ] is equivalent to the admissibility condition given in Definition 1. The asserted result is then deduced from the resulting subordination.

If f ∈ A(p) satisfies condition (36), then the corresponding subordination holds. Proof. The proof of Theorem 10 is similar to the proof of a known result [2, p. 30, Theorem 2.3d] and is, therefore, omitted.

Proof. Following the same arguments in [2, p. 31, Theorem 2.3e], we deduce that q is a dominant from Theorems 7 and 10. Since q satisfies (45), it is also a solution of (36) and, therefore, q will be dominated by all dominants. Hence q is the best dominant.

Superordination and Sandwich-Type Results

In this section we investigate the dual problem of differential subordination, that is, differential superordination of multivalent functions. For this purpose, the class of admissible functions is given in the following definition.
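Since the subordination definition above has been reconstructed (the Schwarz-function form is standard; the symbols f, g and w are our choices for the extraction-stripped names), a small worked instance may help:

```latex
% Worked example of subordination: take f(z) = z^2 and g(z) = z on U.
% With w(z) = z^2, w is analytic in U, w(0) = 0 and |w(z)| = |z|^2 < 1,
% and f(z) = g(w(z)), so f(z) \prec g(z).
% Since g is univalent, this agrees with the equivalent criterion:
% f(0) = g(0) = 0 and f(U) = U = g(U), hence f(U) \subset g(U).
f(z) = z^{2} \prec g(z) = z \quad (z \in \mathbb{U}), \qquad w(z) = z^{2}.
```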
Definition 16. Let Ω be a set in C and q ∈ H with q′(z) ≠ 0. The class Φ′[Ω, q, p, λ] of admissible functions consists of those functions φ : C³ × Ū → C that satisfy the dual of the admissibility condition in Definition 5, whenever z ∈ U, ζ ∈ ∂U, and m ≧ 1. For convenience, we write Φ′[Ω, q] for this class. This evidently completes the proof of Theorem 17. Proceeding similarly as in Section 2, the following result can be derived as an immediate consequence of Theorem 17.

Definition 5. Let Ω be a set in C and q ∈ F₁ ∩ H. The class Φ[Ω, q, p, λ] of admissible functions consists of those functions φ : C³ × U → C that satisfy the following admissibility condition:
977.2
2014-09-16T00:00:00.000
[ "Mathematics" ]
Nexus between CO2 Emission, Energy Consumption and Economic Growth in ASEAN Countries Plus China Corresponding Author: Ghulam Mustafa, Department of Agribusiness and Bioresource Economics, Universiti Putra Malaysia, Serdang, Selangor, Malaysia. Abstract: This study mainly contributes to testing the Environmental Kuznets Curve (EKC) hypothesis using panel data for the ASEAN (Malaysia, Indonesia, Thailand and Philippines) countries plus China. The empirical focus of the study is to examine the nexus between CO2 emission, energy consumption and economic growth. Using panel data for 1971-2008 and applying panel co-integration techniques, the emergent findings of the study showed a positive relationship between per capita GDP and per capita CO2 emission. Further, we found a positive effect of energy consumption on CO2 emission in the long run. The study findings confirmed the inverted U-shaped EKC hypothesis for the ASEAN-China region after the inclusion of energy consumption; however, the hypothesis did not hold once only the quadratic relationship of per capita income was regressed on CO2 emission. Our long-run Panel Ordinary Least Squares (POLS), Dynamic Ordinary Least Squares (DOLS) and Fully Modified Ordinary Least Squares (FMOLS) estimates also confirmed the inverted U-shaped EKC hypothesis for this sample of ASEAN4 countries plus China. The findings of the study suggest the countries under consideration should focus on increasing per capita income to sustain long-term economic growth and to reduce pollutants and, hence, CO2 emission in the region. Introduction The risk of global climate change resulting from the increase in Greenhouse Gas (GHG) emission presents a profound concern for the current economic growth and welfare of both developed and developing economies. According to an estimate, CO2 emissions have increased more than ten-fold since the start of the global economic and industrial revolution. As a result, atmospheric concentrations of CO2 have increased by 30% (Olivier et al., 2012). These global environmental concerns have motivated the world towards new environmental policies and reforms in order to substantially lower CO2 emissions. Currently, the main focus of sustainable development revolves around shifting development from simple economic growth to environmentally friendly growth. Therefore, it is important to understand whether environmental reforms and economic growth can coincide or not. For this purpose, the Environmental Kuznets Curve (EKC) posits a hypothesized nexus between economic growth and environmental degradation indicators. Basically, there are two research strands in the literature on the relationship between energy consumption, economic growth and environmental pollutants (Ozturk and Acaravci, 2010; Zhang and Cheng, 2009). The first strand, which also validates the EKC hypothesis, focuses on the relationship between environmental pollutants and economic growth. The second strand of the research is related to the nexus between economic growth and energy consumption. The Association of South East Asian Nations (ASEAN) countries' energy consumption increased by nearly 7.5% and their economies grew by 5% a year from 1980 to 1999 (Karki et al., 2005). The ASEAN nations, together with countries such as India and China, have since the mid-1980s proved to be the pivot of global economic growth.
As per the ASEAN Centre for Energy, the region's economic growth drove a significant increase in primary energy consumption, which grew at 3.6% per annum, from 339 Million Tons of Oil Equivalent (MTOE) to 511 MTOE between 1995 and 2007. Among the energy sources consumed in the region, coal had the fastest growth rate (13.0% per annum), followed by natural gas (6.5%), geothermal energy (6.2%), hydro energy (4.8%) and "other energy", which is mostly biomass (0.9%). Over the same period, oil's share declined from 31.4% to 10.6%, while the share of natural gas increased from 16.4% to 21.4%. Moreover, projections suggest that energy consumption in ASEAN would rise to about 583 MTOE in 2020. Thus, the ASEAN nations need as much as US$461 billion in investments in the energy sector from 2001 to 2020 to sustain economic growth. The first strand discussed above, i.e., the EKC, initially proposed by Kuznets (1955), posits an inverted U-shaped relation between various indicators of environmental pollutants and per capita GDP. The EKC hypothesis further shows that initially per capita GDP and carbon emissions exhibit a positive relationship, but after a threshold level of per capita GDP this relation reverses. This type of literature can be seen in multi-country panel data frameworks (Hazama et al., 2011; Apergis and Payne, 2010) as well as in some time series studies using time series econometrics (Begum et al., 2015; Shahbaz et al., 2015; 2013). Most past research has not taken into consideration the various income levels across nations. Hence, the proposed study attempts to fill this research gap by comparing the nexus between per capita CO2 emission, economic growth and energy use while accounting for development level. Previous studies have also made efforts to confirm the EKC hypothesis through various approaches, such as parametric, semi- and nonparametric, fuzzy and linear models. They considered different environmental pollutants (NH4, SO2, CO2, etc.) and numerous kinds of data (primary, time series and panel); however, the true nature of the models remained unclear and the outcomes of these approaches remained mixed. Our study addresses the question of the presence of the EKC hypothesis by utilizing panel data. Determining the presence of the EKC hypothesis for per capita CO2 emission as a global pollutant is vital, in the sense that the global pollutant can be lowered through financial support and international cooperation if developing nations exhibit the U-shaped curve. Hence, the proposed study sheds light on the presence of the EKC for the ASEAN4 (Malaysia, Philippines, Thailand and Indonesia) plus China countries. The existing regional social inequities, combined with increased population and rapid economic growth among the ASEAN nations, have put huge pressures on the regional natural resources. The competition among ASEAN members for limited resources leads to trans-boundary as well as local environmental issues, including depletion of natural resources, diminishing biological diversity, urban environmental degradation and different kinds of trans-boundary pollution (haze, water, land and air). Further, economic competition among ASEAN nations has also created the problems of increased waste and increased consumption of resources, resulting in unsustainable development and economic growth.
Therefore, ASEAN countries are struggling to keep a balance between economic development and the use of environmental resources (ASEAN Cooperation, 2009). Recognizing the significance of environmental cooperation for sustainable development and regional integration, since 1977 ASEAN has had a consensus to cooperate closely to promote environmental cooperation among its member nations. As reflected in the Blueprint for the ASEAN Socio-Cultural Community (ASCC Blueprint) 2009-2015, current ASEAN environmental cooperation focuses on ten priority areas of regional importance. Of these ten priority areas, a clean environment is the most important. For this purpose, the members are promoting Environmentally Sound Technologies (ESTs); cleaner production and environmental labeling are also among the priority zones marked in the "ASEAN-China Environmental Cooperation Action Plan 2011-2013" and the "ASEAN-China Strategy on Environmental Protection Cooperation 2009-2015". The targets of the cooperation are to enhance the use of recycled materials and the efficient use of raw materials, to promote cooperation in cleaner production and environmental labeling, and to facilitate the development and transfer of ESTs. Among others, the core activities include a feasibility study on mutual recognition of environmental labeling, the development of environmentally sound technology pilot projects and, hence, the establishment of an ASEAN-China environmental industry cooperation network. To implement the Action Plan and Strategy, the ASEAN nations and China are now in the process of developing the draft ASEAN-China Cooperation Framework for Environmentally Sound Technology and Industry, to give more detailed mechanisms and guidance for ASEAN-China cooperation on the said subject area (ASEAN Cooperation, 2009). While pollutants like oxides of nitrogen or sulphur oxides may have a more regional effect on the quality of the environment, it has been recognized in the literature that CO2 emission is a key cause of global warming through the greenhouse process. Another reason for studying CO2 emissions is that they have a central role to play in the current debate on environmental protection and sustainable development. A further reason for the inclusion of CO2 emission in this study is that it is directly related to energy consumption, which in turn drives production and consumption. Also, the ASEAN region's highly liberalized economic policies and rich natural resources have attracted many foreign investors, which has made this region one of the fastest growing economies in the world (Yu, 2003). Some of the member countries, i.e., Thailand, Singapore and Malaysia, are greatly involved in the information technology and electronics export business, whereas Indonesia, Malaysia and Brunei export liquid natural gas and crude oil (Karki et al., 2005). Many countries in the Asia-Pacific region have faced serious environmental issues, such as land degradation and deforestation, along the conventional growth path. Hence, this region has begun to investigate a new path, shifting from conventional development patterns to sustainable development, because of these environmental issues (Luukkanen and Kaivo-oja, 2002). ASEAN nations also worry about the negative effect of restricting conventional development on economic growth. Although growth rates and energy resources in the ASEAN countries as a whole are at high levels, there are not enough studies that examine the environment-growth-energy consumption nexus.
Therefore, the link between economic growth and per capita CO2 emissions has very significant implications for environmental and economic policies. Taking the EKC hypothesis, this study investigates the nexus between per capita CO2 emission, economic growth and energy consumption in ASEAN4 plus China. Based on the previous discussion, the proposed study makes several contributions to the current literature. Firstly, by including energy consumption for the first time in this literature, the study analyses the economic growth-environment nexus and hence the EKC hypothesis, which is an important empirical contribution, even though the link between two or three of these variables has been separately investigated in different literatures, such as the environment-growth-population literature, the growth-tourism literature and the growth-energy literature. Secondly, this study focuses on a panel of selected countries from the ASEAN nations plus China, because the selected region has a key role in the energy sector and produces a significant share of gas emissions and world GDP. Thirdly, the ASEAN region is becoming an important player in the world economy and is one of the most dynamic regions of the world. The region also has many clean-environment projects and blueprints, discussed above, with China, which is the second largest economy and energy consumer in the world. This leads us to revisit the EKC for these countries, although the EKC has been tested for these countries individually. Lastly, as methodological contributions, this study uses panel unit root tests (LLC, Breitung, IPS, ADF and PP) and a co-integration test (the Lagrange multiplier bootstrap panel co-integration test) that take the cross-sectional dependence problem into consideration, since Pesaran's CD test (Pesaran, 2004) shows that the disturbances in each panel time series are cross-sectionally dependent. This is important because refusing to recognize the problem of cross-sectional dependence can produce unreliable results and econometrically dangerous consequences. Further, we employed the Fully Modified Ordinary Least Squares (FMOLS) and Dynamic Ordinary Least Squares (DOLS) techniques, which are considered second-generation estimators, to obtain the coefficient estimates. The rest of the paper is organized as follows: the second part reviews the literature, especially time series and panel data studies that confirm or disconfirm the EKC; the third section covers panel unit root tests, co-integration tests and the long-term relationship among the stated variables; the fourth part presents results and discussion; and the fifth part concludes the paper with some policy recommendations. Kuznets (1955) intuited a link between income inequality and per capita income in the form of an inverted U-shaped curve. Simply put, he stated that with the rise in income per capita, income inequality also rises but starts dropping after a threshold level. On the basis of this idea, many authors have put forward a new hypothesis: the existence of an inverted U-shaped relationship between measures of environmental degradation and per capita GDP (Grossman and Krueger, 1991; 1995; Koop, 1998; Panayotou, 2000; Selden and Song, 1994; Stern, 2004; Panayotou, 1993; Taylor and Copeland, 2004; Hettige et al., 1992; Shafik and Bandyopadhyay, 1992). Afterward, this curve was labelled the Environmental Kuznets Curve (EKC).
Literature Review Various studies (e.g., Jalil and Feridun, 2011; Sadorsky, 2010; Jensen, 1996) have examined the factors of the EKC, such as energy consumption, economic growth, CO2 emissions and financial development. Despite a vast pool of research on the EKC worldwide, there is very limited literature (e.g., Begum et al., 2015; Hazama et al., 2011) available on the EKC in the case of ASEAN countries. For instance, Hazama et al. (2011) analyzed the environment-trade interaction in the ASEAN region employing an extended EKC and utilizing panel data. Further, they extended their analysis by including trade with Japan and its relationship with carbon emission in ASEAN countries. Begum et al. (2015) analyzed the EKC by focusing on the emerging impacts of energy consumption, output growth and population on CO2 emission using econometric models for Malaysia. From the Chinese perspective, Dhakal (2009) explored the nexus between CO2 emissions and urbanization in China. In sum, it is not easy to find an inverted U-shaped relation for carbon emission. A number of studies working on CO2 emissions find a significant positive correlation between economic growth and carbon emission, for Russia (Pao and Tsai, 2010), China (Chang, 2010) and Turkey (Ozturk and Acaravci, 2010). On the other hand, various other studies (for example, Apergis and Payne, 2010; Galeotti et al., 2006; Martinez-Zarzoso and Bengochea-Morancho, 2003; Vollebergh et al., 2005) employed traditional panel methods and reported an inverted U-shaped function for CO2 emissions. In addition, the significant role of energy consumption in CO2 emissions should not be neglected while analyzing the economic growth and environmental performance nexus. A sizeable volume of investigation has been allocated towards analyzing economic growth and energy consumption (Ozturk, 2010). Further, the literature has suggested analysing economic growth and energy consumption simultaneously in a single multivariate fashion. Apergis and Payne (2010) adopted this approach to test both nexuses in a single econometric framework. This paper treats the link between energy consumption, economic growth and CO2 emissions in the case of ASEAN4 plus China. The major motivation behind this approach is to focus on testing the Environmental Kuznets Curve hypothesis for the ASEAN-China region for the period 1971-2008. Unfortunately, limited literature is available focusing specifically on the ASEAN4-China region. Given its significance, the main objective of this study is to fill the existing research gap. Materials and Methods This study used standard panel data and econometric modelling for the empirical analysis. First, we illustrate the standard time series procedures in a panel context; then we specify our empirical model for estimation. Unit Root and Stationary Tests For the empirical analysis, we need to test per capita GDP and per capita CO2 emission for unit roots. Panel unit root tests are used because individual time series tests generally have low power when applied to short series, while panel tests increase the power of the contrasts (Perman and Stern, 1999). Also, Levin et al. (1992) showed that the panel approach substantively increases the power of the test compared to time series ADF tests. We can test for unit roots by applying the Breitung, LLC, IPS, ADF and PP panel-type unit root tests.
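The building block of IPS-type panel unit root testing is a country-by-country ADF regression; an illustrative sketch (Python with statsmodels; the function name and the simple averaging step are ours, not the exact LLC/Breitung/IPS implementations):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def panel_adf_summary(panel):
    """Apply an ADF test to each country's series (in levels).

    panel: dict mapping country name -> 1-D numpy array of observations.
    Returns per-country (statistic, p-value) pairs and the average ADF
    t-statistic, which IPS-type tests standardise into a panel statistic.
    """
    stats = {}
    for country, series in panel.items():
        stat, pval, *_ = adfuller(series, autolag="AIC")
        stats[country] = (stat, pval)
    t_bar = np.mean([s for s, _ in stats.values()])
    return stats, t_bar
```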
Thus, if the null hypothesis of non-stationarity cannot be rejected, the variables have to be differenced until they become stationary, that is, until the existence of a unit root is rejected, before proceeding to the empirics of co-integration. Co-Integration Analysis While a number of co-integration tests are documented in the time series literature, there are few co-integration tests developed for panel data, such as the Kao (1999) test, the Pedroni (2000) test and Larsson et al. (2001). Kao proposed an extension of the Engle and Granger (1987) co-integration test from individual time series to a panel. The basic idea is to scrutinize two I(1) series and check whether the residuals of the spurious regression involving these I(1) series are I(0), as in

y_it = α_i + β x_it + e_it.

If this is so, then the series are co-integrated; if the residuals are I(1), then the variables are not co-integrated. A test for the null hypothesis of no co-integration can be based on an ADF-type unit root test applied to the residuals. In contrast, the panel regression model that Pedroni proposed allows for heterogeneous intercepts, trends and slopes:

y_it = α_i + δ_i t + β_i x_it + e_it.

Seven different co-integration statistics are offered to capture the within (pooled) and between (group mean) effects, classified into two categories. Larsson et al. (2001) proposed a likelihood-based (LR) panel test of co-integration rank in heterogeneous panel models, based on the average of the individual rank trace statistics introduced by Johansen (1988). In a Monte Carlo simulation, they investigated the small-sample properties of the standardized LR statistic. The LR test requires a large time-series dimension, and even if the panel has a large cross-sectional dimension, the size of the test will be sternly biased. Specification of the Environmental Kuznets Curve To investigate the co-movement between economic growth and carbon emission, which is the essence of the EKC, and to perform our empirical analysis, we need to estimate the following two models based on the above-mentioned variables for ASEAN4 plus China:

ln CO2_it = β₀ + β₁ ln GDP_it + β₂ (ln GDP_it)² + ε_it,   (2)
ln CO2_it = β₀ + β₁ ln GDP_it + β₂ (ln GDP_it)² + β₃ ln EC_it + ε_it,   (3)

where GDP per capita is used as a measure of economic activity for ASEAN plus China, CO2 is the carbon emission per capita indicating environmental quality in a given time period, and EC is energy consumption. In order to check the existence of the EKC, Equations (2) and (3), which are derived from the relationship between GDP and pollution level, will be used. For the EKC to hold, it is expected that pollution levels escalate with increasing income up to a limit, beyond which pollution levels are likely to fall with higher levels of income. Hence, if the coefficient of GDP is positive and the coefficient of GDP² is negative, this indicates an inverted U-shaped link between GDP per capita and CO2 emission. Data The present study utilizes balanced panel data for ASEAN4-China for the time period 1971-2008 on CO2 emissions, real GDP and energy use. CO2 emissions (CO2) are represented by carbon dioxide emissions measured in metric tons per capita, while (real) GDP per capita (GDP) is a measure of economic development or level of income. GDP is in constant 2000 US dollars. We utilize energy use (kg of oil equivalent per capita) as a measure of Energy Consumption (EC). Data on CO2 emissions, real GDP and energy use are sourced from the World Development Indicators (WDI). Starting from 1971 is important because in this era the use of technology increased following the Green Revolution of the 1960s, and the full use of technology initiated environmental problems.
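The pooled version of models (2)-(3) can be estimated with ordinary least squares, and the implied EKC turning point recovered from the quadratic coefficients; a minimal sketch (illustrative Python, without the FMOLS/DOLS corrections the paper also applies):

```python
import numpy as np

def ekc_pooled_ols(lco2, lgdp, lec=None):
    """Pooled OLS for the EKC models (2)-(3).

    lco2, lgdp, lec: 1-D arrays of log CO2, log GDP and (optionally) log
    energy consumption, stacked over countries and years.
    Returns coefficients and the turning point -beta1 / (2 * beta2),
    expressed in log GDP per capita.
    """
    cols = [np.ones_like(lgdp), lgdp, lgdp ** 2]
    if lec is not None:
        cols.append(lec)
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, lco2, rcond=None)
    turning_point = -beta[1] / (2.0 * beta[2])
    return beta, turning_point
```

An inverted U requires beta[1] > 0 and beta[2] < 0, in which case the turning point is the income level at which emissions begin to fall.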
Results of the Unit Root and Panel Co-Integration Tests In this study, we applied the Levin-Lin-Chu (LLC), Breitung, Im-Pesaran-Shin (IPS), ADF and PP tests. The results are given in Tables 2 and 3. The LLC, Breitung, IPS and ADF statistics for the levels of per capita GDP, energy use per capita and carbon emission per capita (measured in kilotons) are unable to reject the null hypothesis of a unit root. However, once we took the first difference, I(1), all variables became stationary. After making the series stationary, we can proceed to the panel co-integration tests. We also present descriptive statistics of the variables, as shown in Table 1. The means of CO2, GDP and energy use were found to be 0.4023, 6.8541 and 6.5358, respectively. Furthermore, to check the long-term co-movement between the given variables, we can use the Kao and/or Pedroni co-integration tests. We applied the Kao co-integration test and rejected the null of no co-integration at the 10% level of significance when the association between GDP and carbon emission was checked. Further, once we applied the Pedroni co-integration test including all three variables with no intercept and no trend, the results showed that four out of the seven statistics are statistically significant, indicating that co-integration exists among the variables. Results of the Empirical Models In the next step, we estimated the long-run coefficients of the panel model. First, we estimated the pooled OLS model and selected an appropriate model between pooled OLS and the random effects model. The chi-square value from the Breusch and Pagan LM test was found to be highly significant, favoring the random effects model. Secondly, we estimated the fixed effects model and, on the basis of the Hausman test, found that the fixed effects model is more appropriate than the random effects model, because the chi-square value is highly significant, rejecting the null hypothesis of Cov(α_i, X_it) = 0. Thirdly, the diagnostic tests showed that the model suffered from serial correlation and heteroscedasticity. To correct for these problems, we applied robust standard errors to adjust the standard errors of the model in order to obtain unbiased inference. After all of the above analysis and diagnostic tests, we estimated the long-run equations (2) and (3); the estimated coefficients are reported in Table 3, with t-statistics in parentheses. The values show that the coefficients of GDP per capita and its square are not statistically significant at the 5 and 1% levels in the first model, which disconfirms the EKC; this means the countries still need to improve per capita GDP. In the second model, however, significant relationships between energy consumption, GDP per capita and CO2 emissions exist. Importantly, the positive coefficient of energy consumption indicates a sizeable effect of energy consumption on pollution. This result indicates that a 1% increase in energy use increases CO2 emissions per capita by 1.10% in the ASEAN4 plus China region. Thus, energy use leads towards environmental degradation. This can be simply elucidated by the fact that when GDP is low, environmental concern is overshadowed by the pursuit of growth. This is common in emerging and developing countries, for which growth is the main objective of economic policy. However, once income increases, there may follow a second stage characterized by a relatively slower degradation of the environment. This can be illustrated by the realization of the environmental issue by middle-income countries. This kind of attentiveness can translate into financial efforts allocated to grants or to the creation of institutions, for the cleaning of air or water, that handle these cases.
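The fixed- versus random-effects choice described above rests on the Hausman statistic; a generic sketch (illustrative Python, given estimates and covariance matrices from any panel estimator):

```python
import numpy as np
from scipy.stats import chi2

def hausman(b_fe, b_re, V_fe, V_re):
    """Hausman test comparing fixed-effects and random-effects estimates.

    b_fe, b_re: coefficient vectors; V_fe, V_re: their covariance matrices.
    Under H0 (random effects consistent), H ~ chi-square with k degrees
    of freedom; a small p-value favors the fixed-effects model.
    """
    d = b_fe - b_re
    H = d @ np.linalg.inv(V_fe - V_re) @ d
    return H, chi2.sf(H, len(d))
```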
The objective of this study was to investigate the existence of the environmental Kuznets curve for the given countries, using panel unit root tests, panel co-integration, and dynamic ordinary least squares as well as fully modified ordinary least squares. First, this study conducted the panel unit root and panel co-integration tests to analyze the long-run co-movement between CO2 emission and economic growth. After that, this study used Dynamic Ordinary Least Squares (DOLS) and Fully Modified Ordinary Least Squares (FMOLS), along with the traditional Panel Ordinary Least Squares (POLS), to analyze whether economic growth and energy use had an impact on CO2 emission in the selected countries. The results are illustrated in Table 4 for the two models corresponding to the environmental Kuznets curve. The results of Panel Ordinary Least Squares (POLS) for the first model (column 2) and for the second model (column 5) showed that economic growth has a positive and significant impact on CO2 emission, whereas the square of GDP per capita significantly negatively affects CO2 emission, indicating the existence of the environmental Kuznets curve in these countries. As far as the impact of energy use on CO2 emission is concerned, the estimated results report a positive impact of energy use on CO2 emission, as indicated by OLS and FMOLS. Further, the estimated results of DOLS and FMOLS also supported the findings of panel OLS, such that economic growth (and its square) positively (negatively) affects CO2 emission, while energy use has a positive impact on CO2 emission. Overall, it is concluded that economic growth positively affects CO2 emission, while further improvement in economic development negatively affects CO2 emission. Further, energy use contributes positively towards CO2 emission in these countries. Knowing this, governments or environmental agencies can impose taxes according to the "polluter pays" principle, because the environment is considered a luxury good. Whatever its form, there should be efforts to decrease environmental degradation, as is apparent from the estimated equation above. Overall, there is a positive association between CO2 emissions and per capita real GDP and a negative relationship between CO2 emissions and the square of per capita real GDP, confirming the quadratic form and hence the EKC for the ASEAN4 plus one countries: a one percent increase in per capita real GDP increases per capita CO2 emissions by 1.58% in the ASEAN4 plus China countries. Moreover, the findings of the study support the Environmental Kuznets Curve hypothesis in the ASEAN4 plus one region: the level of CO2 emissions first increases with income, stabilizes and then declines. Thus, there appears to be an inverted U-shaped association between per capita real GDP and per capita CO2 emissions in the ASEAN4 plus one region once energy consumption is included in the model. Conclusion The question of the sustainability of growth in ASEAN4 plus China has gained much attention from policy makers, which motivated us to study the nexus between economic growth, energy use and environmental pollutants in the ASEAN4 plus China region. This study had two main objectives.
Firstly, the existence of the EKC was investigated for the ASEAN4 plus China region in terms of per capita CO2 emissions. Secondly, panel co-integration techniques were utilized to explore the nexus between real GDP per capita, energy consumption and per capita CO2 emissions for the 4 ASEAN nations plus China from 1971 to 2008. In this study, we confirmed the EKC for the stated countries. In order to explain the dynamics of the inverted U-shaped curve, three theoretical explanations can be provided: firstly, growth shifts the tastes of economic agents towards more environmentally friendly production processes and products; secondly, economic growth supports the build-up of capacities, institutions and organizations to deal with environmental issues; and thirdly, technological change and innovation lead to the use of more environmentally friendly processes and technologies following market opportunities.

This study showed that energy use has a significantly positive effect on per capita CO2 emissions in the long run. Energy consumption is likely to be a crucial factor affecting the quality of the environment if a country's income level is not high enough for it to care about the environment. Moreover, the countries' economic development and energy usage had a substantial effect on carbon dioxide emissions. Importantly, the study showed that real GDP per capita did not exhibit a quadratic link with per capita CO2 emissions in the first model, as shown by the insignificance of its results; taken together, the estimates of the first model did not show the inverted U-shaped pattern associated with the EKC hypothesis for the ASEAN4 plus China region. However, the EKC did hold after the incorporation of energy usage, which has a positive and significant effect on CO2 emissions. Hence, we recommend that the countries under consideration increase per capita income and impose taxes on energy consumption for sustainable development and to reduce pollutants.

Muhammad Khalid Bashir: provided proofreading and paraphrasing, helped in the design of the article, and assisted in the analysis and interpretation.

Ethics

This article is original and has never been published before. It was an individual class assignment for advanced econometrics and has not been published in any journal or elsewhere.
Estimation of Daily Reproduction Numbers during the COVID-19 Outbreak : (1) Background: The estimation of daily reproduction numbers throughout the contagiousness period is rarely considered, and only their sum R 0 is calculated to quantify the contagiousness level of an infectious disease. (2) Methods: We provide the equation of the discrete dynamics of the epidemic’s growth and obtain an estimation of the daily reproduction numbers by using a deconvolution technique on a series of new COVID-19 cases. (3) Results: We provide both simulation results and estimations for several countries and waves of the COVID-19 outbreak. (4) Discussion: We discuss the role of noise on the stability of the epidemic’s dynamics. (5) Conclusions: We consider the possibility of improving the estimation of the distribution of daily reproduction numbers during the contagiousness period by taking into account the heterogeneity due to several host age classes. Overview and Literature Review Following the severe acute respiratory syndrome outbreak caused by coronavirus SARS CoV-1 in 2002 [1] and the Middle East Respiratory Syndrome outbreak caused by coronavirus MERS-CoV in 2012 [2], the COVID-19 disease caused by coronavirus SARS CoV-2 is the third coronavirus outbreak to occur in the past two decades. Human coronaviruses, including 229E, OC43, NL63 and HKU1, are a group of viruses that cause a significant percentage of all common colds in humans [3]. SARS CoV-2 can be transmitted from person to person by respiratory droplets and through contact and fomites. Therefore, the severity of disease symptoms, such as cough and sputum, and their viral load, are often the most important factors in the virus's ability to spread, and these factors can change rapidly within only a few days during an individual's period of contagiousness. This ability to spread is quantified by the basic reproduction number R 0 (also called the average reproductive rate), a classical epidemiologic parameter that describes the transmissibility of an infectious disease and is equal to the number of susceptible individuals that an infectious individual can transmit the disease to during his contagiousness period. For contagious diseases, the transmissibility is not a biological constant: it is affected by numerous factors, including endogenous factors, such as the concentration of the virus in aerosols emitted by the patient (variable during his contagiousness period), and exogenous factors, such as geo-climatic, demographic, socio-behavioral and economic factors governing pathogen transmission (variable during the outbreak's history) [4][5][6][7][8]. Due to these exogenous factors, R 0 might change seasonally, but these factor variations are not significant if a very short period of time is considered. R 0 depends also on endogenous factors such as the viral load of the infectious individuals during their contagiousness period, and the variations in this viral load [9][10][11][12][13][14][15] must be considered in both theoretical and applied studies on the COVID-19 outbreak, in which the authors estimate a unique reproduction number R 0 linked to the Malthusian growth parameter of the exponential phase of the epidemic, during which R 0 is greater than 1 (Figure 1). 
The corresponding model has been examined in depth, because it is useful and important for various applications, but the distribution of the daily reproduction number R_j at day j of an individual's contagiousness period is rarely considered within a stochastic framework [16][17][18][19][20].

Figure 1. Spread of an epidemic disease from the first infectious "patient zero" (in red), located at the centre of its influence sphere comprising the successive generations of infected individuals, for the same value of the reproduction number R_0 = 3, with a deterministic dynamic (left) and a stochastic one (right), with standard deviation σ of the uniform distribution on an interval centred on R_0 and with a randomly variable time interval i between infectious generations (after [16]).

We therefore defined a partial reproduction number for each day of an individual's contagiousness period and, assuming initially that this number is the same for all individuals, we obtained the evolution equation for the number of new daily cases in a population. Assuming that the distribution of these partial reproduction numbers (referred to as daily for simplicity) is subject to fluctuations, we calculated the consequences for their estimation, and we estimated them for a large number of countries, taking a contagiousness duration of first 3 and then 7 days. When this distribution is considered, it is possible to calculate its entropy, as a parameter quantifying its uniformity, and to simulate the dynamics of the infectious disease either with a Markovian model, such as that defined in Delbrück's approach [17], or with a classical discrete or ODE SIR deterministic model, dividing the population into the subpopulations S (individuals susceptible to but not yet infected with the disease), I (infectious individuals) and R (individuals who have recovered from the disease and now have immunity to it). In the Markovian case, R_0 can be calculated from the evolutionary entropy defined by L. Demetrius as the Kolmogorov-Sinai entropy of the corresponding random process [18], which measures the stability of the invariant measure. In the deterministic case, R_0 corresponds to the Malthusian parameter quantifying the exponential growth, and the stability of the asymptotic steady state depends on the subdominant eigenvalue [19,20].

Calculation of R_0

In epidemiology, there are essentially two broad ways to calculate R_0, corresponding to individual-level and to population-level modelling. At the individual level, if we suppose the susceptible population size constant (a hypothesis valid during the exponential phase of an epidemic), the daily reproduction rates of an individual are typically non-constant over his contagiousness period, and the calculations we present in the following define a new method for estimating R_0 as the sum of the daily reproduction rates. This new approach allows a clearer view of the respective influences on the transmission rate of endogenous factors (depending on the level of immunological defenses of an individual) and exogenous factors (depending on environmental conditions).

Materials and Methods

The methodology chosen starts from an attempt to reconstruct an epidemic dynamic from the knowledge of the number R_ikj of people infected at day j by a given infectious individual i during the kth day of his period of contagiousness of length r.
By summing over the new infectious individuals X_{j−k} present on day j − k, the day on which their contagiousness started, we find that the number of new infected people on day j is equal to:

X_j = Σ_{k=1..r} Σ_{i=1..X_{j−k}} R_ikj (1)

We will assume in the following that R_ikj is the same, equal to R_k, for all individuals i and all days j, and thus depends only on the day k of the contagiousness period. Then, we have:

X_j = Σ_{k=1..r} R_k X_{j−k} (2)

The convolution Equation (2) is the basis of our modelling of the epidemic dynamics.

The Contagion Mechanism from a First Infectious Case Zero

Let us suppose that the secondary infected individuals are recruited from the centre of the sphere of influence of an infectious case zero, and that the next infected individuals remain on a sphere centred on case 0, which simply widens its radius on day 2. Therefore, the susceptible individuals C(j) that each infectious individual of day j − 1 can recruit lie on a part of the sphere of influence of case 0 reached at day j (rectangles in Figure 2).

Figure 2. Spread of an epidemic disease from a first infectious case 0 (located at the centre of its influence sphere) progressively infecting its neighbours in some regions (rectangles) on successive spheres.

The Biphasic Pattern of the Virulence Curve of Coronaviruses

In most cases, the clinical course of patients with seasonal influenza shows a biphasic occurrence of symptoms with two distinct peaks: patients have a classic influenza illness, followed by an improvement period and a recurrence of the symptoms [11]. Influenza RNA virus shedding (the time during which a person might be contagious to another person) increases sharply one half to one day after infection, peaks on day 2 and persists for an average total duration of 4.5 days, ranging between 3 and 6 days, which explains why we choose these extreme values for the contagiousness duration in the following, i.e., either 3 or 6 days, depending on the positivity of the estimated daily reproduction numbers. It is common to describe this biphasic evolution of influenza clinically: after an incubation of one day, there is a high fever (39-40 °C), then a drop in temperature before it rises again, hence the term "V" fever. The other symptoms, such as coughing, often also show this improvement on the second day of the flu attack: after a first feverish rise (39-39.5 °C), the temperature drops to 38 °C on the second day, then rises before disappearing on the 5th day, the fever being accompanied by respiratory signs (coughing, sneezing, clear rhinorrhea, etc.). Looking at the shape of the virulence curves observed in coronavirus patients [12][13][14][15][16], we often see this biphasic pattern.

Relationships between Markovian and ODE SIR Approaches

In the following, we suppose that the susceptible population size remains constant, a hypothesis valid during the exponential phase of epidemic waves. The Markovian stochastic and the ODE deterministic approaches are linked by a common background consisting of the birth-and-death process approach used in the kinetics of molecular reactions by Delbrück [17], then in dynamical systems theory by numerous authors [18][19][20][21][22][23], namely in the modelling of epidemic spread during exponential growth. In the ODE approach, the Malthusian parameter is the dominant eigenvalue, and its equivalent in the Markovian approach is the Kolmogorov-Sinai entropy (called evolutionary entropy in [24][25][26]).

First Method for Obtaining the SIR Equation from a Deterministic Discrete Mechanism

Let us suppose the model is deterministic and denote by X_j the number of new infected cases at day j (j ≥ 1), and by R_k (k = 1, ..., r) the daily reproduction number at day k of the contagiousness period of length r, the same for all infectious individuals. Under this assumption we obtained Equation (2) above, which says that the X_{j−k} individuals newly infected at day j − k give R_k X_{j−k} new infections on day j, throughout a contagiousness period of r days, the R_k's being possibly different or zero. For example, if r = 3, the number X_5 of new cases at day 5 obeys X_5 = R_1 X_4 + R_2 X_3 + R_3 X_2, which means that the new cases at day 4 contribute to the new cases at day 5 through the term R_1 X_4, R_1 being the reproduction number on the first day of contagiousness of the individuals newly infected at day 4. In matrix form, we obtain:

X = M R, (3)

where X = (X_j, ..., X_{j−r+1}) and R = (R_1, ..., R_r) are r-dimensional vectors and M is the r × r matrix whose successive rows contain the lagged new-case counts (Equation (4)). It is easy to show that, if X_0 = 1 and r = 5 (the estimated length of the contagiousness period for COVID-19 [12][13][14][15][16][17][18][19][20][21]), the X_j's can be computed explicitly. The length r of the contagiousness period can be estimated from the ARIMA series of the stationary random variables Y_j, equal to the X_j's with their trend removed, by considering the length of the interval over which the auto-correlation function remains above a certain threshold, e.g., 0.1 [4]. For example, assuming r = 3, R_1 = a, R_2 = b and R_3 = c, we obtain:

X_0 = 1, X_1 = a, X_2 = a² + b + c, X_3 = a³ + 2ab, X_4 = a⁴ + 3a²b + b² + 2ac, X_5 = a⁵ + 4a³b + 3ab² + 3a²c + 2bc, X_6 = a⁶ + 5a⁴b + 4a³c + 6a²b² + 6abc + b³ + c², X_7 = a⁷ + 6a⁵b + 5a⁴c + 10a³b² + 12a²bc + 4ab³ + 3b²c + 3ac²

If R_1 and R_2 are equal, respectively, to a and b, and if a = b = R/2 and c = 0, then X_5 behaves like:

X_5 = R⁵/32 + R⁴/4 + 3R³/8

If R = 2, {X_j}_{j=1,∞} is the Fibonacci sequence and, more generally, for R > 0, a generalized Fibonacci sequence. Let us now suppose that b = c = 0 and that a depends on the day j through C(j), where C(j) represents the number of susceptible individuals that one contagious individual can meet at day j. If the infected individuals (all supposed contagious) at day j are denoted by I_j, the new infections at day j + 1 can be written as proportional to C(j) I_j. Let us suppose, as in Section 2.1, that the first infectious individual 0 recruits the secondary infected individuals from the centre of its sphere of influence, these individuals remaining in this sphere, and that the susceptible individuals recruited by the I_j infectious individuals present at day j are located on a part of the sphere centred on the first infectious individual 0, obtained by widening its radius (Figure 2). Then, we can consider that the function C(j) increases and then saturates, due to the fact that an infectious individual can meet only a limited number of susceptible individuals as the sphere grows. We can propose for C(j) the functional form C(j) = S(j)/(c + S(j)), where S(j) is the number of susceptible individuals at day j. Taking into account the mortality rate µ, we can then write a discrete balance equation for S and I (Equation (9)). This discrete version of epidemic modelling is used much less than the classic continuous version, corresponding to the ODE SIR model, with which we will show a natural link.
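Before moving on, here is a quick numerical check of the closed-form example above, a minimal sketch that verifies the Fibonacci claim and the X_5 formula for a = b = R/2 and c = 0 (variable names are ours, not from the paper):

```python
# Check of the discrete dynamics X_j = sum_k R_k X_{j-k} for r = 3 with
# a = b = R/2 and c = 0: the sequence is (generalized) Fibonacci, and
# X_5 = R**5/32 + R**4/4 + 3*R**3/8 as stated in the text.
def run(R_coeffs, n):
    X = [1.0]                                  # X_0 = 1
    for j in range(1, n + 1):
        X.append(sum(R_coeffs[k] * X[j - 1 - k]
                     for k in range(len(R_coeffs)) if j - 1 - k >= 0))
    return X

R = 2.0
X = run([R / 2, R / 2, 0.0], 5)                # a = b = 1, c = 0
assert X == [1, 1, 2, 3, 5, 8]                 # Fibonacci when R = 2
assert abs(X[5] - (R**5 / 32 + R**4 / 4 + 3 * R**3 / 8)) < 1e-12
```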
This natural link appears as follows: the discrete Equation (9) is close to the SIR Equation (10) when the value of c is greater than that of S.

Second Method for Obtaining the SIR Equation from a Stochastic Discrete Mechanism

Another way to derive the SIR equation is the probabilistic approach, which comes from the microscopic equation of molecular shocks by Delbrück [17] and corresponds to a classical birth-and-death process. If at most one event (with rate of contact ν, birth f, death µ or recovery ρ) occurs in the interval (t, t + dt), and if births compensate deaths, leaving the total size N of the population constant, we can write the master equation of the process. If P_k(t) denotes Probability({S(t) = k, I(t) = N − k}), then, multiplying this equation by s^k and summing over k, we obtain the characteristic function of the random variable S. If births do not compensate deaths, the transition probability Probability({S(t + dt) = k, I(t + dt) = j}) is expressed in terms of P(S(t) = k, I(t) = j) and of the event rates (Equation (12)). If S and I are supposed to be independent and if the coefficients ν, f, µ and ρ are sufficiently small, S and I are Poisson random variables [27], whose expectations E(S) and E(I) verify a system of ordinary differential equations leading to the SIR Equation (14) for the variables S, I and R considered as deterministic.

If R_0 denotes the basic reproduction number (or average transmission rate) in a given population, we can estimate the distribution V (whose coefficients are denoted V_j = R_j/R_0) of the daily reproduction numbers along the contagiousness period of an individual, by remarking that the number X_j of new infectious cases at day j is equal to X_j = I_j − I_{j−1}, where I_j is the cumulated number of infectious at day j, and verifies the convolution equation (equivalent to Equation (2)):

X_j = Σ_{k=1..r} R_k X_{j−k},

where r is the duration of the contagiousness period, estimated by 1/(ρ + µ), ρ being the recovery rate and µ the death rate in the SIR Equation (14). r and S can be considered constant during the exponential phases of the pandemic, and we can assume that the distribution V is also constant; V can then be estimated by solving the linear system (equivalent to Equation (3)):

X = M R, (16)

where M is given by Equation (4). Equation (16) can be solved numerically if the pandemic is observed during a time greater than 1/(ρ + µ). We first demonstrate an example of how the matrix M can be repeatedly calculated for consecutive periods of length equal to that of the contagiousness period (supposed constant during the outbreak), giving the matrix series M_1, M_2, ... Following Equation (4), we put the values of the X_i's into two such matrices, with r = 3, for two periods, the first from day 1 to day 3 and the second from day 4 to day 6; after Equation (6), M_1 and M_2 can then be calculated from the R_j's. Additionally, from Equation (2), if, for instance, j = 8 and r = 3, then

X_8 = R_1 X_7 + R_2 X_6 + R_3 X_5,

which means that the new cases on the 8th day depend on the new cases detected on the previous days 7, 6 and 5, supposed to lie within a contagiousness period of 3 days. Let us now suppose that initial R_j's are chosen on a contagiousness period of 3 days; Equation (16) then gives the R_j's back and allows the calculation of X_j = Σ_{k=1,3} R_k X_{j−k}.
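To make the toy calculation concrete: the series X_1..X_6 = 1, 2, 5, 14, 37, 98 used just below is reproduced exactly by the recursion with initial daily numbers (2, 1, 2); these values are our reading of the initial R_j's, elided in the source, and the sketch below (with hypothetical variable names) verifies the forward generation.

```python
# Forward generation of new cases via X_j = sum_k R_k X_{j-k}, r = 3;
# R = (2, 1, 2) reproduces the series 1, 2, 5, 14, 37, 98 quoted below
# (values before day 1 are taken as zero).
R = [2.0, 1.0, 2.0]
X = [1.0]                                   # X_1 = 1
for j in range(1, 6):
    X.append(sum(R[k] * X[j - 1 - k] for k in range(3) if j - 1 - k >= 0))
print(X)                                    # -> [1.0, 2.0, 5.0, 14.0, 37.0, 98.0]
```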
The inverse of M is denoted by M⁻¹ and verifies R = M⁻¹X, where X = (X_6, X_5, X_4). With X_1 = 1, X_2 = 2, X_3 = 5, X_4 = 14, X_5 = 37 and X_6 = 98, building M from the lagged values and deconvoluting gives the resulting R_j's, and we obtain for the resulting distribution of daily reproduction numbers the exact replica of the initial distribution. We obtain the same result by replacing M_1 by the matrix M_2.

Distribution of the Daily Reproduction Numbers R_j's When They Are Supposed to Be Random

Let us consider a stochastic version of the deterministic toy model corresponding to Equation (17), by introducing an increasing noise on the R_j's, e.g., by randomly choosing their values following a uniform distribution on the three intervals [2 − a, 2 + a], [1 − a/2, 1 + a/2] and [2 − a, 2 + a] (so as to obtain a U-shape), with increasing values of a from 0.1 to 1, in order to see when the deconvolution gives negative resulting R_j's, the average of their sum R_0 being conserved, when the random choice of the values of the R_j's at each generation is repeated, following the stochastic version of Equation (2): X_j = Σ_{k=1..r} (R_k + ε_k) X_{j−k}, where r is the contagiousness period duration and ε_k is a noise perturbing R_k, whose distribution is chosen uniform on the interval [0, 2a] for k = 1, 3, and on [0, a] for k = 2. This choice is arbitrary; the main purpose of the randomization is to show that the deconvolution can give negative R_k's, as observed for increasing values of a from 0.1 to 1, with explicit calculations for three consecutive periods, from day 1 to day 3, from day 4 to day 6, and from day 7 to day 9.

For each random choice of the values of the daily reproduction numbers R_j, we can calculate a matrix M_1 corresponding to Equation (3). Its inversion into M_1⁻¹ makes it possible to solve the deconvolution problem of Equation (2), that is to say, to obtain new R_j's as a function of the observed X_k's. We can then calculate a new matrix M_2 from these new R_j's and thus continue, during an epidemic, the estimation of the daily reproduction numbers from the successive matrices M_1, M_2, ... and the observed X_k's.

1. For a = 0.1, let us randomly and uniformly choose the initial distribution of the daily reproduction numbers on the above intervals. The new cases are X_1 = 1, X_2 = 1, X_3 = 2.355, X_4 = 4.81, X_5 = 9.101, X_6 = 18.209, and by deconvoluting we obtain the resulting R_j's equal to R_1 = 1, R_2 = 1.355, R_3 = 1.1, i.e., exactly the initial distribution. Let us now consider new initial R_j's: R_1 = 1, R_2 = 1, R_3 = 1. That gives a new matrix M_2, with new X_7 and X_8 calculated from the new initial R_j's by using the former values of X_6, ..., X_2: X_7 = X_6 + X_5 + X_4. More precise simulation results are given in Table 1, which summarizes computations made for random choices of the R_j distributions, for a = 0.1 and a = 1, until time 20. These simulations show a great sensitivity to noise, but a qualitative conservation of the U-shaped distribution along the contagiousness period of individuals. More precisely, because of the presence of noise on the R_j's, the deconvolution applied to the data cannot always return positive R_j's, which explains the presence of negative values in the empirical examples, as in the theoretical noised examples.
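The inverse operation on the same series, as a minimal sketch: stacking the lagged counts into M and solving R = M⁻¹X recovers (2, 1, 2) exactly, confirming the reading given above.

```python
import numpy as np

# Deconvolution R = M^{-1} X on the series X_1..X_6 = 1, 2, 5, 14, 37, 98:
# one row of lagged counts per day j = 4, 5, 6.
X = {1: 1, 2: 2, 3: 5, 4: 14, 5: 37, 6: 98}
M = np.array([[X[j - 1], X[j - 2], X[j - 3]] for j in (4, 5, 6)], float)
rhs = np.array([X[4], X[5], X[6]], float)
print(np.linalg.solve(M, rhs))               # -> [2. 1. 2.]
```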
A way to solve the problem of these negative values could be to suppose that the noise is stationary during the whole growth period of a wave, then calculate the R_j's for all running time windows of length equal to the contagiousness duration, and finally take the mean of the R_j's over these windows. As this stationarity hypothesis is not widely accepted, we prefer to keep the negative values and focus on the shape of the distribution of the R_j's.

France

Figure 3 gives the effective transmission rates R_e calculated between 20 and 25 October 2020, just before the second lockdown in France [28,29]. As the second wave of the epidemic was still in its exponential phase, it is more convenient (i) to consider the distribution of the marginal daily reproduction numbers and (ii) to calculate its entropy and simulate the epidemic dynamics using a Markovian model [4]. By using the daily new infected cases given in [30], we can calculate, as in Section 3.1, the inverse matrix M⁻¹ for the period from 20 to 25 October 2020 (exponential phase of the second wave), by choosing 3 days for the duration of the contagiousness period and the following raw data for new infected cases: 20,468 for 20 October, then 26,676, 41,622, 42,032, 45,422 and 52,010 up to 25 October. Then, for France between 15 February and 27 October 2020, we obtain the daily reproduction numbers given in Figure 3, with a U-shape as observed for influenza viruses. The effective reproduction number is equal to R_0 ≈ 1.174, a value close to that calculated directly (Figure 3).

Chile

For Chile, we use the daily new cases between 1 November and 12 November 2020 [30] with a 6-day contagiousness period.

Figure 4. Top: estimation of the effective reproduction numbers R_e for 1 November and 12 November 2020 (in green, with their 95% confidence intervals) [28,29]. Bottom left: daily new cases in Chile between 1 November and 12 November [30]. Bottom right: U-shape of the evolution of the daily R_j's along the 6-day infectious period of an individual.

After deconvolution, the effective reproduction number is equal to R_0 ≈ 1.011, a value close to that calculated directly, with a maximal daily reproduction number on the last day of the contagiousness period. Due to the negativity of R_1, we cannot derive the distribution V and therefore cannot calculate its entropy. As entropy is an indicator of non-uniformity, an alternative could be to calculate it by shifting the values of the R_j's upwards by the value of their minimum. The quasi-endemic situation in Chile since the end of August, which corresponds to the increase of temperature and drought at this period of the year [4], gives a cyclicity in the occurrence of new cases whose period equals the length of the contagiousness period of about 6 days, analogous to the cyclic phenomenon observed in the simulated stochastic data of Section 3.2, with a similar U-shaped distribution of the R_j's.

Russia

By using the daily new infected cases given in [30], we can calculate M⁻¹ for the period from 30 September to 5 October 2020 (exponential phase of the second wave), by choosing 3 days for the duration of the contagiousness period and the corresponding raw data for new infected cases. The effective reproduction number is equal to R_0 ≈ 1.073, a value close to that calculated directly, with a maximal daily reproduction number on the first day of the contagiousness period. Due to the negativity of R_2, we cannot derive the distribution V and therefore cannot calculate its entropy.
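The same computation can be reproduced for the French series quoted above; a minimal sketch (array name ours), whose recovered R_j's sum to the value the text reports as R_0 ≈ 1.174:

```python
import numpy as np

# Deconvolution on the raw French counts for 20-25 October 2020 with a
# 3-day contagiousness period: solve the linear system row by row for
# days 4, 5, 6 of the window.
X = [20468, 26676, 41622, 42032, 45422, 52010]   # 20 Oct .. 25 Oct
M = np.array([[X[j - 1], X[j - 2], X[j - 3]] for j in (3, 4, 5)], float)
rhs = np.array([X[3], X[4], X[5]], float)
R = np.linalg.solve(M, rhs)
print(R, R.sum())                                # sum ~ effective R_0
```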
The period studied in Russia corresponds to a locally slow increase of new infected cases at the start of the second wave, which looks like a staircase succession of slightly inclined 4-day plateaus, each followed by a step: at the beginning of October, new tightened restrictions (but avoiding lockdown) appeared in Russia [31], which could explain the change in the slope observed in the new daily cases [30].

Nigeria

By using the daily new infected cases given in [30], we can calculate M⁻¹ for the period from 5 November to 10 November (endemic phase), by choosing 3 days for the duration of the contagiousness period and the corresponding raw data for new infected cases. The effective reproduction number is equal to R_0 ≈ 1.129, a value close to that calculated directly, with a maximal daily reproduction number on the last day of the contagiousness period. The distribution V equals (0.143, 0.342, 0.515) and its entropy H is equal to H = −Σ_{k=1..r} V_k Log(V_k) = 0.29 + 0.37 + 0.34 = 1. In Appendix C, Table A1 gives the shape of the R_j distribution for 194 countries.

Weekly Patterns in Daily Infected Cases

Daily new infected case counts are strongly affected by the day of the week, such that case numbers are lowest at the start of the week and increase afterwards. This pattern is observed at the world level, as well as at the level of almost every single country or USA state. Hence, in order to estimate biologically meaningful reproduction numbers, clean of the weekly patterns due to administrative constraints, analyses have to be restricted to specific periods shorter than a week, or to the rare occasions when the patterns escape the administrative constraints. This weekly phenomenon occurs during the exponential increase as well as the decrease phases of the pandemic, and during endemic periods in the numbers of daily cases (Figure 6). In addition, the record of daily new infected cases is discontinuous for many countries/regions, which frequently publish, on Monday or Tuesday, a cumulative count for that day and the weekend days. For example, Sweden typically publishes only four numbers over one week, the one on Tuesday cumulating the cases for Saturday, Sunday and the two first weekdays. Discontinuity in the records further limits the availability of data enabling detailed analyses of daily reproduction numbers and can be considered as an extreme weekday effect on new case records due to various administrative constraints.

We calculated Pearson correlation coefficients r between a running window of 20 consecutive days of daily new case numbers and a running window of identical duration, for different lags between the two windows. These Pearson correlation coefficients typically peak for a lag of seven days between the two running windows. The means of these correlations for windows starting on each day of the week, from Tuesday (whose data make up for the weekend underestimation) to Monday, are: 0.571, 0.514 (0.081), 0.383 (0.00008), 0.347 (0.000003), 0.381 (0.000006), 0.468 (0.000444) and 0.558 (0.03916), with, in parentheses, the p-values of one-tailed paired t-tests showing that the correlations observed with running windows starting on Tuesday are higher than the others (see also the supplementary material). This could reflect a biological phenomenon of seven infection days. However, examination of the frequency distributions of the lags maximizing r reveals, besides the median lag at 7 days, local maxima at multiples of 7 (14, 21, 28, 35, etc.). About 50 percent of all local maxima in r involve lags that are multiples of seven (seven included).
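A sketch of the running-window analysis just described, assuming a NumPy array `cases` of daily new case counts (name ours): the Pearson r between a 20-day window and the same-length window lagged by 1 to 30 days, whose maximizing lag is the quantity whose distribution peaks at 7.

```python
import numpy as np

def best_lag(cases: np.ndarray, start: int, width: int = 20, max_lag: int = 30) -> int:
    """Lag (in days) maximizing the Pearson r between two running windows."""
    base = cases[start:start + width]
    corrs = [np.corrcoef(base, cases[start + lag:start + lag + width])[0, 1]
             for lag in range(1, max_lag + 1)]
    return int(np.argmax(corrs)) + 1
```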
This pattern excludes a biological causation, unless the data periodicity comes from an entrainment by the weekly "Zeitgeber" of the census, whose period is near the duration of the contagiousness interval. We tried to control for weekdays using two methods, and combinations thereof. For the first method, we calculated z-scores for each weekday: the mean number of cases for each weekday is subtracted from the number observed for a given day (Figure 7), and this delta is then divided by the standard deviation of the number of cases for that weekday, the mean and standard deviation being calculated across the whole period of study for each weekday. The second method uses data smoothing over a running window of 5 consecutive days, where the mean number of new cases calculated across the five days is subtracted from the number of new cases observed on the third day; hence, the data for a given day are compared to a mean including the two previous and the two following days (Figure 8). We constructed two further datasets, where z-scores are applied, in the first, to data after smoothing by the second method and, in the second, to data after smoothing by the first method (Figures 9 and 10).

These four datasets, derived from the daily new cases database [30] and transformed according to the different methods and combinations thereof designed to control for weekday effects, were analysed using the running-window method. Despite these attempts at controlling for weekday effects, the median lag was always seven days across all four transformed datasets, and the local maxima in the lag distributions were multiples of seven. After the data transformations, about 50 percent of all local maxima were still at lags that are multiples of seven, seven included. Visual inspection of the plots of these transformed daily new infected cases from the whole world versus time shows systematic local biases on Sundays and Mondays, for all four transformed datasets, with Sundays and/or Mondays as local minima and/or maxima, according to which method or combination thereof was applied to the data. Hence, the methods we used failed to neutralize the weekly patterns in daily new cases due to administrative constraints. This issue strongly limits the data available for detailed analyses of daily new cases aimed at estimating biologically relevant reproduction numbers on short temporal scales.

In the corresponding figure, the data of [30], transformed by applying the z-scores to the smoothed series of Figure 8, are plotted as a function of days from 26 February 2020 until 23 August 2020; + indicates Sundays, X indicates Mondays. The z-transformations are specific to each weekday: for a specific day j, the mean number of confirmed new cases calculated for days j − 2, j − 1, j, j + 1, j + 2 is subtracted from the number for day j. By smoothing the raw data (confirmed world daily new infected cases [24]) over five consecutive days and then applying the z-transformation, we obtain in Figure 11 a better neutralization of the weekly pattern than in Figure 10. The reason is that the smoothing largely eliminates the counting deficit during weekends, due either to fewer hospital admissions and/or less systematic PCR tests, or to a lack of staff at the end of the week to perform the counts.

Discussion

The duration of the contagiousness period, as well as the daily virulence, are not constant over time.
Three main factors, which are not constant during a pandemic, can explain this:

- In the virus transmitter, the transition between the mechanisms of innate immunity (the first defense barrier) and adaptive immunity (the second barrier) may explain a transient decrease in the emission of the pathogenic agent during the contagiousness phase [15].
- In the environmental transmission channel, many geophysical factors that vary over time can influence the transmission of the virus (temperature, humidity, altitude, etc.) [4][5][6][7][8].
- In the recipient of the virus, individual or public policies of prevention, protection, eviction or vaccination, which evolve according to the severity of the epidemic and the awareness of individuals and socio-political forces, can change the sensitivity of the susceptible individuals [32].

It is therefore very important to estimate the average duration of the contagiousness period of individuals and the variations, during this phase, of the associated daily reproduction numbers [33][34][35][36][37][38][39]. If the duration of the contagiousness phase is more than 3-5 days, for example about 7 days, the periodicity of seven days observed in the new daily cases could result from an entrainment of the dynamics of new cases driven by the social "Zeitgeber" represented by the counting of new cases, which is less precise during the weekend (counts being probably underestimated in many countries not working at that time). This questions the deconvolution over 3 and 5 days, which gives some negative R_j's. In a future work, we will compare our results with those obtained by deconvolutions over contagiousness durations between 3 and 12 days, in order to obtain possibly more realistic values for the R_j's and, hence, perhaps a double explanation, both sociological and biological, for the 7-day periodicity. In the meantime, we have extended our study from a contagiousness duration of r = 3 to r = 7. The results are given in Appendix B: they show the same U-shaped variations, but with R_j values that are more often positive and of more realistic magnitude, while keeping a sum approximately equal to R_0.

Rhodes and Demetrius have pointed out the interest of the distribution of the daily reproduction numbers [24] with respect to the classical unique R_0, even when the latter is time-dependent [25]. In particular, they found that this distribution is generally not uniform, which we have confirmed here by showing many cases exhibiting the biphasic form of the virulence already observed in respiratory viruses such as influenza. The entropy of the distribution makes it possible to evaluate the intensity of the corresponding U-shape: this entropy is high if the daily reproduction numbers are uniform, and low if the contagiousness is concentrated over one or two days. If some R_j are negative, it is still possible to calculate this uniformity index by shifting their distribution upwards by a translation equal to the opposite of the negative minimum value.

We have neglected in the present study the natural birth and death rates, supposing them identical, but we could have taken into account the mortality due to COVID-19. The discrete dynamics of the new cases can be considered as a Leslie dynamics governed by the matrix equation

X_{j+1} = L X_j,

where X_j is the vector of the new cases living at day j and L is a Leslie matrix whose first row contains the daily reproduction numbers R_1, ..., R_r and whose subdiagonal contains the coefficients b_j, where b_j, j = 1, ..., r, is the recovering probability between days j and j + 1.
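A toy numerical illustration of this Leslie dynamics, with purely hypothetical R_j and b_j values (not taken from the paper): the dominant eigenvalue of L gives the growth factor λ = e^R discussed next.

```python
import numpy as np

# Leslie matrix: first row = daily reproduction numbers, subdiagonal =
# day-to-day survival (recovering) probabilities b_j.
R_j = [1.0, 0.4, 1.0]          # hypothetical U-shaped daily numbers
b = [0.8, 0.7]                 # hypothetical survival probabilities

L = np.zeros((3, 3))
L[0, :] = R_j
L[1, 0], L[2, 1] = b
eigvals = np.linalg.eigvals(L)
lam = eigvals[np.argmax(np.abs(eigvals))]   # dominant (Perron) eigenvalue
print("growth factor:", lam.real, "Malthusian rate:", np.log(lam.real))
```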
The dynamical stability, for the L² distance to the stationary infection age pyramid P = lim_j X_j / Σ_{i=j−r+1..j} X_i, is related to |λ − λ′|, the modulus of the difference between the dominant and sub-dominant eigenvalues of L, namely λ = e^R and λ′, where R is the Malthusian growth rate and P is the left eigenvector of L corresponding to λ. The dynamical stability for the Kullback-Leibler distance (or symmetrized divergence) to P, considered as the stationary distribution, is related to the population entropy H [26][27][28][29][30][31][32], which is defined, if l_j = Π_{i=1..j−1} b_i and p_j = l_j R_j / λ^j, as follows:

H = −Σ_{j=1..r} p_j Log(p_j) / Σ_{j=1..r} j p_j (18)

The mathematical characterization, by the population entropy defined in Equation (18), of the stochastic stability of the dynamics described above has its origin in the theory of large deviations [40][41][42]. This notion of stability pertains to the rate at which the system returns to its steady state after a random exogenous and/or endogenous perturbation, and it could be useful to further quantify the variations of the distribution of the daily reproduction numbers observed for many countries [43][44][45][46][47][48][49][50][51][52][53].

In summary, the main limitations of the present study are:

- The hypothesis of spatio-temporal stationarity of the daily reproduction numbers is no longer valid in the case of rapid geo-climatic changes, such as sudden temperature rises, which decrease the virulence of SARS CoV-2 [4], or of mutations affecting its transmissibility.
- The still approximate knowledge of the duration r of the contagiousness period necessitates a more in-depth study at variable durations, retaining the value of r that makes all of the daily reproduction numbers positive.
- The choice of uniform random fluctuations of the daily reproduction numbers is based on arguments of simplicity; a more precise study would undoubtedly lead to a unimodal law varying throughout the contagiousness period, whose average follows a U-shaped curve of the type observed in the literature on a few real patients [10,54][55][56][57][58].

Conclusions and Perspectives

Concerning contagious diseases, public health physicians are constantly faced with four challenges. The first concerns the estimation of the basic reproduction number R_0. The systematic use of R_0 simplifies the decision-making process by policymakers, advised by public health authorities, but it is too much of a caricature to account for the biology behind the viral spread. We have observed during the COVID-19 outbreak that it is non-constant during an epidemic wave, due to exogenous and endogenous factors influencing both the duration of the contagiousness period and the daily transmission rate during this phase [54][55][56]. Related to this first challenge is the estimation of the mean duration of the infectious period of infected patients. As for the transmission rate, realistic assumptions make it possible to obtain an upper limit to this duration [45], mainly because of the lack of viral load data in large patient cohorts (see Figure A1 in Appendix A, from [57][58][59]), in order to better guide the individual quarantine measures decided by the authorities in charge of public health. This upper bound also makes it possible to obtain a lower bound on the percentage of unreported infected patients, which gives an idea of the quality of the census of infected cases, the second challenge facing specialists of contagious diseases.
The third challenge is the estimation of the daily reproduction numbers over the contagiousness period, which was precisely the topic of the present paper. A fourth interesting challenge for this community is the extension of the methods developed in the present paper to contagious non-infectious diseases (i.e., without a causal infectious agent), such as social contagious diseases [59][60][61], the best example being the pandemic linked to obesity, for which many of the concepts and modelling methods remain applicable.

Eventually, our approach using marginal daily reproduction numbers, involving a certain level of noise in the dynamics of new daily infected cases, defines a stochastic framework which describes phenomenologically the exponential phase, as our results show for countries such as France, Russia and Sweden. This stochastic modelling allows a better understanding of the role of the length of the contagiousness period and of the heterogeneity (e.g., the U-shape) of its daily reproduction number distribution in the COVID-19 outbreak dynamics [62][63][64][65].

On the medical level, the important message about the U-shape is that COVID-19 is similar to other viral diseases, such as influenza, with two successive reactions from the two immune defense barriers: innate cellular immunity first, which is not sufficient if symptoms persist, then adaptive immunity (cellular and humoral), resulting in a transient decrease in contagiousness between the two phases. The medical recommendation is, in this case, never to take a transient improvement for a permanent disappearance of the symptoms. One could indeed, for public health purposes, be satisfied with estimating the sum of the R_j's, that is to say, R_0 or the effective R_e. For individual health purposes, however, it is important to know of the existence of a minimum of the R_j's, which generally corresponds to a temporary clinical improvement after the partial success of the innate immune defenses. This makes it possible to urge the patient to continue to respect absolute isolation and therapeutic measures even if a transient improvement occurs; otherwise, they risk, as with the flu, a bacterial pulmonary superinfection (a frequent cause of death in the case of COVID-19).

On the theoretical level, the interest of the proposed method is its generic character: it can be applied to all contagious diseases, within the very general framework of Equation (1), which makes no assumption about the spatial heterogeneity or the longitudinal constancy of the daily reproduction numbers, thanks to the deconvolution of Equation (1).

Acknowledgments: The authors hereby give their thanks to the framework of the University of Excellence Concept "Research University in Helmholtz Association | Living the Change".

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Figure A1 shows a U-shaped evolution of the viral load in real [57] and in simulated [58] COVID-19 patients, and in real influenza-infected animals for the viral load and the body temperature [59].

1. The evolution of the R_j's is represented in Figure A2: along the period of contagiousness, it shows a sharp increase at day 4 followed by a saturation.

2. Exponential phase in France from 25 October 2020 to 7 November 2020. Figure A3 shows an evolution of the R_j's with a U-shape over the first three days of the contagiousness period, with a sum of the R_j's equal to 1.11, close to the effective reproduction number R_e = 1.13 [28].
3. Beginning of the pandemic in the USA from 21 February 2020 to 5 March 2020. The evolution of the R_j's in Figure A4 shows a U-shape at day 4, with a sum of the R_j's equal to 2.72, less than the effective reproduction number R_e = 3.27 [28].

4. USA exponential phase from 1 November 2020 to 4 November 2020. The evolution of the R_j's in Figure A5 shows a U-shape over the four last days, with a sum of the R_j's equal to 1.35, close to the effective reproduction number R_e = 1.24 [28].

Figure A5. Values of the daily reproduction numbers R_j along the period of contagiousness of length 7 days.

5. Beginning of the pandemic in the UK from 23 February 2020 to 7 March 2020. Figure A6 shows an evolution of the R_j's with a U-shape over the three last days of the contagiousness period, with a sum of the R_j's equal to 9.88, higher than the effective reproduction number R_e = 2.95 [28]. Figure A7 shows an evolution of the R_j's with a U-shape over the five last days of the contagiousness period, with a sum of the R_j's equal to 1.07, close to the effective reproduction number R_e = 1.06 [28].

Table A1 is built from the new COVID-19 cases at the start of the first and second waves for 194 countries. It shows that 42 of these 194 countries have a U-shaped evolution of their daily R_j's in both waves, against 12.12 ± 6 expected with 0.95 confidence (p < 10⁻¹²), and that a U-shaped evolution occurs 189 times over all countries and waves (397), against 99.3 ± 9 expected with 0.95 confidence (p < 10⁻²⁴). Hence, the U-shape is the most frequent evolution of the daily R_j's, which confirms the comparison with the behavior of seasonal influenza (see Section 2.2).
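The quoted expectations are consistent with a simple binomial null in which a U-shape occurs with probability 1/4 per country and wave; this null probability is our inference from the numbers (12.12/194 = 0.0625 = 0.25² and 99.3/397 ≈ 0.25), not a value stated in the source, and the short check below uses that assumption.

```python
# Binomial sanity check of the Table A1 expectations, assuming a null
# probability p = 1/4 of a U-shape per country and wave (our inference).
from math import sqrt

n_countries, n_waves_total, p = 194, 397, 0.25
exp_twice = n_countries * p**2                             # -> 12.125
ci_twice = 1.96 * sqrt(n_countries * p**2 * (1 - p**2))    # -> ~6.6
exp_any = n_waves_total * p                                # -> 99.25
print(exp_twice, ci_twice, exp_any)                        # vs 42 and 189 observed
```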
The Contribution of Convection to the Stratospheric Water Vapor: The First Budget Using a Global Storm‐Resolving Model The deepest convection on Earth injects water in the tropical stratosphere, but its contribution to the global stratospheric water budget remains uncertain. The Global Storm‐Resolving Model ICOsahedral Non‐hydrostatic is used to simulate the moistening of the lower stratosphere for 40 days during boreal summer. The decomposition of the water vapor budget in the tropical lower stratosphere (TLS, 10°S–30°N, and 17–20 km altitude) indicates that the average moistening (+21 Tg) over the simulated 40‐day period is the result of the combined effect of the vertical water vapor transport from the troposphere (+27 Tg), microphysical phase changes and subgrid‐scale transport (+2 Tg), partly compensated by horizontal water vapor export (−8 Tg). The very deep convective systems, explicitly represented thanks to the employed 2.5 km grid spacing of the model, are identified using the very low Outgoing Longwave Radiation of their cold cloud tops. The water vapor budget reveals that the vertical transport, the sublimation and the subgrid‐scale transport at their top contribute together to 11% of the water vapor mass input into the TLS.

The contribution of convection to the global stratospheric water budget has proven difficult to quantify. Only three studies from two groups have provided such estimates so far, two for boreal winter (Schoeberl et al., 2014, 2018) and one for the full year (Dauhut et al., 2015). Two studies from another group provided estimates for two 7-day periods in boreal summer and winter, but only in the vicinity of the tropical cold-point tropopause at the 100 hPa level (Ueyama et al., 2015, 2018). Summarizing the results of these studies, Schoeberl et al. (2014) found a 13% contribution of convection, by applying a Lagrangian model that follows air mass trajectories to reanalysis data and comparing the results with and without the effect of convection. In their study, the effect of convection is deduced indirectly from the resulting adjustment of the water vapor content of the air masses in the presence of cloud along the trajectory. Such a quantification is limited by the accuracy and the resolution of the cloud height dataset as well as of the wind and temperature fields used to advect the air masses. Using the same method, Schoeberl et al. (2018) revised their estimate using an updated and more accurate temperature dataset, with a colder tropopause, and found a convective contribution of only 2%. Using also a Lagrangian model, but with a more sophisticated, computationally expensive microphysical scheme, Ueyama et al. (2015) and Ueyama et al. (2018) found 14% and 15% contributions of convection to the water vapor content at 100 hPa for their two 7-day periods in boreal winter and summer, respectively. The level on which their investigation focused is located close to, though slightly below, the averaged cold-point tropopause. The impact of convection on the full stratospheric water vapor budget might be larger than these estimates for two main reasons: first, the air masses are very close to saturation at the cold point, limiting the hydration by convection; second, they can be hydrated by convection at higher altitudes, during their whole rise through the lower stratosphere. Dauhut et al. (2015) found an 18% contribution by simulating one very deep convective system with a Large-Eddy Simulation (LES) and upscaling the subsequent stratospheric hydration to the whole tropics.
Here the estimate relies on the ability of the LES model to represent the hydration, on the representativeness of the study case and on the estimated frequency of very deep convective systems in the tropics, which varies with season. It should be kept in mind that such studies are highly dependent on the chosen representation of the microphysics. Large uncertainty thus remains in quantifying the importance of convection for the global stratospheric water budget. Here, to take up this challenge, we use a distinct approach: we take advantage of a GSRM that is able both to represent the convective-scale processes explicitly and to simulate them over the full globe. The global atmosphere is simulated with the ICOsahedral Non-hydrostatic model (ICON), integrated with a grid spacing of 2.5 km during 40 days in the framework of the DYAMOND intercomparison project (Stevens et al., 2019). The convection is resolved, and the vertical resolution around the tropopause, about 600 m, is fine enough to capture the small-scale mixing at the top of the overshoot and the subsequent shallow hydration patches that form above the tropical tropopause, as will be shown. Taking advantage of these two aspects, we derive for the first time a global budget of the low-stratospheric water vapor using a model that resolves convection, and we provide a new estimate of the convective contribution.

In Section 2, the observational data sets, the simulation and the analysis methods are presented. Section 3 describes the observed stratospheric humidity field and assesses the ability of our model to represent this field and its variations. Section 4 presents the budget of the stratospheric water vapor to identify the origin of its temporal variations. Section 5 quantifies the contribution of the very deep convective systems to this stratospheric water budget. Section 6 discusses the implications of our investigation, trying in particular to reconcile the various estimates of the convective contribution to the stratospheric water vapor deduced from various observational and modeling studies. Section 7 gives the conclusions.

MLS Observations

To assess the representation of the stratospheric water vapor field and its variations in the simulation, we use the observations (version v4.22) from the Microwave Limb Sounder (MLS) instrument onboard NASA's Earth Observing System Aura satellite (Waters et al., 2006). The Microwave Limb Sounder has daily near-global coverage thanks to its near-polar orbit. We use the gridded water vapor product on the seven pressure levels located between 100 and 32 hPa, between 1 August and 9 September 2016 to match the simulation time period, as well as in eight other years to document the year-to-year variability. The vertical resolution ranges between 3.0 and 3.2 km, and the horizontal resolution between profiles ranges between 190 and 265 km. The zonal and temporal averaging of the MLS observations reduces much of the uncertainty resulting from the precision of the instrument, but the uncertainty due to its accuracy is unchanged and ranges from 4% to 9% (Livesey et al., 2017). The recommendations for data screening given in Livesey et al. (2017) are followed. We do not apply the MLS averaging kernels to the simulated stratospheric water vapor: as shown by the study of Ploeger et al. (2013), using the averaging kernels only matters for the estimated stratospheric water vapor content at high latitudes, and as our study focuses on the tropics, using the averaging kernels is not necessary.
CERES Observations

The development of the very deep convective systems, characterized by their extremely cold cloud tops, can be monitored using spatial observations of the Outgoing Longwave Radiation (OLR). Here we use the observations from the CERES project (Clouds and the Earth's Radiant Energy System; Wielicki et al., 1996) as given in the CERES SYN1deg Edition4A product (Doelling et al., 2016). The latter is a global dataset with a resolution of 1°. It contains, among other variables, hourly averaged OLR. The radiances are measured by the MODIS imager onboard the Terra and Aqua satellites, with daily global coverage. To account for local, hourly variations of OLR, measurements from new-generation geostationary satellite imagers are incorporated. The resolution of the CERES dataset is too coarse to capture individual overshoots, which have a width of about 10 km and a lifetime of 15 min (Dauhut et al., 2018), but the cold temperature of the anvil tops and of the rising or collapsing overshoots leaves a low-value, large-scale signature in the OLR field that allows us to detect the very deep convective systems. MODIS and geostationary infrared measurements have been extensively used to detect overshooting tops in the past (e.g., Bedka et al., 2010; Sohn et al., 2009). The CERES product, which combines the two and provides hourly global coverage, is used here to validate the geographical distribution of the deep convective activity in the simulation.

DYAMOND ICON Simulation

The global troposphere and stratosphere are simulated for 40 days starting on 01 August 2016 with the Eulerian ICON atmospheric model (Icosahedral Nonhydrostatic Weather and Climate Model, Zängl et al., 2015), under the framework of the DYAMOND intercomparison project (DYnamics of the Atmospheric general circulation Modeled On Non-hydrostatic Domains, Stevens et al., 2019). We use here the simulation performed with a horizontal grid spacing of 2.5 km. The vertical grid spacing ranges between 20 m at the surface and 1.8 km at 44 km altitude (top of the physical domain, below the sponge layer), and is about 600 m at the tropopause. The DYAMOND set-up is described in detail in Stevens et al. (2019), while the set-up, the parameterizations used and the validation of this particular ICON simulation are presented in Hohenegger et al. (2020). In short, deep and shallow convection are explicitly represented, without the use of any convection parametrization. Cloud and precipitation are represented by the prognostic specific mass contents of five hydrometeor species: cloud water, cloud ice, rain, snow and graupel, whose evolutions are calculated by a bulk one-moment microphysics scheme (Baldauf et al., 2011). The precipitating ice hydrometeors (snow and graupel) are assumed to have a size distribution that decays exponentially with particle size, and a fall speed that depends only on particle size. A comparison (not shown) to the size distributions reported by Woods et al. (2018) for Tropical Tropopause Layer (TTL) cirrus indicates that the number concentrations are in the range of those observed between −90°C and −70°C for the large precipitating hydrometeors (larger than 100 μm), whereas they are too low for the smaller hydrometeors. Turbulent fluxes are represented by a turbulence scheme based on a prognostic equation for turbulent kinetic energy (Raschendorfer, 2001).
Heating and cooling rates due to radiation are calculated every 15 min with the Rapid Radiative Transfer Model (Mlawer et al., 1997; Mlawer & Clough, 1998). Chemical reactions are not included; this is not an issue, since the source of stratospheric water vapor by methane oxidation is active well above the investigated region, which stops at 24 km altitude. The atmosphere as well as soil moisture and temperature are initialized on 1 August 2016 at 00 UTC with the analysis from the European Centre for Medium-Range Weather Forecasts (ECMWF), and then evolve freely. Sea surface temperature and sea ice are prescribed from the ECMWF operational analysis. In order to account for the spin-up of the atmosphere, the first day of the simulation is discarded when computing the water vapor budget described in the following section. As the model is allowed to evolve freely, the simulated fields are expected to differ from the observed ones, especially after 5 days of simulation time, when much of the atmospheric predictability is lost. For this reason, in the current study, the simulated fields are compared not only to the observed fields of 2016, but also to those of eight other years (all shown in Supporting Information S1).

As seen in Lee et al. (2019), a grid spacing of 2.5 km allows representing the convective overshoots into the stratosphere and the mixing leading to the local hydration. The results are, however, sensitive to the chosen grid spacing. Dauhut et al. (2015) analyzed the sensitivity of the transport by one very deep convective system to the horizontal grid spacing, varying it from 100 to 1600 m. They found that the updraft properties (vertical velocity, hydrometeor content) and the stratospheric hydration start to converge at horizontal grid spacings finer than 200 m, with 20%-25% weaker transport at kilometric grid spacing. The current study may thus underestimate the hydration of the stratosphere by convection. Not only the horizontal resolution but also the vertical resolution may affect the results. Dauhut et al. (2018) investigated the processes at the overshooting tops and found, using a 100-m vertical grid spacing, that the subsequent hydration at each top is determined by its maximal overshooting altitude. The 600-m vertical grid spacing used here certainly undersamples the full range of overshooting depths, although it is not clear whether this leads to a high or low bias in the estimated convective hydration. The output of the simulation, provided at a 3-hourly frequency, is initially on a native grid made of triangular cells. In order to apply the budget decomposition described below, we first regridded the data onto a Cartesian latitude-longitude grid with a grid spacing of 0.1°.

Water Vapor Budget

To investigate the causes of the variations in the water vapor content of the lower stratosphere, we decompose the water vapor budget at each grid cell with the following equation, consistent with the continuity equation used in ICON to achieve mass conservation (Equation 5 in Zängl et al., 2015):

∂(ρq)/∂t = −[∂(ρqu)/∂x + ∂(ρqv)/∂y + ∂(ρqw)/∂z] + ρs, (1)

where ρ is the full air density, q is the specific humidity, u, v and w are the zonal, meridional and vertical components of the wind, respectively, and s is the sink/source term due to microphysics (sublimation, condensation, deposition) and subgrid-scale transport (turbulent fluxes and coherent flows finer than 0.1°, the resolution of the analysis).
The first two terms in brackets on the right-hand side denote the horizontal convergence of the moisture flux, and the third one is the vertical convergence of the moisture flux, all simulated by the explicit flow on the 0.1° analysis grid. The sink/source term is computed as a residue. With such a decomposition of the water vapor budget, the variation of water vapor mass in each grid cell equals the convergence of the fluxes across its edges, plus a sink/source term that must be understood as the local variations not accounted for by the explicit flow at 0.1°. This decomposition is consistent with the equation solved in the Eulerian ICON model, and differs from the conservation equation formulated in the Lagrangian way:

D(ρq)/Dt = s_Lag

where s_Lag is the water vapor sink/source term associated with a followed air parcel, and D(ρq)/Dt = ∂(ρq)/∂t + u ∂(ρq)/∂x + v ∂(ρq)/∂y + w ∂(ρq)/∂z is the material derivative of the water vapor mass. The two decompositions differ by a divergence term, which is non-zero since ICON is fully compressible. We opted for Equation 1 because it is the one used to include the microphysical water vapor variations in ICON (Reinert, 2020). Throughout the study, the specific humidity and the integrals of the terms of Equation 1 are systematically converted into volume mixing ratios.

Moistening of the Stratosphere

The distributions of the low-stratospheric water vapor in the MLS observations and in the ICON simulation are shown in Figures 1a-1c as zonal and temporal averages over the simulated period. Two years are shown for the MLS observations (2016 and 2017) to illustrate the year-to-year variability, and more years can be found in the Supporting Information. The simulation correctly reproduces the contrast between the moist region up to 60 hPa in the northern hemisphere and the dry region between 80 and 40 hPa in the southern hemisphere. Except for the latitudes between 15° and 35°S, where the simulation exhibits a moist bias at the tropopause, the simulation produces values in agreement with the MLS observations within 1 ppmv (within 21%). Between 10°S and 30°N and between 90 and 50 hPa, which corresponds to the region of investigation defined below, the ICON moist bias with respect to the MLS observations in 2016 (2017) reaches a maximum of +0.45 (+0.25) ppmv, and +0.4 (+0.1) ppmv on average. For reference, the interannual variability of humidity in this region, computed as the standard deviation of the MLS regional-average volume mixing ratio over 2011-2019, amounts to 0.25 ppmv. The less-than-0.5-ppmv bias with respect to the MLS observations is also much smaller than the biases generally apparent in traditional global Eulerian models: many state-of-the-art General Circulation Models (GCMs) and Chemistry Climate Models (CCMs) exhibit very large biases, of about ±2 ppmv (Eyring et al., 2006; Hardiman et al., 2015). Several factors contribute to these biases, in particular the propagation of incorrect humidity values from the upper troposphere by the vertical advection scheme (Hardiman et al., 2015) and biases in tropical tropopause temperatures (Eyring et al., 2006). A common limitation of GCMs and CCMs is their coarse vertical and horizontal resolutions. Besides inaccuracy in the large-scale cross-tropopause transport (Stenke et al., 2008), their representation of convection is not designed to reproduce the overshoot transport up into the stratosphere, as illustrated for one convective parametrization in Dauhut et al. (2018).
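As a minimal sketch of how the regional bias and interannual variability quoted above can be computed, assuming zonal-mean water vapor fields stored as xarray arrays on common latitude/pressure coordinates; the file and variable names are hypothetical placeholders:

```python
import xarray as xr

# q_icon, q_mls: zonal-mean water vapor converted to volume mixing ratio
# (ppmv), on common (lat, plev) coordinates; q_mls has a year dimension
# covering 2011-2019. File and variable names are placeholders.
ds = xr.open_dataset("zonal_mean_wv.nc")
q_icon = ds["q_icon"]                    # (lat, plev)
q_mls = ds["q_mls"]                      # (year, lat, plev)

# Region of investigation: 10S-30N, 90-50 hPa (plev assumed decreasing).
region = dict(lat=slice(-10, 30), plev=slice(90, 50))
bias_2016 = (q_icon - q_mls.sel(year=2016)).sel(**region)
print("max bias:", float(bias_2016.max()), "ppmv")
print("mean bias:", float(bias_2016.mean()), "ppmv")

# Interannual variability: std of the regional-mean mixing ratio, 2011-2019.
regional_mean = q_mls.sel(**region).mean(dim=("lat", "plev"))
print("interannual std:", float(regional_mean.std(dim="year")), "ppmv")
```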
In terms of the temporal evolution of the low-stratospheric humidity (Figures 1d-1f), both the MLS observations and the ICON simulation show an increase between 1 August and 9 September. For 2016, the observations show a moistening of the low stratosphere of 0.2-0.6 ppmv, up to the 50 hPa level, over latitudes spanning 40°S to 60°N. Such a moistening is part of the annual cycle of the stratospheric water vapor, and it can actually be of much larger amplitude in other years, as Figure S2 in Supporting Information S1 shows, with up to 1.8 ppmv in 2017 (Figure 1e). A simple bulk computation can explain these variations solely on the basis of a typical upward velocity w = 4 × 10⁻⁴ m/s and the water vapor gradients that can be estimated from Figure 1: the moistening Δq equals −w (∂q/∂z) Δt. For instance, in ICON at 10°N, between 62 hPa (4.9 ppmv) and 34 hPa (4.1 ppmv), ∂q/∂z ≃ −2 × 10⁻⁴ ppmv/m. For Δt = 40 days this leads to Δq = +0.3 ppmv, consistent with the water vapor variation shown in Figure 1f. Above 50 hPa, and south of 40°S, the variation is negative, and is due to the upward and poleward advection of the dry phase of the lower tropical stratosphere by the Brewer-Dobson circulation. Similar bulk computations can explain why the Brewer-Dobson circulation induces less drying in the ICON simulation than in the observations: the ICON initial stratosphere above 40 hPa, and south of 40°S, is slightly drier than in the MLS observations and therefore has smaller humidity gradients. To quantify the contribution of the tropical convection to this moistening, and more generally to the water vapor input into the stratosphere, our study focuses on a wide latitudinal band, between 10°S and 30°N, that encompasses well the Inter-Tropical Convergence Zone (ITCZ) during the simulated period (cf. Figure 3), where air enters the stratosphere from the troposphere (the head of the atmospheric tape recorder, Mote, 1995). At these latitudes we define the tropical lower stratosphere (TLS) as the 3-km-deep region above the tropopause, between 17 and 20 km altitude. This region is where the agreement between observations and simulation is best. The altitude range corresponds approximately to the observational pressure levels between 90 and 50 hPa. In this region and over the simulated period, the water vapor mass increase is +29 Tg in the ICON simulation and +12 Tg (+28 Tg) in the MLS observations in 2016 (2017) (Figures 1d-1f). The interannual variability of this moistening is very large (Figure S2 in Supporting Information S1), with an average and a standard deviation both equal to 11 Tg over the 9 years 2011-2019. The ICON simulation lies in the upper range, typical of observed years like 2017, which exhibits a large TLS moistening between 1 August and 9 September. The following section investigates the origin of the TLS moistening by decomposing the water vapor budget, to shed light on the contributions of the convergence of the moisture fluxes and of the sink/source term.
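The bulk tape-recorder estimate above can be checked in a few lines, using the values quoted in the text:

```python
# Bulk tape-recorder estimate: dq = -w * (dq/dz) * dt
w = 4e-4                    # typical upward velocity, m/s
dqdz = -2e-4                # vertical water vapor gradient, ppmv/m
                            # (ICON at 10N: 4.9 ppmv at 62 hPa, 4.1 ppmv at 34 hPa)
dt = 40 * 86400             # 40 days in seconds

dq = -w * dqdz * dt
print(f"moistening dq = {dq:+.2f} ppmv")   # ~ +0.28 ppmv, i.e. about +0.3
```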
Origin of the Moistening

The decomposition of the water vapor budget following Equation 1 allows us to quantify the contributions of the moisture flux convergence and of the sink/source term to the net variation of the simulated water vapor. The terms of Equation 1 are zonally averaged and integrated over the course of the simulation (Figure 2). The resulting water vapor mass variations per unit volume are converted into volume mixing ratio variations in order to facilitate the comparison with the background values. Mean residual meridional and vertical velocities are overlaid in order to visualize the low-stratospheric circulation and to assess the strength of the Brewer-Dobson circulation, as it drives the large-scale water vapor transport. The distribution of the mean residual vertical velocity at 70 hPa in ICON (near 18.5 km altitude) is in very good agreement with the climatological one in August, as given in Figure 4 of Butchart (2014). The values are upward, up to 0.3 mm/s, between 20°S and 40°N, and downward elsewhere, down to −0.5 mm/s around 50°S. The moistening at the tropopause, below 17 km altitude, is primarily due to the vertical convergence of the moisture flux (Figure 2b), with a strong maximum between 5° and 30°N, extending up to 19 km (around the 470 K potential temperature level). This region is situated at the same latitudes as the ITCZ, characterized by deep convection producing very cold cloud tops and very low OLR (Figure 3c). Part of the strong moistening by the vertical flux is compensated by the horizontal divergence, which actually reaches its largest values below 17 km as well. This is expected, as the horizontal moisture flux redistributes the water vapor brought there by the vertical moisture flux. In contrast, the sink/source term (Figure 2c) leads to a dehydration between 20°S and 30°N. The sink/source term is interpreted as the tendency resulting from all processes occurring at scales finer than 0.1° (about 10 km), the resolution of the regridded output. These processes include, from the finest to the largest scales: the microphysical processes, namely sublimation and condensation either by deposition or by nucleation; the turbulent mixing; and the mixing by coherent circulations finer than 0.1°. The dehydration around the tropopause is due to the condensation of the water vapor exceeding saturation. This dehydration largely compensates the vertical convergence of the moisture flux and explains why, despite the strong maximum in the vertical moisture flux convergence, the net variation does not exhibit such a strong maximum. It matches the concept of the cold trap, whereby moist air enters the stratosphere in the coldest regions, where it is strongly dehydrated by freeze-drying (condensation). As will be shown, and in line with previous studies, while the cold trap is efficient at removing water vapor around the tropopause, convection is able to bring water vapor above it. Between 17 and 20 km, the sink/source term adds water vapor. In the next section, the contribution of convection to this TLS moistening is quantified, after having demonstrated the ability of the ICON model to simulate the very deep convective systems responsible for this hydration. The pattern of the horizontal and vertical moisture flux convergence terms is noisier, but together they also contribute to the moistening of the TLS. Quantifying the contributions (see the upper row of Table 1): the sink/source term contributes 9% to the moistening of the TLS, the vertical moisture flux convergence 128%, and the horizontal one −37%. As the integral of the moisture flux convergence over the box equals the flux across the edges of the region (divergence theorem), the moistening by the vertical moisture convergence is a direct result of the upward flux at 17 km altitude, the vertical flux at 20 km being small in comparison. In contrast, the integral of the horizontal convergence is due to the exchanges across the region boundaries at 10°S and 30°N and indicates a net poleward export of water vapor.
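A minimal sketch of the budget decomposition of Equation 1 on the regridded fields, assuming numpy arrays on a regular (z, lat, lon) grid with locally Cartesian spacings (on the real 0.1° grid, dx varies with latitude); the array and function names are our own, and the sink/source term is recovered as a residue, as in the text:

```python
import numpy as np

def budget_terms(rho, q, u, v, w, rho_q_tend, dx, dy, dz):
    """Decompose the water vapor budget (Equation 1) on a (z, y, x) grid.

    rho_q_tend is the local tendency d(rho*q)/dt diagnosed from successive
    outputs; the sink/source term s is returned as the residue.
    """
    # Horizontal convergence of the moisture flux:
    # -(d(rho q u)/dx + d(rho q v)/dy)
    conv_h = -(np.gradient(rho * q * u, dx, axis=2)
               + np.gradient(rho * q * v, dy, axis=1))
    # Vertical convergence of the moisture flux: -d(rho q w)/dz
    conv_v = -np.gradient(rho * q * w, dz, axis=0)
    # Sink/source term (microphysics + subgrid-scale transport) as a residue.
    s = rho_q_tend - conv_h - conv_v
    return conv_h, conv_v, s
```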
Contribution of the Very Deep Convective Systems

The objective of this section is to isolate the contribution of the very deep convective systems to the stratospheric water vapor budget. The first step is to define a threshold, here based on OLR, to detect the very deep convective systems. The water vapor tendencies associated with the very deep convective systems are then explained and quantified.

Detection of the Very Deep Convective Systems

The ability of the ICON model to simulate deep convection is shown in Hohenegger et al. (2020). In particular, the simulation produces precipitation and top-of-the-atmosphere radiation fluxes, averaged over 30°S to 30°N and over the full simulation period, that match the observed values within 8% (see Table 2 in Hohenegger et al., 2020). The center of mass of the Intertropical Convergence Zone (ITCZ) is simulated in very good agreement with observations over both the Atlantic and Pacific oceans, although it is slightly wider (by less than 1°) than observed in 2016. The ability of ICON to simulate the deepest convection on Earth is assessed here using maps of the first percentile of the OLR time series at each grid point (a metric of the minimal OLR values). This quantity is nevertheless not used to define the very deep convective systems, for which we rather select a fixed OLR threshold, as detailed below. The very deep convective systems are the few deepest systems whose tops reach the lower stratosphere; these tops are overshoots. Only the systems supplied with very unstable air at the surface, and internally organized so that this air experiences only little dilution by mixing with environmental air during its ascent, are able to produce such overshoots into the stratosphere (Dauhut et al., 2016). Iwasaki et al. (2010) and C. Liu and Zipser (2015) documented the climatological distribution of these systems based on a combination of CloudSat satellite radar and CALIOP satellite lidar observations, and on TRMM satellite radar observations, respectively. Rysman et al. (2017) characterized their microphysical profiles using the spaceborne Microwave Humidity Sounder. These systems can be identified from space thanks to their cold cloud tops and the associated very low longwave radiation exiting the atmosphere aloft. The first percentile of the OLR time series at each grid point allows us to spot the low-OLR regions where these very deep convective systems developed, both in the CERES observations for 2016 and 2017 and in the ICON simulation (Figure 3). The simulated OLR first percentiles are slightly larger than the observed ones in 2016 (by 10-20 W/m² in the West Pacific), but in fair agreement with the observed ones in 2017. This is true over the Asian monsoon region in particular, which indicates that the deepest monsoon convective systems in ICON are not deeper than the observed ones, and thus that ICON is not producing overly deep monsoon convection, which could have led to an excessive injection of convective water into the stratosphere over this region. The larger OLR obtained for the 2016 comparison is consistent with the too low frequency of the lowest IR brightness temperatures found by Senf et al. (2018), who used the same model and the same period, but integrated the model over the Tropical Atlantic only. The geographical distribution of the simulated OLR is in excellent agreement with the observations from the 2 years and with the climatologies of Iwasaki et al. (2010) and C. Liu and Zipser (2015).
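In code, the first-percentile OLR maps reduce to a per-grid-point percentile over the time axis, and the later detection of the very deep convective systems to a fixed threshold; a sketch in which a synthetic OLR array stands in for the CERES or ICON fields:

```python
import numpy as np

# olr: array of shape (ntime, nlat, nlon) with hourly (CERES) or 3-hourly
# (ICON output) OLR in W/m^2; a synthetic random array stands in here.
rng = np.random.default_rng(0)
olr = rng.uniform(80.0, 300.0, size=(960, 180, 360))

# First percentile of the OLR time series at each grid point: a map of the
# minimal OLR values, highlighting where the deepest convection occurred.
olr_p1 = np.percentile(olr, 1.0, axis=0)          # (nlat, nlon)

# Fixed-threshold detection of the very deep convective systems
# (the budget analysis below uses OLR < 90 W/m^2).
vdcs_mask = olr < 90.0                            # boolean, per time step
print("fraction of VDCS points:", vdcs_mask.mean())
```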
Table 1: Contribution of the Different Budget Terms to the Simulated Net Variation of Water Vapor (in Tg), for All Points (First Line) and Over the Very Deep Convective Systems Only (Second Line). Note: the budget is computed between 10°S and 30°N, between 17 and 20 km altitude, and integrated between 2 August and 8 September included. The last column is the water vapor input, calculated as the sum of the vertical convergence and the sink/source terms. The percentage is with respect to the water vapor input for all points.

The simulation reproduces the deepest convection on Earth over the regions where it has been observed. At the top of the very deep convective systems, the water vapor volume mixing ratio is anomalously large, with more than 6 ppmv between 17 and 18 km (Figure 4a). Just below, the air is particularly dry, with less than 4 ppmv. This is explained by the processes occurring inside the overshoots: the overshoots are extremely cold because of the strong adiabatic cooling during their ascent. As they are colder than their environment, they are also drier, that is, they contain less water vapor; most of their water content is in the ice phase. At their top, the entrainment of subsaturated stratospheric air warms the cloud, leading to intense sublimation of the ice crystals and the development of a moist anomaly (Dauhut et al., 2018). The vertical velocity at the top of the very deep convective systems (Figure 4c) is at least one order of magnitude larger than in the other regions, reaching 300 m/h on average up to 21 km.

Stratosphere Hydration by Convection

Once the very deep convective systems are detected, it is possible to visualize their impact on the stratospheric water vapor field. Figure 5 illustrates the injection of water by some very deep convective systems into the stratosphere at 19 km altitude, where water vapor anomalies can be clearly seen. As shown in Hassim and Lane (2010) and Lee et al. (2019), the water vapor anomalies lie at higher altitudes than the overshooting tops, typically 0.5 up to 1.5 km above, because of top entrainment of stratospheric air (Dauhut et al., 2018) or hydraulic jumps (O'Neill et al., 2021). Individual overshoots reaching as high as 19 km do exist (e.g., C. Liu & Zipser, 2015), although they are extremely infrequent. In Figure 5, the propagation of the hydration patches injected by the very deep convective systems over the West Pacific on 5 August (labeled 1), over the border region between Pakistan and India on 8 August (labeled 2), and over the Halong Bay and Hainan coasts on 11 August (labeled 3) can be visually tracked. The strong easterlies in the low stratosphere quickly advect the hydration patches away from their genesis location, over large-OLR regions. Diffusion by turbulence, itself fostered by the vertical shear of the horizontal wind, decreases the water vapor volume mixing ratio of the hydration patches, from more than 11 ppmv shortly after the injection to about 8 ppmv 3 days later. Similar advection and decay of the water vapor anomalies shortly after injection were already reported by Dauhut et al. (2015) and Lee et al. (2019).
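Assuming, purely for illustration, an exponential decay of the quoted patch amplitudes, the dilution from more than 11 ppmv to about 8 ppmv in 3 days corresponds to an e-folding time of roughly 9 days:

```python
import math

# Hydration patch mixing ratios shortly after injection and 3 days later,
# treated here as the decaying anomaly amplitude (an illustrative assumption).
q0, q1, dt_days = 11.0, 8.0, 3.0

# Assuming exponential decay q(t) = q0 * exp(-t/tau):
tau = dt_days / math.log(q0 / q1)
print(f"e-folding time ~ {tau:.1f} days")   # ~ 9.4 days
```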
Contribution to the Water Vapor Budget

Summing up the previous results: ICON produces water vapor variations in line with the observations of boreal summer (Figure 1); its TTL vertical velocities are consistent with the ones reported in this region (Figure 2); the model produces the deepest convection where it is observed (Figure 3); it produces overshooting cloud tops and the associated dry and moist anomalies in line with previous studies (Figure 4); and the moist patches produced downstream of the overshoot injections behave as expected (Figure 5). In short, the ICON model is able to reproduce the key phenomena at play, as well as their overall impact. We can thus now assess the contribution of convection to the water vapor budget based on this ICON simulation. By integrating the terms of the water vapor budget over all time steps between 10°S and 30°N, and plotting them as a function of OLR, it is possible to disentangle the contributions of the different OLR regions to the stratospheric water budget (Figure 6). Here the focus is on the very deep convective systems, defined as the regions with OLR lower than 90 W/m². The OLR coordinate in Figure 6 is used primarily to distinguish these very deep convective systems from the other regions, but it should not be read as a proxy for the horizontal distance from the overshooting tops. As illustrated in Figure 5, intermediate OLR values like 110-180 W/m² can also correspond to regions far away from the very deep convective systems. Figure 6 indicates that, in contrast to the bin averages shown in Figure 4, the few points at very low OLR contribute significantly to the different budget terms, despite their much lower frequency (in Figure 6, bin contributions are shown, each bin weighted by its frequency, with values in ppmv; cf. Figure S3 in Supporting Information S1). The very deep convective systems do not lead to an obvious net moistening (Figure 6d). In the stratosphere above them, the hydration by microphysics and subgrid-scale transport up to 19 km (Figure 6c) is compensated by the vertical moisture flux (Figure 6b), which transports part of this hydration upward, between 18 and 20 km. No net hydration is visible in these regions because of the efficient transport out of the very deep convective regions by the divergent horizontal moisture flux (Figure 6a): this corresponds to the fast advection and spread of the hydration patches by the stratospheric winds, visible in Figure 5. In contrast, in regions with slightly larger OLR, between 90 and 110 W/m², a net moistening is visible and can be explained by the convergence of the horizontal moisture flux. Table 1 summarizes the contribution of the very deep convective systems (OLR lower than 90 W/m²) to the net moistening of the TLS. The net water vapor mass variation above the very deep convective systems corresponds to an increase of +0.57 Tg of water vapor, that is, around 3% of the water vapor mass increase in the whole TLS. The sink/source term actually adds 4.0 Tg of water vapor. The reason why most of the hydration by the sink/source term does not translate into a net increase of water vapor is the strong horizontal divergence of the moisture flux above the very deep convective systems, which contributes −2.6 Tg.
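The Table 1 numbers quoted above and in the Conclusions combine as follows, anticipating the definition of the water vapor input (vertical convergence plus sink/source term) given in the next paragraph; a quick consistency check:

```python
# Budget terms over the very deep convective systems (OLR < 90 W/m^2), in Tg.
sink_source_vdcs = 4.0
horizontal_vdcs = -2.6
net_vdcs = 0.57
vertical_vdcs = net_vdcs - sink_source_vdcs - horizontal_vdcs  # ~ -0.8 Tg

# Water vapor input = vertical convergence + sink/source term.
input_vdcs = vertical_vdcs + sink_source_vdcs                  # ~ +3.2 Tg

# Full TLS terms (from the Conclusions): vertical +27.3, sink/source +2.0.
input_all = 27.3 + 2.0                                         # ~ +29.3 Tg
print(f"VDCS input: {input_vdcs:+.1f} Tg "
      f"({100 * input_vdcs / input_all:.0f}% of the total input)")  # ~ 11%
```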
In order to differentiate the impact of convection itself from the impact of the diverging stratospheric winds (advection and spread of the hydration patches), we define the stratospheric water vapor input as the sum of the vertical moisture flux convergence term and the sink/source term. Comparing the input of +3.2 Tg in the regions with OLR lower than 90 W/m², attributed to the very deep convective systems, to the input over the full TLS gives an 11% contribution of the very deep convective systems to the input of water vapor into the stratosphere. At large OLR, the net moistening between 17 and 20 km visible in Figure 6d is due to the horizontal convergence of water vapor and the sink/source term (Figures 6a and 6c). The contribution of microphysics to the latter term is expected to be small, as the ice concentrations are very low in this range of OLR values (cf. Figure 4b). The moistening of these regions is thus due to the horizontal transport by the resolved flow and to the subgrid-scale transport, including the turbulent fluxes. The spread of the hydration patches (visible in Figure 5, in Figure 2 of Dauhut et al. (2015), and in Figures 4 and 6 of Lee et al. (2019)) is an illustration of such transport, which can occur thousands of kilometers away from the very deep convective systems, over regions with large OLR (white-hatched areas in Figure 5). This moistening by the horizontal moisture fluxes and the sink/source term is partly compensated by the vertical moisture flux, which is divergent there (Figure 6b).

Discussion

The decomposition of the water vapor budget in the TLS and the integration of its terms over the very deep convective systems allowed us to quantify the contribution of convection to the water vapor input in the TLS. Our result, an 11% contribution, lies in the range of the estimates by previous studies. It is lower than the estimate of 18% made by Dauhut et al. (2015), who upscaled the stratospheric water vapor input from one very deep convective system. It is, however, not as low as the more recent estimate of 2% by Schoeberl et al. (2018). The latter was computed for boreal winter, as the difference in stratospheric humidity between Lagrangian simulations that do and do not consider the transport of water by convection. Using this method, Ueyama et al. (2015) and Ueyama et al. (2018) computed the convective contribution to the water vapor at the very specific 100 hPa level. Their estimates of 14% and 15% for boreal winter and summer, respectively, are slightly larger than our estimate. The differences between the estimates can result from differences in: (a) the time period considered, (b) the level considered, (c) the methods used to derive the estimate, and (d) the models used. The different time periods considered could explain why the estimate in Schoeberl et al. (2018) is lower than ours: they considered a boreal winter period. Given the compilation of observations by C. Liu and Zipser (2005), the climatological abundance of overshooting convection during the month of August, investigated here, is larger than during boreal winter, but certainly not by the factor of more than 5 that would be needed to reconcile the estimates. Also, the two estimates from Ueyama et al. (2015) and Ueyama et al. (2018) suggest a very limited seasonal variation of the convective contribution to the low-stratospheric water vapor, with the caveat that these latter two estimates are based on short time periods (7 days).
Especially for the comparison to the Ueyama et al. (2015) and Ueyama et al. (2018) estimates, the different levels considered could be a plausible explanation, as they only considered the 100 hPa level. However, as stated in the introduction, considering only the 100 hPa level should bias the estimate low, not high, whereas their estimates are higher than ours. Furthermore, Ueyama et al. (2015) and Ueyama et al. (2018) ran diabatic back trajectories to convection using monthly averaged heating rates that are inconsistent with the wind fields. Finally, about one third of their back trajectories end up in the stratosphere, where they are set to MLS water vapor values. This implicitly ties their results to MLS, so uncertainty remains about the ability of their simulations to quantify the water vapor variations. This suggests that the different methods used to derive the estimates might explain much of the differences since, in contrast to these other studies, we computed here a full budget of water vapor in the low stratosphere. Precisely isolating the reasons for the differences between the various estimates is beyond the scope of this study and would require a concerted international effort, for example, defining an experiment in which the hydration would be computed using the different methods on the same fields and time period. Finally, the differences in the models used, and especially in their representations of microphysics, are also crucial. A way to evaluate this uncertainty would be to apply our analysis to the other storm-resolving models participating in the DYAMOND intercomparison project, something left for future work. Besides the representation of microphysical processes, there are other sources of uncertainty associated with our results. As mentioned in Section 2, whereas the horizontal grid spacing of 2.5 km may lead to an underestimation of the convective injection of water vapor into the stratosphere (Dauhut et al., 2015), the use of a vertical grid spacing of 600 m may lead to either an over- or an underestimation of the convective hydration of the stratosphere. Also, the convective contribution to the water vapor input into the stratosphere is deduced from the sink/source term of the water vapor budget (Equation 1), a term that is computed as a residue. An in-line budget of the condensation/sublimation processes would provide a more direct quantification. Further uncertainties arise from the shortness of the simulated period. Performing similar quantifications with other global storm-resolving models (GSRMs) and with the longer simulations that are now becoming available would help reduce these uncertainties. Our analysis also draws attention to the fact that deducing the convective contribution to the water transport into the stratosphere from the anomalies of humidity observed above the tropical very deep convective systems (like the ones in Figure 5) can be misleading. Indeed, whereas the local humidity increases at the top of the very deep convective systems correspond to +0.6 Tg in the simulation, that is, 3% of the water mass increase in the full TLS (Table 1), the budget indicates that this does not reflect the 11% contribution of convection to the water mass input. The difference is primarily due to the horizontal moisture fluxes, which efficiently advect the water vapor anomalies out of the very deep convective systems (Table 1 and Figure 5). This strong ventilation contrasts with the paradigm of containment, which holds at the scale of the Asian monsoon anticyclone, that is, at a much larger scale than the typical size of the moist anomalies due to individual convective injections (about 100 km wide).
The strong ventilation explains why a relatively limited increase of water vapor (less than +8 ppmv) is found in the stratosphere over individual tropical very deep convective systems, as underlined in E. J. Jensen et al. (2020).

Conclusions

We used a simulation of the global atmosphere performed with the ICON model at a grid spacing of 2.5 km to assess the contribution of deep convection to the water vapor budget of the lower stratosphere. For the first time, global estimates of this contribution can be made without having to rely on a convective parameterization or on a trajectory model. The considered period lasts 40 days, from 1 August to 9 September 2016. The simulation reproduces the structure of the zonal and temporal average of the water vapor field. It also reproduces the moistening of the stratosphere observed over the simulated period, with too large an amplitude compared to the observations of 2016, but in better agreement with observations from other years such as 2017. We quantified the budget of water vapor in the TLS and disentangled its different contributions. The main source of hydration is the vertical flux of water vapor, which converges up to 19 km altitude, with a net maximum localized above the ITCZ between 5° and 30°N. The hydration due to these vertical moisture fluxes is compensated locally by the horizontal moisture fluxes, which redistribute the humidity southwards and northwards. Finally, the sink/source term due to microphysics and mixing by the smaller-scale circulations removes water vapor from the tropopause up to 17.5 km and adds water vapor above, spreading the hydration vertically up to 20 km. For the TLS and for the simulated period, this translates into an increase of +27.3 Tg of water vapor by the vertical moisture fluxes, −8.0 Tg by the horizontal ones, and +2.0 Tg by the sink/source term. To quantify the contribution of the very deep convective systems to this hydration, we first identified them by their associated very low OLR. Their distribution is in excellent agreement with observations. The lowest-OLR regions, with OLR lower than 90 W/m², exhibit the typical characteristics of very deep convective systems that overshoot into the stratosphere: significant ice content (larger than 1 eq. ppmv) and a positive water vapor anomaly (larger than 6 ppmv) above the tropopause. The computation of the water vapor budget based on the ICON simulation indicates an 11% contribution of the very deep convective systems to the water vapor input into the stratosphere. This water vapor input includes both the direct hydration by the overshoots (i.e., the sink/source term) and the vertical convergence term. The seasonal and interannual variability of this estimate remains to be investigated, as does the sensitivity of our results to the representation of the microphysical processes. 81% of the water vapor input by the very deep convective systems is directly transported away from these systems by the horizontal moisture fluxes, decreasing the amplitude of the stratospheric water vapor anomalies. Our results underline that, despite the relatively small increase of stratospheric water vapor concentration observed above the tropical very deep convective systems, a full-budget computation greatly helps to quantify the contribution of convection to the stratospheric water vapor input.

Data Availability Statement

The Aura-MLS data are archived online (https://mls.jpl.nasa.gov/).
The CERES SYN1deg product is available on the CERES data portal (https://ceres.larc.nasa.gov/data/). Model output supporting the conclusions of this article is archived by the German Climate Computing Centre (DKRZ) and made available through the ESiWACE project webpage (https://www.esiwace.eu/services/dyamond). The scripts for the analysis will be available online (http://hdl.handle.net/21.11116/0000-0007-989A-0) upon publication.
10,255
2022-02-21T00:00:00.000
[ "Environmental Science", "Physics" ]
Fuzzy-Based Trust Prediction Model for Routing in WSNs

The cooperative nature of multihop wireless sensor networks (WSNs) makes them vulnerable to varied types of attacks. The sensitive application environments and resource constraints of WSNs mandate the requirement of a lightweight security scheme. Earlier security solutions were based on the historical behavior of neighbors, but security can be enhanced by predicting the future behavior of the nodes in the network. In this paper, we propose a fuzzy-based trust prediction model for routing (FTPR) in WSNs with minimal overhead in regard to memory and energy consumption. FTPR incorporates a trust prediction model that predicts the future behavior of a neighbor based on its historical behavior, fluctuations in trust value over a period of time, and recommendation inconsistency. In order to reduce the control overhead, FTPR receives recommendations from a subset of neighbors that had the maximum number of interactions with the requestor. Theoretical analysis and simulation results of the FTPR protocol demonstrate a higher packet delivery ratio, a higher network lifetime, a lower end-to-end delay, and lower memory and energy consumption than traditional and existing trust-based routing schemes.

Introduction

Wireless sensor networks (WSNs) have attracted a wide range of applications, from the civil sector to the military [1][2][3][4][5]. A WSN consists of a large number of resource-constrained sensor nodes (SNs) deployed in hostile environments, which makes it feasible for adversaries to perform varied types of attacks [6,7]. Due to the limited communication range [8], the SNs communicate with the sink over multiple hops. This cooperative nature of WSNs makes them vulnerable to insider attacks, which require a trust management scheme. Most of the trust management schemes proposed in the literature depend on direct and indirect observations. For direct trust computation, the promiscuous mode of operation was used in most trust-based routing protocols to monitor neighbors. It demands that nodes remain awake for longer durations, which consumes more energy. The indirect trust was computed by receiving recommendations from all the neighbors, which also consumes more energy. Moreover, the malicious nodes were identified based only on their historical trust. Hence, in order to reduce the damage caused by malicious activities in mission-critical applications, the behavior of a node should be predicted in advance based on its historical trust and its tendency to maintain that trust consistently with the neighboring nodes in the network. To address these issues, in this paper we propose a novel fuzzy-based trust prediction model for routing (FTPR) in WSNs. FTPR was designed with the following objectives: (i) To minimize the energy consumption by avoiding the promiscuous mode of operation for neighbor monitoring and by reducing the number of recommendations collected from neighbors to compute indirect trust. (ii) To reduce packet loss by identifying and eliminating malicious nodes through trust prediction. The trust of a neighboring node is predicted based on direct trust, the number of trust fluctuations, and recommendation inconsistency. (iii) To thwart the black hole attack, on-off attack, bad-mouthing attack, and conflicting behavior attack. This paper is organized as follows: Section 2 discusses the related work. In Section 3, we describe the framework of the proposed FTPR protocol.
Simulation results and theoretical analysis are discussed in Section 4, and Section 5 concludes the paper along with the future scope of the work.

Related Works

Several trust-based routing schemes proposed recently in the literature were designed not only to meet security requirements but also to respect the resource-constrained nature of WSNs. Paris et al. proposed a novel routing protocol to eliminate the selfish behavior of a neighbor [9]. The scheme used a novel routing metric called the expected forwarding counter (EFW) to thwart the selective forwarding attack in wireless mesh networks. EFW was a cross-layer metric updated based on observations of the network layer and the MAC layer. Mohi et al. proposed an intrusion detection scheme to eliminate the denial-of-service (DoS) attack using a Bayesian game approach in WSNs [10]. It was an incentive-based approach that motivates the nodes to behave properly. The DoS attack was prevented based on the past behavior of the nodes in the Bayesian game formulation. The fuzzy-based detection and prediction system (FBDPS) [11] was proposed to detect the distributed DoS (DDoS) attack. FBDPS compared the actual energy consumed by a neighbor with the normal value; when the energy consumed by that node was abnormal, the node was considered malicious. The drawback of these schemes is their ability to identify only a specific attack, which may not be suitable for realistic applications. The group-based trust management scheme (GTMS) [12] was designed to overcome the black hole attack. The trust was dependent on direct and indirect monitoring. A distributed trust management scheme was adopted at the intragroup level by collecting recommendations from all group members to compute trust. A centralized trust management approach was used at the intergroup level, as each cluster head (CH) collected recommendations about other CHs directly from the sink. In order to reduce memory consumption, the trust was represented as unsigned integers in the range from 0 to 100. The drawback of GTMS was its requirement for high-energy CHs to communicate directly with the sink. Ambient trust sensor routing (ATSR) [13] was proposed to thwart the black hole attack, bad-mouthing attack, and conflicting behavior. It was a geographic routing protocol, and trust was computed based on direct and indirect observations. The trust values were represented as real numbers in the range from 0 to 1. The lightweight and dependable trust system (LDTS) [14], designed for hierarchical WSNs, thwarted the black hole and bad-mouthing attacks. The trust was computed based on direct and indirect observations. A centralized trust management scheme was used at the intracluster and intercluster levels. The trust value was assigned in the range from 0 to 10. All the abovementioned schemes use the promiscuous mode of operation for direct observation, and malicious nodes are identified based only on the past experience of a node. In order to improve network security through trust prediction, the trust-based source routing protocol (TSR) [15] was proposed for mobile ad hoc networks (MANETs). A fuzzy logic-based approach was used to predict the future behavior of a node from the knowledge of past behaviors. Trust was derived from direct observations, and TSR was able to thwart the black hole attack and grey hole attack. The ad hoc on-demand trusted multipath distance vector routing protocol (AOTMDV) [16] was proposed for MANETs to eliminate the modification attack, black hole attack, and grey hole attack.
It derived the trust based on direct and indirect observations. It used all the received recommendations to compute the historical trust of a node, which made it vulnerable to the bad-mouthing attack. The trust-aware secure routing framework (TSRF) [8], proposed for WSNs, was based on direct and indirect observations. It was designed to thwart the grey hole, tampering, on-off, and bad-mouthing attacks. As the trust value was represented as real numbers in the range from 0 to 1, TSRF consumed more memory, and malicious nodes were identified based only on the historical trust of a node. The two-way acknowledgment-based trust (2-ACKT) [17] framework did not use the promiscuous mode of operation for trust derivation and thwarted the black hole attack in WSNs. It used acknowledgments to derive the trust on the neighboring nodes, assuming that a malicious node drops data packets alone and not the acknowledgments. The scheme depends only on direct trust; as recommendations were not gathered from the neighboring nodes, the decisions derived might not be fully consistent with the actual state of the network.

Fuzzy-Based Trust Prediction Routing Protocol

In this section, we present the detailed framework of our proposed FTPR protocol for WSNs, together with the assumptions made for the protocol design and the threat model employed for evaluating its performance. The FTPR protocol derives the trust based on direct and indirect observations.

Assumptions and Threat Model. In WSNs, each node forwards data to the sink with the help of other intermediate nodes. The number of sinks does not have any impact on the FTPR protocol; hence, for simplicity, we assumed there was only one sink in the network. We assumed a hierarchical topology that consists of CHs and cluster members (CMs). The FTPR protocol maintains intracluster and intercluster topologies. The intracluster topology consists of a group of CMs attached to a CH; the intercluster topology comprises the CHs and the sink. The proposed FTPR framework consists of two stages, namely, a route discovery stage and a data forwarding stage. During the route discovery stage, each node discovers a route to the sink using a routing protocol, and it is assumed that all nodes behave legitimately during this stage. In the data forwarding stage, each CM forwards its data to the CH, and the CH in turn forwards the data to the sink over a multihop communication link. We assumed that some of the intermediate nodes in the multihop communication link behave maliciously while forwarding the data packets. Basically, trust is a relationship between two nodes for a specific action; in FTPR, we derived the trust between any two communicating nodes based on the packet forwarding action. An adversary can modify the contents of data packets or of the various control packets exchanged between the neighboring nodes in the FTPR protocol. In order to prevent fabrication of control or data packets, a secure communication channel can be established with the help of any key management scheme [18][19][20][21]. We assumed that malicious nodes manifest the black hole attack [8], on-off attack [8], bad-mouthing attack [8], and conflicting behavior attack [8]. The black hole attack and on-off attack are manifested in the data forwarding plane. In the black hole attack, a malicious node drops all received packets instead of forwarding them. In the on-off attack, a node alternates between good and bad behavior, hoping to remain undetected while misbehaving.
The bad-mouthing attack and the conflicting behavior attack are manifested in the trust evaluation plane. In the bad-mouthing attack, a malicious node provides dishonest feedback, recommending a good node as bad and a bad node as good. In the conflicting behavior attack, a malicious node behaves differently toward nodes in different groups.

Network Topology. Consider the topology shown in Figure 1, and let the subject node i want to evaluate the trust of its neighbor, the target node j. Node i forwards the data packet to node j, which in turn forwards the packet to its neighbor, the sponsor. On receiving the data packet, the sponsor forwards the packet to its own neighbor and also transmits an acknowledgment to node i through a third party, as shown in Figure 1. The 2-ACKT protocol [17] was used for routing and for determining the third party for the transmission of the acknowledgment. A transaction was considered successful when the subject received the acknowledgment for the data packet sent to the target; the higher the number of successful transactions, the higher the trust on the target. In FTPR, the trust was computed at the intracluster level and at the intercluster level. At the intracluster level, the CH aggregates all the data packets transmitted by the CMs, and some of the intermediate CMs in the communication link may be malicious. In the FTPR protocol, when a CM "x" wanted to communicate with the CH through an intermediate CM "y", node x would check the trust of node y in its trust table. If node y was legitimate, node x would transmit the data packet to node y; otherwise node x would find another route to the CH. Within the cluster, the trust was based on direct observation only, as discussed in Section 3.3; in order to reduce the overhead involved in gathering recommendations, indirect trust was not considered at the intracluster level. At the intercluster level, the CH sends all the aggregated data to the sink through the multihop communication link, which may contain malicious nodes. When a CH "x" wanted to communicate with the sink through an intermediate CH "y", CH x would check the trust of CH y in its trust table. If CH y was legitimate, CH x would transmit the data packet to it; otherwise CH x would find another route to the sink. The trust was computed based on direct trust and indirect trust, as discussed in Section 3.4; indirect trust was considered in order to maintain trust consistency within the network.

Direct Trust Computation. In trust-based routing schemes, direct trust is calculated from direct interactions with the neighbors. It must be ensured that the neighbor has successfully received the packet and has then forwarded it honestly by following the underlying routing protocol. The packet forwarding behavior of a CM was monitored by the two-hop group acknowledgment scheme discussed in [17]. In order to identify the inconsistent behavior of a node, the historical trust of the node should be considered when computing trust. To address these issues, a sliding time window scheme for trust calculation was used, as shown in Figure 2. The time scale was divided into equal-sized observation windows W_{t-5}, W_{t-4}, W_{t-3}, W_{t-2}, W_{t-1}, W_t, W_{t+1}, ..., where W_t denotes the tth observation window. The numbers of successful and failed transactions were calculated for each observation window. The sliding time window consisted of four observation windows, as shown in Figure 2, and the details of the interactions in each observation window were recorded separately.
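A minimal sketch of the sliding-time-window bookkeeping described above, keeping per-window counts of successful and failed transactions in a fixed-length deque; the class and method names are our own, not part of the FTPR specification:

```python
from collections import deque

class SlidingTrustWindow:
    """Per-neighbor record of successful/failed transactions over the
    last n observation windows (n = 4 in FTPR)."""

    def __init__(self, n_windows=4):
        # Each entry is [successes, failures] for one observation window.
        self.windows = deque([[0, 0] for _ in range(n_windows)],
                             maxlen=n_windows)

    def record(self, success: bool):
        # Record a transaction in the current (rightmost) window.
        self.windows[-1][0 if success else 1] += 1

    def slide(self):
        # Each unit of time the window slides one step to the right:
        # the oldest window's experience is dropped, a fresh one opens.
        self.windows.append([0, 0])

# Usage: record interactions, slide once per observation period.
w = SlidingTrustWindow()
w.record(True); w.record(False); w.slide(); w.record(True)
print(list(w.windows))
```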
The trust of a node was computed based on the numbers of successful and failed transactions. For each unit of time, the sliding time window slides one observation window to the right, thereby dropping the oldest experience by one unit and adding the newest experience. Hence, the trust on the target during the tth observation window depends on the numbers of successful and failed transactions during four observation windows, namely W_t, W_{t-1}, W_{t-2}, and W_{t-3}. In order to store the details gained during direct interaction, we introduced a transaction table in the routing protocol. The observed successful and failed transactions were stored in the transaction table. The transaction table of a CM consisted of the following fields: ⟨node id, number of successful transactions, number of failed transactions, trust level⟩, where node id is the address of the target; the number of successful transactions field was incremented by one whenever the node received a link-layer acknowledgment from the target and a third-party acknowledgment within a timeout period; the number of failed transactions was incremented by one whenever it received a link-layer acknowledgment from the target but not the third-party acknowledgment within the given timeout period; and the trust level took an integer value in the range from 0 to 7. The computed trust value, which lies in the range from 0 to 100, was mapped to a trust level in the range from 0 to 7, as discussed in [17]. As the CH computed trust based on direct observations as well as on the neighbors' recommendations, the transaction table of a CH consisted of the following fields: ⟨node id, number of successful transactions, number of failed transactions, trust level, peer recommendation, number of recent transactions, number of fluctuations, recommendation inconsistency, predicted trust level⟩, where the peer recommendation field was updated based on the recommendations received from the neighbors; the number of recent transactions was incremented by one when the CH received a data packet from the neighbor node id; and the number of fluctuations was used to monitor the frequency of change in the trust level of the neighboring node. A good node maintains a constant trust level, and hence its number of fluctuations is low. The number of fluctuations (N_NF) was updated from the previous number of fluctuations (PNF), the maximum trust level (MTL), and the previous and current trust levels (PTL and CTL); the update was designed in such a way that N_NF increases rapidly when the trust level decreases. The recommendation inconsistency was used to monitor whether the target behaved in a consistent manner with all its neighbors, and it is equal to the variance (σ²) of the received recommendations. The variance of the recommendations received for a target that behaves uniformly with all its neighbors is smaller than for a target that behaves inconsistently with its neighbors; it was used to identify the conflicting behavior of a node. The predicted trust was updated using the fuzzy-based trust prediction model discussed in Section 3.6. The direct trust between the subject and the target, based on the numbers of successful and failed transactions, can be derived as follows. Let DV(i, j) be the direct trust value of the target j computed by the subject i. It was initially assumed to be 100, since all nodes were considered legitimate during network setup.
Let S_k and F_k be the numbers of successful and failed transactions during the kth observation window, respectively, and let n be the total number of observation windows. The positive trust value P_k for the kth observation window is given by

P_k = (1 − 1/(S_k + 2)) × (S_k + 1)/(S_k + F_k + 2)

where the ratio (S_k + 1)/(S_k + F_k + 2) simply gives the proportion of successful transactions among the total transactions during the kth observation window; in order to give more importance to the number of successful transactions, this ratio is multiplied by the term 1 − (1/(S_k + 2)). The direct trust value DV(i, j) combines the P_k over the n observation windows of the sliding time window. As the bad behavior of a node should be remembered for a longer duration and the recent transactions must carry more significance than older ones, each window's contribution is weighted by an aging factor β_k, where 1 − P_k represents the negative trust value for the kth observation window. The value of β_k can take any value in the range from 0 to 1, subject to the condition that β_1 < β_2 < ⋅⋅⋅ < β_n.

Trust Counselor. The two-hop acknowledgments depend on an alternate path through the sponsor and the third party, so the trust of the target might be reduced due to the malicious activity of the sponsor, the target, or both. In order to identify the malicious activity of the sponsor and the target, a trust counselor component was introduced. When the trust level of the target dropped below a warning threshold, the subject initiated the counseling process by unicasting a warning packet to the target. The warning threshold was not a constant; it was determined based on the trust levels of the trusted neighboring nodes. The warning packet comprises the following fields: ⟨warning identifier, subject address, packet category, node address 1, node address 2⟩, where the warning identifier and the subject address uniquely identify the warning packet, and the warning identifier is defined by the subject; the packet category is assigned 0 or 1 if node address 2 has not forwarded data or acknowledgment packets, respectively. It is assigned 0 when the subject initiates the counseling process, as the subject assumes that the target has not forwarded the data packet. When the subject unicasts a warning packet to the target, node address 1 and node address 2 are set to the subject and target addresses, respectively. On receiving the warning packet, the target modifies the packet category to 1, as it assumes that the sponsor has not forwarded the acknowledgment back to the subject through the alternate path. In this way the warning packet reaches the third party through the sponsor. On receiving the warning packet, the third party unicasts a response packet back to the sponsor. The response packet consists of the following fields: ⟨response identifier, subject address, node address 1, node address 2, status⟩, where the response identifier and the subject address are assigned the warning identifier and subject address from the warning packet, respectively; node address 1 is assigned the third party's own address, and node address 2 the sponsor's address. The status field can be 0, 1, 2, 3, 4, or 5, as listed in Table 1. Status 0 denotes a link failure that has not yet been rectified, so the node is not ready to forward any packet. Status 1 refers to the condition that the node had not forwarded the packet due to a link failure, but the failure was later rectified and the node is ready to forward; link failures can be due to network traffic conditions. Status 2 indicates the inability of a node to participate in the data forwarding activity due to energy or bandwidth unavailability.
Status 3 refers to the condition that the node had not forwarded the packets due to insufficient resources, but the resources later became available and the node is ready to forward. Status 4 indicates the existence of a noncooperative malicious neighbor and the unavailability of an alternate path. Status 5 is used when the node has identified another alternate path due to the existence of the noncooperative neighbor. In this way the response travels back to the subject through the sponsor and the target. If the response packet does not reach the subject within the response wait time, the subject reinitiates the route discovery process. The time interval between the transmission of the warning packet to the target and the reception of the response generated by any one of the three entities, namely the target, the sponsor, or the third party, is called the response wait time. The upper bound for the response wait time is equal to three times the sum of the propagation delay and the processing delay experienced by the packets in the network.

Indirect Trust Computation. The indirect trust for the target was computed based on the recommendations obtained from neighbors. Recommendations also help in building trust consistency across the network. In this section, we discuss the procedures for requesting recommendations and responding to such requests in WSNs. For requesting recommendations, the subject broadcasts a trust request (TREQ) message to its neighbors within its transmission range. The TREQ message contains the following fields: ⟨TReqId, subject, target, I_min, timestamp⟩, where TReqId is the trust request identifier used to uniquely identify the TREQ message and the timestamp indicates the issuing time. I_min denotes the minimum number of interactions a recommender must have had with the subject, and it is determined from the number of neighboring CHs. Algorithm 1(a) describes the procedure for transmitting the TREQ message in detail. Upon receiving the TREQ message, the nodes that have a prior trust relationship with the target process the TREQ message as given in Algorithm 1(b) and unicast a trust reply (TREP) message back to the subject. The TREP message contains the following fields: ⟨recommender, subject, target, DL(x_k, j)⟩, which indicates that a recommender x_k unicasts back to the subject the trust level DL(x_k, j) it associates with the target. Let R = {x_1, x_2, ..., x_m} be the set of recommenders for the subject, where m is the total number of recommenders. The indirect trust between the subject i and the target j can then be defined as the average of the received recommendations,

IL(i, j) = (1/m) Σ_{k=1}^{m} DL(x_k, j)    (6)

where DL(x_k, j) is the trust level associated between the kth recommender and the target j, an integer value in the range from 0 to 7. It can be noted that a malicious node can manifest the bad-mouthing attack while responding to the TREQ message. In order to detect such outliers among the received recommendations, we used the empirical rule [22] with the mean (μ) plus or minus one standard deviation (σ), as the recommendations are represented as integer values in the range from 0 to 7. Only those recommendations that lie in the range from (μ − σ) to (μ + σ) were considered consistent and used for calculating IL(i, j) from (6). Assuming that there are O outliers among the m received recommendations, (6) can be rewritten as the average over the m − O consistent recommendations,

IL(i, j) = (1/(m − O)) Σ_{k=1}^{m−O} DL(x_k, j)    (7)

where O ≤ m. A target was considered trusted when IL(i, j) was greater than or equal to the threshold trust level.
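The outlier filtering described above can be sketched directly: recommendations outside the mean plus or minus one standard deviation are discarded before averaging into the indirect trust level. The function name is ours, and the population standard deviation is used for illustration:

```python
import statistics

def indirect_trust(recommendations):
    """Average the consistent recommendations (trust levels 0-7),
    discarding outliers outside mean +/- one standard deviation."""
    mu = statistics.mean(recommendations)
    sigma = statistics.pstdev(recommendations)   # population std dev
    consistent = [r for r in recommendations if mu - sigma <= r <= mu + sigma]
    return sum(consistent) / len(consistent)

# A bad-mouthing recommender rates a good node 0 among honest 6s and 7s:
print(indirect_trust([6, 7, 6, 7, 0]))   # the outlier 0 is filtered -> 6.5
```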
Trust Level. The trust level TL(i, j) was computed based on DL(i, j) and IL(i, j). The trust level was predicted based on the direct trust level, the number of fluctuations, and the recommendation inconsistency, using the fuzzy rules listed in Table 2. The bases of the membership functions are chosen so that they result in optimal values of the performance measures. To illustrate, the first rule can be interpreted as: "If the direct trust value is LOW and the number of fluctuations is LOW and the recommendation inconsistency is LOW, the predicted trust is MEDIUM." The other rules are framed similarly.

Performance Analysis

The performance of the FTPR protocol was evaluated using the ns-2 simulator. The simulation parameters are listed in Table 3. We used a simulation area of 300 m × 300 m, with six hundred nodes placed at random. The transmission range was 45 m. IEEE 802.15.4 was the MAC layer protocol used to evaluate the performance of the proposed trust model under attack conditions.

Metrics. The performance of the FTPR routing protocol was evaluated using the following metrics. (i) Packet Loss: the total number of data packets lost legitimately or through malicious action without any notification. (ii) Packet Delivery Ratio (PDR): the ratio of the total number of data packets delivered to the total number of data packets sent.

Simulation Results and Discussion. The performance of the FTPR protocol was compared with the 2-ACKT [17], GTMS [12], and AODV [23] protocols under a varying number of malicious nodes, as shown in Figure 3. Among the total number of malicious nodes, 40 percent performed the black hole attack, 30 percent the on-off attack, 15 percent the bad-mouthing attack, and 15 percent the conflicting behavior attack. The FTPR, 2-ACKT, GTMS, and AODV routing protocols were tested against exactly the same scenario and connection pattern. The packet loss of the FTPR, 2-ACKT, GTMS, and AODV protocols was plotted against a varying percentage of malicious attacks, as shown in Figure 3(a). AODV is a traditional routing protocol that cannot thwart any malicious attacks, and it hence resulted in higher packet loss compared to FTPR, 2-ACKT, and GTMS. GTMS and 2-ACKT were designed to thwart only the black hole attack; the presence of the on-off attack, bad-mouthing attack, and conflicting behavior attack resulted in higher packet loss in the GTMS and 2-ACKT protocols. In GTMS and 2-ACKT, a node forwards data packets to its malicious neighbor until the trust level of that neighbor drops below the threshold trust level. In FTPR, by contrast, a node transmits a packet to its next-hop neighbor based on the predicted trust level; as a result, the packet loss of the FTPR protocol is 43.53 percent and 45.24 percent lower than that of the GTMS and 2-ACKT protocols, respectively. As malicious nodes were identified based only on direct trust in 2-ACKT, its packet loss was slightly higher than that of GTMS. The lower packet loss has a positive effect on the PDR of the FTPR protocol, as shown in Figure 3(b). The PDR of the FTPR routing protocol is augmented by 43.91 percent, 19.78 percent, and 18.18 percent when compared to the AODV, 2-ACKT, and GTMS protocols, respectively. In the FTPR routing protocol, only direct observation is considered to compute trust at the intracluster level, and at the intercluster level the recommendations are collected only from the most-interacted neighbors. In GTMS, the recommendations are considered both at the intracluster level and at the intercluster level. As the promiscuous mode of operation is not used for neighbor monitoring, the control overhead of the FTPR protocol is 13.99 percent lower than that of the GTMS protocol, as shown in Figure 3(c).
As recommendations were not gathered at all in 2-ACKT, its control overhead is 15.04 percent lower than that of FTPR. The lower control overhead and the effective trust prediction mechanism of FTPR reduce the energy consumption by 17.26 percent compared to the GTMS protocol, as shown in Figure 3(d). The energy consumption of GTMS is higher because its nodes use the promiscuous mode for neighbor monitoring and its CHs use high powered transmitters to communicate with the BS. The simulation was performed with an initial energy of 0.5 J to calculate the network lifetime. The lower energy consumption improves the network lifetime of the FTPR protocol by 8.72 percent over the GTMS routing protocol, as shown in Figure 3(e). Even though the control overhead of 2-ACKT and AODV was lower than that of FTPR, the presence of malicious nodes resulted in higher end-to-end delay in AODV, 2-ACKT, and GTMS, as most of the data packets did not reach the destination, as shown in Figure 3(f).

Theoretical Analysis. Let n be the total number of SNs in the network and let h be the average number of hops between a CM and the sink. For this analysis, we assumed that all nodes in the network wanted to communicate with the sink over h hops and did not have any prior trust relationship with their neighbors. In this section, the performance of the FTPR protocol is compared with the GTMS [12] protocol in terms of communication overhead and memory consumption.

Communication Overhead. In FTPR, when a node from the ith cluster wants to communicate with the sink through its CH, the total number of acknowledgments generated for trust computation is 2(h − 1). Assuming a maximum of 30 percent malicious nodes, the maximum number of warning packets generated in the network is 0.3 × 3(h − 1) and the maximum number of response packets generated is 0.3 × 3(h − 1). Therefore, the communication overhead incurred by direct observation for one node to communicate with the sink is

2(h − 1) + (0.3 × 6(h − 1)).

Indirect observation was used to compute trust between clusters. FTPR broadcasts one recommendation request and receives recommendations only from a subset of neighbors; let k be the number of received recommendations. The communication overhead incurred by indirect observation for one CH to communicate with the sink is then 1 + k(h − 2), so the total communication overhead incurred by one node to communicate with the sink is

1 + k(h − 2) + 2(h − 1) + (0.3 × 6(h − 1)).

As described in [19], a corresponding expression can be derived for the communication overhead of the GTMS [12] protocol. The communication overhead was plotted against the number of communicating nodes by setting n = 144 and h = 5, as shown in Table 4. The GTMS protocol with cluster sizes of 9, 12, and 18 nodes is denoted GTMS-9, GTMS-12, and GTMS-18, respectively. The communication overhead of the GTMS protocol increases with cluster size, as shown in Table 4, whereas the communication overhead of the FTPR protocol stays the same throughout, as it does not depend on cluster size. In GTMS, recommendations were collected even at the intracluster level, so when the cluster size was large, more recommendations were received; as a result, the communication overhead of GTMS-18 is 17.9 percent higher than that of our proposed FTPR protocol. In the GTMS protocol, the CH employed a high power transmitter to communicate directly with the sink for requesting and gathering recommendations about the state of neighboring CHs.
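The overhead expressions above are simple enough to check numerically. A short sketch under the stated assumptions (the function name is ours, and the 1 + k(h − 2) indirect term follows the reconstruction given above):

```python
def ftpr_overhead(h, k, malicious_fraction=0.3):
    """Total packets for one node to reach the sink over h hops:
    2(h-1) acknowledgments plus 0.3*3(h-1) warning and 0.3*3(h-1)
    response packets for direct observation, and 1 TREQ broadcast
    plus k(h-2) recommendation traffic for indirect observation."""
    direct = 2 * (h - 1) + malicious_fraction * 6 * (h - 1)
    indirect = 1 + k * (h - 2)
    return direct + indirect

# With h = 5 hops (as in Table 4) and k = 3 recommenders:
print(ftpr_overhead(5, 3))  # 15.2 direct + 10 indirect = 25.2
```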
In the FTPR protocol, by contrast, all the nodes used similar low power transmitters and communicated with the sink over a multihop link. The exchange of acknowledgments, warning packets, and response packets over the multihop link increases the overhead of the FTPR protocol. As a result, the communication overhead of GTMS-9 was 35.8 percent lower than that of FTPR. Hence, FTPR is more suitable for homogeneous WSNs with large clusters.

Memory Consumption. In FTPR, CMs and CHs maintained a transaction table to monitor and store the trust levels of their neighbors. The fields of the CM transaction table and their memory sizes are shown in Table 5. The node id occupied 2 bytes, the numbers of successful and failed transactions occupied 2 bytes each for every observation window in the sliding time window, and the trust level required 3 bits. Therefore, the memory required to store a record of the transaction table, representing the trust relationship with one neighbor, was 2.375 + 4w bytes, where w is the number of observation windows. The fields of the CH transaction table and their memory sizes are shown in Table 6. The CH record contains 3 additional fields compared to the CM record, namely, the peer recommendation, which occupies 3 bits, and the number of fluctuations and the recommendation inconsistency, which occupy 4 bits each. Therefore, the memory required by a CH to store a record of the transaction table, representing the trust relationship with one neighbor, was 3.75 + 4w bytes, where w is the number of observation windows. The total size of the transaction table representing the trust relationships between a CM and all its neighbors was

FTPR(CM) = (2.375 + 4w)(c − 1) bytes, (14)

where c is the average size of a cluster. The total size of the transaction table representing the trust relationships between a CH and all its neighbors was

FTPR(CH) = (6.125 + 4w)(N_av − 1) bytes,

where N_av is the average number of CHs. The memory consumption of the FTPR and GTMS protocols was plotted against the number of neighboring nodes, setting the size of the observation window w = 4, as shown in Table 7. The memory consumption of the FTPR protocol was found to be 19.9 percent lower than that of the GTMS protocol. This was achieved by using 3 bits to represent the trust levels of the neighboring nodes in the transaction table and also because only direct trust was considered at the intercluster level.
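The per-record sizes follow directly from the bit widths listed above; a small sketch makes the arithmetic explicit (function names are ours):

```python
def cm_record_bytes(w):
    """CM transaction-table record: 2-byte node id + 3-bit trust
    level (0.375 byte) + (2 + 2) bytes of transaction counters per
    observation window -> 2.375 + 4w bytes."""
    return 2.375 + 4 * w

def ch_record_bytes(w):
    """CH record: the CM fields plus a 3-bit peer recommendation and
    two 4-bit counters (fluctuations, inconsistency) -> 3.75 + 4w."""
    return 3.75 + 4 * w

# With w = 4 observation windows, as used for Table 7:
print(cm_record_bytes(4))  # 18.375 bytes per neighbor
print(ch_record_bytes(4))  # 19.75 bytes per neighbor
```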
Conclusions and Future Scope. In this paper, we proposed the FTPR protocol to effectively thwart the black hole attack, on-off attack, conflicting behavior attack, and bad-mouthing attack. It employs a fuzzy-based trust prediction model to predict the future behavior of a neighboring node based on its historical behavior, trust fluctuations, and recommendation inconsistency. It derives trust from both direct and indirect observations. It reduces energy consumption significantly by avoiding the promiscuous mode of operation for direct trust derivation and by gathering recommendations only from a subset of neighbors for indirect trust derivation. Memory consumption is significantly reduced by representing 8-bit trust values as 3-bit trust levels. By considering the historical behavior of a node using the sliding time window scheme, the on-off attack is identified and eliminated. The bad-mouthing attack is avoided effectively by eliminating outliers from the received recommendations. The conflicting behavior attack is thwarted by considering recommendation inconsistency in the fuzzy-based trust prediction. The novel trust prediction model significantly improved the packet delivery ratio in the network. As recommendations were received only from a subset of neighbors, there was a significant reduction in control overhead. Theoretical and simulation results show that the FTPR protocol achieves a higher packet delivery ratio, lower end-to-end delay, longer network lifetime, and lower memory consumption than traditional and existing trust-based routing schemes. A limitation of this work is that nodes were assumed to have a unique identity, which is not suitable for some applications. We therefore plan to design a trust-based routing protocol for applications that require anonymous identities in WSNs.
8,075.2
2014-07-14T00:00:00.000
[ "Computer Science", "Engineering" ]
EXPERIENCE WITH PREPARATION OF AN INACTIVATED VACCINE AGAINST AUJESZKY'S DISEASE Dedek L., J. Jerabek: Experience with Preparation of an Inactivated Vaccine against Aujeszky's Disease. Acta vet. Brno, 50, 1981: 221-227. Cell lines IBRS-2, PK-C or primary porcine kidney cells are the most suitable ones for Aujeszky's disease virus propagation for preparation of an inactivated vaccine. Glutaraldehyde-inactivated virus of Aujeszky's disease proved to be safe in mice and rabbits. Effectiveness of the vaccine was tested in rabbits. Their vaccination followed by revaccination generated immunity for 6 months. The lyophilized inactivated vaccine can be stored for long periods without loss of effectiveness. The vaccine is reconstituted immediately before use with the supplied diluting fluid containing a lipoid adjuvant. Aujeszky's disease, preparation, inactivated vaccine, cell cultures, glutaraldehyde, safety and potency.

Pork meat production is largely dependent upon sufficient numbers of healthy feedlot piglets. Their successful rearing, however, may be threatened, especially under large-scale systems, by Aujeszky's disease. In Czechoslovakia, live vaccines have been used to date for immunoprophylaxis of Aujeszky's disease. They were prepared from avirulent Aujeszky's disease viral strains, and at the time of their introduction they yielded better results than aluminium hydroxide-inactivated vaccines. The avirulent vaccines considerably decreased animal losses in pig populations. Recent knowledge of new inactivating substances and the use of adjuvants to potentiate the antibody response when administered with various antigens enabled the preparation of a new, potent inactivated vaccine against Aujeszky's disease. Both live avirulent and inactivated vaccines against Aujeszky's disease must contain sufficient amounts of virus. For instance, only a sufficient viral content (10^5 TCID50) of the avirulent strain BUK-TK-300/9,2 can induce a good antibody response in sheep (Žuffa 1972). For preparation of inactivated vaccines, viral suspensions with substantially higher viral titers are employed (from 10^7.5 to 10^9/cm³). For preparation of an inactivated vaccine, the Aujeszky's disease virus propagated in the hamster kidney cell line BHK-21 was used by Wittmann and Jakubik (1977). They obtained a viral titer of 10^8.5-10^9 TCID50/cm³. For virus inactivation, various agents have been used: ethylenimine by Wittmann and Jakubik (1977), acetylethylenimine by Gutenkunst (1978), betapropiolactone by Frescura et al. (1977), glutaraldehyde or formaldehyde by Toma et al. (1975), formaldehyde by Žuffa and Neurath (1962), and alcohol and saponine by Žuffa et al. (1978). Inactivation by gamma-irradiation at a dose of 0.5 Mrad was used by Dilovski (1973). Possibilities of Aujeszky's disease virus inactivation by ultraviolet light were explored by Lai and Jong (1980). To increase the antibody response to inactivated vaccines against Aujeszky's disease, Toma et al. (1975) and Delagneau et al. (1975) employed vaseline oil, mannit and emulgin; Wittmann and Jakubik (1977) used DEAE-D; Frescura et al. (1977) used Marcol 52 and Arlacel 80; Lukert et al. (1978) used Tween 80, mineral oil and Arlacel. Gutenkunst (1978) followed the immune response to a vaccine supplemented with lauric acid and aluminium hydroxide. Marcol, Arlacel and Tween 80 were also employed. All the above-mentioned vaccines supplemented with agents potentiating the immune response were liquid.
The present paper deals with the preparation of a lyophilized inactivated vaccine against Aujeszky's disease and its testing in experimental animals (rabbits).

Aujeszky's disease virus. Cell cultures were infected with Aujeszky's disease (AD) virus isolated from cattle and propagated in cell line PK-C. Potency of the prepared inactivated vaccine was tested by challenging the experimental rabbits with a virulent AD virus, the lyophilized strain B/200 (Institute for State Control of Veterinary Biologicals and Drugs, Brno).

Cell cultures. Cell line PK-C and primary cell cultures from pig kidney cultivated in Earle's solution with LAH were tested for suitability for AD virus propagation. We further tested a cell line from porcine kidney cortex, IBRS-2, cultivated in Hanks' medium, a rabbit kidney cell line, RK 13, and a monkey kidney cell line, VERO, cultivated in MEM (Eagle) medium.

Virus inactivation. A suitable concentration of the inactivation agent was determined using a viral titer of 10^8.5 TCID50/1 cm³. Of the inactivation agents, glutaraldehyde was tested at final concentrations of 0.025%, 0.05%, 0.1% and 0.2% added to the viral samples. The mixture of the virus and glutaraldehyde was incubated for 2 hours at 34 °C in a water bath in tightly closed vials. The inactivated substance at each of the above-mentioned dilutions was injected intracerebrally into mice weighing 10 g at doses of 0.03 cm³. The animals were observed for 6 days after inoculation.

Vaccine preparation. The virus for vaccine preparation was propagated in the 3-to-4-day-old cell line IBRS-2, infected with 0.05 TCID50 of virus per cell. The infected cell culture was cultivated in the maintenance medium without serum at 37 °C for 36 to 48 hours. Within this time a marked cytopathic effect occurred. The harvested viral suspension was homogenized, stored at +4 °C and then tested for sterility. The virus titer was determined. Preparation of the vaccine required 10^7 to 10^8 viral TCID50 per 1 cm³. Before the vaccine preparation proper, the cell debris was eliminated by centrifugation or filtration. The virus was inactivated by glutaraldehyde at 0.15% concentration for 2 hours at 34 °C. From the inactivated product a sample was taken for inactivation control in mice. The product was mixed with the lyophilic medium and lyophilized. The lyophilized vaccine was reconstituted in a solvent serving also as a lipoid adjuvant. The solvent was composed of distilled water, paraffin oil, pharmaceutical lanolin and Tween 80.

Safety and potency tests were performed in groups of rabbits weighing 2-2.5 kg (3 animals per group). The rabbits were inoculated i.m. (thigh muscles) with various fractions of the vaccination dose for pigs (see Table 1), which amounts to 5 cm³ (Jerabek and Dedek 1981). Eleven days later the rabbits were revaccinated with the same dose and in the same way. Ten days later (i.e., 21 days after the first vaccination) all rabbits, including the control animals, were challenged with the virulent AD virus at a dose of 1 cm³ containing 10^4 LD50 for rabbits. The viral suspension was administered into the thigh musculature of the intact hind limb. An observation period of 14 days followed.

Evaluation of the safety test. None of the vaccinated rabbits may show symptoms of AD during the 11-day observation period after the first vaccination, and they may not exhibit overall or local postvaccination reactions.
Evaluation of the potency test. The vaccine must protect 100% of the experimental animals against challenge with AD virus in groups 1, 2 and 3 (see Table 1). Animals of groups 4 and 5 may die. All animals of the control group should die within the 14-day observation period after the challenge (Delagneau et al. 1975). For the experiment, 13 rabbits weighing 2.5 kg each were employed. They were vaccinated intramuscularly with 1 cm³ of the vaccine and revaccinated with the same dose 21 days later. Virus-neutralizing antibodies were assayed prior to vaccination (with negative results) and 21, 35, 97 and 153 days thereafter. For the neutralization tests a micromethod was employed using volumes of 0.05 cm³ and working viral dilutions of 100 to 500 TCID50. For challenges, AD viral doses of 10 000 LD50 per rabbit were used, and the animals were challenged 1, 3, 5, 6, 9, 10 and 11 months after vaccination. Expiration of the vaccine was determined in lyophilized vaccine samples stored at +4 °C and inoculated into rabbits at various time intervals from 6 to 31 months.

Results. Suitability of cell cultures for AD virus propagation is shown in Table 2. The highest viral titers were obtained in the IBRS-2 cell line, in primary porcine kidney cells and in the PK-C cell line. Markedly lower titers were obtained in the RK 13 and VERO cell lines. In testing for the most sensitive cell system, the cell lines PK-C and RK 13 yielded the best results. The IBRS-2 line and primary porcine kidney cells were less sensitive by one order of magnitude.

Safety and potency tests. All 14 batches of the vaccine were perfectly safe. Potency of the vaccine was tested in 13 batches. Two of them did not give satisfactory results, although their AD viral titer had been sufficiently high prior to inactivation and lyophilization. The reason for this failure remains unknown. The remaining 11 batches of the inactivated vaccine gave satisfactory results.

Post-vaccination immunity in rabbits. In rabbits vaccinated with 1 cm³ of the vaccine and revaccinated 21 days later, the highest antibody titer of 51 was found on day 35. On day 97 after vaccination an average titer of 10 was found (Fig. 1). The control of vaccination effectiveness by challenge of the vaccinated rabbits showed 100% of the animals to be protected 1, 3, 5 and 6 months after vaccination. Fifty per cent of the animals were found to be protected 9 and 10 months after vaccination. The control, non-vaccinated rabbits died on days 5 and 6 after challenge. The results are shown in Table 4. Table 5 shows good effectiveness of all vaccine samples tested in the period of 6 to 31 months.

Discussion. Vaccines containing a live, avirulent AD virus generally confer excellent immunity, but the persisting residual virulent virus may limit their use (Delagneau et al. 1975; Lai and Jong 1980). Therefore, in several countries including Czechoslovakia, possibilities have been explored to produce inactivated vaccines against AD suitable for immunoprophylaxis in swine. This paper presents preliminary results obtained in the preparation of the inactivated vaccine against AD. Among various cell substrates for virus propagation, the most suitable were the cell lines IBRS-2 and PK-C and primary porcine kidney cells. The viral titer yields in these cell line cultures are comparable with data of other authors (Delagneau et al. 1975; Frescura et al. 1977). Glutaraldehyde-inactivated AD virus was entirely safe for mice and rabbits. Our results of AD virus inactivation by glutaraldehyde are comparable to those published by Toma et al. (1975).
The two-component form of the vaccine (i.e., lyophilized inactivated viral antigen and the lipoid adjuvant) proved advantageous, since in this type of vaccine the decrease in effectiveness is smaller than in those adjuvanted during the manufacturing process. Effectiveness of this vaccine proved to be high in rabbits, as shown by the good immunogenicity of the glutaraldehyde-inactivated AD virus. Vaccination and revaccination of rabbits with 1 cm³ of the vaccine generated a 100 per cent immunity 6 months thereafter. Toma et al. (1975) reported a protection period of 7 months in rabbits experimentally infected with AD virus. Another study of our laboratory presents the results on the use of the inactivated vaccine in immunoprophylaxis of Aujeszky's disease in pigs (Jerabek and Dedek 1981). Virus inactivation is shown in Table 3, indicating a perfect inactivation of the AD virus by glutaraldehyde at 0.1% and higher concentrations.

Fig. 1. Titres of virus-neutralizing antibodies in rabbits after i.m. vaccination and revaccination with 1 cm³.
Table 1. Vaccine safety and potency test in rabbits.
Table 2. Titres of Aujeszky's disease virus (in 1 cm³) multiplied in different cell systems.
Table 3. Effect of glutaraldehyde concentration on inactivation of Aujeszky's disease virus.
Table 4. Survival of vaccinated and revaccinated rabbits after challenge performed at various time intervals after vaccination.
Table 5. Results of potency tests carried out with vaccines stored at +4 °C for determination of expiration time.
2,546.6
1981-01-01T00:00:00.000
[ "Biology", "Medicine" ]
Seasonality Role on the Phenolics from Cultivated Baccharis dracunculifolia Baccharis dracunculifolia is the source of Brazilian green propolis (BGP). Considering the broad spectrum of biological activities attributed to green propolis, B. dracunculifolia has great potential for the development of new cosmetic and pharmaceutical products. In this work, the cultivation of 10 different populations of native B. dracunculifolia was undertaken, aiming to determine the role of seasonality on its phenolic compounds. For this purpose, fruits of this plant were collected from populations of 10 different regions, and 100 individuals of each population were cultivated in an experimental area of 1800 m². With respect to cultivation, the yields of dry plant, essential oil and crude extract were measured monthly, resulting in mean values of 399 ± 80 g, 0.6 ± 0.1% and 20 ± 4%, respectively. The HPLC analysis allowed detecting seven phenolic compounds: caffeic acid, ferulic acid, aromadendrin-4′-methyl ether (AME), isosakuranetin, artepillin C, baccharin and 2-dimethyl-6-carboxyethenyl-2H-1-benzopyran acid, which were the major ones throughout the 1-year monthly analysis. Caffeic acid was detected in all cultivated populations, with a mean of 4.0%. AME displayed the widest variation among the compounds, with a mean value of 0.65 ± 0.13% in the last quarter. Isosakuranetin and artepillin C showed increasing concentrations, with values between 0% and 1.4% and between 0% and 1.09%, respectively. The obtained results suggest that the best time for harvesting this plant, in order to obtain good qualitative and quantitative results for these phenolic compounds, is between December and April.

Introduction

Baccharis dracunculifolia D. C. (Asteraceae) is a native plant from Brazil commonly known as "Alecrim do campo" and "Vassoura". This plant is well known for its interaction with insects, mainly Apis mellifera L., and for bearing a wide range of secondary metabolites. Its leaves are punctuated with secretory trichomes that are rich in secondary metabolites, as well as secretory ducts that produce and store essential oils and phenolic compounds. Baccharis dracunculifolia secondary metabolites are collected by A. mellifera to produce Brazilian green propolis (BGP) [1], which is of great importance for the food and pharmaceutical industries [2], as it displays anticancer [3], antibacterial [4], anti-inflammatory [5] and antiulcer [6] properties, among others. Lemos et al. [7] described the gastric protective effect of the hydroalcoholic extract of B. dracunculifolia aerial parts. Fukuda et al. [8] reported the cytotoxic activity of B. dracunculifolia constituents. Da Silva Filho et al. [9] showed the presence of flavonoids [isosakuranetin, aromadendrin-4′-methyl ether (AME)] and cinnamic acid derivatives (caffeic acid, p-coumaric acid, ferulic acid) with trypanocidal activity. Munari et al. [10] reported the antimutagenic activity of the hydroalcoholic extract of the leaves of this plant. Akao et al. [11] showed that prenylated p-coumaric acid derivatives (artepillin C, drupanin and baccharin) exhibited antitumor properties. Missima et al. [12] identified diterpenes and triterpenes with immunomodulatory activity. Leitão et al. [13] reported that B. dracunculifolia displays anticariogenic activity. Klopell et al. [14] found that (E)-nerolidol, the major constituent of the volatile fraction, stood out for antiulcer activity.
It is important to point out that honey is another major bee product, and three works on it have recently been reported: the inhibition of lipid peroxidation in biological systems [15], its use as an antiseptic agent in wound care [16], and the enhancement of immune function and antitumor activity [17]. In addition, the seasonal variation, chemical composition and antioxidant activity of Brazilian propolis samples were reported as well [18]. Forty phenolic substances were identified, in different concentrations, from Brazilian propolis extracts produced in three distinct regions. It is well known that B. dracunculifolia is the main botanical source of BGP. Therefore, considering that the majority of reported works with B. dracunculifolia were undertaken with native plants and that this plant has great potential for the development of new products, the aim of this work was to evaluate the role of seasonality in the phenolic chemical profile of 10 different populations of B. dracunculifolia cultivated over 1 year. This would not only allow the selection of the B. dracunculifolia population bearing the highest production of phenolic compounds, but also the determination of the best timing for plant harvesting.

Cultivation and Sampling. Initially, the fruits of B. dracunculifolia were collected from populations of 10 different regions of Brazil in their natural habitat (Table 1). Professor Nelson Ivo Matzenbacher authenticated the plant material. The fruits were first germinated in a nursery and then propagated under glasshouse conditions for 30 days. The obtained seedlings were transplanted to the experimental field area of the Chemical, Biological and Agricultural Pluridisciplinary Research Center (CPQBA), University of Campinas, São Paulo, in January 2004. The field was divided into four replications, each composed of plants from the 10 populations. Homogeneous samples were harvested monthly to determine both the chemical profile and the seasonal behavior of the major phenolic constituents.

Sample Preparation and Chromatographic Analysis. The sample preparation was undertaken following the analytical method previously developed [21]. The buds and adult leaves from 10 dried collected branches of B. dracunculifolia were removed and powdered using a knife mill. To a homogeneous 500 mg sample, 20 ml of 90% ethanol containing 300 µg/ml of the internal standard was added in 125 ml Erlenmeyer flasks. The solution was stirred at 170 r.p.m. and 40 °C on a shaker (Innova 4300, New Brunswick Scientific, Edison, NJ, USA). After 2 h of extraction, the flasks were cooled to room temperature and the contents filtered through analytical filter papers. A 1.0 ml aliquot of the extracts was then filtered through a Millex-LCR-PTFE membrane (Millipore, Bedford, MA, USA; 0.45 µm × 13 mm i.d.) and transferred to an appropriate vial for automatic injection, and a 15 µl aliquot was injected into the HPLC system, which was described by Sousa et al. [21]. Veratraldehyde was used as the internal standard, and it was added to the extracting solvent prior to extraction. The spectral data from the photodiode array detector were collected within 60 min over the 265-320 nm range of the absorption spectrum, and the chromatograms were plotted at 280 nm. Peaks were assigned according to their retention times and by co-elution with authentic standards, as well as based on the UV spectra of both the standards and the samples under the same chromatographic conditions.

Quantitative Analysis and Statistical Studies.
The calibration curves were prepared in the concentration range expected for each compound in the B. dracunculifolia samples, ranging from 25 to 1200 µg/ml. Linearity was investigated by calculating the regression plots by least squares and was expressed by the determination coefficient (R²), with values ranging from 0.9982 to 0.9998. Absolute concentrations of four compounds (caffeic acid, AME, isosakuranetin and artepillin C) in the B. dracunculifolia samples were calculated based on the phenolic area/IS area ratio. Regarding the chromatographic profile of the hydroalcoholic extracts, the relative percentages of each peak of interest were obtained monthly, taking into account the area percentage. After checking for normality (Kolmogorov-Smirnov test) and homogeneity of variances (Bartlett's test), the intergroup variation of the different parameters was estimated by analysis of variance (ANOVA). These ANOVA analyses were then completed by Tukey's multiple range tests, in order to locate the differences [22]. Thus, qualitative treatments were compared by the Tukey test, with a probability of 95% and significance level of 5% (P < .05), for comparative studies among populations. A probability of 99% and significance level of 1% (P < .01) was also considered for comparisons among months and within each population. Statistical calculations as well as graphic representations were prepared using GraphPad Prism (v. 4.0), and additional calculations were carried out with the aid of Microsoft Excel 2003.

Agronomy Aspects. Baccharis dracunculifolia developed rapidly from the production of seedlings in the nursery, showing intense growth within an interval of 2 months. At the fourth month it was about 0.8 m in height, reaching 2.5-3.0 m in 1 year, despite no chemical fertilizers being used during plant development. The productive potential of the species, expressed as the amount of dry biomass per plant, varied depending on the population. According to the obtained results, the mean yield of plant dry biomass after 16 months of seeding, considering the 10 populations, was 399 ± 80 g. The Paraguaçu-MG (302 ± 34 g) and Colombo 1-PR (584 ± 75 g) populations displayed the lowest and the highest yields of dry biomass, respectively. Additionally, the essential oil of the dry leaves of each cultivated population was studied as well [23].

Chemical Composition. The HPLC method used allowed the analysis of seven major phenolic compounds over 1 year in B. dracunculifolia samples: caffeic acid, ferulic acid, AME, isosakuranetin, artepillin C, baccharin and DCBEN. Chemical structures for these seven phenolics are displayed in Figure 1.

Seasonality and Phenolic Contents. The quantification of caffeic acid, AME, isosakuranetin and artepillin C was undertaken for all cultivated plants from the 10 distinct regions. Tables 2-5 display the variation in the concentration of these four phenolic compounds between May 2004 and April 2005. Considering the mean values calculated monthly for B. dracunculifolia from these regions, the relative percentages of these phenolics were obtained by taking their percentages in the ethanolic extracts. The sum of these four compounds corresponds to ∼30% of the total extract, caffeic acid (23.7 ± 2.4%) being the major one, followed by AME (2.9 ± 1.1%), isosakuranetin (2.1 ± 1.2%) and artepillin C (1.3 ± 0.8%), respectively. Caffeic acid and AME were detected in all the studied populations during the entire year (Tables 2 and 3).
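Returning to the quantitative procedure described at the start of this subsection, a brief Python sketch shows how a least-squares calibration curve and the population comparisons (one-way ANOVA followed by Tukey's test) could be reproduced. All numbers here are hypothetical placeholders, not the paper's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical calibration points for one phenolic: analyte/IS
# peak-area ratio versus concentration (ug/mL); the real curves
# spanned 25-1200 ug/mL with R^2 between 0.9982 and 0.9998.
conc = np.array([25.0, 100.0, 300.0, 600.0, 1200.0])
ratio = np.array([0.05, 0.21, 0.62, 1.22, 2.45])
slope, intercept = np.polyfit(conc, ratio, 1)
pred = slope * conc + intercept
r2 = 1 - ((ratio - pred) ** 2).sum() / ((ratio - ratio.mean()) ** 2).sum()
print(f"R^2 = {r2:.4f}")
# Quantify an unknown sample from its analyte/IS area ratio.
print((0.85 - intercept) / slope, "ug/mL")

# Hypothetical monthly percentages for three populations, compared
# by one-way ANOVA and Tukey's multiple range test.
a = [3.9, 4.1, 4.0, 4.2, 3.8]
b = [4.5, 4.4, 4.6, 4.3, 4.7]
c = [3.7, 3.9, 3.8, 4.0, 3.6]
print(stats.f_oneway(a, b, c))
print(pairwise_tukeyhsd(np.array(a + b + c), ["A"] * 5 + ["B"] * 5 + ["C"] * 5))
```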
Artepillin C was found in most of the studied populations, with the exception of the one from Colombo 1 (Table 5). Isosakuranetin was the phenolic that displayed the largest qualitative variation, since it was detected mainly in the last 6 months, between November 2004 and April 2005 (Table 4). Ferulic acid, baccharin and DCBEN were found in almost every period of analysis, but in concentrations lower than the limit of quantification previously established [21]. Taking into account the agronomic aspects, chemical composition, seasonality role and phenolic contents, the results of this work are summarized in Figure 2, showing the potential of B. dracunculifolia for both the pharmaceutical and cosmetic industries in the production not only of BGP, but also of standardized extracts and essential oil. Pathogens are among the basic factors that, along with the environment, can influence the biosynthesis of plant secondary metabolites. Baccharis dracunculifolia has drawn great attention among agronomists, chemists and pharmacists, aiming to develop the agrotechnological knowledge to allow both the acclimatization and the selection of a highly productive population. In this regard, knowledge about the best cultivation techniques is one of the first steps toward developing commercial scale production. The developed cultivation technique demonstrated that it is feasible to cultivate 1000 plants in an area of 1800 m². The yields of dry plant, essential oil and crude extract were measured monthly, resulting in mean values of 399 g, 0.6 ± 0.1% and 20 ± 4%, respectively. Hence, the cultivation of B. dracunculifolia on a large scale using an area of 10 000 m², which is equivalent to 1 hectare, would allow the cultivation of 5556 individuals, furnishing, after 12 months of cultivation, about 2200 kg of dry plant, from which about 13 kg of essential oil or 440 kg of crude extract could be obtained. Therefore, it is viable to cultivate this plant on a large scale for commercial use, since industries of medium and large productivity specializing in plants have the ability to grow at least 5 hectares of the plant of interest. Considering the B. dracunculifolia plant, this is the first time that cultivation studies involving chemical composition analysis and the role of seasonality have been reported. Thus, the cultivation of this species can provide biomass for phytochemical and pharmacological studies, as well as for the continuous supply of botanical raw material. Moreover, the selection of a good population for cultivation could enhance the production of the desired compounds.

Standardized Extract and Phenolic Compounds. To obtain standardized extracts it is necessary to produce biomass of excellent quality. For that, the development of a validated analytical method is mandatory to determine the concentration of each metabolite of interest in the plant biomass. Moreover, the analytical method is an important tool to study the influence of seasonality, to select a good population for cultivation, to determine the best time for harvesting, to develop the extraction and formulation process, to analyze the final products, and to run the pre-clinical and clinical assays, among others. Most of the works with B. dracunculifolia report its secondary metabolites as the main source for the production of BGP. Because of that, there are studies reporting comparative analyses of the chemical composition of this plant and its relationship with green propolis [24,25]. So far, at least 100 substances in native B.
dracunculifolia have been identified, including cinnamic acid derivatives, anthracene derivatives, phenolics, prenylated phenylpropanoids, sesquiterpenes, diterpenes, and triterpenes, among others, for which different biological activities have been found [9,13,20,26]. Intake of the phenolic compounds present in both BGP and B. dracunculifolia as health promoters has been linked to a reduced risk of colon cancer and gastrointestinal disorders [27,28]. Caffeic, ferulic and p-coumaric acids are trans-cinnamic acids that occur naturally in their free forms, and as a family of mono- or diesters with (-)-quinic acid, collectively known as chlorogenic acids (CGAs). CGAs are antioxidant components produced by plants in response to environmental stress conditions such as infection by microbial pathogens, mechanical wounding, and excessive UV or visible light levels [29]. An important phenolic acid, present in both Brazilian propolis and B. dracunculifolia, is 3,5-diprenyl-4-hydroxycinnamic acid (DHCA), known as artepillin C. Studies have shown that DHCA inhibits lipid peroxidation and the development of pulmonary cancer in mice, prevents colon cancer through the induction of cell-cycle arrest, displays chemopreventive action in colon carcinogenesis, and has an anti-leukemic effect with a low inhibitory effect on normal lymphocytes [30]. Isosakuranetin and AME, along with other flavonoids, have been widely investigated [9,13,31], and their intake may have beneficial effects such as increasing vitamin absorption and action, helping wound-healing processes, and acting as antioxidant, antimicrobial and immunomodulatory agents [32]. With respect to derivatives of p-coumaric and caffeoylquinic acids, a positive association of the biological activity against Staphylococcus aureus, S. pneumoniae and Trypanosoma cruzi was found [26]. DHCA and DCBEN, previously characterized in both BGP and B. dracunculifolia, were active against T. cruzi and S. aureus [33]. In addition, Barros et al. [28] demonstrated that caffeic, ferulic, p-coumaric and cinnamic acids possess gastroprotective activity. Therefore, knowledge of the chemical variations of the phenolic compounds in B. dracunculifolia becomes essential to provide raw materials of high quality for the development of new products. In this regard, the developed protocols allowed the analysis of 480 samples of B. dracunculifolia collected monthly during 1 year. Also, based on these results, AME along with caffeic acid could be considered good chemical markers for the analysis of cultivated B. dracunculifolia, considering that both were found in all the samples throughout the studied year.

Seasonal Variation. The phenolic compounds can be considered a chemical interface between B. dracunculifolia and the surrounding environment, and their biosynthesis can be modified by environmental conditions. Thus, the Weather and Climate Applied to Agriculture Research Center (CEPAGRI-UNICAMP) monitored rainfall availability, humidity and temperature at the cultivation site. The mean temperature over the 12 months of the experiment was 22.5 °C. The lowest mean temperature (19 °C) was recorded between May and July and the highest (25 °C) between January and March. The average rainfall for the year was 120 mm. The lowest average rainfall (42 mm) occurred from May to July and the highest (218 mm) from January to March. The humidity did not vary significantly throughout the year, with a mean value of 60%.
It is important to point out that the flowering period of B. dracunculifolia in this experiment occurred from May to July of 2004, which was the period with both the lowest temperature and the least rain. During the flowering period, a wide variation in the concentration of phenolics was observed (Tables 2-5). It is mandatory to know the role of seasonality on the chemical profile and the content of each compound of interest in a plant cultivated for pharmaceutical use, aiming to obtain either a standardized extract or pure compounds; this defines the potential of individual components and their contribution to the synergistic effect, considering the major metabolites as a whole. According to the statistical studies (one-way ANOVA), caffeic acid did not vary significantly during the period of study, either within individuals or among all populations, with a mean value of 4.0% (Table 2). The population from Colombo 1, Paraná, displayed significant variations for artepillin C (P < .05) (Table 5) and AME (P < .05) (Table 3). On one hand, the concentrations of AME in the population from Colombo 1 were higher than in the other populations, ranging between 0.45% and 1.11%; on the other hand, the concentrations of artepillin C were lower (0-0.16%). Isosakuranetin (Table 4) showed statistical differences for the populations from Paraguaçu, Minas Gerais, and Colombo 1. The concentration levels of this flavonoid in the Paraguaçu region were higher (0-1%; P < .05) in comparison with the other populations, while in the population from Colombo 1, isosakuranetin was not detected. The statistical analysis of the data, considering each population and the mean values obtained among the populations for the different months of the experiment, is shown in Tables 2-5 with P < .01. In general, the significant values found for each population were similar to the ones obtained by calculating the mean values for each month. Thus, taking into account the monthly mean values, it is possible to infer that the caffeic acid content was higher in the months of October (4.96 ± 0.92%), January (4.93 ± 0.53%), March (5.17 ± 0.52%) and April (4.33 ± 0.50%). Isosakuranetin and artepillin C displayed higher concentrations in the months of November, December and from January through April. Likewise, the mean concentration for isosakuranetin ranged from 0.49 ± 0.20 to 0.79 ± 0.32%, and for the prenylated p-coumaric acid derivative it ranged from 0.25 ± 0.10 to 0.71 ± 0.21%. Regarding AME, its concentrations behaved differently from the other phenolics. The mean yield of this phenolic was about 0.7 ± 0.13% in May, decreasing to 0.4 ± 0.15% until August. In September it increased again to 0.75 ± 0.20%, then decreased to 0.34 ± 0.24% in January and increased to 0.65 ± 0.13% in February, a level which was maintained until April. Statistically, the AME content was higher in the months of May, September, and February through April. Moreover, during this same period, other compounds, such as ferulic acid, baccharin and DCBEN, were detected. Considering the interaction between B. dracunculifolia and A. mellifera in the production of green propolis, it is interesting to note that, according to Lima [34], the optimum time of the year for the highest yield of BGP production was from December to April. Sousa et al. [23] demonstrated that the yield of essential oil from B. dracunculifolia leaves is higher from February to April.
It is important to point out that the relationship between the period of leaf bud growth and the content of physiological compounds was confirmed by this experiment, since the period of leaf bud growth coincided with the period of higher content of secondary metabolites, which corresponds to the rainy season. Therefore, the optimum time for both green propolis and B. dracunculifolia essential oil production was the same as for phenolic compound production. Hence, it is suggested that the best time to obtain good qualitative and quantitative results, considering phenolic compounds, essential oil and green propolis production, is mainly between December and April, which corresponds to the summer. All populations were cultivated, and the population from Colombo 1 produced the highest yield of dry biomass and a good concentration of AME, but it produced the lowest amounts of other important compounds, such as artepillin C and isosakuranetin. The cultivation of B. dracunculifolia is economically viable, and it can be scaled up for commercial production, since the biomass production, the mean yields of crude extract and essential oils, as well as the phenolic compounds, were excellent.
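As a quick check of the scale-up arithmetic quoted in the Discussion (1800 m² for 1000 plants, 399 g of dry mass per plant, 0.6% essential oil and 20% crude extract yields), a short Python script reproduces the hectare-scale figures; the variable names are ours:

```python
# Scale-up arithmetic from the text (values quoted above).
area_trial_m2, plants_trial = 1800, 1000
area_ha_m2 = 10_000
dry_mass_per_plant_kg = 0.399        # mean 399 g per plant
oil_yield, extract_yield = 0.006, 0.20

plants_ha = plants_trial * area_ha_m2 / area_trial_m2
dry_mass_ha = plants_ha * dry_mass_per_plant_kg
print(f"{plants_ha:.0f} plants/ha")                    # ~5556
print(f"{dry_mass_ha:.0f} kg dry plant/ha")            # ~2217 (text: ~2200)
print(f"{dry_mass_ha * oil_yield:.0f} kg essential oil")      # ~13
print(f"{dry_mass_ha * extract_yield:.0f} kg crude extract")  # ~443 (text: ~440)
```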
4,711
2011-06-23T00:00:00.000
[ "Biology", "Environmental Science" ]
Development of Low-Cost IoT System for Monitoring Piezometric Level and Temperature of Groundwater Rural communities in Mexico and other countries with limited economic resources require a low-cost measurement system for the piezometric level and temperature of groundwater for their sustainable management, since anthropogenic action (pumping extractions), natural recharge and climate change phenomena affect the behavior of piezometric levels in the aquifer, putting its sustainability at risk. A decrease in the piezometric level below a balanced level promotes salt intrusion from ocean water into the aquifer, salinizing and deteriorating the water quality for agriculture and other activities, and a decrease in the water level below the pumps or the well drilling depth could deprive communities of water. Water temperature monitoring is essential to determine the electric conductivity and dissolved salt content of groundwater. Using IoT technology, a device was developed that monitors both variables inside the well, as well as the ambient temperature and atmospheric pressure outside the well. The measurements are made in real time, with sampling every second and data sent to a dedicated server every 15 min, so that the visualization can be accessed through any device with Internet access. Time series of the variables measured inside and outside the well were obtained over a period of three months in the rural community of Agua Blanca, Guasave, Sinaloa, Mexico. Through these records, a progressive drawdown of the piezometric level over time is observed, as well as the frequency of pumping. This low-cost IoT system shows potential use in hydrological processes of interest, such as the separation of regional and local flow, drawdown rates and the recognition of geohydrological parameters.

Introduction

Groundwater is a vital resource, essential for agriculture, domestic use, industry, and the environment. Rapid economic growth, population increase, urbanization, and the continued expansion of human development have aggravated water scarcity in many basins [1]. The management of groundwater resources is an important issue, especially regarding agricultural potential [2]. However, due to the lack of public policy and supervisory measures for its use, overexploitation of some aquifers has been extensive, altering flow regimes and thus becoming a threat to socioeconomic development and ecological health [3]. Thus, piezometric level and temperature monitoring are necessary for sustainable management to prevent negative impacts. Piezometric monitoring allows one to take actions to avoid salt intrusion into the aquifer [4,5], caused by a hydrostatic imbalance resulting from the different densities of saltwater from the Pacific Ocean and continental freshwater diminished by pumping; to prevent water deprivation in certain communities and well failure due to a decrease in the piezometric level below the pumps or the well drilling depth; or to identify water level increases in topographic depressions that generate flood zones [6]. Temperature monitoring enables one to determine the electric conductivity of groundwater, which is correlated with water quality through the dissolved salt content and the piezometric level [7]. In addition, monitoring the piezometric level is required to design recharge strategies for the aquifer [8], to prevent the levels from dropping, causing subsidence and loss of the storage capacity of the aquifer, and to know the flow direction of groundwater and possible contamination risks [9].
The global extraction of groundwater has increased considerably in recent years, generalizing the overexploitation of aquifers and resulting in a decrease in the piezometric level, which causes greater energy consumption by increasing the pumping head [10]. The impact of extractions and recharges is often not well known until months or years after the event, which makes sustainable groundwater management difficult and motivates modernization with real-time technology [11]. To achieve the correct management of water resources, constant monitoring of water parameters is necessary [12], as the continuous measurement of the piezometric level of groundwater is one of the main tools for any study on the effective characterization of an aquifer and for flow models [13]. Continuous groundwater monitoring is mainly used to estimate changes in aquifer storage [14], calibrate groundwater flow models [15,16] and provide updated information to agencies responsible for implementing water resource management legislation [17]. Well monitoring provides direct in situ measurements of the depth below the natural ground surface at which water is found; this technique is used in most aquifers through measurement networks [18,19]. The area of study and the temporal frequency of measurements usually differ and vary in terms of cost, scalability, and ability to answer scientific and management questions [13]. Otherwise, there are commercial automated systems for monitoring groundwater that remain fixed in the well for months or years, collecting data at a specific temporal frequency and recording them for later analysis; however, their high cost makes them unaffordable for application in an aquifer monitoring network [11]. In recent years, with the development of technology, the reduction of hardware costs and the expansion of software supported by the open-source community, the development of real-time monitoring systems by researchers and engineers has increased [20,21]. These systems are low-cost wireless sensors, built from affordable electronic components and enabled by telemetry to provide real-time data [22]. Systems connected through the Internet of Things (IoT) offer opportunities in multiple areas of research to solve new problems [23,24]. Low-cost wireless sensors have been widely developed in different areas: to measure the amount of particles suspended in the air [25], for the monitoring of geothermal systems [21], for climate stations with artificial intelligence for smart farms [26], for a radon gas monitoring system for smart homes [27], for hydrological monitoring of landslide-prone areas [28], and for the study of fish behavior [29]. Previous works agree on the need to develop low-cost sensors, since their application on large scales significantly reduces the cost and time needed to generate information [11]. The objective of this work is to generate a low-cost system, designed for rural Mexican communities and based on IoT technology, for real-time groundwater monitoring. This system has advantages such as not requiring specific calibration control when it is installed, unlike others on the market, and making the data available for visualization through open-source libraries.
Materials and Methods

The prototype measures in situ and in real time the piezometric level and temperature inside the well, in the saturated zone, as well as the ambient temperature and atmospheric pressure outside the well. The data obtained are sent to a web page for subsequent analysis or displayed in graphic form or tables. The prototype components are described in the following sections.

Component Overview. The depth of groundwater is obtained using a high-performance pressure transducer from the DFRobot brand (KIT0139; [30]). The pressure transducer is encapsulated in an industrial stainless-steel probe. When the probe is immersed in the saturated zone, it is exposed to the pressure of the liquid column. This pressure is converted into a current signal (4-20 mA) that is converted to an analog voltage, compatible with most current microcontrollers. An ADS1115 analog-to-digital converter with 16-bit resolution is used, which is connected to the microcontroller through the I2C protocol. The ESP8266 microcontroller is used, integrating a 2.4 GHz WiFi antenna ideal for IoT tasks and powered with 5 V via micro-USB. Because the pressure sensor has an operating range of 12 to 36 volts, an isolated electrical connection was made to reduce electrical noise interference, using an isolated DC-DC converter (B0505S-2W) (Figure 1). The ESP8266 microcontroller was powered with a conventional alternating current regulator at 5 V, 2 A direct current (DC) as the main power source and connected to the isolated DC-DC converter to power the DC-DC booster (XL6009E1). The booster has an approximate efficiency of 94% and, in this way, regulates the 12 V required by the pressure sensor.
In addition to the pressure sensor, temperature and atmospheric pressure sensors were added. The temperature sensor uses the waterproof DS18B20 chip and is submerged together with the pressure transducer. The piezo-resistive sensor (BME-280), with an operating voltage of 3.3 V, measures the ambient temperature, humidity, and atmospheric pressure. For detecting and recording the operation of the water extraction pump in a well, a noninvasive current sensor (SCT-013) was added. This record is important for knowing the consumptive uses of groundwater and its frequency of use. Figure 1 shows the configuration of the PCB (printed circuit board); this figure was created with the Fritzing software, version 0.9.3 (open source). The circuit was designed in a 6.1 × 7.8 cm size (Figure 2) and placed inside a cylinder 4 inches in diameter and 8 inches long that acts as the axis of a reel on which the cable connecting the sensors that measure the piezometric level, temperature, and atmospheric pressure is wound.
Prototype Operation. The code that reads the sensor data on the ESP8266 module was written with the Arduino IDE. Using the WiFiManager library [31], a mobile device connects via WiFi to the microcontroller to configure the Internet network credentials and the initial values for the well; these data are stored in the internal memory of the ESP8266 module. The PubSubClient library [32] is used to publish the data of the measured variables to a server: the ESP8266 module is configured to send the information to a broker, using an address compatible with the MQTT (Message Queuing Telemetry Transport) protocol on a public or personal server. The ESP8266 microcontroller processes the information from the sensors and publishes it to the server.

Prototype Installation at the Monitoring Site. The prototype is configured with the site information: the name of the site, the elevation of a reference point on the ground surface (Zo) or well curb, and the total length of cable introduced into the well (L), measured from the Zo point (Figure 3). These variables are added in situ to the code via a mobile device connected to the ESP8266 through the WiFiManager library. In this way, the elevation of the groundwater level (h) can be obtained using the following equation:

h = Zo − d = Zo − (L − Φ),
The ESP8266 module was configured to publish data to the server using the MQTT protocol every 15 min to avoid saturating the server with redundant data. Publication of the recorded data was also triggered once the elevation of the piezometric level (h) undergoes a variation (increase or decrease) of 2 cm with respect to the initial value of the window. This records changes produced by factors external to the well and internal ones due to extraction by pumping. So, during the 15 min window, the sensor samples the piezometric level every second; the last reading is recorded at the end of the window and then sent to the server. But if the data change by two centimeters or more with respect to the first reading in the window, the new reading is recorded and immediately sent to the server without waiting for the window to end.

Communication Diagram

Given the restrictions on the number of devices that can be connected to a commercial server consulted for this work, the amount of data that can be sent per unit of time, and the costs associated with these, a server was implemented using a Raspberry Pi 4 B with a Broadcom BCM2711 processor (quad-core Cortex-A72 @ 1.8 GHz) and 8 GB of RAM, running the Raspberry Pi OS (64-bit) operating system; the Apache web server (open source) was installed as a dedicated HTTP web server [33]. The MQTT broker Eclipse Mosquitto [34] was installed for receiving the data sent by the prototypes with the ESP8266.

The MariaDB library [35] was used as the relational database to manage, store and consult the data received from the various devices. Using the paho.mqtt.client library [36], a connection is established from the MQTT client to the Raspberry server, and the data are received and stored in the tables.

To consult information from the databases, the PHP, HTML and JavaScript programming languages were used. With the Chart.js library [37], the variable data are displayed in graphs and/or tables. The server uses the Leaflet library [38] to generate interactive maps and show the position of connected devices.

Accessibility to the server from any device connected to the Internet was configured through the dynamic domain name system (DDNS), with the NO-IP domain provider responsible for directing device requests to the Raspberry server with the free domain http://piezometriaguasave.ddns.net (accessed on 1 June 2023).

The communication operation diagram is shown in Figure 4; a sketch of the broker-side part of this pipeline is given below.
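A minimal sketch of levels 2-3 of the communication diagram: a paho-mqtt client subscribes to the prototype's topic and stores each reading in MariaDB. The topic name, payload format, table schema and credentials are our assumptions for illustration; the actual Python script running on the Raspberry server is not reproduced in the paper.

    # Sketch: MQTT subscriber on the Raspberry Pi that stores readings in MariaDB.
    # Topic, payload layout, schema and credentials are illustrative assumptions.
    import json
    import mariadb                     # MariaDB Connector/Python
    import paho.mqtt.client as mqtt    # Eclipse Paho MQTT client

    conn = mariadb.connect(user="piezo", password="secret",
                           host="localhost", database="wells")
    cur = conn.cursor()

    def on_connect(client, userdata, flags, rc):
        # Subscribe to all well prototypes, e.g. "wells/aguablanca"
        client.subscribe("wells/#")

    def on_message(client, userdata, msg):
        # Assumed JSON payload: {"id": "...", "h": ..., "t_water": ..., "p_atm": ...}
        data = json.loads(msg.payload.decode())
        cur.execute(
            "INSERT INTO readings (name_id, h, t_water, p_atm, received_at) "
            "VALUES (?, ?, ?, ?, NOW())",
            (data["id"], data["h"], data["t_water"], data["p_atm"]),
        )
        conn.commit()

    client = mqtt.Client()   # paho-mqtt 1.x style; 2.x needs a CallbackAPIVersion argument
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("localhost", 1883, 60)   # Mosquitto broker on the same host
    client.loop_forever()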
Design, Construction, Assembly and Operation of the Prototype

The main tool used was computer-aided design (CAD) in SOLIDWORKS 2019 software; this software allowed us to design, analyze and visualize a 3D model. A reel was used so that the electronic components fit inside a 4-inch-diameter, 8-inch-long PVC pipe.

Figure 5 shows the design for assembling the PCB that allows secure assembly on the PVC pipe. Two bearings were placed (Figure 6a): one exterior that allows the placement of pine wood (Figure 7) and another inside to support the reel and facilitate the cable winding. All designs were printed on a 3D plastic printer (Creality Ender 3) using PLA-type plastic filament. The reel structure was designed with the capacity to wind up to 30 m of cable.

The device was installed by an operator who traveled to the site; there, it was configured to work correctly. It is not necessary for the well owners to have technical knowledge about maintenance and/or repair; it is only necessary to be aware of the existence of the monitoring device and its importance for household and community water use. So, to reinforce awareness, we informed the well owners about the false belief that groundwater is unlimited and produced only for human needs, also explaining that, in case of poor quality or lack of water, the state will solve the problem but transfer the costs mainly to them, making their economic and social situation precarious.

The device can be submerged for long periods (months) and function correctly; however, the possible failures that could occur are monitored from the server. If the device stops transmitting, it is assumed that there is a problem, and the operator, after calling the well owners, tries to solve the problem with their support. If it cannot be solved this way, the operator makes a visit. The most common failures are caused by the main power source having been disconnected, a weak WiFi signal after relocation of the router, or a change in the WiFi password by the owners.

On the other hand, electronic components withdrawn on reaching their lifespan, or failing due to inappropriate handling conditions, will be collected and taken to confinements authorized by the health sector authorities.

The lifespan of the mechanical parts of the device is estimated at up to ten years. The sensors' lifespan depends on the manufacturers' provisions. Considering the parts' lifespan and the cost of the electronic and mechanical components, this system is appraised as profitable in the long term.

Characteristics of the Study Area

In Mexico, a rural community is one in which fewer than 2500 inhabitants live [39]. These communities provide food to the cities, since agriculture is principally developed there, in addition to some zones where aquaculture and fishing are the main sources of economic income. The plots of land where families live in the rural areas of Guasave, Sinaloa, are extensive, around 1600 m², and some have a well to extract water. On these properties, backyard farming activities are performed that contribute to the family economy and its food sustainability [40].
The water supply for domestic and backyard farming activities originates underground and is extracted through wells via electric pumps, since the communities of Guasave, Sinaloa, have electricity supplied by the Federal Electricity Commission (CFE), an agency of the Mexican state. There is also Internet connectivity and wireless communication, so it is technically and economically possible to operate the IoT system after basic training is provided to members of the family where the system is installed.

Additionally, the municipality of Guasave has a wide network of interconnected roads and paths that give access to all communities within it; this facilitates the maintenance of the devices.

The presence of the Pacific Ocean and the Sinaloa River in the study area are hydrological expressions that affect the level of groundwater in an unconfined coastal aquifer, combined with meteorological phenomena associated with prolonged periods of drought. Hurricanes that form in the Pacific Ocean and hit the Guasave Valley, Sinaloa, together with the intense pumping for agricultural irrigation, supply to aquaculture farms, backyard irrigation and domestic use, mainly produce decreases in the piezometric level.

Given this scenario, the piezometric level must be monitored to avoid decreases in the level of continental water that could produce saline intrusion and thereby contaminate the aquifer with brackish water, unsuitable for agricultural activities. A control of the piezometric level and of the water temperature is therefore important, the latter being necessary data for water quality.

It is also important to control the level of groundwater to prevent it from dropping below the height of the well pumps or below the depth of the wells; if this happens, the communities would be left without water. The depth of the piezometric level in the Valley ranges between 1 and 19 m and is influenced by the recharge of the Sinaloa River. In the middle portion and close to the coastline, the level is less than 5 m deep [41] and maintains a fragile balance with sea level.

In the municipality of Guasave, there are 549 rural locations [42] where the IoT system can be installed, especially in properties with a well where the family lives, usually fenced properties, where vandalism is reduced due to these circumstances. The topographic relief of the valley is gentle, with a slope of 0.5 m/km, so communication via antennas is good.

Data Acquisition

On 23 June 2023, in the rural community of Agua Blanca, Guasave, Sinaloa, Mexico, the prototype was installed in a domestic well that has an Internet connection via WiFi, compatible with the ESP8266 module. It could also use a 4G modem connected to a public cellular network with a micro-SIM to provide the WiFi network with Internet access.

The elevation of the reference point (Zo) was obtained with differential GNSS in RTK mode (SimpleRTK2B-Ardusimple), which provides centimeter precision, linked through the NTRIP protocol to a transmitter base (SimpleRTK2B-Ardusimple) georeferenced to the international reference frame (ITRF08).

At the time of the installation of the sensor, to calculate the total cable length that would be submerged for monitoring, we obtained a water depth of 1.95 m (d). Also, the pressure sensor's range of measurement is 5 m of water column (Φ); therefore, in this well and due to the shallow water conditions, it was decided to submerge 4 m of cable, that is, Φ = 4 m and L = 5.95 m.
Table 1 shows the sensor installation data. The identifier name (Name ID), elevation, total submerged cable length (L) and network credentials were added in situ to the ESP8266 module using WiFiManager. The position of the device, the elevation, and the hmin and hmax values were added directly to the server database for recording and comparing the piezometric level with the hmin and hmax values. If the data sent by the sensor are lower than hmin, the water level has decreased below the sensor and the reading does not correspond to a valid measurement. Moreover, if the level sent by the sensor is above hmax, the water level has risen above the pressure sensor's range of measurement of 5 m of water column (Φ). For these cases, an alert was programmed indicating the Name ID of the sensor that requires attention (a sketch of this check is given below). In the case of being below hmin, it is necessary to increase the total submerged cable length; in the case of being above hmax, the length of the submerged cable must be reduced to within the operating threshold of the pressure sensor (5 m).
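A minimal sketch of the server-side validity check just described, run on each incoming reading. The per-well hmin/hmax values would come from the database; the concrete numbers and the alert mechanism (here just a returned string) are our assumptions.

    # Sketch: validity check of a piezometric reading against the per-well
    # hmin/hmax bounds stored on the server (values here are illustrative).
    from dataclasses import dataclass

    @dataclass
    class WellConfig:
        name_id: str
        h_min: float  # elevation below which the sensor is out of the water (masl)
        h_max: float  # elevation above which the 5 m sensor range is exceeded (masl)

    def check_reading(cfg: WellConfig, h: float) -> str:
        if h < cfg.h_min:
            # Water level dropped below the sensor: lengthen the submerged cable.
            return f"ALERT {cfg.name_id}: below hmin, increase submerged cable length"
        if h > cfg.h_max:
            # Water column exceeds the 5 m sensor range: shorten the submerged cable.
            return f"ALERT {cfg.name_id}: above hmax, reduce submerged cable length"
        return "ok"

    print(check_reading(WellConfig("aguablanca", h_min=8.0, h_max=13.0), h=7.5))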
Final View of the Prototype

The developed device is shown in Figure 7. Inside the pipe is the PCB, and the pipe has the shape of a reel to facilitate the winding of the 30 m cable. The device is powered at 5 V, 2 A via micro-USB from an AC regulator.

Access to the data is achieved from any device with Internet access, in any web browser, via the link http://piezometriaguasave.ddns.net/aguablanca.php (accessed on 15 October 2023), connecting to the Raspberry web server. Figure 8 shows a visualization provided by the Raspberry Pi web server of the information received by the prototype installed in the domestic well: piezometric level (upper), temperature (central) and atmospheric pressure (lower).

When inserting the probe with the sensor into the well, verification tests were run on site so that the device readings coincide with reality. This is assessed by measuring the depth of the water level with a manual probe and by introducing the pressure sensor at various controlled depths from the surface control point (Zo) (Figure 9b). In this way, it is verified that the measurement data from the server correspond to the direct measurement obtained through a vertical control of the depth of the pressure sensor in the saturated zone of the well, using Equation (2). In addition, once the measuring device is installed, the depth of the piezometric level in the well is randomly measured and this value is compared with the one recorded on the server, observing that they are the same values.

When the pumping starts, the levels drop over a short time and stabilize afterwards; so, to verify the operation of the device, the water level is measured before the pumping and then every five minutes, with the sampling later becoming more spaced considering the variation of the piezometric level. This way, the direct measurement is compared with the one recorded by the sensor, observing maximum differences no greater than 2 cm. Differences may be due to errors in the observation of measurements made with a probe or to the sensor response. These differences, whether due to an error in observation or to variations in the precision or accuracy of the device, are within the practical tolerance threshold for the observed variable.

The temperature obtained with the device is validated by measurements on water samples taken on site, measured using a Hanna conductivity meter, model HI 98331. As temperature is a variable with minimal changes, the greatest gradient being the change from day to night, samples were taken every half hour for verification, with both measurements being consistent.
Data Analysis

Figure 8 shows the time series from 23 June to 14 October 2023; 49,713 data points were downloaded from the Raspberry server, corresponding to piezometric level, ambient temperature, groundwater temperature and atmospheric pressure. Sudden changes in the piezometric level are due to the operation of the water extraction pump. The trend of the curve is downward, meaning the piezometric level generally decreases as the summer progresses, with a local upward change at the beginning of autumn. It is also clearly observed that on days in which there is no pumping, or when pumping ceases, the piezometric level is maintained. The diameter of the well is 4 inches, it has a depth of 12 m, and the installed pump has a capacity of 2 L per second.

The temperature of the well water is practically stable: the average is 25.90 degrees Celsius with a standard deviation of 1.14 degrees Celsius. The temperature is validated with samples taken on site and measured with a Hanna model HI 98331 conductivity meter.

The ambient temperature oscillates, with an arithmetic mean of 34.24 degrees Celsius and a standard deviation of 4.96 degrees Celsius, and the atmospheric pressure varies from 1006 to 1016 hPa.

Piezometric Level

Figure 10 shows the behavior of the variation in the piezometric level at the "Agua Blanca" site. Sudden drops and rises correspond to the operation of the pumping system. When the pump is turned on, the water table drops suddenly and then recovers in a short time, and this phenomenon is observed in the time series. During the recording period, in the process of recovery of the piezometric level, a downward trend of 0.5 m is observed, corresponding to the summer and early autumn period; this indicates that the aquifer did not recover to its previous levels, but remained depressed by the demand for water typical of the indicated seasonal period. The days when there is no pumping are also observed, with even a slight rise in the piezometric level when pumping stops.

The groundwater temperature presented temporal regularity, with a mean value of 25.90 degrees Celsius and a standard deviation of 1.14 °C (see Figure 11). The atmospheric pressure presented values between 1006 hPa and 1015 hPa (Figure 12), with expected variations.
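As an illustration of this kind of post-processing, a short sketch that reproduces summary statistics like those above from data exported from the server. The CSV file name and column names are our assumptions about the export format.

    # Sketch: summary statistics of the downloaded time series.
    # File name and column names are illustrative; a CSV export is assumed.
    import pandas as pd

    df = pd.read_csv("aguablanca_2023-06-23_2023-10-14.csv",
                     parse_dates=["received_at"])

    for col in ["h", "t_water", "t_ambient", "p_atm"]:
        print(f"{col}: mean={df[col].mean():.2f}, std={df[col].std():.2f}, "
              f"min={df[col].min():.2f}, max={df[col].max():.2f}")

    # Days without pumping show a flat piezometric level; a daily resample
    # makes the seasonal 0.5 m downward trend visible.
    daily = df.set_index("received_at")["h"].resample("D").mean()
    print(daily.head())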
Discussion

Wireless sensors for measuring the piezometric level, such as the one described here, are of great importance for obtaining data in hydrology [28,42-44]. Those developed with IoT technology for groundwater management applications stand out in data acquisition, monitoring and information management [20,24].

In particular, low-cost wireless sensors are accessible to build due to the affordable cost of existing electronic and communication hardware for the generation of data in real time, the use of open-source software for data storage and visualization, the reduction in operational cost for installation and maintenance, the scalability, and the volume of obtained measurements [11,22].

Other water level measurement prototypes have been developed with ultrasonic sensors [43], with pressure components (MS5803-14BA) encapsulated in a plastic container [44], and, recently, with a pressure sensor based on piezoresistive MEMS technology [45]. All these prototypes are in testing and operating stages; in addition, each of them requires specific control calibration when installed. The most popular technique for measuring water level is commercial sensors (Solinst, HOBO, DIVER) that use pressure transducers [24].

Carderwood et al. [11] describe how a main component for continuous monitoring is found in the contribution of well owners. They observed interest among the owners in the availability of the data on a web interface. This component is presented as a strategy to facilitate the installation of sensors in existing wells. In this way, our prototype to measure the piezometric level of the aquifer in real time, with a personal public web interface, presents a low-cost scalability scenario. Furthermore, by combining GIS with the IoT system of this prototype, the quality of the generated database will improve [24].

Conclusions

A low-cost monitoring system was developed, designed to monitor in real time the piezometric level and temperature of water in wells, as well as atmospheric pressure and temperature. The sensor data are sent to a web page where the user can monitor the behavior of the levels in real time or store the data for later processing. The device was tested in a well, with reliable results, so it is possible to replicate it and place it in other wells to collect the data needed for hydrological studies, by configuring a network for monitoring groundwater and environmental variables such as atmospheric pressure and temperature.

The developed device is environmentally friendly and non-toxic. This system has an operating voltage of 5 V and a current consumption of 180 mA, equivalent to a power of 0.9 W.

Figure 1. Electronic diagram of the connection between components used in the prototype.
Figure 4. Communication operation diagram. Level 1 represents the sensors connected to the ESP8266 module, which send the variables to the Raspberry broker through the MQTT protocol and an Internet connection. At level 2, the broker receives them and records the identifier name of each ESP8266 and the date-time of reception using a Python script designed to give access to the database. Subsequently, at level 3, the variables are stored in the database through the use of MariaDB, visualized as a web page with PHP and HTML, and graphed with Chart.js (all open-source libraries). The query is made at level 4 by visiting the domain from any PC or device web browser with Internet access; the Raspberry web server shows graphs of the stored data.
Figure 5. View of the designed structure that assembles in the 4" PVC tube, and assembly position of the PCB.
Figure 6. Final CAD design of the piezometric sensor prototype: (a) exploded view, (b) general view of the design.
Figure 7. Final view of the prototype: (a) view of the structure that holds the PCB to the reel; (b) developed piezometric level sensor.
Figure 8. Raspberry Pi server web page display.
Figure 9. Domestic well under monitoring with identifying name "Agua Blanca" (a); control process at different depths for pressure sensor validation (b).
Table 1. Sensor installation data in the domestic well.
9,429
2023-11-23T00:00:00.000
[ "Environmental Science", "Engineering" ]
Lifetime Maximization of Wireless Sensor Networks for a Fault Diagnosis System Using LEACH Protocol

The breakdown of a machine caused by a fault greatly affects plant operation. The framework of fault diagnosis of machines using machine learning techniques is an established area. Here, the fault diagnosis system is implemented with the help of wireless sensor networks (WSN). Each machine in the plant is fitted with sensors (nodes) from which the fault diagnosis is carried out. The signals/messages from each node are transmitted to a base station, which acts as a central control unit for the entire plant. Fault diagnosis using WSN in a factory setup has a few challenges. The major issue in WSN is the lifetime of the nodes, as they are situated far away from the base station in many plants, such as thermal power plants, refineries, and petroleum industries. To address node lifetime, researchers have developed many protocols, such as LEACH, SEP, ERP, HCR, HEED, and PEGASIS. Plants need customisation in terms of choosing suitable algorithms and choosing the location of the base station within the plant for better node lifetime. This paper presents results of both experimental and simulation studies of a typical plant, where the vibration signals from each machine are acquired and, through machine learning techniques, fault diagnosis is performed with the help of wireless sensor networks. For illustrative purposes, a well reported bearing fault diagnosis data set is taken up and a fault diagnosis case study was performed from the wireless sensor networks point of view (experimental study). Here, at every stage, the computational time is taken as a primary concern, as it affects the lifetime of the sensor nodes. Then, a WSN of 18 sensor nodes representing 18 machines with the LEACH protocol is simulated in Matlab© to study the lifetime characteristics of each node while keeping the base station at different locations. The lifetime of different nodes is heavily dependent on the location of the base station. Finding the right location of the base station for a given plant is another contribution of this work.

INTRODUCTION

In a plant, there are a number of machines whose health status is critical and responsible for plant shutdowns. In order to minimize the shutdown time and maximize the utilisation of the plant, many predictive maintenance strategies have been proposed, developed and implemented. Among them, machine learning based fault diagnosis (a predictive method) is the most recent one. The data (signals) acquired for the purpose of fault diagnosis were traditionally transmitted to a central control system through wires/cables. These cables are often disturbed by other plant activities. Also, there is a possibility of collecting noise, especially when signals are taken near high power electric lines. To handle these issues, industrialists started using wireless sensor networks. Implementation of the fault diagnosis system through wireless sensor networks solved the wire/cable related issues; however, it posed a new set of problems/challenges [1][2]. For each machine, one sensor unit with a radio for transmitting the data is fitted. Normally, these sensor nodes are operated by batteries. As many types of machinery in plants generally operate at very high electric power, and a continuous power supply is needed for WSNs, industrialists often prefer battery-operated sensor nodes. The batteries in the sensor nodes need frequent replacement if the WSN is not designed properly.
In vibration based fault diagnosis systems, a large amount of data needs to be transmitted to the base station (central control unit) for processing and decision making. This results in wastage of battery energy and memory space in the sensor nodes, thereby increasing the cost of maintenance and the network traffic/collisions, and reducing the reliability of the data communication process. The memory related issues are discussed by Bao et al. while dealing with accelerometer signals in a structural health monitoring application [3]. The scope of the present study, however, is to increase the sensor node lifetime mainly by improving the data processing techniques and the base station location. There are many approaches to lifetime maximization, such as power saving methods [4], decentralized data processing [5] and data compression techniques [6]. Each of them has its own pros and cons in terms of throughput, reliability of the received data, transmission speed and associated cost.

The system proposed in [7] demonstrates a wireless solution that works at a high refresh rate at low cost. Commercial off-the-shelf (COTS) hardware (an XBee module from MaxStream) was used for this solution. Plastic fabrication was done for a real-time industrial application with 4 temperature nodes having a cycle time of 128 ms. The experimental outputs show that the node lifetime of the above 4 nodes is 4 months. As temperature varies very slowly, the sampling and payload data transmission rates of the above experiment are around 8 Hz, which is considered low [7].

The system proposed in [8] uses both wired and wireless communication approaches for SCADA (supervisory control and data acquisition). Small microprocessors are used in the ISMs (intelligent sensor modules) and gateways, while larger microprocessors are used in the RDAUs (remote data acquisition units). In an ISM, a sampling rate of up to 11 samples per cycle (a maximum of 660 Hz) is achieved, which can be used to analyse up to the fifth harmonic of the signal. In an RDAU, the sampling rate is 68 samples per cycle, which can be used to analyse up to the 34th harmonic of the signal. On top of this, a dynamic power management (DPM) protocol is used for the WSN, which consumes less battery energy; hence, the node lifetime is increased by 33%.

The system in [9] is proposed for machinery condition-based maintenance and uses commercial WSN products. A single-hop topology is used because of the short-range transmissions, the latency requirements, the simplification of the nodes, and the limited physical space around the machinery. The requirements for the system are four sensor nodes, a base station, and a computer with LabVIEW 7.1 for the GUI and for processing the signals. A heat generating plant and an air-conditioning plant can be equipped with the proposed system. The data processing is not done at the sensor nodes, as it would consume more energy; still, the energy consumption was very high, around 2500 mAh for 9 days if each sensor node sends data every 2 seconds.

In [10], WSN-based online remote energy monitoring and fault diagnosis for industrial applications is proposed. The inputs for the sensor nodes in the system are motor terminal data, two line-to-line voltages, two phase currents, and a shaft torque; the nodes transmit the received data, which are finally processed in the central supervisory station (CSS). A rotor eccentricity fault was found using simulation. The feasibility of the proposed system for fault diagnosis was established using FFT on the single-phase current signal.
The IEEE 802.15.4-compliant CC2420 radio components are used in those sensor nodes. A ZigBee/IEEE 802.15.4-based WSN for health monitoring of an induction motor with an imbalance fault was also proposed. The motor vibration signature was measured using a tri-axial accelerometer (ADXL330). The CC2430, an IEEE 802.15.4 standard-compliant transceiver from Texas Instruments, was used for developing the sensor node. The sensor node only communicates with the base station; all other processing was done at the base station using MATLAB. In the proposed system, different levels of rotor imbalance fault were created in the induction motor to obtain the results.

A wireless vibration monitoring system (WiMon), based on WirelessHART, was proposed and developed by ABB [11]. The number of sensor units in the system is 100. It also has a gateway, the WiMon Data Manager, an OPC (Object Linking and Embedding for Process Control) server, ABB Analyst, a vibration sensor, a temperature sensor, a battery and a WirelessHART radio. The proposed system supports the IEEE 802.15.4 radio and WirelessHART network standards. If the waveform was transmitted once a day and the vibration root mean square (RMS) and temperature values were given once per hour, the lifetime of the battery was around 5 years.

The system proposed in [12] has an Essential Insight.mesh to monitor conditions as a wireless solution. It was proposed by GE and has a wSIM (wireless mesh network node), a wSIM repeater, a manager gateway and transducers. An IEEE 802.15.4-compliant radio and wireless mesh communications were used, and IEEE 802.15.4 was the gateway for the communication interfaces. A wSIM sensor node supports a maximum of 4 temperature or vibration transducers. The sample interval was a minimum of 15 minutes for static vibrations and 24 hours for dynamic vibrations. The battery lifetime was 3 years with standard configurations if the static data were sent every 2 hours from all channels and the dynamic data once per day from all channels.

In a WSN, a typical battery capacity might be 10.8 kJ at 3 V [13]. The power consumed is about 25 to 100 mW during transmission or reception. This gives the sensor node a lifetime of about four months. This lifetime depends on the amount of data transmitted and on how frequently the data are transmitted. In wired sensor networks, huge amounts of data are handled for fault diagnosis purposes, which wireless sensor network systems cannot afford to manage due to the limited battery power available. Hence, there is a need to reduce the power consumption at every stage of the fault diagnosis process: feature extraction, feature selection, classification, finding the shortest path to the base station, etc.

In order to illustrate the method, an experimental case study of bearing fault diagnosis is taken up. There are 18 machines considered in a plant, each having a sensor radio node for transmitting the data. There is one base station (central control unit), which collects data from all 18 nodes and presents them to the maintenance engineers. Going through the literature, one can find that in machine learning based fault diagnosis many features, such as statistical features [14], histogram features [15], ARMA features [16] and wavelet-DWT features [17], as well as many classifiers [18], were used. In most studies on fault diagnosis using the machine learning approach, the focus was on achieving high classification accuracy, so that when a fault occurs it can be identified successfully. However, a small sacrifice in classification accuracy may give computationally effective features and classifiers.
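To make the battery arithmetic above concrete: a 10.8 kJ battery under a continuous 25-100 mW drain lasts only days, so a four-month lifetime implies heavy radio duty cycling. The duty-cycle and sleep-power values in the following back-of-the-envelope sketch are our assumptions for illustration.

    # Back-of-the-envelope node lifetime from the figures quoted above:
    # a 10.8 kJ battery and 25-100 mW radio power during transmit/receive.
    BATTERY_J = 10.8e3          # total battery energy, J (1 Ah at 3 V)

    def lifetime_days(radio_power_w: float, duty_cycle: float,
                      sleep_power_w: float = 50e-6) -> float:
        """Lifetime assuming the radio is active only a fraction of the time.

        duty_cycle    -- fraction of time the radio is on (assumed)
        sleep_power_w -- idle draw, assumed 50 uW; real values vary by platform
        """
        avg_power = duty_cycle * radio_power_w + (1 - duty_cycle) * sleep_power_w
        return BATTERY_J / avg_power / 86400.0

    print(lifetime_days(0.1, 1.0))    # ~1.25 days if transmitting continuously
    print(lifetime_days(0.1, 0.01))   # ~4 months at a 1% duty cycle

At a 1% duty cycle the same battery lasts roughly four months, which matches the figure quoted above and shows why the amount and frequency of transmitted data dominate node lifetime.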
Computationally effective techniques use relatively less battery power. Hence, the first part of the study is to find the best feature-classifier combination for fault diagnosis of bearings using the machine learning approach. At every stage, the decision is made from the wireless sensor networks point of view, to save battery energy. Here, along with the good condition of the bearings, three fault conditions (inner race fault, outer race fault, and combined inner and outer race fault) are also considered [19]. With this setup, one can build the best possible wireless sensor network in an industrial environment. The next step is to reduce the power consumption in transmitting the data to the base station. There are a number of protocols available for energy-efficient hopping, such as HCR [20], HEED [21], PEGASIS [22], ERP [23], SEP [24] and LEACH [25]. In all these WSNs, the location of the base station plays a critical role in the lifetime of the individual nodes in the network. One has to study in detail the energy dissipation behaviour of the nodes for different configurations of the network. Hence, the second part of the study is to simulate the plant with 18 sensor nodes in which the LEACH protocol is implemented. The location of the base station is varied, and the WSN performance was studied to find the best suited location for the base station from an energy perspective. To sum up, the contribution of the paper is twofold in a WSN-based fault diagnosis system of a plant: (i) optimal design of the feature-classifier combination from the WSN perspective for fault diagnosis of a machine component using vibration signals; (ii) optimal location of the base station in the WSN setup for the LEACH protocol using simulation studies.

EXPERIMENTAL SETUP

The general architecture used for the fault diagnosis system uses a wireless sensor network which has one base station and n sensor nodes, as shown in Figure 1. The radio in the sensor node uses the IEEE 802.15.4 and ZigBee protocols for transmission of vibration data [13]. Fig. 1 illustrates the working principle of the proposed system. Each sensor node is used for the measurement of the machine vibration signature. The fault diagnosis process consists of five stages, namely signal conditioning, data acquisition, feature extraction, feature classification and display of the condition. In the present study, piezo-electric transducers (Dytran make) were used, which have built-in signal conditioning circuits (pre-amplification). The transducer was mounted on a flat surface using the direct adhesive mounting technique. The accelerometer was connected to the signal-conditioning unit (DACTRAN FFT analyser), where the signal goes through a charge amplifier and an analogue-to-digital converter (ADC). The acquired signals were stored in the node's memory [26]. To illustrate the process, bearing faults such as an inner race fault, an outer race fault, and a combined inner and outer race fault were created, and the corresponding vibration signatures were taken. The procedure in [13] is followed to process the acquired vibration signals. In our case, as shown in Fig. 2, the signal conditioning and data acquisition were done at the sensor nodes; the sensor nodes were used like a wireless DAQ (data acquisition system). Here, a large amount of vibration data needs to be transmitted to the base station, which increases the number of packets and in turn the energy dissipated from the node's battery.
In the light of the above, the case in hand was studied through simulation, and the energy consumption patterns are reported in detail in the results and discussion section.

DESIGN OF FAULT DIAGNOSIS SYSTEM

In the present study, the machine learning approach was used for fault diagnosis. Machine learning has three distinct phases, namely feature extraction, feature selection and classification. Effective design of a fault diagnosis system involves choosing appropriate items at every phase of machine learning with low computational time. Minimum computational time will minimize the energy dissipated by the processor in the nodes of the wireless sensor network. The commonly used features are statistical features, histogram features, autoregressive moving average (ARMA) features, and wavelet features. In this study, all four feature types were used, and the feature set suitable for WSN-based systems was selected. Even at the feature selection stage, a minimal number of features was selected to reduce the computational load at the sensor node and save battery power. Twenty-six classifiers were used for classifying the condition, and the classifier giving high classification accuracy with minimal computation time was chosen. Table 1 lists the selected features from each feature set; the recoverable entries include the statistical features range, minimum, median, skewness and mean, together with selected histogram features (H1, H2, H4, H8) and selected ARMA and wavelet features.

Choosing Classifier

Referring to Tab. 2, the highest classification accuracy is obtained with wavelet features and the multiclass classifier. This was arrived at after classifying the selected features from each type (as shown in Tab. 1) with 26 different classifiers. Here, the computation time is a little high. For efficient sensor node power management, not only high classification accuracy but also low computation time is needed. Hence, referring to Tab. 2, the IBK and Kstar classifiers were chosen as the best classifiers.

SIMULATION OF WIRELESS SENSOR NETWORKS

Once the fault diagnosis system is designed, the next step for the sensor node is choosing the radio parameters. The general architecture of the sensor node is shown in Fig. 3. As stated earlier, the sensor here is a piezoelectric sensor and the power unit is a battery. Any commercially available RF module can be used as the radio. Generally, a microprocessor/controller with built-in memory is used in the sensor nodes. In the present study, the wireless sensor node parameters are taken as shown in Tab. 4 for the simulation study. The plant has in total 18 machines to which the sensor nodes are attached; the machines have fixed locations, and hence the sensor nodes also have fixed locations. The sensor nodes acquire vibration signals from the machines (bearings, for illustration) and transmit the signals to the base station/central control unit. In the process, the energy of the battery is dissipated. The energy dissipated is affected mainly by the amount of data transmitted, the frequency of the data transmission, the distance between the node and the base station, and the path travelled by the data (routing). There are many studies already reported in the literature on routing and scheduling, as the energy dissipated is higher in these activities. The distance, the frequency of data transmission and the amount of data transmitted also affect the energy dissipation; however, the amount of energy that can be saved there is relatively small compared to routing and scheduling. Thus, this area has not been investigated much in detail.
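To make this pipeline concrete, the following sketch extracts a few of the statistical features named above from vibration windows and scores an IBK-style classifier; in Weka, IBK is k-nearest neighbours, which scikit-learn's KNeighborsClassifier reproduces. The synthetic windows, window length and class structure are our illustrative assumptions, not the bearing dataset of the paper.

    # Sketch of the statistical-feature + IBK (k-nearest neighbours) pipeline.
    # Synthetic vibration windows stand in for the real bearing dataset.
    import numpy as np
    from scipy import stats
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def statistical_features(window: np.ndarray) -> np.ndarray:
        """A few of the commonly used statistical features of a vibration window."""
        return np.array([
            window.max() - window.min(),   # range
            window.min(),                  # minimum
            np.median(window),             # median
            stats.skew(window),            # skewness
            window.mean(),                 # mean
        ])

    rng = np.random.default_rng(0)
    # Four classes: good, inner race, outer race, inner+outer race (labels 0-3).
    X, y = [], []
    for label in range(4):
        for _ in range(50):
            window = rng.normal(0, 1 + 0.5 * label, size=1024)  # assumed window length
            X.append(statistical_features(window))
            y.append(label)

    clf = KNeighborsClassifier(n_neighbors=1)   # IBK with k = 1
    print(cross_val_score(clf, np.array(X), np.array(y), cv=10).mean())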
Among the WSN parameters in Tab. 4, the packet length is 4500 bytes. This paper uses the LEACH protocol for illustration and focuses on the energy dissipation of the sensor nodes for the following cases: 1) various locations of the base station within the plant; 2) the sensor node transmits vibration signals directly to the base station (refer to Fig. 2).

Location of Base Station

As mentioned earlier, the locations of the sensor nodes are fixed for the given plant layout. The challenge is to find the location of the base station. The objective here is to have the maximum number of 'rounds' before any sensor node becomes dead. The advantage of wireless sensor networks is that even if one node becomes dead, the routing algorithm finds an alternate path to reach the base station. Therefore, it is possible to keep the WSN working in spite of some dead nodes. However, if the number of dead nodes is high, then from an application point of view it is not acceptable. Hence, one has to find a threshold value for the maximum number of dead nodes that is acceptable by plant standards. In the present study, it is selected as 9, not only because it is 50% of the total nodes, but also because beyond this point the rounds are not linearly proportional to the number of packets transmitted. For each 'number of dead nodes', the simulation was performed to find the 'rounds' as a measure of the amount of data transmitted. A study was carried out to find the best location for the base station; the results are presented in Fig. 4. The number of rounds indirectly represents the lifetime of the nodes. Referring to Fig. 4, one can observe that when the base station is in the centre, the network has more life (rounds) with the nine-dead-nodes option. Even for other numbers of dead nodes, the maximum life is achieved with the base station at the centre position. Hence, the centre becomes the obvious choice for the base station location. When the number of dead nodes is 18, the rounds reach 10000, which is the maximum number of rounds set for the simulation. This indicates that the 18th node is not dying even at 10000 rounds. The experiment could have been repeated with more rounds to check the life of the 18th node; however, this is a trivial case from the industrial application point of view, hence it is left as is. By industry standards, if 17 nodes are dead, the plant will not be run on just one node. In reality, even if two or more nodes are dead, the batteries will be replaced. This part of the study was carried out to understand the energy dissipation pattern.

Amount of Data Transmitted

Once the base station location is fixed, the energy dissipated is affected by the amount of data transmitted. To study the effect of the amount of data transmitted on the energy dissipated by the battery, a set of simulation studies was carried out with the WSN parameters given in Tab. 4. The study was conducted until 9 nodes died, and the results are presented in Fig. 6. One can observe from Fig. 6 that it takes about 1200 rounds for the first node to die. Due to hopping, the nodes choose different paths to reach the base station as per the LEACH algorithm. For 9 nodes to die, it takes about 1960 rounds, and until then energy is dissipated linearly with rounds. The number of packets transmitted to the base station is also proportional to the rounds. Beyond this point, they are not linearly proportional (not shown in Fig. 6).
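The paper's Matlab simulation is not reproduced here, but the energy bookkeeping behind such studies is typically the first-order radio model adopted in LEACH work: transmitting k bits over distance d costs E_elec·k + ε_amp·k·d², and receiving costs E_elec·k. The following sketch uses that model, with assumed parameter values and a random 18-node layout, to compare base-station placements.

    # Sketch: first-order radio energy model commonly used in LEACH simulations.
    # Parameter values, initial battery energy and node layout are assumptions.
    import numpy as np

    E_ELEC = 50e-9      # electronics energy per bit, J/bit
    EPS_AMP = 100e-12   # amplifier energy, J/bit/m^2 (free-space model)
    PACKET_BITS = 4500 * 8

    def tx_energy(bits: int, d):
        """Energy to transmit `bits` over distance d (free-space path loss)."""
        return E_ELEC * bits + EPS_AMP * bits * d**2

    def rx_energy(bits: int) -> float:
        return E_ELEC * bits

    rng = np.random.default_rng(1)
    nodes = rng.uniform(0, 100, size=(18, 2))   # 18 machines in a 100 m plant
    energy0 = np.full(18, 0.5)                  # initial battery, J (assumed)

    def rounds_until_first_death(base_station: np.ndarray) -> int:
        e = energy0.copy()
        for rnd in range(1, 100000):
            d = np.linalg.norm(nodes - base_station, axis=1)
            e -= tx_energy(PACKET_BITS, d)      # direct transmission each round
            if (e <= 0).any():
                return rnd
        return -1

    # Comparing base-station placements:
    print(rounds_until_first_death(np.array([50.0, 50.0])))  # centre
    print(rounds_until_first_death(np.array([0.0, 0.0])))    # corner

Even in this crude direct-transmission version, the central placement survives several times more rounds than a corner placement, which is the qualitative behaviour reported in Fig. 4.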
CONCLUSION

This paper presented the results of both experimental and simulation studies of a typical plant, where the vibration signals from each machine are acquired and fault diagnosis is performed through machine learning techniques with the help of wireless sensor networks. At each stage of the machine learning process, maximising the lifetime of the node was kept as one of the objectives. A WSN of 18 sensor nodes representing 18 machines with the LEACH protocol was simulated in Matlab© to study the lifetime characteristics of each node while keeping the base station at different locations. The best location for the base station was found, and the amount of data transmitted versus the maximum lifetime of the nodes was studied and reported. For industry practitioners, the above results will be very useful, as a framework is laid out for optimising a WSN for a given plant layout.
4,869.2
2018-06-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Phase-Retrieval in Shift-Invariant Spaces with Gaussian Generator

We study the problem of recovering a function of the form f(x) = ∑_{k∈ℤ} c_k e^{−(x−k)²} from its phaseless samples |f(λ)| on some arbitrary countable set Λ ⊆ ℝ. For real-valued functions this is possible up to a sign for every separated set with Beurling density D⁻(Λ) > 2. This result is sharp. For complex-valued functions we find all possible solutions with the same phaseless samples.

Shift-invariant spaces generalize the bandlimited functions and have received wide attention in approximation theory and sampling theory [5,11]. Given a generator g ∈ L²(ℝ), p ∈ [1, ∞], and a mesh parameter or step size β > 0, let

V^p_β(g) = { f = ∑_{k∈ℤ} c_k g(· − βk) : c ∈ ℓ^p(ℤ) }.

One of the versions of phase-retrieval in shift-invariant spaces is the recovery of a function f (up to a scalar) from its phaseless samples |f(λ)| on some set Λ. The question then is whether the additional information that f belongs to the shift-invariant space V^p_β(g) suffices to determine f up to a sign. For the prototype of a shift-invariant space, namely the bandlimited functions, this question was solved in [21]. Recently Q. Sun and his collaborators have developed a general theory for phase-retrieval in shift-invariant spaces in a series of articles [7][8][9]. A typical result asserts that the samples of |f| on a sufficiently dense union of shifted lattices suffice to recover real-valued functions in some V²(g). These papers also cover some of the numerical aspects of phase-retrieval. The problem of phase-retrieval in shift-invariant spaces from Fourier measurements is studied in [19].

We study the problem of phase-retrieval in the shift-invariant space generated by a Gaussian φ_γ(x) = e^{−γx²}. Precisely, we will assume that f is a linear combination of shifts of a Gaussian and belongs to the shift-invariant space

V^∞_β(φ_γ) = { f(x) = ∑_{k∈ℤ} c_k e^{−γ(x−βk)²} : c ∈ ℓ^∞(ℤ) }.

We will allow samples from an arbitrary separated set Λ ⊂ ℝ and measure the density of Λ with the standard notion of the lower Beurling density, defined as

D⁻(Λ) = lim_{r→∞} inf_{x∈ℝ} #(Λ ∩ [x, x+r]) / r.

We will first consider real-valued functions and unsigned samples |f(λ)|. For this case we prove the following uniqueness theorem.

Theorem 1. Assume that Λ ⊆ ℝ is separated and that D⁻(Λ) > 2β⁻¹. Then phase-retrieval is possible on Λ for all real-valued functions in V^∞_β(φ_γ). This means that an unknown real-valued f ∈ V^∞_β(φ_γ) is uniquely determined up to a global sign factor by its phaseless samples {|f(λ)| : λ ∈ Λ}: whenever a real-valued g ∈ V^∞_β(φ_γ) satisfies |g(λ)| = |f(λ)| for all λ ∈ Λ, we have either g = f or g = −f, thus the only ambiguity is the (global) sign of f.
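To fix ideas, here is a concrete instance of the density condition (our example, not part of the original text): for β = 1 the theorem requires D⁻(Λ) > 2, which any sufficiently fine arithmetic progression satisfies.

    % For \beta = 1, take \Lambda = \delta\mathbb{Z} with 0 < \delta < 1/2. Then
    \[
      D^-(\Lambda)
        = \lim_{r\to\infty}\,\inf_{x\in\mathbb{R}}
          \frac{\#\bigl(\Lambda\cap[x,\,x+r]\bigr)}{r}
        = \frac{1}{\delta} > 2,
    \]
    % so, by Theorem 1, every real-valued f \in V^\infty_1(\varphi_\gamma) is
    % determined up to a sign by the unsigned samples |f(\delta k)|, k \in \mathbb{Z}.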
Our proof yields a reconstruction procedure (see Section 2), but it is well-known that the phase-retrieval problem in infinite dimensions is necessarily ill-posed [4,6,13]. In the second part we will study complex-valued functions in V^∞_β(φ_γ). In this case there is no uniqueness; instead we will classify all functions g ∈ V^∞_β(φ_γ) that satisfy |g(λ)| = |f(λ)| on Λ.

Before proceeding, let us discuss some of the fine points of Theorem 1.

(i) Remarkably, the uniqueness holds even for bounded coefficients, and not just for square-summable coefficients. We can therefore not rely on the Hilbert space theory for phase-retrieval, but technically we need to rely on the Banach space set-up of Alaifari and Grohs [4].

(ii) The density condition is sharp. For uniform density D(Λ) < 2β⁻¹ one can produce essentially different real-valued functions f, g with the same phaseless samples |g(λ)| = |f(λ)|, λ ∈ Λ. See below.

(iii) A similar statement for bandlimited functions was proved by Thakur [21] and revisited in [4]. Both [21] and Theorem 1 support the intuition that the recovery of phase from magnitude requires twice as many samples as the recovery from the samples f(λ). This is well known for finite-dimensional frames, but in infinite dimensions it is more subtle to formulate and prove. To our knowledge, Theorem 1 is one of only two models for which phase-retrieval is possible with a sharp density condition. The investigations in [7][8][9] require a much higher sampling rate or deal with conditions under which phase-retrieval is not even possible.

In Section 1 we collect several statements about the shift-invariant space V^∞_β(φ_γ) and then prove Theorem 1. In Section 2 we study phase-retrieval for complex-valued functions. Our main tool is a factorization of periodic entire functions whose proof is postponed to Section 3.

Phase-Retrieval for Real-Valued Functions

We set up the proof of Theorem 1. To avoid unnecessary parameters, we set β = 1 without loss of generality. This is possible because f(x) = ∑_{k∈ℤ} c_k e^{−γ(x−βk)²} implies f(βx) = ∑_{k∈ℤ} c_k e^{−γβ²(x−k)²}, so f ∈ V^∞_β(φ_γ) if and only if f(β·) ∈ V^∞_1(φ_{γβ²}). Thus it suffices to prove Theorem 1 for β = 1. Our first use of complex variable methods is the following lemma for Fourier series.

Lemma 2. (i) If c ∈ ℓ^∞(ℤ) and d_k = c_k e^{−γk²}, then the Fourier series d̂(ξ) = ∑_{k∈ℤ} d_k e^{2πikξ} extends to an entire function with growth |d̂(ξ+iy)| = O(e^{π²y²/γ}). (ii) Conversely, if D(z) is a periodic entire function, D(z+k) = D(z) for all z ∈ ℂ and k ∈ ℤ, with growth |D(ξ+iy)| = O(e^{π²y²/γ}), then the Fourier series of D(ξ) has coefficients of Gaussian decay d_k = c_k e^{−γk²} for some c ∈ ℓ^∞(ℤ).

Proof. For completeness we give the elementary proof. (i) Writing z = ξ+iy and d_k = c_k e^{−γk²}, we have d̂(z) = ∑_{k∈ℤ} c_k e^{−γk²} e^{2πikz}, so |d̂(z)| ≤ ‖c‖_∞ ∑_{k∈ℤ} e^{−γk²−2πky} = O(e^{π²y²/γ}). Clearly d̂ is entire. (ii) If D is entire and periodic, then D(z) = ∑_{k∈ℤ} d_k e^{2πikz} with uniform convergence on compact sets and exponentially decaying coefficients. See, e.g., [20, Theorem 3.10.3]. Consequently the Fourier coefficients of ξ → D(ξ+iy) are d_k e^{−2πky} and satisfy |d_k| e^{−2πky} ≤ sup_ξ |D(ξ+iy)| ≤ C e^{π²y²/γ} for all k and y, where the last inequality follows from the assumption. Setting y = −γk/π yields the desired estimate |d_k| ≤ C e^{−γk²}.

The analysis of phase-retrieval in V^∞_1(φ_γ) involves several steps.

Step 1. A simple algebraic observation. Expanding |f|² = f f̄ and completing the square in the product of two Gaussians, we find that |f(x)|² = ∑_{n∈ℤ} r_n e^{−2γ(x−n/2)²} for suitable coefficients (r_n); the computation is spelled out below. This calculation shows that the function |f|² belongs to a different shift-invariant space, generated by φ_{2γ} with step size 1/2. Set r̃_n = r_n e^{γn²/2}.
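The display elided in Step 1 can be reconstructed from the Gaussian product rule; the following is our reconstruction, consistent with the φ_{2γ}, step-1/2 conclusion above.

    % Completing the square in the product of two Gaussians,
    %   (x-k)^2 + (x-l)^2 = 2\bigl(x - \tfrac{k+l}{2}\bigr)^2 + \tfrac{(k-l)^2}{2},
    % gives, for f(x) = \sum_{k} c_k e^{-\gamma(x-k)^2},
    \[
      |f(x)|^2
        = \sum_{k,l\in\mathbb{Z}} c_k\,\overline{c_l}\,
          e^{-\gamma(x-k)^2}\, e^{-\gamma(x-l)^2}
        = \sum_{n\in\mathbb{Z}} r_n\, e^{-2\gamma\left(x-\frac{n}{2}\right)^2},
      \qquad
      r_n = \sum_{\substack{k,l\in\mathbb{Z}\\ k+l=n}}
            c_k\,\overline{c_l}\, e^{-\gamma(k-l)^2/2},
    \]
    % so |f|^2 lies in the shift-invariant space generated by
    % \varphi_{2\gamma} with step size 1/2, as claimed in Step 1.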
Step 2. A sharp sampling theorem. Our main tool is the sampling theorem for shift-invariant spaces with Gaussian generator from [12, Theorem 4.4]. Thus the coefficients r̃ of |f|² are uniquely and stably determined by the phaseless samples of f on Λ. Note that in (11) we have used the norm equivalence sup_{x∈ℝ} |f(x)|² ≍ sup_{n∈ℤ} |r̃_n|.

Step 3. A functional equation. The sampling inequality (11) allows us to recover the coefficients r̃ from the phaseless samples |f(λ)|². Finally we have to recover the coefficients c and d from the coefficients r̃ of |f|², or equivalently from the r_n = r̃_n e^{−γn²/2}. Let d̂(ξ) = ∑_{k∈ℤ} d_k e^{2πikξ} be the Fourier series of d and let r̂ be the Fourier series of r. Then Eq. (7) turns into a product identity between r̂ and d̂. Since d_k = c_k e^{−γk²} has Gaussian decay, Lemma 2 asserts that its Fourier series extends to the entire function D(z) = ∑_{k∈ℤ} d_k e^{2πikz} with growth |D(ξ+iy)| = O(e^{π²y²/γ}); likewise d̂(−ξ) extends to an entire function D*, and r̂ extends to an entire function R. Consequently, we have to find the entire function D that satisfies the identity D(z)D*(z) = R(z), z ∈ ℂ. (13) In other words, to every solution D of the functional Eq. (13) corresponds a function in V^∞_1(φ_γ) with coefficients c_k = d_k e^{γk²}.

Assuming that f is real-valued, we can now prove Theorem 1 quickly.

Proof of Theorem 1. Since f is real-valued by assumption, its coefficients are also real-valued, c = c̄. This entails that D* = D, since D*(z) = ∑_{k∈ℤ} d̄_k e^{2πikz} = ∑_{k∈ℤ} d_k e^{2πikz} = D(z), and (13) becomes the identity D(z)² = R(z). The uniqueness of f up to a sign is now immediate: assume that two entire (non-zero) functions D₁ and D₂ satisfy D₁² = D₂² = R. Then (D₁ − D₂)(D₁ + D₂) ≡ 0. Since the ring of entire functions does not have any zero divisors, we conclude that either D₁ = D₂ or D₁ = −D₂ on ℂ. Using formulas (4)-(6) we find that the coefficients c of f, and thus f, are uniquely determined by the phaseless samples |f(λ)|, λ ∈ Λ, up to a sign. Theorem 1 is therefore proved.

Alternatively, one could prove Theorem 1 by verifying the following criterion for phase-retrieval [4,6]: Λ permits phase-retrieval if and only if Λ satisfies the complement property, i.e., if S ⊆ Λ, then either S or Λ∖S determines every real-valued function in the space uniquely. We will produce a counter-example to the complement property. Let Λ = {λ_j : j ∈ ℤ} be a separated set with uniform density D(Λ) < 2, indexed in increasing order. Set S = {λ_{2j} : j ∈ ℤ} and S^c = {λ_{2j+1} : j ∈ ℤ}. Then S and S^c are disjoint and D(S) = D(S^c) = (1/2)D(Λ) < 1. By the necessary density condition for sampling in shift-invariant spaces, e.g. [5], S and S^c cannot be sampling sets for V²_1(φ_γ), but they are interpolating sets by [12, Thm. 1.3]. These facts imply that both maps f → (f(λ))_{λ∈S} and g → (g(λ'))_{λ'∈S^c} are onto ℓ²(S) and ℓ²(S^c), respectively, with non-trivial kernel. Consequently, there exist non-zero f, g ∈ V²_1(φ_γ) such that f(λ) = 0 for λ ∈ S and g(λ') = 0 for λ' ∈ S^c. By taking the real part of f and g, we may further assume that f and g are real-valued. Since for every λ ∈ Λ either f(λ) = 0 or g(λ) = 0, we obtain |(f+g)(λ)| = |(f−g)(λ)| for all λ ∈ Λ. By construction, f+g and f−g are linearly independent, and thus sign-retrieval is not unique.

Lemma 6. Every periodic entire function D of order 2 can be factored as in (15), for some r ∈ ℤ and C ∈ ℂ; the product converges uniformly on compact sets.

Theorem 1 states that the unsigned samples {|f(λ)|} determine a unique real-valued function up to a sign; for complex-valued functions this is no longer true. We postpone the proof of this lemma to Sect. 3 and first discuss its application to the phase-retrieval problem. The factorization (15) serves to find all solutions D to the equation D(z)D*(z) = R(z) in (13). The spirit of this argument is similar to the analysis in [1,2,15,17,21]. To avoid spelling out the product in (15), we use the shorthand D = (W, m, r) for the factorization data. Here W = W⁺ ∪ W⁻ is understood as a sequence {w_j : j ∈ ℕ} of zeros, where elements may be repeated according to the (finite) multiplicity of the zero. Since D is of order 2, we know that ∑_{j∈ℕ} |w_j|⁻³ < ∞. See also (21) below. With this understanding we obtain convenient formulas for (W, m, r); in particular, with J denoting the reflection Jw = −w̄, we get R = DD* = (W ∪ JW, 2m, 2r). This implies that the order of the zero of R at 0 is even and that the factor e^{2πiz} occurs with an even power.
Furthermore, since $J(iv) = iv$ and $J(1/2 + iv) = -1/2 + iv \equiv 1/2 + iv \pmod 1$ for $v \in \mathbb{R}$, the zeros on the lines $i\mathbb{R}$ and $1/2 + i\mathbb{R}$ also occur with even multiplicity in $R$. Now assume that $R = (Z, 2m, 2r)$ is given so that its zero set is symmetric, $Z = JZ$, and that the zeros on the lines $i\mathbb{R}$ and $1/2 + i\mathbb{R}$ have even multiplicity. Let $S_+ = \{x + iy : 0 < x < 1/2\}$ and $S_0 = i\mathbb{R} \cup (1/2 + i\mathbb{R})$. Let $Z_0$ be the zeros of $R$ in $Z \cap S_0$ counted with half their multiplicity. In our notation this means that $Z_0 \,\dot\cup\, Z_0 = Z \cap S_0$. Next choose $V \subseteq Z \cap S_+$ arbitrary, and let $D_V$ be the function of the form (15) whose zero set consists of $V$, the reflected set $J\big((Z \cap S_+) \setminus V\big)$, and $Z_0$ (formulas (19) and (20)). Then $D_V D_V^* = R$, and consequently every choice $V$ of zeros of $R$ in the strip $S_+$ with the corresponding function $D_V$ yields a valid factorization of $R$. Clearly, different zero sets $V_1, V_2$ (counting multiplicities) yield essentially different solutions $D_{V_1}$ and $D_{V_2}$. Conversely, every factorization $D D^* = R$ arises in this way, because we can always write the zero set of $D$ as such a union with $V = W \cap S_+$. To summarize, we state the following lemma.

Lemma 7 Let $R = R^* = (Z, 2m, 2r)$ be a periodic entire function of order 2 with all zeros on $i\mathbb{R} \cup (1/2 + i\mathbb{R})$ of even multiplicity. Then every solution of $D D^* = R$ is given by a unimodular multiple of some $D_V$, as defined in (19) and (20).

The analysis of the factorization $D D^* = R$ is the key tool to find all possible solutions to the phase-retrieval problem for complex-valued functions in $V_\beta^\infty(\phi_\gamma)$. In contrast to the uniqueness among real-valued solutions, there are always many substantially different solutions. In principle, these can be found by the following procedure. (i) From the phaseless samples determine the coefficients $\tilde{r}$ and hence $\hat{r}$ and its entire extension $R$. (ii) Determine the zero set $Z$ of $R$ in the period strip. (iii) Choose $V \subseteq Z \cap S_+$ and define $D_V$ by (20). (iv) Determine the Fourier coefficients of $D_V(\xi)$, $d_k = \int_0^1 D_V(\xi)\, e^{-2\pi i k \xi}\, d\xi$, and set $c_k = d_k e^{\gamma k^2}$ and $f_V = \alpha \sum_{k \in \mathbb{Z}} c_k\, \phi_\gamma(\cdot - k)$ for some $\alpha \in \mathbb{C}$, $|\alpha| = 1$. Since $|R(\xi + iy)| = O(e^{2\pi^2 y^2/\gamma})$ by Lemma 2, $|D_V(\xi + iy)| = O(e^{\pi^2 y^2/\gamma})$ by Lemma 7. By Lemma 2 the Fourier coefficients of $D_V$ have Gaussian decay, and consequently $c_k = d_k e^{\gamma k^2}$ is bounded. It follows that $f_V \in V_1^\infty(\phi_\gamma)$.

Remark 1. If $f$ is real-valued, then $D^* = D$, and its zero set is symmetric, $W = JW$; consequently the zero set $Z = W \,\dot\cup\, JW$ contains all zeros of $D$ with double multiplicity. Since every zero has even multiplicity, we can set $Z \cap S_+ = V \,\dot\cup\, V$, where $V$ are the zeros of $R$ in $S_+$ with half the multiplicity. Equation (19) then reproduces this choice, and $f_V$ is the real-valued solution of the phase-retrieval problem. We note that other choices of $V$ yield complex-valued functions $f_V$ such that $|f_V(\lambda)| = |f(\lambda)|$ for all $\lambda \in \Lambda$. For real-valued $f$ the steps (i)-(iv) constitute a reconstruction procedure of $f$ from its unsigned samples $|f(\lambda)|$, $\lambda \in \Lambda$. Whereas Theorem 1 only asserts the uniqueness up to a sign, the factorization of Lemma 6 also implies a (rather theoretical) reconstruction.

2. If $f \in V_1^\infty(\phi_\gamma)$ is real-valued and even, then $c_k = \overline{c_k} = c_{-k}$. Consequently, $\hat{d}(\xi) = \sum_{k \in \mathbb{Z}} c_k e^{-\gamma k^2} e^{2\pi i k \xi} = \hat{d}(-\xi)$ is real-valued, even, and smooth, and $\hat{r}(\xi) = \hat{d}(\xi)^2 \ge 0$. Thus $\hat{r} \ge 0$, and the only smooth solutions are $\hat{d} = \pm \hat{r}^{1/2}$. In this case we can obtain the coefficients $c_k$ directly as the Fourier coefficients of $\hat{r}^{1/2}$, multiplied by $e^{\gamma k^2}$ and a global sign. Of course, for the boundedness of these coefficients we still need the analysis that led to Lemma 7.

3. Stability. The procedure in steps (i)-(iv) is only of theoretical interest, because it is well-known that phase-retrieval in infinite dimensions is inherently unstable [3,4,6]. This is completely obvious in the reconstruction procedure in the shift-invariant space $V_1^\infty(\phi_\gamma)$: numerically, the transition from the relevant coefficient sequence $c$ to $d$, given by $d_k = c_k e^{-\gamma k^2}$, amounts to the truncation of $c$ to a finite sequence, as the numerical sketch below illustrates.
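A quick numerical sketch makes the truncation effect concrete. The value $\gamma = 1$ and the unit coefficient scale are illustrative choices, not parameters from the paper.

```python
import math

gamma = 1.0        # illustrative value; the effect below occurs for any gamma > 0
eps = 2.0 ** -52   # double-precision machine epsilon

for k in range(1, 10):
    decay = math.exp(-gamma * k * k)   # |d_k| / |c_k| = e^{-gamma k^2}
    gain = math.exp(gamma * k * k)     # amplification in c_k = d_k e^{gamma k^2}
    status = "below machine epsilon" if decay < eps else "still representable"
    print(f"k={k}: e^(-k^2)={decay:.3e} ({status}), e^(+k^2)={gain:.3e}")

# For gamma = 1 and coefficients c_k of order one, e^{-k^2} drops below 2^-52
# already at k = 7, so in floating point the sequence d is effectively
# truncated to |k| <= 6, and a perturbation delta in d_k re-enters the
# recovered c_k as delta * e^{gamma k^2}.
```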
Once the sequence $d$ has been obtained (steps (iii) and (iv) above), the transition $c_k = d_k e^{\gamma k^2}$ leads to an amplification of all accumulated errors. Yet, despite the inherent instability, several steps in the above reconstruction of $f$ are stable. The relevant estimate is (11), for the reconstruction of $\tilde{r}$ from the phaseless samples $|f(\lambda)|$. For coefficient sequences $c$ with small support the reconstruction promises to be reasonably effective, which is consistent with the arguments in [3] and [14].

Proof of the Factorization Lemma

As we do not know a precise citation for the statement, we include a proof of Lemma 6. We need to show that every periodic entire function $D$ of order 2 (more generally, of finite order) can be factored into a product with factors of the form $G(z, w)$, where $w$ is a zero of $D$ in the vertical strip $S = \{x + iy \in \mathbb{C} : -1/2 < x \le 1/2\}$. The choice of the signs depends on the sign of $\mathrm{Im}\, w$. As we will see in part (vi) of the proof, the sign is determined by the asymptotic behavior of $\cot \pi w$ as $\mathrm{Im}\, w \to \pm\infty$.

Proof (i) Since $D$ is periodic, its zero set is periodic and, by definition of $W_\pm$, the zero set is $\bigcup_{w \in W_+ \cup W_-} (w + \mathbb{Z})$. Since $D$ is entire of order 2, the convergence exponent of its zeros is at most 2 [20]. This implies that $\sum_{w \in W,\, w \neq 0}\ \sum_{k \in \mathbb{Z}} |w + k|^{-3} < \infty$.

(ii) Let $\tilde{D}$ be the main part of the right-hand side of (15). We first check the convergence of the product. For this we need to verify that $\sum_w |G(z, w) - 1|$ converges locally uniformly. This is easily seen by calculating $G(z, w)\, \sin(-\pi w)/\sin \pi(z - w)$ with the factorization of $\sin \pi z$. The quadratic polynomial in the exponent is obtained by substituting (24) and (25) in (23).

(v) Now consider the ratio of corresponding factors in $D$ and $\tilde{D}$ and simplify the expression. We argue only for $w \in W_+$. Next we take the product over $w \in W_-$ and assume for now that the product converges. The left-hand side is an entire function with period 1, whereas the right-hand side is the exponential of a quadratic polynomial. It is now easy to see that $e^P$ is periodic for a polynomial $P$ if and only if $P(z) = 2\pi i r z$ for some $r \in \mathbb{Z}$. We conclude that $D(z) = \tilde{D}(z)\, e^{2\pi i r z}$, and this is precisely (15).
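The last step of the proof invokes the characterization of periodic exponentials of polynomials. Since the text only quotes the fact, here is a short verification; it is an addition, with the additive constant absorbed into the constant $C$ of (15).

```latex
% Claim: for a polynomial P, e^{P} has period 1 iff P(z) = 2\pi i r z + c
% with r \in \mathbb{Z}. For quadratic P(z) = az^2 + bz + c:
\[
e^{P(z+1)} = e^{P(z)} \iff P(z+1) - P(z) = 2az + a + b \in 2\pi i\,\mathbb{Z}
\quad \text{for all } z.
\]
% A continuous function with values in the discrete set 2\pi i\mathbb{Z} is
% constant, which forces a = 0; then b \in 2\pi i\mathbb{Z}, i.e.
% b = 2\pi i r. The same argument rules out any polynomial of degree >= 2,
% since then P(z+1) - P(z) is nonconstant.
```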
4,239.2
2020-06-01T00:00:00.000
[ "Materials Science" ]
Temperature Rise of Silicon Due to Absorption of Permeable Pulse Laser Blade dicing is used conventionally for dicing of a semiconductor wafer. Stealth dicing (SD) was developed as an innovative dicing method by Hamamatsu Photonics K.K. (Fukuyo et al., 2005; Fukumitsu et al., 2006; Kumagai et al., 2007). The SD method includes two processes. One is a "laser process" to form a belt-shaped modified layer (SD layer) in the interior of a silicon wafer for separating it into chips. The other is a "separation process" to divide the wafer into small chips. A schematic illustration of the laser process is shown in Fig. 1. Introduction Blade dicing is used conventionally for dicing of a semiconductor wafer. Stealth dicing (SD) was developed as an innovative dicing method by Hamamatsu Photonics K.K. (Fukuyo et al., 2005; Fukumitsu et al., 2006; Kumagai et al., 2007). The SD method includes two processes. One is a "laser process" to form a belt-shaped modified layer (SD layer) in the interior of a silicon wafer for separating it into chips. The other is a "separation process" to divide the wafer into small chips. A schematic illustration of the laser process is shown in Fig. 1. When a permeable nanosecond laser is focused into the interior of a silicon wafer and scanned in the horizontal direction, a high dislocation density layer and internal cracks are formed in the wafer. Fig. 2 shows pictures of a wafer after the laser process and of small chips divided through the separation process. The internal cracks propagate to the surfaces under the tensile stress applied by tape expansion, without cutting loss. An example photograph of the divided face of an SD-processed silicon wafer is shown in Fig. 3. As SD is a noncontact processing method, high-speed processing is possible. Fig. 4 shows a comparison of edge quality between blade dicing and SD. In SD there is no chipping and no cutting loss, so there is no pollution caused by debris. The advantage of using the SD method is clear. Fig. 5 shows an example of SD applied to an actual MEMS device. This device has a membrane structure whose thickness is 2 μm, but it is not damaged. A completely dry dicing process has been realized, and the problems caused by wet processing have been solved. In this chapter, a heat conduction analysis that accounts for the temperature dependence of the absorption coefficient is performed for the SD method, and the validity of the analytical results is confirmed by experiment. Analysis method A 1,064 nm laser is considered here, and the internal temperature rise of Si caused by single-pulse irradiation is analyzed. Considering that the laser beam is axisymmetric, we introduce the cylindrical coordinate system $O$-$rz$ whose $z$-axis corresponds to the optical axis of the laser beam and whose $r$-axis lies on the surface of the Si. The heat conduction equation which should be solved is
$$\rho C_p \frac{\partial T}{\partial t} = \frac{1}{r}\frac{\partial}{\partial r}\!\left(r K \frac{\partial T}{\partial r}\right) + \frac{\partial}{\partial z}\!\left(K \frac{\partial T}{\partial z}\right) + w \qquad (1)$$
where $T$ is temperature, $\rho$ is density, $C_p$ is isopiestic specific heat, $K$ is thermal conductivity, and $w$ is internal heat generation per unit time and unit volume. The finite difference method based on the alternating direction implicit (ADI) method was used for the numerical calculation of Eq. (1). The temperature dependence of the isopiestic specific heat (Japan Society for Mechanical Engineers ed., 1986) and of the thermal conductivity (Touloukian et al. ed., 1970) is considered. The published data (Fukuyo et al., 2007; Weakliem & Redfield, 1979) show the temperature dependence of the absorption coefficient of single crystal silicon for a wavelength of 1,064 nm.
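The consequence of this temperature dependence can be illustrated with a minimal one-dimensional sketch of the absorption-heating feedback loop: a hotter region absorbs more strongly, which heats it further. The exponential fit for alpha(T), the 430 K reference temperature, and the intensity value below are placeholders chosen for illustration; they are not the chapter's Eq. (3) or its material data, and heat conduction and beam focusing are deliberately omitted.

```python
import numpy as np

# 1-D sketch of laser heating with a temperature-dependent absorption
# coefficient. alpha(T) is an illustrative exponential fit, NOT Eq. (3).

nz, dz = 1000, 0.1e-6              # 100 um of silicon in 0.1 um cells
rho, cp = 2330.0, 700.0            # density [kg/m^3], specific heat [J/(kg K)]
T = np.full(nz, 293.0)             # initial temperature [K]

def alpha(T):                      # absorption coefficient [1/m]
    ex = np.minimum((T - 293.0) / 430.0, 12.0)      # clamp to avoid overflow
    return np.minimum(810.0 * np.exp(ex), 7.61e7)   # 8.1 cm^-1 at room T,
                                                    # capped at the molten-Si
                                                    # value (Jellison, 1987)

I0, dt, steps = 5e13, 1e-10, 2000  # intensity [W/m^2], 0.1 ns steps, 200 ns

for _ in range(steps):
    a = alpha(T)
    tau = np.concatenate(([0.0], np.cumsum(a * dz)))   # optical depth
    I_in = I0 * np.exp(-tau[:-1])                      # intensity entering cells
    absorbed = I_in * (1.0 - np.exp(-a * dz))          # W/m^2 deposited per cell
    T += absorbed / dz * dt / (rho * cp)               # adiabatic local heating

print(f"max T = {T.max():.0f} K at depth {T.argmax() * dz * 1e6:.1f} um")
# Once any cell crosses a threshold, its temperature runs away within a few
# steps -- the mechanism behind the sudden internal absorption described above.
```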
When the Lambert law is applied over a small depth increment $\Delta z$ from depth $z$, the local attenuation is governed by the temperature-dependent absorption coefficient of Eq. (3). The absorption coefficient of molten silicon is $7.61 \times 10^5\ \mathrm{cm}^{-1}$ (Jellison, 1987); therefore, this value is used as the upper limit when applying Eq. (3). The $1/e^2$ radius at depth $z$ of a laser beam focused with a lens is expressed by the standard Gaussian-beam relation (reconstructed at the end of this section); the beam is converging where the normalized depth ratio is less than 1 and diverging where it is larger than 1. The laser intensity distribution then follows from the pulse energy, the pulse shape, and this spot radius.

The formation mechanism of the inside modified layer

Concrete analyses are conducted under the irradiation conditions that the pulse energy $E_{p0}$ is 6.5 μJ, the pulse width (FWHM) $\tau_p$ is 150 ns, and the minimum spot radius $r_0$ is 485 nm. The pulse shape is Gaussian. The pulse center is assumed to occur at $t = 0$. The intensity distribution (spatial distribution) of the beam is assumed to be Gaussian. It is supposed that the thickness of the single crystal silicon is 100 μm and the depth of the focal plane $z_0$ is 60 μm. The initial temperature is 293 K. The analysis region of the silicon is a disk whose radius is 100 μm and whose thickness is 100 μm. In the numerical calculation, the inner region of radius 20 μm is divided evenly into 400 units of 50 nm width, and the outer region is divided into 342 units using a logarithmic grid. The thickness is divided evenly into 10,000 units at 10 nm increments in the depth direction. The time step is 20 ps. The boundary condition is assumed to be a thermal radiation boundary. For comparison with the subsequent analysis results, the temperature dependence of the absorption coefficient is ignored at first, and the room-temperature value $\alpha = 8.1\ \mathrm{cm}^{-1}$ is used. In this case, the time variation of the intensity distribution inside the silicon is determined by the effective pulse energy $E_p$ penetrating the silicon and the spot radius $r_e(z)$ of the Gaussian beam at depth $z$. The time variation of temperature at various depths along the central axis is shown in Fig. 7. The maximum temperature distribution is shown in Fig. 8. It is understood from Fig. 7 that the temperature becomes maximum at time 20 ns at a depth of 60 μm, which corresponds to the focal position. In Fig. 8, reflecting the laser absorption along the propagation path, the temperature on the side shallower than the focal point of the laser beam is slightly higher. However, the maximum temperature distribution is approximately symmetric with respect to the focal plane. At any rate the maximum temperature rise is about 360 K, which is much smaller than the melting point of 1,690 K under atmospheric pressure (Parker, 2004). It is concluded that polycrystallization after melting and solidification does not occur at all if the absorption coefficient is independent of temperature and keeps its room-temperature value. When the temperature dependence of the absorption coefficient (Eq. (3)) is taken into account, the time variation of the temperature distribution is shown in Fig. 9. Figure 10 shows the time variation of the temperature distribution along the central axis in Fig. 9. It can be understood from these figures that laser absorption begins suddenly at a depth of $z = 59$ μm at about $t = -45$ ns and the temperature rises to about 20,000 K instantaneously. The region where the temperature rises beyond 10,000 K will be instantaneously vaporized and a void is formed. A high temperature region of about 2,000 K propagates in the direction of the laser irradiation from the vicinity of the focal point as a thermal shock wave.
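For reference, the spot radius and intensity model used in the preceding setup takes the standard focused-Gaussian-beam form. The display below is a reconstruction in that standard notation, not a verbatim copy of the chapter's equations; in particular the Rayleigh range $z_R$ and the normalization of the pulse shape $g(t)$ are our assumptions.

```latex
% Standard Gaussian-beam relations consistent with the description above:
\[
r_e(z) = r_0\sqrt{\,1+\left(\frac{z-z_0}{z_R}\right)^{2}},
\qquad
I(r,z,t) = \frac{2\,E_p\,g(t)}{\pi\,r_e(z)^2}\,
           \exp\!\left(-\frac{2r^2}{r_e(z)^2}\right),
\]
% where z_0 is the focal-plane depth, z_R the Rayleigh range in the medium,
% g(t) a normalized Gaussian pulse shape (\int g\,dt = 1), and E_p the
% effective pulse energy penetrating the silicon; absorption according to
% Eq. (3) additionally attenuates I along the depth.
```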
The region where the thermal shock wave propagates becomes a high dislocation density layer due to the shear stress caused by the very large compressive stress. Figure 11 shows the maximum temperature distribution and a schematic of SD layer formation. The SD layer looks like an exclamation mark "!". As a result, a train of high dislocation density layers and voids is generated as a belt in the laser scanning direction, as shown schematically in Fig. 1. When the thermal shock wave caused by the next laser pulse propagates through part of the high dislocation density layer produced by the previous laser pulse, a crack whose initiation point is a dislocation propagates. Figure 12 shows a schematic of crack generation by the thermal shock wave. Analyses of internal crack propagation in SD were conducted later using the stress intensity factor (Ohmura et al., 2009, 2011). Figure 13 shows an inside modified layer observed with a confocal scanning infrared laser microscope (OLYMPUS OLS3000-IR) before division (Ohmura et al., 2009). It is confirmed that a train of high dislocation density layers and voids is generated as a belt, as estimated in the previous studies. It can also be understood that the internal cracks have already been generated before division.

Stealth dicing of ultra-thin silicon wafers

Here heat conduction analysis is performed for the SD method applied to a silicon wafer 50 μm thick, and the difference in the processing result depending on the depth of focus is investigated (Ohmura et al., 2008). Furthermore, the validity of the analytical result is confirmed by experiment. In the analysis, the pulse energy $E_{p0}$ is 4 μJ, the pulse width $\tau_p$ is 150 ns, and the pulse shape is Gaussian. The intensity distribution of the beam is assumed to be Gaussian. It is supposed that the depth of the focal plane $z_0$ is 30 μm, 15 μm or 0 μm. The initial temperature is 293 K. The analysis region of the silicon is a disk whose radius is 111 μm and whose thickness is 50 μm. In the numerical calculation, the inner region of radius 11 μm is divided evenly into 440 units of 25 nm width, and the outer region is divided into 622 units using a logarithmic grid. The thickness is divided evenly into 10,000 units at 5 nm increments in the depth direction. The time step is 10 ps. The boundary condition is assumed to be a thermal radiation boundary.

In the case of focal plane depth 30 μm

The time variation of the temperature distribution along the central axis is shown in Fig. 14. Figure 14(b) shows the temperature change on a two-dimensional plane of depth and time by contour lines. It can be understood from Fig. 14(a) that laser absorption begins suddenly at a depth of $z = 29$ μm at about $t = -8$ ns and the temperature rises to about 12,000 K instantaneously. The region where the temperature rises beyond 8,000 K will be instantaneously vaporized and a void is formed. The high temperature area beyond 2,000 K then expands rapidly in the surface direction until $t = 100$ ns, as shown in Fig. 14(b). The contour at the leading edge of this high temperature area is clear in this figure. Also the temperature gradient is steep, as shown in Fig. 14(a). Therefore, this high-temperature area is named a thermal shock wave as well. It is calculated that the thermal shock wave travels at a mean speed of about 300 m/s. Propagation of the thermal shock wave is shown in Fig. 15 by the time variation of the two-dimensional temperature distribution.
The contour of the high-temperature area is comparatively clear until $t = 50$ ns, because the traveling speed of the thermal shock wave is much higher than the velocity of thermal diffusion. The contour of the high temperature area becomes gradually vague at $t = 100$ ns, when the thermal shock wave propagation is finished. Because the temperature history is similar to the case of thickness 100 μm, an inside modified layer such as that in Fig. 3 is expected to be generated.

In the case of focal plane depth 15 μm

The time variation of the temperature distribution along the central axis in the case of focal plane depth 15 μm is shown in Fig. 16. It can be understood from Fig. 16(a) that laser absorption begins suddenly at a depth of $z = 14$ μm at about $t = -10$ ns and the temperature rises to about 12,000 K instantaneously. As in the case of focal plane depth 30 μm, the region where the temperature rises beyond 8,000 K will be instantaneously vaporized and a void is formed. Then the thermal shock wave propagates in the surface direction until about 25 ns. It is understood from Fig. 16(b) that laser absorption suddenly begins at the surface once the thermal shock wave reaches the surface. Though the laser power has already passed its peak and is gradually decreasing, the surface temperature rises beyond 20,000 K, which is higher than the maximum temperature reached in the interior. Although the thermal diffusion velocity is considerably slower than the thermal shock wave velocity, the internal heat is diffused to the surroundings. However, because the heat in the neighborhood of the surface is diffused only into the lower half-space, the surface temperature becomes very high and is maintained for a comparatively long time. Ablation of course occurs in such a high-temperature state. As a result, it is expected that not only is an inside modified layer generated, but also the surface is removed by ablation. Figure 17 shows, by the time variation of the two-dimensional temperature distribution, that the surface temperature rises suddenly after the thermal shock wave propagates through the inside of the silicon and reaches the surface.

In the case of focal plane depth 0 μm

When the laser is focused at the surface, as shown in Fig. 18, laser absorption begins suddenly at the surface at $t = -35$ ns, and the maximum surface temperature in the calculation reaches $6 \times 10^5$ K. It is estimated that violent ablation occurs when such an ultra-high temperature is reached. Because of the pollution of the device area by the scattering of debris, and because of the thermal effects, ablation at the surface is quite unfavorable.

Comparison of the maximum temperature distributions and the experimental results

The maximum temperature distributions at the focal plane depths of 30 μm, 15 μm and 0 μm are shown in Fig. 19 in order to compare the previous analysis results at a glance. Because the high-temperature area stays inside the wafer when $z_0$ is 30 μm, it was estimated that an inside modified layer as shown in Fig. 3 will be generated. In the case of $z_0 = 15$ μm, it was estimated that the surface is ablated although the modified layer is generated inside. In the case of $z_0 = 0$ μm, it was estimated that the surface is ablated intensely. It is concluded from the above analysis results that the laser irradiation condition for SD processing should be selected at a suitable focal plane depth so that the thermal shock wave does not reach the surface.
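The statement that the shock front outruns heat diffusion can be checked with a back-of-the-envelope estimate. The room-temperature diffusivity used below is an illustrative textbook value; at the relevant ~2,000 K the diffusivity of silicon is considerably lower, which only strengthens the conclusion.

```python
import math

kappa = 9e-5   # thermal diffusivity of Si near room temperature [m^2/s], illustrative
t = 100e-9     # duration of the shock-wave propagation window [s]

L = math.sqrt(4.0 * kappa * t)   # classical diffusion length over the window
print(f"diffusion length ~ {L * 1e6:.1f} um, effective speed ~ {L / t:.0f} m/s")
# ~6 um and ~60 m/s: several times slower than the ~300 m/s shock front, which
# is why the leading edge of the hot zone stays sharp until propagation ends.
```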
In the case of $z_0 = 30$ μm, which is shown in Fig. 20(a), it can be confirmed that voids are generated at a position slightly above the focal plane and that the high dislocation density layer is generated in the upper parts, similar to Fig. 3. In the case of $z_0 = 15$ μm, which is shown in Fig. 20(b), it is recognized that voids are generated at a position slightly above the focal plane and the high dislocation density layer is generated in the upper parts. However, it is observed from the photograph of the laser-irradiated surface that the surface is ablated and holes are opened. In the case of $z_0 = 0$ μm, which is shown in Fig. 20(c), it is seen that strong ablation occurs and debris is scattered to the surroundings. Voids and the high dislocation density layer are not recognized in the divided face; only the cross section of the hole caused by ablation is seen. These experimental results agree fairly well with the estimations based on the previous analysis results. Therefore, the validity of the analytical model, the analysis method, and the analysis results of this study is proven. The processing results can be estimated to some extent by using the analysis model and the analysis method of the present study, which is useful for optimization of the laser irradiation conditions.

Conclusion

In the stealth dicing (SD) method, the laser beam that is permeable for silicon is absorbed locally in the vicinity of the focal point, and an interior modified layer (SD layer), which consists of voids and a high dislocation density layer, is formed. In this chapter, it was clarified by our first analysis that this formation is caused by the temperature dependence of the absorption coefficient and the propagation of a thermal shock wave. Then, the SD processing results for an ultra-thin wafer of 50 μm thickness were estimated based on this analytical model and analysis method. In particular, we paid attention to the difference in the results depending on the focal plane depth. Furthermore, in order to compare with the analysis results, laser processing experiments were conducted with the same irradiation conditions as the analysis conditions. In the case of focal plane depth $z_0 = 30$ μm, the analysis result for the temperature history was similar to the case where the wafer thickness is 100 μm and the focal plane depth is 60 μm. Therefore, it was predicted that a similar inside modified layer will be generated. In the case of $z_0 = 15$ μm, it was estimated that not only is the inside modified layer generated, but also the surface is ablated: because the thermal shock wave reached the surface, remarkable laser absorption occurred at the surface. In the case of $z_0 = 0$ μm, it was estimated that the surface is ablated intensely. These estimated results agreed well with the experimental results. Therefore, the validity of the analytical model, the analysis method and the analysis results of this study was proven. As the conclusion of this chapter, the following points became clear: 1. When the analytical model and the analysis method of the present study are used, the processing mechanism can be understood well, and the processing results can be estimated to some extent. This is useful for optimization of the laser irradiation conditions. 2. There is a suitable focal plane depth in SD processing, and it is necessary to select the laser irradiation conditions so that the thermal shock wave does not reach the surface. Heat transfer is involved in numerous industrial technologies.
This interdisciplinary book comprises 16 chapters dealing with the combined action of heat transfer and concomitant processes. The five chapters of its first section discuss heat effects due to laser, ion, and plasma-solid interaction. In the eight chapters of the second section, engineering applications of heat conduction equations are considered: the curing reaction kinetics in manufacturing processes, their combination with mass transport or with ohmic and dielectric losses, and heat conduction in metallic porous media and power cables. Analyses of the safety of mine hoists under the influence of heat produced by mechanical friction, of heat transfer in boilers and internal combustion engine chambers, and of thermal management in ultra-high strength steel manufacturing are described in this section as well. The three chapters of the last section are devoted to air cooling of electronic devices.
4,459
2011-12-22T00:00:00.000
[ "Physics" ]
The prognostic effect of LINC00152 for cancer: a meta-analysis No meta-analysis has been performed to evaluate the association between LINC00152 and the survival of patients with cancers. We thus carried out this study. Online databases, including PubMed, EMBASE, and the Cochrane controlled trials register, were searched to identify relevant articles. Dichotomous data were analyzed using the odds ratio (OR) as the summary statistic. The association between LINC00152 and survival of cancer was analyzed by pooling the hazard ratio (HR) with its corresponding 95% confidence interval (CI). Nine studies with 862 patients with cancer were included in this meta-analysis. The expression of LINC00152 was not associated with the age of patients (OR = 0.79, 95% CI = 0.55-1.14) or gender (OR = 1.08, 95% CI = 0.74-1.58). However, we found significant positive associations between LINC00152 and lymph node metastasis (OR = 2.54, 95% CI = 1.54-4.18) and TNM stage (OR = 2.32, 95% CI = 1.36-3.93). Furthermore, the expression of LINC00152 was significantly associated with tumor recurrence (OR = 3.32, 95% CI = 1.98-5.57) and shorter OS (HR = 1.94, 95% CI = 1.25-3.02). In conclusion, the results of this meta-analysis suggest that LINC00152 might be a biomarker for shorter OS and tumor recurrence in cancers. INTRODUCTION Long noncoding RNAs (lncRNAs) are defined as RNA transcripts of more than 200 nucleotides in length [1]. Many studies have suggested that lncRNAs play an important role in tumorigenesis, proliferation, and metastasis in cancer development [2,3]. For example, Jiang et al. showed that lnc-epidermal growth factor receptor (EGFR) links an immunosuppressive state to cancer by promoting Treg cell differentiation [4]. Koirala et al. demonstrated that lncRNA AK023948 and DHX9 are important players in the AKT pathway, and that their upregulation may contribute to breast tumour progression [5]. Li et al. indicated that highly up-regulated in liver cancer (HULC) promotes the phosphorylation of Y-box binding protein 1 through the extracellular signal-regulated kinase pathway, which in turn regulates the interaction of YB-1 with certain oncogenic mRNAs [6]. However, the exact underlying molecular mechanisms and the clinical implications of lncRNAs are still largely unknown. Long intergenic non-coding RNA 00152 (LINC00152) has been suggested to have oncogenic impacts in several cancers [7-15]. It is located on chromosome 2p11.2 and has a transcript length of 828 nucleotides. Recently, Nötzold et al. found that cells depleted of LINC00152 arrested in prometaphase of mitosis and showed reduced HeLa cell viability [16]. In RNA affinity purification (RAP) studies, the researchers indicated that LINC00152 interacts with a network of proteins associated with the M phase of the cell cycle [16]. Chen et al. found that the expression of LINC00152 was significantly associated with tumor invasion depth, lymph node metastasis, and higher tumor-node-metastasis (TNM) stage in gastric cancer [11]. Another group also found that LINC00152 expression was correlated with higher TNM stage, larger tumor size, and lymph node metastasis in lung cancer [7]. In addition, Yu et al. suggested that increased expression of LINC00152 was significantly correlated with T stage, N stage, TNM stage, and invasion in tongue squamous cell carcinoma [4]. Therefore, we supposed that LINC00152 might influence the survival of patients with cancers.
However, no meta-analysis had been performed to evaluate the association between LINC00152 and the survival of patients with cancers. We thus carried out this study. Figure 1 shows the process of identifying relevant studies. Thirty-two studies were found in the initial search. After a detailed evaluation, 23 studies were excluded. Finally, 9 studies with 862 patients with cancer were included in this meta-analysis. Colorectal cancer, gastric cancer, renal cell carcinoma, gallbladder cancer, lung cancer, hepatocellular carcinoma, and tongue squamous cell carcinoma were investigated in the original studies. Table 1 shows the characteristics of the included studies. Only 4 studies could provide data on overall survival (OS). DISCUSSION Much evidence suggests that LINC00152 may participate in the carcinogenesis of cancers [17]. Thus, we conjectured that LINC00152 could change the prognosis of patients with cancers. To our knowledge, this is the first meta-analysis to evaluate the association between the expression of LINC00152 and clinicopathological parameters in cancers. In the present meta-analysis, we found that LINC00152 was significantly associated with lymph node metastasis and TNM stage. Chen et al. found that LINC00152 overexpression could facilitate gastric cancer cell proliferation by accelerating the cell cycle [14]. Cai et al. suggested that LINC00152 could promote cell migration, invasion and epithelial-mesenchymal transition (EMT) progression in vitro [11]. Ji et al. found that LINC00152 could promote cell proliferation in vitro and tumor growth in vivo [18]. Conversely, silencing LINC00152 can suppress cell proliferation and invasion in hepatocellular carcinoma cells [8]. The mechanistic investigation suggested that LINC00152 inhibits E-cadherin expression via interaction with EZH2 and promotes the epithelial-mesenchymal transition (EMT) phenomenon in HCC cells [8]. These data might explain why patients with a high level of LINC00152 show lymph node metastasis and higher TNM stage. Furthermore, we found significant positive associations between LINC00152 and tumor recurrence and shorter OS of patients with cancer. Thus, LINC00152 might be a potential biomarker in patients with cancer, and clinicians should pay more attention to cancer patients with high expression of LINC00152. However, Qiu et al. did not find LINC00152 to be a significant predictor of survival in colorectal cancer [16]. Thus, future studies of colorectal cancer patients are needed to settle this issue. Some studies have investigated the clinical implications of LINC00152 in cancers. Li et al. indicated that plasma levels of HULC and LINC00152 could be used to diagnose hepatocellular carcinoma [19]. Yang and colleagues suggested that serum H19 and LINC00152 might be potential biomarkers for the diagnosis of gastric cancer [20]. Yue et al. found that LINC00152 might be a prognostic indicator of oxaliplatin responsiveness in colon cancer patients [21]. In this study, we found that LINC00152 may be a biomarker of the prognosis of cancers. Thus, detection of LINC00152 might help doctors to manage their patients. Some limitations of this meta-analysis should be acknowledged. First, only 4 studies could provide survival data and only 1 study could provide disease-free survival data. The sample size in the analysis of overall survival was small, which did not provide sufficient statistical power for firm conclusions. Second, no study from other countries was included in our study.
Third, only studies indexed by the selected databases were included in the data analysis. Fourth, biomarkers are specific to different cancer types or cancer subtypes, especially for lncRNAs, because many lncRNAs function in a cell-type-specific way [18]. However, we could not perform a subgroup analysis in a specific cancer or according to ethnicity due to limited data. We also could not perform an in vitro experiment. Thus, more studies from other countries are needed to confirm the results of this meta-analysis. In conclusion, the results of this meta-analysis suggest that LINC00152 might be a biomarker for shorter OS and tumor recurrence in cancers. Publication search Online databases, including PubMed, EMBASE, and the Cochrane controlled trials register, were searched to identify relevant articles published up to May 2017 in any language. The electronic search included the terms: LINC00152 and "cancer or carcinoma or neoplasm or tumor". We also reviewed the reference lists of original reports and reviews. Country was not restricted in this search. Inclusion and exclusion criteria Included studies had to meet the following criteria: (1) the study assessed the association between LINC00152 and the clinicopathological parameters of cancers; (2) cancer was diagnosed according to histopathological evaluation. A study was excluded if: (1) it was not relevant to cancer or LINC00152; (2) it was an animal study; (3) it was a review or abstract. Data extraction and quality assessment Two authors reviewed and extracted the data from the original studies independently. The following data were extracted: the first author's name, year, gender of the patients, site of cancer, tumor stage, sample size, outcome, and co-variants. We used the Newcastle-Ottawa Scale (NOS) to assess the methodological quality of the included studies [22]. Statistical analysis Dichotomous data were analyzed using the odds ratio (OR) as the summary statistic. The association between LINC00152 and survival of cancer was analyzed by pooling the HR with its corresponding 95% CI. Heterogeneity was investigated using the chi-squared-based Q-statistic test. The random-effects model was used to analyze the pooled HRs. If the number of included studies had exceeded 10, a funnel plot would have been used to assess publication bias. All P-values were determined by a 2-sided test. All statistical analyses were conducted using RevMan 5.1 software (Nordic Cochrane Center, Copenhagen, Denmark).
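For readers who want to reproduce the pooling outside RevMan, the following sketch implements the inverse-variance random-effects (DerSimonian-Laird) combination of hazard ratios described above. The four (HR, lower CI, upper CI) tuples are hypothetical placeholders, not the study data of this meta-analysis.

```python
import math

# Hypothetical (HR, 95% CI lower, 95% CI upper) inputs for illustration only.
studies = [(1.8, 1.1, 2.9), (2.4, 1.3, 4.4), (1.5, 0.9, 2.6), (2.1, 1.2, 3.7)]

y = [math.log(hr) for hr, lo, hi in studies]                 # log hazard ratios
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for _, lo, hi in studies]
w = [1.0 / s**2 for s in se]                                 # fixed-effect weights

ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))       # Cochran's Q
df = len(y) - 1
C = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)                                # between-study variance

w_re = [1.0 / (s**2 + tau2) for s in se]                     # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_pooled = math.sqrt(1.0 / sum(w_re))

print(f"pooled HR = {math.exp(pooled):.2f}, "
      f"95% CI = {math.exp(pooled - 1.96 * se_pooled):.2f}-"
      f"{math.exp(pooled + 1.96 * se_pooled):.2f}, Q = {Q:.2f}")
```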
2,017.4
2017-08-10T00:00:00.000
[ "Biology", "Medicine" ]
Ecocultural or Biocultural? Towards Appropriate Terminologies in Biocultural Diversity Simple Summary Biocultural diversity espouses an inseparable link between biological, cultural, and linguistic diversity. Biocultural diversity is not alone in using the term 'biocultural': the term was used in biocultural studies within anthropology decades ahead of biocultural diversity. Both biocultural studies and biocultural diversity use the term 'biocultural' as an adjective to generate new terminologies, such as 'biocultural approach', with varying connotations. Such a confusing scenario might hinder theoretical advancements in biocultural diversity. Hence, I propose that proponents of biocultural diversity explore the possibilities of adapting the term 'ecoculture' from cultural studies. Perhaps using the term 'ecocultural' instead of 'biocultural' as a descriptor to coin terminologies could solve the confusions arising from the expanding usage of the term 'bioculture'. Abstract Biocultural diversity has made notable contributions that have furthered our understanding of the human culture-nature interrelationship. However, the usage of the term 'biocultural' is not unique to biocultural diversity. It was first used in biocultural studies within anthropology decades ahead of biocultural diversity. The existing literature on biocultural diversity does not acknowledge the prior existence of biocultural studies, or provide a clear demarcation between the usages of the two terms. In this article, I discuss the varying contexts in the usage of the term 'biocultural' between biocultural diversity and biocultural anthropology. While biocultural diversity deals with the linkages between biological, cultural, and linguistic diversity, biocultural studies in anthropology deal with the deterministic influence of the physical and social environment on human biology and wellbeing. In biocultural studies, 'biocultural' refers to the integration of methodically collated cultural data with biological and environmental data. 'Bio' in biocultural anthropology therefore denotes biology, unlike biocultural diversity, where it refers to biodiversity. Both biocultural studies and biocultural diversity apply 'biocultural' as a descriptor to generate overlapping terminologies such as 'biocultural approach'. Such a confusing scenario is not in the interest of biocultural diversity, as it would impede theoretical advancements. I propose that advocates of biocultural diversity explore its harmonies with ecoculturalism and the possibilities of suitably adapting the term 'ecoculture' in lieu of 'bioculture'. Using 'ecocultural' instead of 'biocultural' as a descriptor to coin terminologies could solve confusions arising from the expanding usage of the term 'bioculture'. Introduction Biocultural diversity is the diversity of life in all of its manifestations: biological, cultural, and linguistic, which are interrelated (and possibly coevolved) within a complex socio-ecological adaptive system [1] (p. 269). The origin of the concept of biocultural diversity can be traced back to 1988, when 'The Declaration of Belém' from the First International Congress of Ethnobiology recognized an 'inextricable link' between biological and cultural diversity [2]. By recognizing cultural diversity, biocultural diversity offers a better approach to understanding the interrelationships between humans and nature [3]. In recent years, there has also been an emphasis on the dynamic, reciprocal nature of the human culture-nature relationship [4].
The prominence accorded to cultural and linguistic diversities distinguishes biocultural diversity from biodiversity (or biological diversity), defined as the "variety and variability among living organisms and the ecological complexes in which they occur" [5] (see also [6,7]). Biodiversity, with its species-conservation-oriented approach, excluded local people and their interrelationship with nature. Anthropologists helped in bridging the gap between biodiversity conservation and local communities [8], while the definition of biodiversity also evolved to accommodate cultural diversity [9]. However, biodiversity falls short of the holistic approach towards the human culture-language-biodiversity complex advocated by biocultural diversity. The number of publications using the biocultural diversity framework has been growing since the 2000s [3,10]. As outlined by Luisa Maffi [3], these studies tend to have four major foci: (i) the relationship between language, traditional knowledge, and the environment; (ii) common threats to biological, cultural, and linguistic diversities; (iii) conservation and revitalization of biocultural diversity; and (iv) biocultural diversity and human rights. Maffi describes biocultural diversity as a 'multifaceted field of research' in her landmark publications that provide the theoretical underpinnings of this area of study [1,3]. Some academics (especially ethnobiologists) approach it as a conceptual framework that bridges the nature-culture divide [11]. However, biocultural diversity is not alone in using the term 'biocultural'. The term 'biocultural' was first used in biocultural studies within anthropology, whose origin can be traced to the 1960s (decades ahead of biocultural diversity) [12-14]. Biocultural studies in anthropology deal with the influence of the biological and cultural environment on human biology and wellbeing [15]. Biocultural diversity, on the other hand, deals with the linkages between biological, cultural, and linguistic diversities. Although biocultural diversity has accumulated a significant volume of literature since the 2000s [10], the current literature on biocultural diversity does not acknowledge the prior existence of biocultural studies in anthropology or provide a clear demarcation between the two concepts. Maffi's article published in the Annual Review of Anthropology in 2005 [3], a reputed journal in the field of anthropology, misses the opportunity to provide a clear demarcation between the two paradigms. Such a scenario creates confusion in the minds of young anthropologists and ethnobiologists getting acquainted with the term 'biocultural'. Clarity at the level of definitions and the usage of appropriate terminologies is essential for theoretical advancement in any field of enquiry [16]. Therefore, in this article, I discuss the different contexts in the usage of the term 'biocultural' as a descriptor in biocultural diversity and in biocultural studies within anthropology. I highlight the confusion caused by the usage of the same term in diverging contexts by both paradigms. Lastly, I conclude by proposing that the term 'ecocultural' is a better alternative to use as a descriptor in studies using the biocultural diversity framework. Biocultural Studies in Anthropology The origins of biocultural studies within anthropology can be traced to the 1930s, when anthropologists including W. Montague Cobb studied the influence of social environments on human health [17].
In the 1960s, specific research methods in biocultural studies were advanced by the International Biological Program [18]. However, specific usage of the term only appears in the 1970s [19]. There are considerable variations in the conceptualization of biocultural studies, with some anthropologists nesting it within biological anthropology and others explicitly advocating biocultural anthropology as a sub-discipline of biological anthropology [12,19,20]. Given the variations in conceptualizations, there have been calls to focus on what biocultural studies does in the contemporary world, rather than on its definition [21]. Prior to 1998, biocultural approaches in anthropology aimed to understand the deterministic pathways by which social, economic, cultural, and ecological factors influence human biology and wellbeing [12,13,15]. Biocultural studies has evolved since then to emphasize reciprocity in the human-environment relationship, with the conceptualization of the field expanded to encompass human niche construction [21]. However, the deterministic influence of biological and cultural factors on human biology and wellbeing, especially the study of human physiological and cultural adaptation to environmental conditions [20], continues to be the major focus of the field [22]. Biocultural studies within the realm of biological anthropology and associated fields see humans as 'biological, social, and cultural beings' [12], and the discipline has strong synergies with medical anthropology, ecological anthropology, and political economy [21,22]. A small number of studies also employ evolutionary theory or political economic analyses [19]. Various associations of anthropology have organized conferences on biocultural studies/anthropology, and a plethora of articles have been published in leading journals in anthropology [19,23]. 'Biocultural' in Biocultural Studies The coining of the term 'biocultural' is believed to be the result of anthropology's quest to become a holistic field [19,24]. According to Ann McElroy [12], the ideal sense of 'biocultural' in biocultural studies is the integration of methodically collated cultural data with biological and environmental data. The 'bio' in biocultural studies therefore denotes biology. There appears to be little consensus on the usage of the term within biocultural studies, or a theoretical framework that outlines the constitutive elements and processes. In their bibliometric review, Wiley and Cullin found tremendous variation in its usage [19]. In the majority of studies, the term implies the influence of the social environment on human biology. However, there were 180 terms formed using 'biocultural' as a descriptor (adjective), often with little clarity on what they intend to convey. In Table 1, I provide subjective examples of variations in the direct usage of the term 'biocultural' in biocultural studies. In these examples, 'biocultural' has been combined with terms such as 'adaptation' and 'approach' to produce varying connotations. Notable usage includes 'biocultural diversity/variations' to denote morphological variations in human populations induced by cultural practices such as intentional body modification [25].

Table 1. Examples of terms coined using 'biocultural' as descriptor in biocultural studies.
- Biocultural adaptation: Influence of environment and lifestyle on human physiology [26,27].
- Biocultural analyses/perspectives: Linking culture and biology to unravel how biological phenomena such as birth are affected by cultural interpretations and practices [28].
- Biocultural approach: Environment influencing obesity and nutritional status [29]; "humans as biological, social and cultural beings in relation to the environment" (McElroy [12], cited in Khongsdier [30]).
- Biocultural diversity/variations: Human morphological variations induced by a diverse range of intentional body modification practices [25].
- Biocultural evolution: Evolution of biological and cultural characteristics [31].
- Biocultural model: A model that could be useful in "conceptualizing the complex interaction of biological, cultural and psychosocial factors in the process of human pain perception" [32].
- Biocultural predictors: Combination of biological and cultural factors [33].
- Bio-cultural sciences: "Bio-cultural sciences highlight the notion that human behaviour is the joint and co-constructive expression of biological-genetic and cultural-societal processes and conditions." [34]
- Biocultural studies: "Questions of human biology and medical ecology that specifically include social, cultural, or behavioural variables in the research design" [12].
- Biocultures: Re-thinking of culture and history by considering their 'inextricable' relationship with biology [35]; "cultural spheres where biomedicine extends beyond the formal institutions of the clinic, the hospital, the lab, and so forth and is incorporated into broader social practices and rationalities" [36].

'Biocultural' in Biocultural Diversity: Similar Terminologies, but Confounding Usage Biocultural diversity deals with the inextricable linkages between biological, cultural, and linguistic diversities [1,3,37,38]. It thus focuses on the manifestation of these diversities, and not on the influence of the cultural and social environment on human biology advocated by biocultural studies [12]. The 'bio' in biocultural diversity therefore refers to biodiversity, and not to biology as in biocultural studies. According to Mercon et al. [39], the term 'bioculture' is employed in biocultural diversity "to emphasize tightly intertwined and co-evolving social-ecological systems, cultural dimensions and implications in such systems". A subjective scan of published academic literature shows an overlap in terminologies coined by applying the term 'biocultural' as a descriptor in both paradigms, with the context of its usage in biocultural diversity differing from that in biocultural studies in anthropology (Tables 1 and 2). A 'biocultural approach' in the latter could mean understanding the influence of environment on human health by examining parameters such as obesity and nutritional status [29], while in biocultural diversity it would mean an approach that recognizes the co-existence of biological and cultural diversity, and the linkages between them [39,40]. If biocultural conservation in biocultural diversity means to conserve biological and cultural diversity [41], in the context of biocultural studies it would mean conserving the diverse patterns of environmental influence on human wellbeing. Thus, the possibilities to generate various terminologies incorporating 'biocultural' in both biocultural studies and biocultural diversity are innumerable, leading to a confusing scenario, a complexity that would only grow from here unless resolved.
Such a confusing scenario is further exemplified by the usage of 'biocultural diversity' as such in biocultural studies to refer to morphological variations in human populations [25].

Table 2. Examples of terms coined using 'biocultural' as descriptor in biocultural diversity.
- Biocultural approach: Recognising human beings and non-humans as co-habitants of ecosystems [42,43]; "biocultural approaches are an emergent area of study that conceptualize interrelationships between cultures and the environment" [40].
- Biocultural approaches to conservation: "Conservation actions made in the service of sustaining the biophysical and sociocultural components of dynamic, interacting, and interdependent social-ecological systems" [41].
- Biocultural characteristics: Undefined [44].
- Biocultural conservation: Stemming the dual loss of biological and cultural diversity [41].
- Biocultural design: "People are creative agents with knowledge, values and skills that allow them to shape their everyday lives" [45].
- Biocultural ethics: "Recovering the vital links between biological and cultural diversity, between the habits and the habitats of the inhabitants" [46].
- Biocultural heritage: Biodiversity and culture as heritage [47].
- Biocultural homogenization: "Simultaneous and interdigitated losses of native biological and cultural diversity at local, regional, and global scales" [46].
- Biocultural importance: Biological and cultural importance of plants, animals and landscapes [48,49].
- Biocultural indicators: Foreseeable seasonal events, such as the flowering of calendar plants, that are culturally significant to local communities [50].
- Biocultural interactions: Interactions between local people and ecosystems [51].
- Biocultural landscape: Landscapes that integrate "economic, social, cultural and environmental processes in time and space" [52].
- Biocultural learning: "Learning complexity in and about nature, particularly to the dimensions and processes involved when people have nature as a workplace" [53].
- Biocultural memory: "The human memory is the result of interactions between biological and cultural traits, considered as biocultural memory" [54].
- Biocultural refugia/bio-cultural refugia: "Physical places that not only shelter farm biodiversity, but also carry knowledge and experiences about practical management of how to produce food while stewarding biodiversity and ecosystem services" [55].
- Biocultural systems: Systems moulded jointly by biological and cultural forces [38].
- Biocultures: "A bioculture is a local collection of humans, other species, and their interactions" [56].
- Collective biocultural heritage: "Knowledge, innovations and practices of indigenous and local communities which are collectively held and inextricably linked to traditional resources and territories, local economies, the diversity of genes, varieties, species and ecosystems, cultural and spiritual values, and customary laws shaped within the socio-ecological context of communities" [57].
- Indigenous biocultural knowledge: "Knowledge that encompasses people, language and culture and their relationship to the environment" [58].

Of the articles published in AAA journals during the 2000-2014 period, Wiley and Cullin found 3% to use the term 'biocultural' [19]. These usages are in a context different from that of biocultural diversity, except for those explicitly dealing with biocultural diversity.
Although the authors recognize that biocultural diversity is distinct from biocultural studies/anthropology, they call upon academics to explore ways to harmonize the two paradigms. However, the differences between biocultural studies in anthropology and biocultural diversity are too vast to reconcile. If biocultural studies in anthropology are concerned with the influence of biological and cultural factors on human biology and health, then biocultural diversity is about "the living network made up of the millions of species and animals and the thousands of human cultures and languages that have evolved on earth" [59]. The conceptualization of biocultural studies in anthropology has been expanded recently to emphasize the reciprocal relationship between humans and nature [21], marking a conscious shift from environmental determinism to environmental probabilism (see Lewthwaite [60]). However, it is undeniable that the core focus of the field has been human health and wellbeing, especially the deterministic impact of the social and physical environment on human health [19,22]. Contrarily, from its inception, biocultural diversity has recognized the ever-evolving, complex and reciprocal interaction between nature and humans, thus assuming a probabilistic stance [1,11]. Furthermore, biocultural diversity does not focus on the influence of the physical and biological environments on human biology and health. The evident divergence in the conceptualization of 'biocultural', and its confounding usages in biocultural studies and biocultural diversity, is something academia can ill afford. In an era where keywords increasingly play an important role in access to knowledge [61], young anthropologists, ethnobiologists, linguists, or geographers undertaking a literature review would invariably be confused by the diverse usages. Of the 199 articles tagged with 'biocultural' in the Scopus database for the year 2020 (Supplementary Materials File S1), the majority (n = 122) used the biocultural diversity framework, while the remainder were from biocultural studies. Although this indicates the increasing popularity and acceptability of biocultural diversity among researchers probing the human-nature nexus, it also points to the confusing scenario in the usage of 'bioculture/biocultural'. Indeed, as the younger of the two paradigms, the onus is on biocultural diversity to differentiate itself from biocultural studies. A radical step would be to debate the possibility of re-branding biocultural diversity as ecocultural diversity. However, this would require serious effort in building consensus among advocates of biocultural diversity. A more acceptable middle path could be to retain biocultural diversity as such at the conceptual level but to use 'ecocultural' instead of 'biocultural' as descriptor. Thus, the term 'biocultural approach' in biocultural diversity would become 'ecocultural approach' instead, 'biocultural revitalization' would become 'ecocultural revitalization', and so on [62,63]. The Need for Considering 'Ecoculture' in Biocultural Diversity The term 'ecoculture' is popular in cultural studies, environmental communication, and psychology, where it signifies the reciprocal and inseparable link between ecology and culture [62,64-66], a paradigm referred to as 'ecoculturalism' [67]. An ecoculturalist perspective advocates that sociocultural identity is inseparable from ecology [68].
It also recognizes that local knowledge and memories of the dynamic link between the non-human component of landscapes and human culture shape ecocultural identities and promote resilience [69-71]. In tourism, ecoculturalism offers an opportunity to appreciate both the cultural and ecological aspects of destinations [72]. Like biocultural diversity, ecoculturalism also recognizes that ecological crises lead to cultural crises [70]. Meanwhile, in cultural studies it has been hotly debated whether the term ecoculturalism should be abandoned, as the field of cultural studies addresses the nature-culture dualism adequately [67,73]. Bohm et al. [74] refer to those communities living an ecocultural lifestyle that recognizes, demonstrates, and nurtures the deep linkages between social and ecological environments as 'ecocultures', an application that is similar to 'biocultures' in biocultural diversity [56] but different from 'biocultures' as used in biocultural studies [35,36,75]. Ecocultures or ecocultural communities appreciate the reciprocal relationship between nature and culture, the need to nurture ecosystem health, the recognition of all lifeforms as sentient beings, and the importance of a healthy nature-culture relationship [76-79]. These communities are characterized by ethical principles that prioritize the nurturing of sociological and ecological wellbeing, recognize that wellbeing does not consist of economic wellbeing alone, consider humans as a part of nature, and strive to conserve and sustain natural, human and social capitals [74,80]. The usages of the term 'ecocultural' in cultural studies, psychology and elsewhere, and of 'biocultural' in biocultural diversity, are remarkably similar, as both recognize the inextricable link between ecosystems and culture [37]. Given that the term 'biocultural' was used in a different context in biocultural studies within anthropology long before the birth of biocultural diversity, it is in the interest of the latter to suitably adapt the term 'ecoculture' to distinguish its program from that of biocultural studies in anthropology. Conclusions Biocultural studies within anthropology originated decades ahead of biocultural diversity. Modern conceptualizations of biocultural studies have expanded its scope to include reciprocity in human-environment relationships. Yet the deterministic influence of the physical and social environment on human health and wellbeing continues to be the major focus of the field. 'Bio' in biocultural studies refers to human biology, while 'biocultural' refers to the integration of methodically collated cultural data with biological and environmental data. In biocultural diversity, 'bio' refers to biodiversity, and 'biocultural' refers to the co-evolving biological and cultural diversity and the linkages between them. Biocultural diversity is a well-defined paradigm with a robust theoretical framework. From its inception, the paradigm has espoused a probabilistic relationship between biological, linguistic, and cultural diversity. However, the usage of 'biocultural' in a context differing from that of biocultural studies has led to a confusing scenario, with overlapping terminologies such as 'biocultural approach' coined in both biocultural studies and biocultural diversity with varying connotations. The scenario could become more confounding in the future, with the emergence of the usage of 'biocultural diversity' as such in biocultural studies in a context other than that of biocultural diversity.
Being the younger of the two paradigms, the onus is on biocultural diversity to demarcate itself from biocultural studies and steer clear of confusing terminologies. I propose that advocates of biocultural diversity explore its harmonies with ecoculturalism and the possibility of suitably adapting the term 'ecoculture' in lieu of 'biocultural'. Using 'ecocultural' instead of 'biocultural' as a descriptor when coining terminology could resolve much of the confusion arising from the expanding usage of the term 'biocultural'.
4,699.8
2022-01-28T00:00:00.000
[ "Environmental Science", "Linguistics", "Biology" ]
Solitons in a cavity for the Einstein-SU(2) Non-linear Sigma Model and Skyrme model In this work, taking advantage of the Generalized Hedgehog Ansatz, we construct new self-gravitating solitons in a cavity with mirror-like boundary conditions for the SU(2) Non-linear Sigma Model and Skyrme model. For spherically symmetric spacetimes, we are able to reduce the system to three independent equations that are numerically integrated. There are two branches of well-behaved solutions. The first branch is defined for arbitrary values of the Skyrme coupling and therefore also leads to a gravitating soliton in the Non-linear Sigma Model, while the second branch exists only for non-vanishing Skyrme coupling. The solutions are quasi-static and in the first branch are characterized by two integration constants that correspond to the frequency of the phase of the Skyrme field and the value of the Skyrme profile at the origin, while in the second branch the latter is the unique parameter characterizing the solutions. These parameters determine the size of the cavity, the redshift at the boundary of the cavity, the energy of the scalar field and the charge associated with a U(1) global symmetry. We also show that within this ansatz, assuming analyticity of the matter fields, there are no spherically symmetric black hole solutions.

I. INTRODUCTION

Non-linear Sigma Models appear in many contexts, for example to describe the dynamics of Goldstone bosons [1], in condensed matter systems [2], in supergravity [3], as well as being the building blocks of classical string theory. In the case of light mesons, it can be shown that the low energy dynamics can be correctly described by a Non-linear Sigma Model for SU(2). In such low energy processes, the mesons can be seen as Goldstone bosons. In flat spacetime, the inclusion of the Skyrme term allows one to construct static regular solitons with finite energy, which describe baryons [4]. In the latter scenario the ansatz for the SU(2) group element is given by $U_{\text{sol}} = \exp\!\big(iF(r)\,\vec{\tau}\cdot\hat{x}\big)$, with $\vec{\tau}$ the SU(2) generators. A more general ansatz is the Generalized Hedgehog Ansatz, which includes $U_{\text{sol}}$ as a particular case and is defined by
$$U = \cos\alpha(x^\mu)\,\mathbf{1} + \sin\alpha(x^\mu)\,n^i t_i\,,$$
where $\mathbf{1}$ is the $2\times 2$ identity matrix and the generalized radial unit vector has components
$$n^1 = \cos\Theta(x^\mu)\sin F(x^\mu)\,,\qquad n^2 = \sin\Theta(x^\mu)\sin F(x^\mu)\,,\qquad n^3 = \cos F(x^\mu)\,.$$
Here α, Θ and F are arbitrary functions of the space-time coordinates. This ansatz was originally introduced in the context of the Gribov problem in regions with non-trivial topology [5], and has been shown to provide a very fruitful arena for constructing new solutions of the theory. In reference [6], the compatibility of this ansatz with the Einstein-Skyrme theory was thoroughly explored considering a space-time which is a warped product of a two-dimensional space-time with a Euclidean constant curvature manifold. Also, within this ansatz, a novel non-linear superposition law was found in [7] for the Skyrme theory, which was later extended to the curved geometry of AdS$_2\times S^2$ in reference [8]. Moreover, the ansatz allows for exact solitons with a kink profile [9]. Asymptotically AdS wormholes and bouncing cosmologies with self-gravitating Skyrmions were constructed in [10], as well as other time-dependent cosmological solutions with non-vanishing topological charge [11].
Also within the context of the generalized hedgehog ansatz, for the SU(2) Non-linear Sigma Model, topologically non-trivial gravitating solutions were constructed in [12] which cannot decay into the trivial vacuum due to topological obstructions and, more recently, planar asymptotically AdS hairy black hole solutions were found in [13]. In this paper we will explore a new family of solutions within the Generalized Hedgehog Ansatz which describe spherically symmetric, quasi-static configurations in a cavity. By imposing mirror-like boundary conditions for the matter field we numerically construct new self-gravitating solitons for the SU(2) Skyrme model and Non-linear Sigma Model. In Section II we introduce the Generalized Hedgehog Ansatz. In Section III we reduce the system to three non-linear equations and argue that, in order to have configurations with finite energy, it is necessary to introduce a mirror at a finite proper distance from the origin. Section IV is devoted to the numerical integration of the system, which leads to two well-behaved branches. The first branch is well behaved for arbitrary values of the coupling constant of the Skyrme term λ, while the second leads to well-behaved solutions only for non-vanishing λ. Section V contains the conclusions and further comments, as well as the proof that, within this ansatz, there are no black holes supported by an analytic Skyrme field.

II. THE SU(2) EINSTEIN-SKYRME AND EINSTEIN-NONLINEAR SIGMA MODEL

In this paper we will be concerned with the gravitating Einstein-Skyrme model as well as with the Einstein-Non-linear Sigma Model systems. The action is given by
$$I[g,U] = \int d^4x\,\sqrt{-g}\left[\frac{R}{2\kappa} + \frac{K}{4}\,\mathrm{Tr}\!\left(A^\mu A_\mu + \frac{\lambda}{8}\,F_{\mu\nu}F^{\mu\nu}\right)\right],\qquad A_\mu = U^{-1}\nabla_\mu U\,,\quad F_{\mu\nu} = [A_\mu, A_\nu]\,,$$
where R is the Ricci scalar. Here U is a scalar field valued in SU(2) and therefore $A_\mu = A^i_\mu t_i$, with $t_i = -i\sigma_i$ the SU(2) generators, $\sigma_i$ being the Pauli matrices. We work in the mostly plus signature; Greek and Latin indices run over spacetime and the algebra, respectively. Hereafter, without losing generality, we set K = 1. The field equations for this theory are the Einstein equations, with the energy-momentum tensor of the Skyrme field, which satisfies the dominant energy condition [14], together with the Skyrme field equations. We will consider the generalized hedgehog ansatz (2) and (3) with $F(x^\mu) = \frac{\pi}{2}$. The functions α and Θ of the ansatz (2) and (3) are scalar functions: α describes the energy profile of the configuration while Θ describes its orientation in isospin space. One can check that the above ansatz has vanishing baryon charge; thus we are within the pionic sector. The group manifold of SU(2) is the three-sphere $S^3$, and our ansatz turns on the field along the $S^2\subset S^3$ submanifold. The advantage of the Generalized Hedgehog Ansatz is given by the fact that the Skyrme equations reduce to a single equation provided the constraints (7)-(9) are satisfied [6]. Even though these constraints may seem too restrictive, we will show below that they are compatible with the existence of quasi-static solitonic solutions in a cavity. With this, the Einstein and Skyrme equations reduce to the system (10)-(12). These equations can also be obtained from an effective action $I_{\text{eff}}$, provided the constraints (7)-(9) are fulfilled. The Einstein equations (10) and the Skyrme equation (12) are obtained from the variation of $I_{\text{eff}}$ with respect to the metric and the scalar α, respectively, and the equation for Θ is trivially satisfied after imposing the constraints (7)-(9). The effective action, as well as the constraints, are invariant under the global transformation $\Theta \to \Theta + \epsilon$, where $\epsilon$ is a constant parameter.
The symmetry transformation $\delta\Theta = \epsilon$ allows one to construct a locally conserved current which, when integrated within the cavity, leads to a finite conserved charge.

III. THE SYSTEM AND ITS FINITE ENERGY SOLUTIONS

We consider a static spherically symmetric space-time metric and the following dependence for the matter fields: $\alpha = \alpha(r)$ and $\Theta = \omega t$, where ω is a frequency. This leads to a time-independent energy-momentum tensor, and therefore the whole configuration is quasi-static¹. For this ansatz, the constraint equations (7)-(9) are automatically fulfilled and the Einstein-Skyrme system reduces to three independent equations. We work with the equations $E_{tt}$ and $E_{rr}$ (with $E_{\mu\nu}$ defined in (10)), as well as (12). Introducing for simplicity $u(r) = \sin\alpha(r)$, and setting $2\kappa = 1$, one obtains the non-linear system (17)-(19).

¹ The kinetic term for the Non-linear Sigma Model, $(\partial\alpha)^2 + \sin^2\!\alpha\,(\partial\Theta)^2$, is mapped to $\left(1 + |\Phi|^2/4\right)^{-2}|\partial\Phi|^2$ with $\Phi = \rho\exp(i\chi)$ via the transformation $\Theta = \chi$ and $\alpha = \arccos\frac{4-\rho^2}{4+\rho^2}$. This makes explicit the fact that Θ is a phase that, according to our ansatz, rotates in time at a frequency ω with respect to the coordinate time t.

It is worth pointing out that the parameter ω can be absorbed in the field equations by rescaling the radial coordinate as well as the Skyrme coupling in the form $r \to \bar{r} = \omega r$, $\lambda \to \bar{\lambda} = \omega^2\lambda$. While in the Non-linear Sigma Model this transformation reduces the number of independent parameters to be provided before numerical integration, in the presence of the Skyrme term the freedom in ω is mapped to the freedom to choose the value of $\bar{\lambda}$. Assuming that α goes to zero as r goes to infinity on an asymptotically flat space-time, the profile equation linearizes. Consistently, the linearized equation is equivalent to the equation for the radial profile of a massless scalar field in Minkowski space-time, and admits the asymptotic behavior $\alpha(r) \to \cos(\omega r)/r$ as r goes to infinity. It is a straightforward computation to show that this asymptotic behavior is not compatible with having a finite mass. If we want to construct gravitating solitons in this sector of the Generalized Hedgehog Ansatz for the Skyrme model, it turns out to be necessary to enclose the system in a cavity. A similar situation occurs for the Einstein-Maxwell system coupled to a massless charged scalar (see e.g. appendix A of [16]). The asymptotic behavior of the scalar field is not compatible with the requirement of asymptotic flatness and finite mass for solitons and black holes, and one is therefore forced to enclose the system in a cavity. This system has been particularly fruitful for the study of the non-linear evolution of the superradiant instability due to the electric charge of a scalar field, including a mass term [17]-[18] as well as self-interaction [19], leading to the formation of hairy black holes [20]. The system in a cavity allows for the existence of solitons as well as black holes, and in the previous references dynamical evolutions in both directions have been observed in different regimes².

IV. THE NEW SOLITONS

The requirement of having a regular center at r = 0 leads to two soliton branches. The first corresponds to a branch analytic in the Skyrme coupling, whose data at the origin are given by the expansion (21). The second branch, non-analytic in λ, is given by the expansion (22). The latter solution is intrinsic to the presence of the Skyrme term. These two branches define the data at the origin which, after numerical integration, determine the data at the mirror located at $r = r_m$. Note that for the first branch, for a given value of the Skyrme coupling, the free parameters are $f_0$, $u_0$ and ω.
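Since the reduced system (17)-(19) is not reproduced in the text, a toy integration in the flat-space limit discussed above can still illustrate the numerical procedure of integrating from a small regulator outwards and placing the mirror at the first zero of the profile. The sketch below is assumption-laden: the linearized equation $\alpha'' + \frac{2}{r}\alpha' + \omega^2\alpha = 0$ follows from the massless-scalar statement above, while the regulator, tolerances, and initial data (standing in for the series expansions (21) and (22)) are illustrative choices, not the full self-gravitating system.

```python
# Minimal sketch: integrate the flat-space profile equation from a small
# regulator outwards and locate the mirror at the first zero of alpha(r).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(r, y, omega):
    alpha, dalpha = y
    # alpha'' = -(2/r) alpha' - omega^2 alpha  (linearized, flat-space limit)
    return [dalpha, -2.0 / r * dalpha - omega**2 * alpha]

def mirror_radius(u0, omega, eps=1e-6, r_max=50.0):
    # Near-origin data alpha(eps) ~ u0, alpha'(eps) ~ 0 stand in for the
    # expansions (21)-(22) of the full system.
    hit_zero = lambda r, y, omega: y[0]
    hit_zero.terminal, hit_zero.direction = True, -1
    sol = solve_ivp(rhs, (eps, r_max), [u0, 0.0], args=(omega,),
                    events=hit_zero, rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0] if sol.t_events[0].size else None

for omega in (0.5, 1.0, 2.0):
    print(omega, mirror_radius(0.5, omega))  # first zero near pi/omega
```

In this linear limit the first zero sits at $r_m \approx \pi/\omega$ independently of $u_0$, which already reflects the $\omega^{-1}$ scaling of the mirror radius discussed below; any $u_0$-dependence of $r_m$ arises only through the non-linear backreaction of the full system.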
Normalizing the time coordinate t to coincide with the proper time of a geodesic observer located at the origin sets $f_0 = 1$. This region is shared by all the configurations and one can therefore compare their physical parameters in a consistent manner. We therefore fix $f_0 = 1$ in this branch. For the second branch, the value of the $-g_{tt}$ component of the metric at the origin is not a free parameter any more and is fixed by $f^2(0) = \lambda u_0^2\omega^2$. We can still normalize the time coordinate to coincide with the proper time of a geodesic observer located at the origin by introducing the scaling $t \to t/\sqrt{\lambda u_0^2\omega^2}$. In this manner, the parameter ω is absorbed from all the functions and the rotation of the phase is locked in terms of the Skyrme coupling and the value of the scalar at the origin, as $\Theta = \omega t \to \Theta = t/\sqrt{\lambda u_0^2}$. Equivalently, this is accomplished if we directly set $\omega = 1/\sqrt{\lambda u_0^2}$ for the integration of this branch. In this manner, for a given value of the Skyrme coupling, $u_0$ is the unique parameter characterizing the solutions in the second branch. For the numerical integration we proceed as follows: we fix the coordinate t to be the proper time of an observer at the origin, which leaves us with two free parameters (ω and $u_0$) for the first branch and one ($u_0$) for the second. Then, for both branches, we integrate the system (17)-(19) from a regulator $\sim 0$ outwards, using the initial conditions for the radial integration that come from the expansions (21) and (22). Below, we present the results of the integration for each branch.

A. Branch 1: Analytic in λ

For the first branch, the free parameters are $u_0$ and ω. Figure 1 shows the functions integrated from the system for four different combinations of the frequency and the strength of the Skyrme field at the origin. The mirror is located at the first zero of the black curve, which represents the field $u(r) = \sin\alpha(r)$. The dependence of the radius of the mirror $r = r_m$ as a function of $u_0$ and the frequency is depicted in Figure 2. The radius of the mirror is an increasing function of the value of the field at the origin and increases with $\omega^{-1}$. The latter is expected from the asymptotic behavior, since the periodicity of the zeros of the field $\alpha(r) \sim \cos(\omega r)/r$ is locked in terms of the time periodicity of the phase $\Theta = \omega t$. Figure 2 also shows that one could locate the mirror at an arbitrarily large proper distance from the origin as $u_0$ approaches 1; notwithstanding, as seen in the left panel of Figure 3, as $u_0 \to 1$ the energy and the charge diverge, as expected from the asymptotic analysis. Therefore only mirrors located at a finite proper distance from the origin are compatible with having finite energy and charge. As can be seen from the right panel of Figure 3, for all the solitons obtained in this branch the U(1) charge is larger than the energy of the configuration.

B. Branch 2: Non-analytic in λ

The upper left panel of Figure 5 shows the behavior of the radius of the mirror as a function of the amplitude of the Skyrme field at the origin. Again, the radius diverges as $u_0$ approaches 1, but as shown in the upper right panel of Figure 5 the mass and charge would diverge in that case. For small values of the mass and charge the curves seem to overlap. The lower panel of Figure 5 shows that there is indeed a critical value of $u_0$ above which the charge surpasses the value of the mass, while below this critical value the mass is larger than the charge. In that figure we have included the curve Q = M only for reference.
V. CONCLUSIONS AND FURTHER COMMENTS

In this paper we have constructed new solutions of the Einstein-Skyrme model for the SU(2) group. We make use of the Generalized Hedgehog Ansatz, turning on the fields along the $S^2\subset S^3$ submanifold. In the absence of the Skyrme term the system effectively reduces to a Non-linear Sigma Model on $S^2$. A cavity has been included, located at the first zero of the Skyrme profile, and we studied the behavior of the mass and U(1) charge as a function of the location of the boundary. The conserved charges for the different cases can be compared since all these configurations share the region located at the origin, which allows one to define a common normalization for the globally timelike Killing vector $\partial_t$. The regularity of the solutions at the origin implies the existence of two branches of solutions; while the first branch exists for any value of the Skyrme coupling, the existence of the second branch is intrinsic to the presence of the term introduced by Skyrme to stabilize the solitons. After normalizing the time coordinate so that it coincides with the proper time of a geodesic observer located at the origin, one is left with solutions parameterized by two constants ($u_0$, ω) in the first branch and by a single constant $u_0$ in the second branch. In the former case we observe that the charge is always greater than the mass, while in the latter the charge is larger than the mass only above a critical value of the mass, which induces a lower critical value for the amplitude of the Skyrme field at the origin. One might be tempted to construct black holes in a cavity with a non-vanishing Skyrme profile in this system. Assuming the existence of a regular horizon located at $r = r_+$, as well as assuming analyticity of u(r) at the horizon, one can show that the field equations have two branches. In the first branch u(r) = 0, which implies that U equals the identity of SU(2), while the second branch is not consistent with the structure of an event horizon. This shows that, within the ansatz considered here, there are no non-trivial black hole solutions. Therefore, the boson stars constructed in this work cannot decay into a hairy black hole with the same symmetries, because such a black hole does not exist³. It is interesting to note that the author of reference [23] constructed boson star solutions in the Non-linear Sigma Model case by adding a designed self-interacting potential to (13) without further constraint⁴. The presence of the self-interaction allows one to construct configurations of finite mass even when the boundary of the cavity is located at an infinite proper distance from the origin. It would be interesting to include the self-interaction also in the case of a finite cavity.
4,089.4
2017-08-23T00:00:00.000
[ "Physics" ]
Effects of Anticoagulants and Immune Agents on Pregnancy Outcomes and Offspring Safety in Frozen-Thawed Embryo Transfer Cycles—A Retrospective Cohort Study The application of anticoagulants and immune agents in assisted reproductive technology has been in a chaotic state, and no clear conclusion has been reached regarding the effectiveness and safety of this treatment. We aimed to explore the potential association between adjuvant medication and pregnancy outcomes and offspring safety in a retrospective cohort study including 8,873 frozen-thawed embryo transfer cycles. The included cycles were divided into three groups according to the drugs used, namely, the routine treatment group (without anticoagulant or immune agents), the anticoagulant agent group, and the immunotherapy group. Among normal ovulatory patients, those who used immune agents had a 1.4-fold increased risk of miscarriage (≤13 weeks) but only a 0.8-fold chance of birth (≥28 weeks) compared with the routine treatment group. Among patients with more than one embryo transferred, those who used anticoagulant agents showed a 1.2-fold higher risk of multiple birth than those undergoing routine treatment. Among patients without pregnancy complications, anticoagulant treatment was associated with a 2.1-fold increased risk of congenital anomalies. Among young patients (<26 years) with a singleton pregnancy, the neonatal birth weight of the immunotherapy group and the anticoagulant treatment group was 305.4 g and 175.9 g heavier, respectively, than that of the routine treatment group. In conclusion, adjuvant anticoagulant or immune agent treatment in assisted reproductive technology should be used under strict supervision, and the principle of individualized treatment should be followed.

INTRODUCTION

With the rapid and dramatic development of assisted reproductive technology (ART) (1,2), the demands of infertile couples on ART have expanded from simply achieving pregnancy to improving the ongoing pregnancy rate and live birth rate and reducing the miscarriage rate per embryo transfer cycle, reflecting an enthusiastic expectation of a high pregnancy rate with a low risk of adverse events. In particular, the etiologies of recurrent miscarriage, repeated implantation failure, and long-term infertility of unknown cause remain unclear, resulting in a lack of standardized investigation and management. Therefore, numerous adjuvant therapies have been introduced, such as the application of anticoagulants, immunosuppressants, and immunomodulators (3,4). Due to the lack of strict supervision, the clinical application of such medications lacks standardization, bringing a potential risk of drug misuse. On this issue, some experts have suggested that overtreatment should be avoided when prescribing individualized therapy according to couples' preferences (5). The application of anticoagulants and immune agents in ART has been in a chaotic state (5)(6)(7), and no clear conclusion has been reached. Frozen-thawed embryo transfer (FET) cycles are ideal models for investigating the independent effect of adjuvant drugs, since the confounding effect of ovarian stimulation is removed. Thus, we aimed to explore the effectiveness and safety of the adjuvant use of anticoagulants and immune agents in this retrospective cohort study of FET cycles.

MATERIALS AND METHODS

This retrospective cohort study was conducted in the Reproductive Medicine Center of the Second Hospital of Hebei Medical University, a tertiary hospital.
A total of 12,053 FET cycles from January 1, 2017 to May 1, 2021 were reviewed for eligibility. Women aged 20-49 years who underwent FET were included in this study. Subjects who met any of the following criteria were excluded: (a) thin endometrium (<7 mm, measured at least three times) (8); (b) uterine malformation; (c) preimplantation genetic testing (PGT); (d) missing essential data and information; and (e) chromosome polymorphism. After excluding 3,180 subjects, a total of 8,873 cycles (resulting in 9,918 newborns) with complete data were included in the study (Figure 1). The study protocol was approved by the Ethics Committee of the Second Hospital of Hebei Medical University. The Second Hospital of Hebei Medical University provided administrative permission for the research team to access and use the data included in this research.

Hormone Replacement Therapy

The transfer of thawed embryos was carried out when the endometrial thickness reached 8 mm after a step-up regimen for endometrial preparation. Estradiol valerate (Progynova®, Bayer) was administered orally at 6-8 mg/day from day 2 of the menstrual cycle, followed by vaginal administration of micronized progesterone (Utrogestan, Besins International, France) 400 mg BID or combined administration of oral dydrogesterone (Duphaston®; Abbott Biologicals, Netherlands) 10 mg BID and progesterone/oil injection (Progesterone Injection 20 mg/ml, Zhejiang Xianju Pharmaceutical Co., Ltd., China) 40-60 mg QD.

Natural Cycle

Serial ultrasound scans were performed every 2 days from menstrual cycle day 10-12. Once the dominant follicle reached 16-20 mm in diameter, HCG was injected to trigger ovulation, and progesterone/oil injection or oral dydrogesterone was prescribed at 40 mg QD or 10 mg BID, respectively, as luteal phase support.

Ovulation Induction Cycle

Letrozole 2.5-5 mg QD was started on menstrual cycle day 2-3, followed by human menopausal gonadotropin (HMG) injections for ovulation induction. The starting dose of HMG (37 or 75 IU) was determined by follicular development. When the dominant follicle reached 18 mm in diameter and the endometrial thickness reached 8 mm, HCG was administered at 10,000 IU to trigger ovulation. The transfer of frozen-thawed cleavage-stage embryos was performed 4 days later and luteal phase support was given as described above.

Gonadotropin-Releasing Hormone Agonist Downregulation Combined With Hormone Replacement Therapy

Patients received a single injection of 3.75 mg of long-acting triptorelin acetate on menstrual cycle day 2 after an ultrasound scan confirmed ovarian quiescence and the presence of a thin endometrium (<5 mm). After 28 to 30 days, sequential estrogen and progesterone were prescribed as in the HRT cycles.

Adjuvant Medication

Patients who used aspirin or low-molecular-weight heparin were allocated to the anticoagulant group, while those who used prednisone, hydroxychloroquine, or cyclosporine (whether in combination with anticoagulants or not) were allocated to the immunotherapy group. The remaining patients, who received neither anticoagulant nor immune agent treatment, were allocated to the routine treatment group; a sketch of this allocation rule is given below.

Anticoagulants

Aspirin: Aspirin Enteric-Coated Tablets (Bayer Health Care Manufacturing S.r.l.), 50-75 mg QD, were started from the day of estradiol valerate or progesterone initiation and continued until 10-12 weeks of pregnancy.
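For concreteness, the three-way allocation rule just described can be expressed in a few lines. The sketch below is illustrative only: the indicator-column names are hypothetical, but the precedence (immune agents take priority, with or without concomitant anticoagulants) follows the text.

```python
# Illustrative allocation of FET cycles into treatment groups; the column
# names are hypothetical placeholders for per-cycle drug-exposure flags.
import pandas as pd

IMMUNE = ["prednisone", "hydroxychloroquine", "cyclosporine"]
ANTICOAG = ["aspirin", "lmwh"]

def allocate(row: pd.Series) -> str:
    if row[IMMUNE].any():        # immune agents, with or without anticoagulants
        return "immunotherapy"
    if row[ANTICOAG].any():      # aspirin or low-molecular-weight heparin only
        return "anticoagulant"
    return "routine"             # neither class of adjuvant medication

cycles = pd.DataFrame({
    "aspirin": [1, 0, 0, 1], "lmwh": [0, 0, 0, 1],
    "prednisone": [0, 0, 1, 0], "hydroxychloroquine": [0, 0, 0, 0],
    "cyclosporine": [0, 0, 0, 1],
})
cycles["group"] = cycles.apply(allocate, axis=1)
print(cycles["group"].tolist())
# ['anticoagulant', 'routine', 'immunotherapy', 'immunotherapy']
```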
There were 85 cases of congenital anomalies involving six major systems; a further 7 cases were found to have chromosomal abnormalities or abnormal nuchal translucency, and 4 cases were without a clear description. Detailed information is shown in Table S1 in the Supplementary Material.

Statistical Analysis

Continuous variables are presented as median (interquartile range) or mean ± standard deviation (SD) according to the normality of their distribution. The Kruskal-Wallis test was used to compare continuous variables among the three groups. Categorical variables are presented as count (percentage) and were compared using the chi-square test. Logistic regression analysis and stratified analysis were used to explore the associations between adjuvant medication and miscarriage, birth, multiple birth, congenital anomaly, and birth weight, with a generalized estimating equation (GEE) model used to handle repeat cycles and data from twins. Two regression models were applied: model 1 adjusted for the baseline characteristics of the study subjects, while model 2 additionally adjusted for other confounding factors. Confounders were selected on the basis of their associations with the outcomes of interest or a change in the effect estimate of more than 10%. All analyses were performed using R 3.6.3 (http://www.r-project.org) and EmpowerStats (www.empowerstats.net, X&Y Solutions Inc., Boston, MA, USA), and a two-sided p-value of <0.05 was considered to indicate statistical significance.

Baseline Characteristics, Laboratory Data, Pregnancy, and Neonatal Outcomes of Study Subjects

As shown in Table 1, 8,873 FET cycles were included in this retrospective study, among which 4,253 cycles were allocated to the routine treatment group, 3,698 cycles to the anticoagulant group, and 922 to the immunotherapy group. In terms of laboratory variables, the routine treatment group had a higher proportion of cleavage-stage embryo transfers. However, there was no significant difference in gestational weeks at birth, pregnancy location, or pregnancy complications among the three groups. Neonatal outcomes are shown in Table 2, covering 9,918 newborns: 4,758 newborns in the routine treatment group, 4,164 in the anticoagulant treatment group, and 996 in the immunotherapy group. No significant differences were found in congenital anomaly or gender, while the immunotherapy group had a greater neonatal weight (3,031.3 ± 684.2 g vs. 2,960.9 ± 692.7 g vs. 2,965.5 ± 678.4 g, p = 0.048).

Adjuvant Medications Were Associated With Inferior Pregnancy Outcomes by Multivariate Regression Analysis With Stratification

Multivariate regression analysis with stratification was used to investigate the effectiveness of adjuvant medication in improving pregnancy outcomes; a sketch of the adjusted model is given below. After adjusting for the age of the couples, BMI, infertility duration, the number of transferred embryos, endometrial echogenicity, FET protocol, number of previous miscarriages, and cycle number, normal ovulatory patients undergoing immunotherapy demonstrated a 40% (OR = 1.4, 95% CI: 1.0, 1.8) higher risk of miscarriage (Table 3) and a 20% (OR = 0.8, 95% CI: 0.6, 1.0) lower probability of birth (Table 3) compared with those without adjuvant medication.
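As a hedged sketch of the adjusted analysis described above, the snippet below fits a logistic model with a generalized estimating equation to account for repeated cycles within patients, then exponentiates coefficients into odds ratios with 95% confidence intervals. The variable names and synthetic data are placeholders, only a subset of the adjustment covariates is included, and the study's actual analysis was run in R/EmpowerStats rather than Python.

```python
# Illustrative GEE logistic model on synthetic placeholder data (not study data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "patient_id": rng.integers(0, 300, n),   # clusters repeat cycles per patient
    "group": rng.choice(["routine", "anticoagulant", "immunotherapy"], n),
    "female_age": rng.normal(32, 4, n),
    "bmi": rng.normal(23, 3, n),
    "n_embryos": rng.integers(1, 3, n),
    "miscarriage": rng.integers(0, 2, n),    # binary outcome
})

model = smf.gee(
    "miscarriage ~ C(group, Treatment('routine')) + female_age + bmi + n_embryos",
    groups="patient_id", data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
res = model.fit()
odds_ratios = np.exp(res.params)             # ORs vs. the routine group
ci = np.exp(res.conf_int())                  # 95% confidence intervals
print(pd.concat([odds_ratios, ci], axis=1))
```

For the birth-weight outcome described further below, the same clustering would be retained but with a Gaussian family, whose coefficients are read directly as adjusted β values in grams.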
Moreover, patients who had more than one embryo transferred and received anticoagulant treatment showed an increased risk of multiple birth (OR = 1.2, 95% CI: 1.0, 1.4) after controlling for confounding factors including the age of the couples, BMI, infertility duration, endometrial thickness, cycle number, and the number of previous miscarriages (Table 4).

Adjuvant Medications Significantly Impact Offspring Safety by Multivariate Regression Analysis With Stratification

After controlling for gestational weeks at birth, multiple birth, age of the couples, BMI, developmental stage of the transferred embryos, infertility type, the number of embryos transferred, and cycle number, neonates of patients without pregnancy complications but undergoing anticoagulant therapy showed an increased risk of congenital anomalies (adjusted OR = 2.1, 95% CI: 1.0, 4.5) (Table 5). However, in patients with pregnancy complications, the risk of congenital anomaly was comparable among the three groups. Given the significant influence of maternal age on neonatal birth weight, stratification by female age was performed when investigating the association between adjuvant medication use and neonatal birth weight. The results showed that, among young patients (<26 years) (23) with a singleton pregnancy, the neonatal birth weight of the immunotherapy group was 305.4 g heavier than that of the routine treatment group (adjusted β = 305.4; 95% CI: 55.2, 555.5), while that of the anticoagulant treatment group was 175.9 g heavier than that of the routine treatment group (adjusted β = 175.9, 95% CI: 68.1, 283.7) after adjusting for gestational weeks at birth, male age, BMI, developmental stage of the transferred embryos, cycle number, infertility type, and the number of embryos transferred (Table 6). In other age strata and among patients with multiple pregnancies, the neonatal birth weight was comparable among the three groups.

DISCUSSION

In this retrospective cohort study of 8,873 FET cycles (9,918 newborns), we observed that anticoagulation and immunotherapy had a significant influence on pregnancy outcomes and offspring safety. Compared with the routine treatment group, the use of immune agents was associated with an increased risk of miscarriage and a decreased rate of birth in normal ovulatory patients. A fetus carries antigens of both maternal and paternal origin (5). The physiological mechanisms of immunotolerance to paternal antigens during pregnancy are poorly understood. However, a dysfunction in immune modulation has been hypothesized to be one of the causes of infertility or miscarriage. Several systematic reviews (24)(25)(26) have evaluated the effectiveness and safety of immunological interventions for recurrent miscarriage, and none of these interventions was associated with a reduction in miscarriages or an increase in live births. Thus, there is insufficient evidence to recommend immunotherapy in the management of recurrent miscarriage. In this study, we found that adjuvant immunotherapy during FET cycles significantly increased the risk of miscarriage and markedly decreased the probability of birth among normal ovulatory patients only. In contrast, an increase in births and a decrease in miscarriages were observed among non-ovulatory patients undergoing immunotherapy, although neither reached statistical significance. This suggests that patients with ovulation disorders may have underlying defects in immunomodulation during embryo implantation, so that they can benefit from immunotherapy.
However, among patients with normal ovulation, the administration of exogenous immune agents may in turn disturb their immunotolerance to fetal antigens, resulting in an increased miscarriage rate and a decreased birth rate. The use of anticoagulant agents was associated with a higher risk of multiple deliveries and an increased risk of congenital anomalies. In terms of anticoagulant therapy, several systematic reviews and meta-analyses (4,(27)(28)(29) have shown that low-dose aspirin and low-molecular-weight heparin can effectively reduce the miscarriage rate and increase the live birth rate in women with antiphospholipid syndrome or a history of recurrent miscarriage. The combination of low-molecular-weight heparin and aspirin during pregnancy may increase the live birth rate in women with persistent antiphospholipid antibodies (aPL) when compared with aspirin treatment alone. In this study, anticoagulant therapy significantly increased the twin birth rate in patients with more than one embryo transferred. Our finding is consistent with published studies suggesting an improvement in live birth rate with anticoagulation therapy. However, few articles have focused on the relationship between anticoagulation therapy and congenital malformations. A randomized controlled trial reported few cases of congenital anomalies, but this may be an underestimate given the small sample size (30). Our study collected the clinical data of 9,918 neonates, among which 96 cases of congenital anomalies were observed. We classified the fetal congenital anomalies according to the human body system (Table S1). There were 44 cases in the routine treatment group, 46 cases in the anticoagulant group, and 6 cases in the immunotherapy group. In the anticoagulant group, 44 cases were exposed to aspirin and 2 cases were exposed to both aspirin and low-molecular-weight heparin, suggesting that aspirin was associated with congenital anomalies. Aspirin inhibits prostaglandin synthesis, with a subsequent reduction in platelet aggregation, by inactivating cyclooxygenase (31). According to the latest guideline on low-dose aspirin use during pregnancy from the American College of Obstetricians and Gynecologists (ACOG), low-dose aspirin use during the first and second trimesters is considered effective and safe (32). In this study, the incidence of congenital anomalies was comparable among the three groups in patients with pregnancy complications, which was consistent with the ACOG guideline. However, in patients without comorbidities during pregnancy, the risk of fetal malformation increased when adjuvant anticoagulants were prescribed. A relationship between aspirin and genitourinary abnormalities and gastroschisis has been reported (33)(34)(35)(36). In our study, there were four such cases: two cases of cryptorchidism (one exposed to aspirin, the other from the routine treatment group), one case of hypospadias (from the routine treatment group), and one case of gastroschisis (from the routine treatment group). In addition, after excluding cases with parental chromosomal abnormalities, malformations of systems other than the genitourinary and gastrointestinal systems were also reported, which is interesting and unexpected. Furthermore, high-quality prospective studies and comprehensive neonatal physical examinations are warranted to evaluate the safety of aspirin in the field of reproduction.
In terms of the effect of adjuvant medication on neonatal birth weight, previous systematic reviews focused only on fetal growth restriction, and no firm conclusions were drawn (4,7). In this study, the neonatal birth weight of each group was quantitatively analyzed, and multivariate linear regression was performed to adjust for confounding factors. The results showed a statistically significant increase in neonatal birth weight after the adjuvant use of either anticoagulants or immune agents among patients under the age of 26. This differs from previous reports that fetal weight increases with maternal age (37,38), and suggests that anticoagulation, with or without immunotherapy, has a positive impact on birth weight. To summarize, the use of immune agents was associated with an increased risk of miscarriage and a decrease in births among normal ovulatory patients. The use of anticoagulant agents was associated with a higher risk of multiple birth and an increased risk of congenital anomalies. Young mothers had heavier newborns after either anticoagulant or immune agent treatment during FET cycles. Therefore, adjuvant anticoagulant or immune agent treatment in ART should be used under strict supervision, and the principle of individualized treatment should be followed.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee of the Second Hospital of Hebei Medical University. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

AUTHOR CONTRIBUTIONS

YF, YW, and GH devised the idea and designed the study. ZL and JZ contributed to the primary data collection. YX and JZ re-examined and analyzed the data. YF wrote the original draft, which was revised by GH. WW and NC supervised the study and administered the project. All authors contributed to the article and approved the submitted version.

FUNDING

This study was supported by the Natural Science Foundation of Hebei Province (H2021206377), the S&T Program of Hebei (21377760D), the Clinical Medicine Outstanding Talents Program of Government Funds, and the Tracking Project of Hebei Province Medical Applicable Technology (GZ2022019). The funding sponsors had no role in the study design; in the collection, analyses, and interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
3,974.6
2022-06-21T00:00:00.000
[ "Medicine", "Biology" ]
Slice-Selective Transmit Array Pulses for Improvement in Excitation Uniformity and Reduction of SAR To overcome the challenges of inhomogeneous transmit B1 distribution and high specific energy absorption rate (SAR) in MRI, we compare a slice-selective array-optimized composite pulse and RF shimming, each designed to both improve B1 uniformity and reduce SAR, using an 8-channel transmit head array loaded with a head model at various RF pulse excitation times, and compare the results with a standard quadrature voltage distribution at 3T (128 MHz) and 7T (300 MHz). The excitation uniformity was estimated throughout the 3D brain region and SAR was calculated for the whole head. The optimized composite pulse could produce significantly better homogeneity than either the quadrature drive or RF shimming when SAR was not constrained, and both significantly better homogeneity and lower SAR when the pulse duration was allowed to be twice that of the quadrature or RF-shimmed pulse. When the total pulse durations were constrained to the same length, the relative advantages of the optimized composite pulse for producing better homogeneity and lower SAR simultaneously were diminished. Using the optimization results, the slice-selective composite pulse sequence was implemented on a 3D MRI simulator currently under development, and showed both effective slice selection and improvement in excitation uniformity compared to a conventional quadrature driving method.

Introduction

High field (3T and greater) magnetic resonance imaging (MRI) systems are used increasingly in clinical diagnosis and scientific research because of their high Signal-to-Noise Ratio (SNR) and versatile soft tissue contrast. However, higher main magnetic field (B0) strengths require a higher frequency RF magnetic (B1) field, resulting in more dramatic perturbations of the B1 field and more power absorbed by the human body or a sample for a given B1 field strength. Regarding the field perturbations, the wavelength inside the human body is shorter at higher field strengths, and much shorter in tissue than in free space. For example, in a 7T system (300 MHz), the wavelength inside the brain, which has an average relative permittivity (εr) of about 52 and conductivity (σ) of about 0.55 S/m, is approximately 13.9 cm, whereas in free space it is close to 100 cm. Because the human body has a complex geometry consisting of highly inhomogeneous and lossy materials, strong electromagnetic interactions between the RF fields and the human body are expected. These interactions can lead to non-uniform, asymmetric, and complex current distributions on the RF coils as well as inside the human body. The distortions of the B1 field and the increased absorbed power present significant challenges to the further advancement of MR.
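As a quick check of the numbers quoted above, the in-tissue wavelength can be estimated as the free-space wavelength divided by the square root of the relative permittivity, ignoring the modest additional shortening from conductivity:

```python
# Back-of-the-envelope wavelength check for the 7T (300 MHz) figures above.
c = 3.0e8             # speed of light in free space, m/s
f = 300e6             # Larmor frequency at 7T, Hz
eps_r = 52            # average relative permittivity of brain at 300 MHz

wavelength_free = c / f                      # ~1.0 m in free space
wavelength_brain = c / (f * eps_r ** 0.5)    # ~0.139 m, i.e. ~13.9 cm
print(round(wavelength_free, 3), round(wavelength_brain, 3))
```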
A number of groups have introduced a variety of methods using arrays of coils or antennas in transmission, rather than the conventional single excitation coil, to simultaneously improve excitation homogeneity and reduce the specific energy absorption rate (SAR) [1][2][3][4][5][6][7][8][9][10][11]. Some methods are limited to low flip angles and/or long pulse durations, some present challenges for slice-selective implementation, and some (those relying only on RF shimming) are limited by what can be accomplished within the bounds of the Maxwell equations for excitation uniformity. In this work, a slice-selective two-pulse array-optimized composite pulse [12], considering both B1 uniformity and SAR with a simple cost function, is designed and compared with RF shimming and the conventional quadrature driving method. The different excitation methods are compared in terms of both safety and excitation homogeneity, with and without the constraint that the total duration of the RF pulses be equal. To implement our simulation results, MR images are acquired using a 3D MRI simulator (implementing the Bloch equations in 3D with no small-tip approximations) currently under development. A recent experimental comparison of the ability of composite pulses to achieve uniform excitation in comparison to RF shimming alone (without consideration of SAR) can be found elsewhere [13].

Method

The methods used here can be separated into three main portions: 1) finite difference time domain (FDTD) calculations using a transmit array and head model; 2) optimization of the transmit array pulsing method; and 3) implementation of the designed pulse sequences on an MRI system simulator. Each portion, especially the MR simulator, performs several calculations and therefore needs several subroutines and input files. Subsections devoted to each portion are given below. Briefly, the electromagnetic field distributions of each coil element (the circularly-polarized magnetic field components, B1+ and B1-, as well as the electric field intensity, E) were calculated with the FDTD method. Using these calculated values, transmit array pulsing methods (RF shimming and the optimized composite pulse) were developed using an optimization routine capable of considering the excitation uniformity and head-average SAR simultaneously. The optimized current amplitude and phase of each channel were then used in the MR simulator with other input files (including sequence parameters and subject geometry) to calculate the k-space data using the Bloch equations. Finally, the k-space data from the MRI simulator were converted to image data using a fast Fourier transform (FFT) algorithm. The reconstructed image was then evaluated with respect to excitation uniformity by comparing it to a simulated image acquired by the quadrature driving method.
FDTD Calculation Using Transmit Array and Head Model

An 8-channel transmit array (MR Instruments Inc., Minneapolis, Minnesota), having an inner diameter (ID) of 246 mm and a length (L) of 214 mm, loaded with a human head, was simulated at 128 MHz (3T) and 300 MHz (7T) (Figure 1). A conventional quadrature driving method using the same transmit array and head model was simulated for comparison. A human head model having 47 different tissue types with a 5 mm resolution was used for optimization to minimize calculation time (whereas a 2 mm resolution head model was used for the MR simulator). The original voxel-based model was acquired from the IT'IS foundation [14], and then transformed into a 3D grid of Yee cell cubes for use with the FDTD simulation method to calculate the B1 and electric field (E-field) produced by each element driven individually. Each element was excited with a voltage source having a magnitude of 1 V, with phase equal to the azimuthal position of the element, in series with a 50 Ω resistor. All FDTD calculations were performed using commercially available software (xFDTD; Remcom, Inc.; State College, PA). Before optimization, all electromagnetic fields were normalized so that the average B1+ within the brain volume was 2 μT (corresponding to a 3.0 ms 90° pulse) for quadrature driving, RF shimming, and the optimized composite pulse having a 3 ms duration for each component pulse [15], but with an average B1+ of 4 μT for the optimized composite pulse having a 1.5 ms duration for each component pulse.

Optimization of the Transmit Array Pulsing Method

For the optimized composite pulses, the magnitudes and phases of each element in both component pulses were optimized to produce the most homogeneous transverse magnetization (Mt) at the end of the second component pulse and the lowest SAR throughout the pulse. During optimization, a simple cost function [8,9], weighting excitation homogeneity against SAR through a parameter η, was minimized; a sketch of one plausible form is given below. The value of η was varied from 0 to 1 to evaluate its effect on both excitation uniformity and head-average SAR. At this point, η is selected empirically to give both good homogeneity and (in SAR-constrained cases) significantly reduced SAR compared to when η is 1. The transverse magnetization (Mxy) and degree of spin excitation, ignoring spin-lattice relaxation time (T1) and spin-spin relaxation time (T2) effects during the RF pulse, were calculated from the following portion of the Bloch equation [12]:
dM/dt = γ M × B1,
where M represents the net spin magnetization vector, γ is the gyromagnetic ratio (42.58 MHz/T for ¹H), and B1+ is the complex circularly polarized component of the radiofrequency magnetic (B1) field rotating in the same direction as precession. For the purposes of the optimization process (not the MRI simulator), before the pulse was applied, M was assigned an amplitude of 1 (arbitrary units, oriented in the z-direction) throughout the brain. Optimization for RF shimming followed a similar procedure, but with only one component pulse. For all pulse types and drive configurations, excitation uniformity was calculated from the transverse magnetization (Mxy) over all N voxels in the region of interest (Equation (2)). The SAR was calculated as
SAR = σ(|Ex|² + |Ey|² + |Ez|²)/(2ρ),
where Ex, Ey, and Ez are the amplitudes of the orthogonal components of the electric field, ρ is the mass density (kg/m³) and σ is the conductivity (S/m) of the local material.
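The exact cost function and the uniformity metric of Equation (2) are not reproduced here, so the sketch below assumes one common choice: the coefficient of variation of Mt traded off against normalized head-average SAR through η. The hard-pulse rotation, channel superposition, and SAR expression follow the definitions above; the array shapes, the reference SAR used for normalization, and the mass density value are assumptions.

```python
# Hedged sketch of the eta-weighted objective: assumed form, not the paper's.
import numpy as np

GAMMA = 42.58e6  # gyromagnetic ratio of 1H, Hz/T

def transverse_magnetization(weights, b1_maps, tau):
    """Hard-pulse rotation ignoring T1/T2: theta = 2*pi*gamma*|B1+|*tau."""
    b1 = np.tensordot(weights, b1_maps, axes=1)      # superpose channel maps
    theta = 2 * np.pi * GAMMA * np.abs(b1) * tau     # flip angle, radians
    return np.abs(np.sin(theta))                     # |Mt| for unit M0

def head_sar(weights, e_maps, sigma, rho):
    e = np.tensordot(weights, e_maps, axes=1)        # 3 x Nvox combined E-field
    return np.mean(sigma * np.sum(np.abs(e) ** 2, axis=0) / (2 * rho))

def cost(weights, b1_maps, e_maps, sigma, rho, tau, eta, sar_ref):
    mt = transverse_magnetization(weights, b1_maps, tau)
    inhomogeneity = np.std(mt) / np.mean(mt)         # assumed proxy for Eq. (2)
    return eta * inhomogeneity + (1 - eta) * head_sar(
        weights, e_maps, sigma, rho) / sar_ref

# Toy demo with random maps (8 channels, 3 E-field components, 500 voxels);
# sigma = 0.55 S/m from the text, rho ~ 1040 kg/m^3 is an assumed brain density.
rng = np.random.default_rng(1)
b1_maps = (rng.normal(size=(8, 500)) + 1j * rng.normal(size=(8, 500))) * 1e-7
e_maps = rng.normal(size=(8, 3, 500)) + 1j * rng.normal(size=(8, 3, 500))
w = np.exp(1j * 2 * np.pi * np.arange(8) / 8)        # quadrature-like phases
sar_ref = head_sar(w, e_maps, sigma=0.55, rho=1040.0)
print(cost(w, b1_maps, e_maps, 0.55, 1040.0, tau=3e-3, eta=0.5, sar_ref=sar_ref))
```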
For rectangular pulses, the mean SAR for RF shimming and quadrature driving, considering the pulse excitation time, is
mean SAR = SAR × τ/TR,
where τ is the RF pulse excitation time, SAR is the average SAR within the head model during the pulse, and TR is the repetition time. The mean SAR for the composite pulse is
mean SAR = (SAR₁τ₁ + SAR₂τ₂)/TR,
where subscripts 1 and 2 indicate the first and second RF pulses. The Mt was calculated within the 3D brain region, whereas the whole head was used for the SAR calculations. Optimization was performed using home-built code in Matlab (The MathWorks, Inc., Natick, MA).

Calculation of the k-Space Data Using an MR Simulator

After optimization, the amplitudes and phases of the optimized currents for each channel were used to acquire MR images using a freely available Bloch-equation-based MR simulator currently under development [16], with all pertinent information (e.g., T1, T2 and proton density of tissues, head geometry, B1 distribution of each coil element, and pulse sequences for all RF and gradients). The transverse magnetization vector was tracked through time and space using the Bloch equation, considering T1, T2, gradients and B0 inhomogeneity. The equation describing the interaction between tissue spin magnetization and the applied fields during MRI scanning can be written as
dM/dt = γ M × B − (Mx x̂ + My ŷ)/T2 − (Mz − M0) ẑ/T1,
where M is the spin magnetization vector (having components Mx, My and Mz), t is time, T1 and T2 are the relaxation constants, and M0 is the tissue spin magnetization at equilibrium without any applied fields except the main magnetic field (B0), determined by the proton density (ρ). The local magnetic field B can be written as [16]
B(r, t) = (B0 + ΔB0(r) + G(t)·r) ẑ + B1(r, t),   (8)
where B0 is the main magnetic field, ΔB0(r) is the local field variation caused by the tissue susceptibility and B0 inhomogeneity within the object, G(t) is the applied gradient field at a certain time, B1(r, t) is the radio frequency (RF) magnetic field, and r is the spatial coordinate. In a frame of reference rotating about the z axis at the Larmor frequency corresponding to B0, a discrete-time solution of the Bloch equation can be expressed as [17]
M(t + Δt) = Rot_z(G) Rot_z(ΔB0) T_T Rot(B1) M(t),   (9)
where Rot_z(G) is a rotation matrix about the z-axis associated with the applied gradient, Rot_z(ΔB0) is a rotation matrix about the z-axis associated with the difference between the applied main magnetic field (B0) and the actual local field strength caused by the susceptibility of the local tissue, T_T describes the relaxation effects of T1 and T2, and Rot(B1) represents the rotating effect of the applied effective B1 field; a minimal numerical sketch of this update is given below. The excitation uniformity was estimated using a proton density-weighted image (with parameters TR = 2000 ms, TE = 20 ms) and a gradient echo sequence. T1- and T2-weighted images were acquired using the parameters TR/TE = 100/10 ms and TR/TE = 2000/100 ms, respectively, where TR is the repetition time and TE is the echo time.
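A minimal numerical sketch of the discrete-time update of Eq. (9) follows, assuming the operator ordering described above, an effective B1 applied along x in the rotating frame, and longitudinal recovery folded into the relaxation step; it is an illustrative toy, not the simulator of reference [16].

```python
# Toy discrete-time Bloch update in the rotating frame, per Eq. (9) above.
import numpy as np

def rot_z(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(theta):  # effective B1 assumed along x in the rotating frame
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def bloch_step(M, dt, gamma, b1, grad_phase, db0, T1, T2, M0=1.0):
    M = rot_x(2 * np.pi * gamma * b1 * dt) @ M       # Rot(B1)
    e1, e2 = np.exp(-dt / T1), np.exp(-dt / T2)      # T_T relaxation + recovery
    M = np.array([M[0] * e2, M[1] * e2, M[2] * e1 + M0 * (1 - e1)])
    M = rot_z(2 * np.pi * gamma * db0 * dt) @ M      # Rot_z(delta B0)
    return rot_z(grad_phase) @ M                     # Rot_z(G), phase in rad

M = np.array([0.0, 0.0, 1.0])                        # equilibrium, along z
M = bloch_step(M, dt=10e-6, gamma=42.58e6, b1=2e-6,
               grad_phase=0.0, db0=0.0, T1=1.0, T2=0.1)
print(M)
```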
Results

The optimized composite pulse had better performance, both in excitation uniformity and SAR, compared to a quadrature drive. When η was increased from 0 to 1, the excitation uniformity calculated by Equation (2) improved, whereas the mean and maximum values of SAR calculated by Equation (5) also increased. A good compromise was seen when η was 0.2675 for 3T (Figure 2 and Table 1) and 0.5 for 7T (Figure 3 and Table 2) for the optimized composite pulse. To evaluate the excitation uniformity of a single-slice image acquired by the optimized composite pulse and the conventional quadrature drive, we acquired proton density images at 7T. The resulting signal intensity distributions on a 2D axial plane using this head model are shown in Figure 5. The results show that the array-optimized composite pulse has better excitation uniformity than the quadrature driving method. Figure 6 shows the designed pulse sequence for the optimized composite pulse and the acquired proton density-weighted images without a slice-selection gradient (top), with a normal rectangular slice-selection gradient (middle), and with a triangular slice-selection gradient (bottom) using the variable-rate selective excitation (VERSE) method to minimize slew-rate limitations and pulse duration.

Discussion and Conclusions

In this work we compare the efficacy of a two-component array-optimized composite pulse for achieving homogeneous excitation, balanced against constraints on head-average SAR, with the efficacy of RF shimming alone for a variety of relative total pulse durations. While other pulses have been shown to be able to consider both SAR and excitation homogeneity, they are typically not compared to competing pulses with critical consideration of relative pulse durations. In this work we have chosen to perform evaluations with respect to whole-head SAR. The most recent IEC guidelines [18] clarify the distinction between suggested SAR limits for volume transmit coils and those for local transmit coils, with no local (10 g) SAR limits given for volume transmit coils. These most recent guidelines also state that transmit arrays (or multi-channel transmit coils) can be treated as either volume transmit coils or local transmit coils depending on their use. In this case, where the transmit array surrounds the entire head volume and is used to excite the entire brain, it is clearly used as a volume transmit coil. For this reason we place first priority on head-average SAR. It has previously been shown that a simple 2-pulse array-optimized composite pulse can provide homogeneous excitation over the entire brain at up to 600 MHz [11], and can perform much better than RF shimming alone in improving homogeneity [11], or in simultaneously improving homogeneity and reducing SAR [12].
Based on this previous research, we showed RF-pulse-excitation-time (τ) dependent optimization of both image homogeneity and SAR for RF shimming and composite pulses at 128 MHz (3T) and 300 MHz (7T) using an 8-channel transmit array and head model. In agreement with the previous results, when SAR is not a consideration, the optimized composite pulse can produce much better homogeneity than either the quadrature drive or RF shimming alone. If SAR is constrained during the optimization process but the pulse duration of the optimized composite pulse is allowed to be longer than that of the other cases, the optimized composite pulse can again produce better homogeneity and lower SAR. If the total pulse duration of the optimized composite pulse is limited to that of the other pulses and SAR is a consideration, however, the advantages of the optimized composite pulse are more limited. In this case, it can still produce either better homogeneity or lower SAR than the RF-shimmed case when SAR is not constrained, but it cannot necessarily produce both better homogeneity and lower SAR simultaneously. While other pulses that can achieve high homogeneity and slice selection have been published [2-4,9,10], they have not been compared to simple RF shimming or other competing pulses in terms of both homogeneity and SAR when all pulses are constrained to the same total duration. Spoke-type methods [9,10] were developed in parallel to array-optimized composite pulses [11,12] and share some similarities. Compared to current implementations of array-optimized composite pulses, spoke-type methods afford more degrees of freedom through the use of transverse (in addition to slice-selection) gradients, but also rely on low-flip-angle approximations. While there are methods to allow for fairly high flip angles with spoke-type methods, we believe that the current full-Bloch design for array-optimized composite pulses should afford some greater flexibility (especially if transverse gradients are also incorporated into array-optimized composite pulse design), though at a cost of greater computational requirements. Recent experiments show advantages of the array-optimized composite pulse compared to RF shimming alone for excitation homogeneity, but without constraints on SAR or pulse duration [13]. In order to illustrate the possibility of implementing the array-optimized composite pulse for very short pulse durations, here we demonstrated slice selection in the presence of B0 effects with an optimized composite pulse by implementing it on a 3D MRI simulator using the same coil geometries at both 3T and 7T using the VERSE technique [19]. While the simulator has no limitation in slew rate (so the slice-selective pulse with rectangular gradient waveforms could be as short as desired), in experimental implementation triangular gradient waveforms and the VERSE technique could be used to minimize pulse duration under the limitations of gradient slew rate (Figure 6) [19]. As shown here, the proton density image acquired using the optimized composite pulse has better excitation uniformity (Figure 5) than quadrature driving. Thus, pulse duration can be very short, slice selection is possible, and both SAR and excitation uniformity can be improved with the array-optimized composite pulse in the head at 3T and 7T.

Figure 1. Geometry of the 8-channel transmit array (MR Instruments Inc., MN, USA) and head model.
Figures 2 and 3 show, at 128 MHz and 300 MHz respectively, the distribution of SAR and Mt for the transmit array with quadrature drive (first column), RF shimming (second column) and the optimized composite pulse (third column). The optimization without considering SAR (when η = 1) is presented in Figure 4, corresponding to 3T (128 MHz) and 7T (300 MHz).

Figure 2. Distribution of SAR and Mt predicted during optimization throughout the selected slices of a head at 128 MHz (3T) for quadrature driving (first column), and the transmit array with RF shimming having η of 0.75 (second column) and the optimized composite pulse having η of 0.2675 (third column). τ is the pulse excitation time.

Figure 3. Numerical calculation results of SAR and Mt throughout the selected slices of a head with η of 0.51 (RF shimming) and 0.5 (optimized composite pulse) at 300 MHz (7T). Other parameters are the same as in Figure 2.

Figure 5. Acquired proton density images of quadrature driving (left) and the optimized composite pulse (right) in a 3D MRI simulator at 7T.

Figure 6. Acquired proton density images at 3T without a slice selection (SS) gradient (top), with a normal rectangular slice selection gradient (middle) and with a triangular slice selection gradient (bottom) using the VERSE method.
4,263
2013-05-20T00:00:00.000
[ "Physics" ]
Nrf2-Inducing Anti-Oxidative Stress Response in the Rat Liver - A New Beneficial Effect of Lansoprazole Lansoprazole is a potent anti-gastric ulcer drug that inhibits gastric proton pump activity. We identified a novel function for lansoprazole, as an inducer of anti-oxidative stress responses in the liver. Gastric administration of lansoprazole (10-100 mg/kg) to male Wistar rats produced a dose-dependent increase in hepatic mRNA levels of nuclear factor, erythroid-derived 2, -like 2 (Nrf2), a redox-sensitive transcription factor, at 3 h and in Nrf2 immunoreactivity (IR) in whole hepatic lysates at 6 h. Conversely, the levels of Kelch-like ECH-associated protein 1 (Keap1), which sequesters Nrf2 in the cytoplasm under un-stimulated conditions, were unchanged. Translocation of Nrf2 into the nuclei of hepatocytes was observed using western blotting and immunohistochemistry. Expression of mRNAs for Nrf2-dependent antioxidant and phase II enzymes, such as heme oxygenase 1 (HO-1), NAD(P)H dehydrogenase, quinone 1 (Nqo1), glutathione S-transferase A2 (Gsta2), and UDP glucuronosyltransferase 1 family polypeptide A6 (Ugt1a6), was dose-dependently up-regulated at 3 h. Furthermore, the levels of HO-1 IR were dose-dependently increased in hepatocytes at 6 h. Subcutaneous administration of lansoprazole (30 mg/kg/day) for 7 successive days resulted in up-regulation and nuclear translocation of Nrf2 IR in hepatocytes and up-regulation of HO-1 IR in the liver. Pretreatment with lansoprazole attenuated thioacetamide (500 mg/kg)-induced acute hepatic damage via both HO-1-dependent and -independent pathways. Upstream networks related to Nrf2 expression were investigated using microarray analysis, followed by data mining with Ingenuity Pathway Analysis. Up-regulation of the aryl hydrocarbon receptor (AhR)-cytochrome P450, family 1, subfamily a, polypeptide 1 (Cyp1a1) pathway was associated with up-regulation of Nrf2 mRNA. In conclusion, lansoprazole might have an alternative indication in the prevention and treatment of oxidative hepatic damage through the induction of both phase I and phase II drug-metabolizing systems, i.e. the AhR/Cyp1a1/Nrf2 pathway, in hepatocytes.

Introduction

Lansoprazole is a potent proton pump inhibitor that reduces the secretion of gastric acid from gastric parietal cells by inhibition of H+/K+-adenosine triphosphatase. It has been shown that lansoprazole is effective for the treatment and prevention of a broad range of acid-related diseases, such as gastro-esophageal reflux disease (GERD), duodenal and gastric ulcers, and non-ulcer dyspepsia [1,2]. Recent studies have shown that lansoprazole has acid-independent protective effects in the gastrointestinal mucosa, such as anti-inflammatory effects and anti-bacterial effects on Helicobacter pylori [3]. Both capsaicin-sensitive sensory neurons and nitric oxide are involved in the acid-independent gastrointestinal protective effects of lansoprazole [4]. Induction of the antioxidant defense enzyme heme oxygenase-1 (HO-1) in human endothelial and gastric cancer cells, rat gastric epithelial cells (RGM-1), or the epithelium of the small intestine is also involved in acid-independent gastrointestinal mucosal protection [5][6][7]. These reports demonstrated that lansoprazole, but not omeprazole, induced HO-1 in gastric mucosal cells [6] and the small intestine [7]. Therefore, we used lansoprazole to investigate anti-oxidative stress responses in the liver.
HO-1 is a highly inducible, stress-responsive protein (also called heat shock protein 32), which catalyzes the first and rate-limiting step in heme degradation to produce equimolar quantities of biliverdin, carbon monoxide (CO) and free iron [8]. Heme is a potent oxidant, whereas bilirubin (converted from biliverdin) exhibits antioxidant activity, and CO exhibits vasodilatory and anti-platelet-aggregation activity. Therefore, induction of HO-1 can provide cytoprotection against oxidative stress. Induction of HO-1 is controlled by a redox-sensitive transcription factor, nuclear factor, erythroid-derived 2, -like 2 (Nrf2). Nrf2 is a master transcription factor that regulates antioxidant response element (ARE)-mediated transcription of genes involved in the regulation of the synthesis and conjugation of glutathione (glutamate-cysteine ligase catalytic subunit), antioxidant proteins specializing in the detoxification of certain reactive species (HO-1), drug-metabolizing enzymes (UDP-glucuronosyltransferase 1A1), xenobiotic transporters (multidrug resistance protein 1), and molecular chaperones [9][10][11]. Nrf2 is sequestered in the cytoplasm by Kelch-like ECH-associated protein (Keap1) under unstimulated conditions, while Nrf2 is translocated into the nucleus and activates the electrophilic response element/antioxidant response element (EpRE/ARE) upon exposure to oxidative insults [9][10][11]. It was shown that significant amounts of the Nrf2-Keap1 complex remained in the bound form after exposure to electrophiles [12,13]. The two mechanisms that have been proposed for Keap1-Nrf2 dissociation are phosphorylation of Nrf2 [14] and modification of Keap1 [12]. Sulfhydryl groups in Keap1 cysteine residues are the main targets of oxidation and electrophilic modification [12]. Modulation of the Keap1/Nrf2/ARE system is a potential pharmacological target for ameliorating oxidative stress. Activators and inhibitors of the Keap1/Nrf2/ARE system include endogenous substances formed in cells/tissues, such as reactive oxygen species (ROS), hydrogen sulfide, lipid peroxidation products, hormones and neurotransmitters (15-deoxy-Δ12,14-prostaglandin J2, catechol estrogens and dopamine), as well as exogenous substances derived from food, air, or other sources (medical procedures, radiation, UV irradiation) [15]. Among exogenous inducers, sulfur-containing glucosinolates derived from cruciferous vegetables (broccoli, Brussels sprouts, horseradish, etc.) are well known [16]. Sulforaphane derived from cruciferous vegetables induces ROS formation through auto-oxidation or disruption of the mitochondrial respiration chain [13]. Most of the chemicals reported to activate this system depend on the dissociation of Nrf2 from Keap1. Few chemicals induce and increase the levels of free Nrf2. It was proposed that increased synthesis of Nrf2 could be a mechanism underlying the Nrf2-activating effects of α-lipoic acid [17]. Mixtures of plant extracts, such as "Protandim" (Life-Vantage, USA) and "Nrf2 activator" (XYMOGEN, USA), are available as inducers of the Keap1/Nrf2/ARE system. However, these plant extracts have a very wide range of biological effects, making it difficult to distinguish the true contribution of Nrf2 induction. Currently, more specific and effective synthetic chemicals are being developed [11]. In this study we report that lansoprazole is a strong and effective inducer of Nrf2 transcription in hepatocytes, in addition to acting as a proton pump inhibitor.
Lansoprazole up-regulated the levels of Nrf2 mRNA and IR without affecting the levels of Keap1 mRNA and IR, thereby promoting the translocation of unbound Nrf2 into the hepatic nuclei. The metabolism of chemicals and drugs involves a series of successive enzymatic reactions [18]. First, in phase I reactions, reactive or polar groups are introduced to the chemicals by a superfamily of cytochrome P450 oxidases (CYPs) such as cytochrome P450, family 1, subfamily a, polypeptide 1 (Cyp1a1), and family 1, subfamily b, polypeptide 1 (Cyp1b1), followed by phase II reactions mediated by detoxifying and antioxidant enzymes. Phase II drug-metabolizing enzymes, such as glutathione S-transferase and UDP glucuronosyltransferase, transfer and conjugate hydrophilic side chains to polar groups. Nrf2 mediates the transcription of mRNAs for phase II enzymes, while the aryl hydrocarbon receptor (AhR) mediates the transcription of phase I enzymes [18]. Lansoprazole also up-regulates the mRNA levels of AhR and Cyp1a1 in the liver. We report for the first time that lansoprazole up-regulates the AhR/Cyp1a1/Nrf2 pathway in hepatocytes and has a potential application in the prevention and treatment of oxidative hepatic damage. Ethics Statement The Wakayama Medical College Animal Care and Use Committee approved all animal manipulations (Nos. 543, 566 and 572). Tissue preparation Male Wistar rats, 6 weeks old, were purchased from Kiwa Laboratory Animals Co., Ltd. (Wakayama, Japan). The rats were housed in a temperature-controlled environment. Experiments were performed after providing the rats with free access to food and water for 1 week. The rats were fasted overnight prior to gastric administration of drugs in individual wire-bottom cages. Lansoprazole (supplied by Takeda Pharmaceutical Co., Ltd., Osaka, Japan) was suspended in 0.5% methylcellulose. The rats were intra-gastrically administered (by gastric intubation) lansoprazole (10, 30 or 100 mg/kg) or vehicle (n = 10 per group). The rats were decapitated at 3 h and 6 h after administration of lansoprazole or vehicle. The liver was rapidly removed, and several pieces were immediately frozen (within 1 min after decapitation) using powdered dry ice. The rest of the liver was fixed in 4% paraformaldehyde in 0.1 M phosphate buffer (pH 7.4) overnight at 4°C, then cryo-protected in phosphate-buffered saline (PBS) containing 30% sucrose for 3 days at 4°C. The tissue samples were mounted in O.C.T. compound (Tissue-Tek, Sakura Finetek Japan Co., Ltd., Tokyo, Japan) and frozen using powdered dry ice. The frozen samples were stored at −80°C until sectioned and assayed. In the second experiment, the rats were subcutaneously (s.c.) administered lansoprazole (30 mg/kg/day) for 7 successive days. The rats were decapitated 1 day after the final administration of lansoprazole (n = 5) and the livers were sampled as described above. Other groups of rats (n = 5) were administered vehicle (s.c., 0.5% methylcellulose) for 7 successive days. Hepatic damage was assessed by measuring serum aspartate aminotransferase (AST) and alanine transaminase (ALT) activities according to standard methods (SRL, Inc., Tokyo, Japan). Hepatic damage was also assessed by histological examination with hematoxylin-eosin staining. The area of hepatocellular degeneration/necrosis in the section was assessed with the aid of ImageJ (http://imagej.nih.gov/ij/).
In short, the digitized images were transferred to a personal computer, and the border of the lesion and the total area of the liver in the section were traced on a computer by a single observer who was blind to the treatments. The injured area per total liver area was calculated as the lesion index (%). Stannous mesoporphyrin (SnMP, BIOMOL Research Labs. Inc., Plymouth Meeting, PA, USA), an HO-1 inhibitor, was dissolved in 100% ethanol and diluted 10-fold in 7% NaHCO3. The rats received vehicle or SnMP (20 µmol/kg) intra-peritoneally 60 min before administration of TAA (thioacetamide) [20]. The rats were divided into six groups. Rats in Group A (n = 5) (control) received daily (8 AM) s.c. administration of vehicle for 5 successive days. At noon on the 4th day, the rats received i.p. administration of vehicle, followed by a second i.p. injection of vehicle. Rats in Group B (n = 5) (lansoprazole) were administered lansoprazole (s.c., 30 mg/kg/day) daily (8 AM) for 5 successive days. At noon on the 4th day, the rats received i.p. administration of vehicle, followed by a second i.p. injection of vehicle. Rats in Group C (n = 4) (acute hepatic damage) were administered vehicle (s.c.) daily (8 AM) for 5 successive days. At noon on the 4th day, the rats received i.p. administration of vehicle, followed by i.p. injection of TAA. Rats in Group D (n = 4) (lansoprazole and acute hepatic damage) were administered lansoprazole (s.c., 30 mg/kg/day) daily (8 AM) for 5 successive days. At noon on the 4th day, the rats received i.p. administration of vehicle, followed by i.p. injection of TAA. Rats in Group E (n = 4) (HO-1 inhibitor and acute hepatic damage) were administered vehicle (s.c.) daily (8 AM) for 5 successive days. At noon on the 4th day, the rats received i.p. administration of SnMP (20 µmol/kg), followed by i.p. injection of TAA. Rats in Group F (n = 4) (lansoprazole, HO-1 inhibitor and acute hepatic damage) were administered lansoprazole (s.c., 30 mg/kg/day) daily (8 AM) for 5 successive days. At noon on the 4th day, the rats received i.p. administration of SnMP (20 µmol/kg), followed by i.p. injection of TAA. Forty-eight hours after TAA treatment, intracardiac blood collection was performed under anesthesia with medetomidine hydrochloride (0.15 mg/kg), midazolam (4 mg/kg) and butorphanol tartrate (5 mg/kg), and both serum and liver tissue samples were obtained. Extraction of total RNA Total RNA from livers was extracted using the RNeasy Mini Kit (QIAGEN, Tokyo, Japan) and digested with RNase-free DNase (QIAGEN). Using a NanoDrop 1000 (Thermo Fisher Scientific Inc., Shanghai, China), the 260:280 nm absorbance ratio (A260/280) and the 260:230 nm absorbance ratio (A260/230) of the RNA samples were measured. In this experiment, we used RNA samples with A260/280 ratios greater than 1.8 and A260/230 ratios greater than 1.5. The quality of purified RNAs was assessed using an Agilent 2100 Bioanalyzer with an RNA 6000 Nano Kit (Agilent Technologies, Palo Alto, CA, USA). Real-time RT-PCR Expression of mRNA was determined using real-time reverse transcription (RT)-polymerase chain reaction (PCR). Primer sets for each gene are listed in Table 1. As an internal control, we also estimated the expression of rat glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA. Total RNA (0.1 µg) was converted into cDNA by reverse transcription using random primers (p(dN)6) and AMV reverse transcriptase (Roche Diagnostics Corp., Indianapolis, IN, USA) in a total reaction volume of 20 µl.
PCR amplification using a LightCycler instrument was carried out in 20 µl of reaction mixture consisting of LightCycler FastStart DNA Master SYBR Green I (Roche Diagnostics GmbH, Penzberg, Germany), 4.0 mM MgCl2, 0.5 µM of each primer, and 2 µl of template cDNA in a LightCycler capillary. Relative mRNA levels in each sample were quantified automatically against standard curves constructed with the LightCycler software. The levels of mRNA were calculated with reference to external standard curves constructed by plotting the log number of 10-fold serially diluted cDNA samples against the respective threshold cycle using the second derivative maximum method. Expression of mRNA levels in each sample was normalized to GAPDH mRNA levels. Western blotting Frozen liver tissues were minced and homogenized in a buffer containing 0.01 M Tris-HCl, pH 7.6, 0.15 M NaCl, 1% Triton X-100, and a protease inhibitor cocktail (0.2 mM phenylmethanesulfonyl fluoride, 20 µM leupeptin and 1.5 µM pepstatin A). The homogenates were centrifuged at 10,000 × g for 15 min at 4°C. Protein concentrations were determined using a Bio-Rad Protein Assay kit (Bio-Rad Laboratories Inc., Hercules, CA, USA). Subcellular fractionation Subcellular fractionation was performed using the ProteoExtract Subcellular Proteome Extraction kit (Merck KGaA, Darmstadt, Germany) according to the manufacturer's protocol. Fractions of cytosol, membrane/organelle, nucleus and cytoskeleton were separated successively from fresh frozen liver. The cytosolic and nuclear fractions were processed by western blotting with anti-Nrf2. The purity of each fraction was determined by western blotting with anti-Calpain or anti-Histone H1 for the cytosolic and nuclear fractions, respectively. Immunohistochemistry Frozen sections (6 µm in thickness) were cut using a cryostat and thaw-mounted onto silane-coated slides. For fluorescence immunohistochemistry, the sections were incubated in 10 mM sodium citrate buffer (pH 6.0) for 10 min at 120°C (in an autoclave), followed by incubation with anti-Nrf2 (10 µg/ml in 0.1 M PBS containing 5% normal goat serum and 0.3% Triton X-100). After rinsing twice with PBS, sections were incubated with the secondary antibody (biotinylated goat anti-rabbit IgG, Vector Laboratories) diluted 1:200 in PBS for 1 h at 37°C. Finally, the sections were incubated in Texas Red Avidin D (1:1000; Vector Laboratories) in 0.1 M PBS containing 5% normal goat serum and 0.3% Triton X-100 for 1 h at 37°C, followed by nuclear staining with DAPI (Dojindo, Kumamoto, Japan). For non-fluorescent immunohistochemistry, sections were incubated with 3% H2O2 in distilled water for 20 min to quench the endogenous peroxidase activity. After rinsing twice with PBS, they were incubated with the anti-HO-1 (1:1000) primary antibody. After rinsing twice with PBS, sections were incubated with the secondary antibody (biotinylated goat anti-rabbit IgG, Vector Laboratories) diluted 1:200 in PBS for 1 h at 37°C. After rinsing twice with PBS, the sections were incubated with avidin-biotin-HRP complex (ABC Elite kit, Vector Laboratories) for 1 h at 37°C. After washing in 0.05 M Tris-HCl buffer, pH 7.6, immunoreactivity was visualized by incubation in 0.05 M Tris-HCl buffer, pH 7.6, containing 0.02% 3,3′-diaminobenzidine tetrahydrochloride and 0.005% H2O2 for 2-5 min. Omission of the primary or secondary antibody completely eliminated all immunoreactive staining.
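As a concrete illustration of the external-standard-curve quantification described in the real-time RT-PCR subsection above, the following minimal Python sketch fits a line to the threshold cycles (Ct) of 10-fold serial dilutions and normalizes a target gene to GAPDH. The function name and all Ct values are illustrative assumptions, not data from the study.

```python
import numpy as np

def quantify(ct, std_ct, std_log_conc):
    """Estimate a relative template amount from an external standard curve.

    The curve Ct = slope * log10(c) + intercept is fitted to serial
    dilutions; the sample Ct is then inverted through the fit.
    """
    slope, intercept = np.polyfit(std_log_conc, std_ct, 1)
    return 10 ** ((ct - intercept) / slope)

# Illustrative standard-curve points from 10-fold serially diluted cDNA.
std_log_conc = np.array([0.0, -1.0, -2.0, -3.0, -4.0])  # log10 dilution
std_ct_target = np.array([18.1, 21.4, 24.8, 28.2, 31.5])
std_ct_gapdh = np.array([15.0, 18.3, 21.7, 25.1, 28.4])

target = quantify(26.0, std_ct_target, std_log_conc)  # sample Ct values
gapdh = quantify(19.5, std_ct_gapdh, std_log_conc)
print("normalized expression:", target / gapdh)       # target / GAPDH
```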
Microarray analysis and pathway analysis The analysis of RNA quality showed that the A260/280 absorbance ratio of the RNA samples used in this experiment consistently ranged from 1.8 to 2.0. The quality of purified RNAs was assessed by an Agilent 2100 Bioanalyzer using an RNA 6000 Nano Kit (Agilent Technologies). Samples with RNA Integrity Number (RIN) scores between 8 and 10 were used in the microarray and real-time RT-PCR analyses. An equal amount of RNA from three rats in each group (livers of rats 3 h after treatment with vehicle or lansoprazole) was pooled and used for microarray analysis as described elsewhere [21,22]. Briefly, total RNA (100 ng) was reverse-transcribed using a T7 sequence-conjugated oligo dT primer. Concomitantly, we used the RNA Spike-In Kit One Color (Agilent) to adjust the microarray data. Synthesis, amplification, and labeling of complementary RNA (cRNA) with Cy3 dye were performed according to the manufacturer's protocols. Prepared cRNA was added to a whole rat genome oligo DNA microarray version 3.0 (4×44K; Agilent). Hybridization was performed at 65°C for 17 h. After washing, fluorescence intensity was determined using a scanner (G2565BA; Agilent). The Cy3 signal intensities were quantified and analyzed by background subtraction using Feature Extraction software ver. 10.7.1.1 (Agilent), and the data were normalized using GeneSpring GX11.5.1 (Agilent). We used GeneSpring GX11.5.1 to select 12,134 genes producing fluorescence intensities > 100 in RNA samples from the livers of vehicle- or lansoprazole-treated rats. We used Ingenuity Pathway Analysis (IPA; version Fall 2013) to determine the functional pathways of the identified genes. IPA software contains a database of biological interactions among genes and proteins, which was used to calculate the probability of a relationship between each canonical pathway and the identified genes. IPA scans the set of input genes to identify networks using the Ingenuity Pathway Knowledge Base (IPKB) for interactions between the identified 'Focus Genes' (in this study, the differentially expressed genes between the livers treated with vehicle or lansoprazole) and known and hypothetical interacting genes stored in the IPA software. The data obtained were used to generate a set of networks with a maximum network size of 35 genes/proteins. Networks are displayed graphically as genes/gene products ('nodes') and the biological relationships between the nodes ('edges'). All edges are from canonical information stored in the IPKB. In addition, IPA computes a score for each network according to the fit of the user's set of significant genes. The score indicates the likelihood of the Focus Genes in a network from Ingenuity's knowledge base being found together due to random chance. A score of 3, the cutoff for identifying gene networks, indicates that there is only a 10^-3 chance that the Focus Genes shown in a network are found together due to random chance; therefore, a score of 3 or higher indicates a 99.9% confidence level for excluding random chance. Data analysis Statistical analysis was performed using one-way ANOVA followed by Fisher's protected least significant difference test, or Student's t-test, using StatView software (Abacus Concepts, Berkeley, CA, USA). Up-regulation and nuclear translocation of Nrf2 in the liver following single oral treatment with lansoprazole The levels of Nrf2 mRNA were increased at 3 h in a dose-dependent manner, with a 2-fold increase observed at 100 mg/kg, as compared to control levels (Figure 1A).
The levels of Nrf2 IR in hepatic lysates were increased at 6 h in a dose-dependent manner, with a 5-fold increase observed at 100 mg/kg, as compared to control levels (Figure 1B). Conversely, the levels of Keap1 mRNA and IR were unchanged following treatment with lansoprazole (Figures 1C and 1D). Nuclear translocation of Nrf2 IR in hepatocytes was demonstrated using western blotting (Figure 1E) and immunohistochemistry (Figure 1F). Up-regulation of mRNA for Nrf2-dependent antioxidant and phase II enzymes following single oral treatment with lansoprazole As shown in Figure 2, expression of mRNAs for Nrf2-dependent antioxidant and phase II enzymes such as HO-1, NAD(P)H dehydrogenase, quinone 1 (Nqo1), glutathione S-transferase A2 (Gsta2) and UDP glucuronosyltransferase 1 family polypeptide A6 (Ugt1a6) was up-regulated at 3 h in a dose-dependent manner. Up-regulation of HO-1 IR in hepatocytes following single oral treatment with lansoprazole The levels of HO-1 IR in the liver were increased at 6 h in a dose-dependent manner, with a 3-fold increase observed at 100 mg/kg, as compared to control levels (Figure 3A). In the control liver, HO-1 IR was detected in macrophages. In response to lansoprazole treatment, HO-1 IR-positive hepatocytes were observed (Figure 3B). Up-regulation of Nrf2 and HO-1 following successive subcutaneous treatment with lansoprazole As shown in Figure 4A, the levels of mRNA for Nrf2, Keap1 and HO-1 were not significantly different between control animals and animals receiving lansoprazole (30 mg/kg/day) for 7 successive days. However, the levels of IR for Nrf2 and HO-1 were significantly (2-fold) increased following administration of lansoprazole (Figure 4B). Nuclear translocation of Nrf2 in hepatocytes was also observed in response to treatment with lansoprazole (Figure 4C). Effects of successive subcutaneous treatment with lansoprazole on acute hepatic damage Treatment with lansoprazole alone (comparison between group A and group B) did not affect the serum levels of AST and ALT. Comparison among the TAA-treated groups indicated that induction of HO-1 in hepatocytes may partially contribute to the amelioration of acute hepatic damage. The % inhibition in group D (acute hepatic damage with lansoprazole) was two-fold higher than that in group F (acute hepatic damage with lansoprazole and SnMP) (AST: 44.5 ± 10.6% vs. 20.7 ± 23.4%, P = 0.39; ALT: 45.4 ± 4.9% vs. 26.7 ± 11.8%, P = 0.19), suggesting that the effect of lansoprazole on the acute hepatic damage is mediated via both HO-1-dependent and HO-1-independent pathways (Figures 5C and 5D). Hepatic damage was also assessed using histological examination. Following TAA treatment, hepatocellular degeneration and necrosis with hemorrhage and infiltration of inflammatory cells were observed around the central vein (Figure 6). In response to pretreatment with lansoprazole, these pathological changes were significantly attenuated (acute hepatic damage: 32.8 ± 5.3% vs. acute hepatic damage with lansoprazole: 15.9 ± 2.8%, P < 0.05). Microarray analysis and data mining by IPA We investigated liver gene expression profiles 3 h after treatment with lansoprazole (100 mg/kg) using microarray analysis. Among the 30,367 genes that were analyzed, 12,134 genes were detected in the lansoprazole- and/or vehicle-treated livers. We selected genes whose expression differed by more than 2-fold in the lansoprazole group compared with the vehicle group. Using these criteria, we identified 1,874 up-regulated genes and 1,700 down-regulated genes in the lansoprazole group compared to the vehicle group.
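The gene-selection criteria just described (detection at intensity > 100 and a greater-than-2-fold change between groups) were implemented in GeneSpring; the toy pandas sketch below only illustrates the same filtering logic on a hypothetical expression table, with invented intensity values.

```python
import pandas as pd

# Hypothetical normalized intensities; the real data came from the 4x44K array.
df = pd.DataFrame({
    "gene": ["Nrf2", "Hmox1", "Nqo1", "Gsta2", "Actb"],
    "vehicle": [100.0, 150.0, 200.0, 120.0, 5000.0],
    "lansoprazole": [230.0, 520.0, 450.0, 110.0, 5100.0],
})

detected = df[(df.vehicle > 100) | (df.lansoprazole > 100)]  # detection filter
fc = detected.lansoprazole / detected.vehicle                # fold change
up = detected[fc > 2.0]                                      # > 2-fold up
down = detected[fc < 0.5]                                    # > 2-fold down
print(len(up), "up-regulated;", len(down), "down-regulated")
```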
IPA was used to organize the differentially expressed genes into functionally annotated pathways and networks. Using IPA, we identified 3 networks with scores greater than 20 (Table 2). IPA indicated that 7 genes (Acox1, AhR, Ppara, Keap1, IL1b, Mafg and Rxra) were identified as upstream regulators of Nrf2 (Figure 7). Among these genes, AhR and Ppara (peroxisome proliferator activated receptor α) were significantly increased (ratio > 2). To confirm these networks, the expression of these genes was evaluated using real-time RT-PCR (Figure 8). The mRNA level of AhR was significantly increased. Expression of Ppara was elevated in a non-significant manner. The mRNA level of Cyp1a1, a target gene of AhR, was significantly and markedly up-regulated. Discussion Lansoprazole is available worldwide as a potent proton pump inhibitor. In this study, we identified a novel acid-independent, extra-gastrointestinal function of lansoprazole as a potent activator of anti-oxidative stress responses. First, we found that gastric or subcutaneous administration of lansoprazole induced the transcription of Nrf2 mRNA and the translation of Nrf2 protein without affecting Keap1 levels in the liver. Second, un-complexed Nrf2 was translocated to the hepatic nuclei, thereby initiating the transcription of Nrf2-dependent antioxidant and phase II enzymes, such as HO-1, Nqo1, Gsta2 and Ugt1a6, in the liver. Third, pretreatment with lansoprazole attenuated TAA-induced acute hepatic damage via both HO-1-dependent and HO-1-independent pathways. Fourth, the AhR-Cyp1a1 pathway was associated with the up-regulation of Nrf2 mRNA. Induction of HO-1 by proton pump inhibitors was first reported by Becker et al. [5], who found that both lansoprazole and omeprazole up-regulated the mRNA, IR and activity of HO-1 in human gastric cancer cells (AGS cells and KATO cells), RGM-1 cells and human endothelial cells (ECV304). Lansoprazole but not omeprazole induced HO-1 IR at the surface of the small intestinal epithelial cells and prevented indomethacin-induced small intestinal ulceration [7]. In our preliminary study, the induction of HO-1 mRNA in the small intestine by lansoprazole was not observed at 3 h. Although we may have missed the peak of mRNA induction, the mechanism of increased HO-1 IR in the small intestine might not be associated with the induction of Nrf2 mRNA. Discrepancies between the levels of mRNA and those of IR were also observed in the levels of Nrf2 and HO-1 following successive subcutaneous treatment with lansoprazole. This might also be due to the delay in sampling causing some degradation of the induced Nrf2 and HO-1 mRNAs. Using RGM-1 cells, it was also demonstrated that phosphorylation of extracellular signal-regulated kinase (ERK) and Nrf2, as well as the activation and nuclear translocation of Nrf2 and oxidation of Keap1, were involved in lansoprazole-induced HO-1 up-regulation [6]. A similar mechanism could not be excluded in the liver. Figure 7. Up-stream networks of Nrf2 in the liver at 3 h following oral administration of lansoprazole (100 mg/kg). The number below each symbol indicates the fold change observed in the microarray analysis. Pink indicates significant up-regulation, and green indicates no significant change in gene expression. Acox1, acyl-CoA oxidase 1, palmitoyl; Ppara, peroxisome proliferator activated receptor α; IL1b, interleukin 1β; Mafg, v-maf avian musculoaponeurotic fibrosarcoma oncogene homolog G; Rxra, retinoid X receptor α. doi:10.1371/journal.pone.0097419.g007
In contrast, the mechanism of HO-1 induction by lansoprazole in the liver was considered to be due to de novo Nrf2 mRNA synthesis without affecting the levels of Keap1, rather than to dissociation of the Keap1-Nrf2 complex. Lansoprazole is metabolized by Cyp2c19 and Cyp3a4 [23]. Cyp3a4 catalyzes both 5-hydroxylation and sulfoxidation of lansoprazole, and Cyp2c19 catalyzes 5-hydroxylation in the human liver. In contrast, lansoprazole, omeprazole, and pantoprazole induced Cyp1a1, Cyp1a2, Cyp2b and Cyp3a in primary human hepatocytes, human liver, human hepatoma cells and rat liver [24][25][26]. AhR is involved in the induction of Cyp1a1 and Cyp1a2 by omeprazole via a common regulatory region containing multiple AhR-binding motifs [26]. Our study has confirmed the extensive induction of Cyp1a1 in rat liver by lansoprazole. Complex mutual interactions between AhR and Nrf2 occur during the induction of phase I and phase II drug-metabolizing enzyme genes, exemplified by the mutual induction of gene expression. AREs are present in many phase II genes, whereas xenobiotic response elements (XRE) are present in both phase I genes and phase II genes [27]. The AhR and AhR nuclear translocator (ARNT) heterodimer binds to the XRE, resulting in the induction of both phase I genes and Nrf2, with Nrf2 subsequently activating phase II genes [28]. Conversely, Nrf2 regulates the expression of AhR mRNA and subsequently modulates several downstream genes in the AhR signaling pathway, including transcriptional control of phase I genes (Cyp1a1 and Cyp1b1) [29]. 2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) binds to the AhR, and this complex translocates to the nucleus [30]. Via activation of the AhR/XRE system, TCDD induces Cyp1a1, Nqo1, Ugt1a6 and Gsta1 as well as Nrf2 in the liver [31]. In contrast, most of the TCDD-induced enzymes, excluding Cyp1a1 and Ugt1a1, are not induced in Nrf2-null mice [31]. In this study, we found that lansoprazole was a mixed inducer of both phase I and phase II drug-metabolizing systems, i.e., it up-regulated the mRNA levels of AhR, Cyp1a1, Nrf2, and phase II enzymes. The molecular mechanism of these complex interactions remains to be elucidated. The other pathway potentially related to the induction of Nrf2 mRNA by lansoprazole is the Ppara pathway, although its contribution appears limited. The absence of the Ppara gene results in down-regulation of Nrf2 in the liver of fasted animals [32]. These observations may be related to lansoprazole-induced Nrf2 expression. A summary schematic of the lansoprazole-induced AhR/Cyp1a1/Nrf2 pathway in the liver is shown in Figure 9. Study limitations In this study, we first determined mRNA levels at 3 h and IR at 6 h after gastric administration of lansoprazole. Since we did not examine the expression of these substances at other time points, it is possible that we overlooked peak expression levels of each substance. In previous studies, we performed detailed estimates of the time course of HO-1 induction by acute gastric injury [33] and polaprezinc [20]. These studies indicated that HO-1 mRNA levels peaked at 3 h and HO-1 IR levels peaked at 6 h. To minimize the number of animals used in this study, we selected sampling points and the minimal replicate numbers required to achieve statistical significance. Second, the dosages of lansoprazole (10-100 mg/kg) were higher than those used in clinical settings. We selected dosages that have been used in several experimental studies involving rat ulcer models [34].
Third, there are no reports of the effects of lansoprazole on hepatic function in humans, specifically during hepatic dysfunction. Clinical assessments of lansoprazole in hepatic diseases are currently in progress in our group. Fourth, the effects of other proton pump inhibitors have not yet been examined. It has been reported that lansoprazole, but not omeprazole, induced HO-1 in gastric mucosal cells [6] and the small intestine [7]. In contrast, another study demonstrated that both lansoprazole and omeprazole induced HO-1 in gastric and endothelial cells [5]. In addition, proton pump inhibitors (including omeprazole, lansoprazole and pantoprazole) were found to induce Cyp1a1, Cyp1a2, Cyp2b and Cyp3a in various types of hepatocytes [24][25][26]. These effects were observed in cultured cells, suggesting that they are independent of gastric acid suppression. It is reasonable to consider that up-regulation of the AhR/Cyp1a1/Nrf2/phase II system by lansoprazole is also independent of gastric acid suppression. Microarray analysis showed that the levels of mRNA encoding the proton pump H+/K+-adenosine triphosphatase were below the detection limit (data not shown), suggesting that a functionally active proton pump is not expressed in the liver. Therefore, it is highly unlikely that this novel effect of lansoprazole is attributable to proton pump inhibition. We are investigating a number of chemical structures within lansoprazole that may contribute to the up-regulation of the AhR/Cyp1a1/Nrf2/phase II system in the liver. Extensive screening of a chemical library, not limited to proton pump inhibitors, is also in preparation by our group. Fifth, the precise molecular mechanisms by which lansoprazole activates both the phase I and phase II drug-metabolizing systems, i.e., the AhR/Cyp1a1/Nrf2 pathway, are unknown. Studies in Nrf2-null mice may provide a possible clue to answering these questions [31]. Sixth, anti-oxidative stress effects on other organs have not been clarified. Further extensive studies are required to answer these questions.
6,953.8
2014-05-20T00:00:00.000
[ "Biology", "Medicine", "Chemistry" ]
Basic Framework and Main Methods of Uncertainty Quantification Since 2000, the research of uncertainty quantification (UQ) has been successfully applied in many fields and has been highly valued and strongly supported by academia and industry. This review first discusses the sources and types of uncertainties and gives an overall discussion of the goal, practical significance, and basic framework of UQ research. Then, the core ideas and typical methods of several important UQ processes are introduced, including sensitivity analysis, uncertainty propagation, model calibration, Bayesian inference, experimental design, surrogate models, and model uncertainty analysis. Introduction In weapon manufacturing [1], aerodynamics modeling [2], detonation modeling [3], inertial confinement fusion [4], and other natural science and engineering fields, tests, modeling, and simulation are three main approaches to understanding a complex process. The basic elements of scientific exploration are experiment and observation. For example, the cylinder test is a reliable and representative measure of an explosive's ability to accelerate metal [5]. However, due to the limitations of test cost, lack of test conditions, environmental damage caused by testing, political constraints, and other reasons, we sometimes have to rely on other means, such as modeling and simulation (M&S) [6]. The process of mathematical and physical modeling decomposes and refines the complex system and then attempts to reveal the basic principles behind a phenomenon. Often, complex mathematical models, such as partial differential equations (PDEs), have no closed-form solution [7]. Hence, we need to rely on numerical simulation methods, such as finite element or finite difference schemes, to obtain the results. The importance of simulation is that it allows parameters to be changed in the models to understand the cause and effect of complex phenomena which might be too expensive or dangerous to study with conventional experimental methods [8]. In these processes, many uncertainties are introduced. For the experiment, the main sources of uncertainty are observation error and random perturbation of some experimental conditions. Then, because of the complexity of reality, the incompleteness of information, and the limitations of cognition, the mathematical model will neglect some influencing factors and can only approximate the real behavior at a certain level of accuracy, which naturally introduces discrepancies between the physical system and the mathematical model [7]. These discrepancies are a source of uncertainty, including multiple model forms describing the same physical process, random variables, unknown distribution parameters, and initial and boundary conditions. In addition, in the process of numerical solution, time-space discretization, truncation error, rounding error, and the inherent accuracy of the digital system all introduce uncertainties into the system [9]. Generally, uncertainties are broadly classified into two categories: aleatory and epistemic uncertainty. Aleatory uncertainty describes the natural/intrinsic variability of a quantity of interest (QoI). Epistemic uncertainty, on the other hand, describes the lack of knowledge and is potentially reducible by acquiring more knowledge. This lack of knowledge comes from many sources [10], for example, inadequate understanding of the processes, incomplete knowledge of the phenomena, and imprecise evaluation of the related characteristics.
The uncertainties are major obstacles to the predictive capability and reliability of simulations [11]. Different types of uncertainties may be present in a given problem and interact with each other, which will affect almost all aspects of engineering modeling and design. Hence, it is important to quantify the errors in order to be able to interpret the results [9], including identifying the main sources of uncertainty, analyzing how the uncertainty propagates in complex systems, and finding stable optimized solutions across a wide range of inputs, and then to make better decisions at a known level of confidence, so as to reduce development time, prototype cost, and unexpected failures. The research on uncertainty in the deterministic engineering modeling of complex physical processes dates back to around 1980 [12]. After nearly four decades of development, uncertainty quantification (UQ) has played an important role and has been successfully applied in many fields. For example, NASA Ames Research Center has carried out research on Mars atmospheric entry conditions [13] to isolate the rate-limiting mechanisms and identify the chief sources of aeroheating uncertainty. Through Monte Carlo sensitivity analysis and uncertainty analysis, a total of 130 input parameters were statistically varied to shortlist a handful of parameters that essentially control the heat flux prediction. In the research of theoretical nuclear physics, Lawrence Livermore National Laboratory has carried out research on the reliable UQ of nuclear forces [14], which is an urgent necessity in ab initio nuclear structure and nuclear reaction calculations. Based on the discrepancy between the systematic uncertainty of the Skyrme parameters and the sample standard deviation of the six interactions, they discussed and showed how a detailed statistical scrutiny of the nucleon-nucleon scattering data may provide valuable hints on the interplay between theory and experiment and their assumed uncertainties. In addition, UQ and aeroelasticity are important ingredients for the design optimization of more reliable aircraft [15]. Through designing under uncertainty, designers can seek to minimize structural weight while meeting probabilistic safety constraints. At the same time, a surrogate model with high accuracy and low computational complexity is also an important tool for aerodynamic optimization design [16]. Moreover, due to the uncertainty involved, the credibility of risk assessment results is still a major issue [17]. Hence, uncertainty quantification in risk assessment is crucial in high-risk fields such as the nuclear and chemical industries. Parameter uncertainty characterization (representation) must be managed correctly so that it can then be propagated through risk models and produce a satisfactory output. Then, by considering all the uncertainties in the risk estimation, a decision with lower risk can be obtained. Since 2000, the research of UQ has been highly valued and strongly supported by academia and industry and has become one of the important research directions of applied mathematics [18]. At the same time, different approaches to UQ have been developed, for instance, sensitivity analysis, Monte Carlo simulation, response surface approaches, evaluation of classical statistical confidence bounds, Dempster-Shafer theory, and Bayesian inference. At present, the methods of UQ are relatively isolated and have not yet formed a complete system.
This paper gives the basic framework of research on UQ and systematically reviews the core ideas and typical methods of several important UQ processes. Figure 1 shows the connections between some of these essential components of UQ. It is obvious that forward UQ processes, such as sensitivity analysis and uncertainty propagation, always start with the characterization of the input uncertainties. Unfortunately, this information is not always readily available. Such a condition is known as the "lack of input uncertainty information" issue. Up to now, in the uncertainty, sensitivity, and validation studies of engineering, "expert opinion" or "user self-assessment" has been predominantly used [19]. Such ad hoc specifications of input uncertainty information have been considered reasonable for a long time. However, these approaches are subjective, lack mathematical rigor, and can lead to inconsistencies. The Framework of UQ The "lack of input uncertainty information" issue necessitates research on inverse UQ (the backward problem [20] or inverse uncertainty propagation [21]), including model calibration and Bayesian inference. According to Oberkampf [20], "the backward problem asks whether we can reduce the output uncertainty by updating the statistical model using comparisons between computations and experiments." While simulation-based uncertainty quantification seems straightforward, the slow convergence rate poses a major challenge in applications where the computational cost of each sample is high [11]. Experimental design is an effective method, which focuses on how to achieve the same effect as a full factorial design with fewer samples [22]. Another approach for overcoming the difficulty of expensive model simulations is the surrogate model or response surface method [11]. Surrogate models provide an approximate functional mapping M̂ that replaces the true mapping M. Once constructed, the surrogate models can be evaluated at negligible computational cost. Model uncertainty also needs attention. This is mainly aimed at phenomenological models or data-driven models with insufficient theoretical support coupled in complex physical systems or engineering problems, as well as at the influence of numerical discretization techniques, such as the density and type of the discretization mesh. Multiple models with similar effects but different structures can be evaluated and integrated using model averaging and model selection methods, combining the differences between the tests and the simulations [11]. Note that, in some instances, this also depends on the optimization and calibration of model parameters, so as to eliminate the influence of other factors, such as parameter uncertainty. Sensitivity Analysis Sensitivity analysis (SA) studies the impacts of the input parameters on the outputs of a mathematical model or system. Through SA, an importance ranking of the input parameters can be given according to their contribution to the output uncertainties. This ranking provides two main ideas for system optimization design. First, by neglecting the uncertainty of input factors with small importance, we can effectively reduce the dimension of a high-dimensional complex system to mitigate the "curse of dimensionality" and reduce the cost of trial and error, providing more efficient guidance information for system optimization design.
Second, by controlling the uncertainty of the important input factors, the designer can reduce the uncertainty of the model output with minimum economic and time cost, so as to improve the robustness of the model prediction or reduce the failure probability of the structural system to the greatest extent, directly achieving an optimal design of the structure. There are generally two types of sensitivity analysis: local sensitivity analysis and global sensitivity analysis [23]. Both statistical and deterministic methods are used for sensitivity analysis [24]. In principle, both types of procedures can be used for either local or global sensitivity and uncertainty analysis, but in practice, deterministic methods are used mostly for local analysis, while statistical methods are mostly used for global analysis. Local Sensitivity Analysis. Local sensitivity analysis analyzes the effects of local changes of a parameter in the system [25]. It can provide further insight into the local structure of the system. The main method for local sensitivity analysis is the partial derivatives method [26], calculating the exact slopes ∂y/∂x_i, i = 1, ..., m, of the model response y = f(x) with respect to the i-th model parameter x_i. For some complex systems, the partial derivatives can be determined indirectly by recalculating the response using parameter values that deviate by a small amount, δx_i (about 1%), from their nominal values x_i^0, as follows: ∂y/∂x_i ≈ [f(x_1^0, ..., x_i^0 + δx_i, ..., x_m^0) − f(x^0)] / δx_i, where x^0 = (x_1^0, ..., x_m^0). Although this indirect (or brute-force) method [27] is conceptually simple to use and requires no additional model development, it is expensive for models based on nonlinear partial differential equations with many sensitivity coefficients, because each solution of the equations is itself expensive to obtain. Additionally, it involves a trial-and-error process when selecting the parameter perturbations δx_i. Note that erroneous sensitivities will be obtained if (i) δx_i is chosen too small, in which case computational round-off errors will overwhelm the correct values, or (ii) the parameter dependence is nonlinear and δx_i is chosen too large, in which case the assumption of local linearity is violated. Another method of calculating the sensitivity coefficients is the direct method. In the direct method, the sensitivity coefficients are calculated from an auxiliary set of equations derived from the model equations. For some applications of the direct method in kinetics, the model equations and auxiliary equations have been coupled and solved together. This coupled solution procedure, however, has been found to be unstable or to fail completely for several stiff problems, and it is also quite inefficient [27]. For other applications, the auxiliary equations have been decoupled from the model equations and the two sets of equations solved separately. Among these, the most advanced and computationally economical method is the decoupled direct method (DDM) [23], which was originally proposed by Dunker [27]. In this approach, the Jacobian matrix needed to solve the original system at a given time step is also used to solve the sensitivity equations at the respective time step before proceeding to solve both the original and sensitivity systems at the next time step. The decoupling procedure greatly increases the efficiency of the method by taking advantage of the fact that the auxiliary equations for different sensitivity coefficients are quite similar.
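To make the indirect (brute-force) method above concrete, here is a minimal Python sketch of forward-difference local sensitivities. The test function and the 1% relative step are illustrative assumptions, and the step-size caveats discussed above (round-off error for too-small steps, linearization error for too-large steps) apply.

```python
import numpy as np

def local_sensitivities(f, x0, rel_step=0.01):
    """Brute-force local sensitivities dy/dx_i via forward differences.

    Each parameter is perturbed by ~1% of its nominal value, as in the
    indirect method; the step size trades round-off error (too small)
    against linearization error (too large).
    """
    y0 = f(x0)
    grads = np.empty(len(x0))
    for i, xi in enumerate(x0):
        dx = rel_step * xi if xi != 0 else rel_step
        xp = np.array(x0, dtype=float)
        xp[i] += dx
        grads[i] = (f(xp) - y0) / dx
    return grads

# Illustrative model: y = x1^2 * sin(x2).
f = lambda x: x[0] ** 2 * np.sin(x[1])
print(local_sensitivities(f, np.array([2.0, 0.5])))
# analytic gradient: [2*x1*sin(x2), x1^2*cos(x2)] = [1.917..., 3.510...]
```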
To overcome the difficulties connected with the early coupled direct method, the Green's function method (GFM) was developed [28]. The basis for the method is the well-known Green's function technique. By working with the same differential equations as the conventional direct method, this approach reduces the number of differential equations to be solved and replaces them with a set of integrals. Sensitivity coefficients of all orders are expressed in integral form and evaluated in a recursive manner. Since evaluating well-behaved integrals is usually much easier than solving stiff differential equations, substantial savings can be achieved by the method when the number of system parameters is large. Afterwards, a modification of the GFM with the analytically integrated Magnus method (GFM/AIM) [29] was presented, which dramatically reduces the computational effort required to determine linear sensitivity coefficients. The technique employs the piecewise Magnus method for more efficient calculation of Green's function kernels and treats the sensitivity integrals analytically. For large-scale systems, in which the number of system parameters to be considered exceeds the number of responses, the adjoint sensitivity analysis procedure (ASAP) is, by far, the most efficient method, even though it can only be implemented with an appropriately constructed adjoint sensitivity system [30]. The remarkable efficiency of the ASAP stems from the fact that the adjoint sensitivity system is linear in the adjoint function and is independent of any parameter variations. Hence, the adjoint sensitivity equation needs to be solved only once for each response in order to obtain the adjoint function. In particular, if the original model is linear in the state (i.e., dependent) variables, then the adjoint sensitivity equation can be solved independently of the original model. In turn, once the adjoint function has been calculated, it is used to obtain the sensitivities to all system parameters by simple quadratures, without needing to solve differential and/or integral equations repeatedly. In addition, there is another kind of sensitivity, namely, implicit sensitivity. In some cases, implicit sensitivity has been ignored because it is always difficult to quantify, but this may lead to wrong conclusions. For example, in nuclear reactor physics calculations, the perturbation of a microscopic cross section may impact the resonance calculation and consequently the macroscopic cross section, and in turn the eigenvalue and neutron flux distribution. Response (eigenvalue, reaction rate, etc.) sensitivity with respect to cross sections can be divided into two parts, namely, explicit sensitivity and implicit sensitivity. The former is the direct impact of a cross-section perturbation on the responses through the neutron transport equation, while the latter is the indirect impact of a cross-section perturbation on the responses through the resonance self-shielding procedure [31]. As an indirect impact related to the resonance calculation, implicit sensitivity is often neglected in many sensitivity and uncertainty analyses, and many sensitivity and uncertainty analysis codes lack the ability to perform implicit sensitivity calculations. However, from the original research of Greenspan et al. [32] to the subsequent research of Williams et al.
[31], the results indicated that the implicit sensitivity had a non-negligible importance relative to the explicit sensitivity, and that the implicit effect had a magnitude of more than 40% of the explicit effect in some cases. Up to now, however, most implicit sensitivity studies have been established for simple resonance-calculation methods such as the Bondarenko method [31], the generalized Stammler method [33], and so on [34], which are not applicable to complex fuel and core designs. In order to expand the implicit sensitivity analysis method to a wider application domain, Liu et al. [35] proposed a method based on generalized perturbation theory (GPT) to calculate the implicit sensitivity coefficients by using the subgroup method in the resonance self-shielding calculation. The numerical results show that it is necessary to perform implicit sensitivity analysis in sensitivity and uncertainty analysis to obtain more rigorous results. Global Sensitivity Analysis. To overcome the limitations of local methods (linearity and normality assumptions, local variations) [26], another class of methods has been developed in a statistical framework. In contrast to local SA, global SA aims to evaluate the entire parameter space [24] and measure the contribution of input variables to the output from the average point of view. In the global SA framework, the uncertainty of the inputs is modeled by random vectors. Common approaches for global sensitivity analysis include variance-based methods, moment-independent importance measures, and reliability-based SA. The variance-based methods, such as the correlation ratio-based methods and the Sobol method [36], use variance as a measure of the importance of a parameter x_i in contributing to the overall uncertainty of the response y. For the correlation ratio-based method, the sensitivity measure (first-order effect) is written as S_i = Var[E(y | x_i)] / Var(y), where E(y | x_i) denotes the conditional expectation of y while keeping x_i fixed, and Var(y) denotes the variance of y. If x_i has great influence on y, then the fluctuation of y will be basically determined by that of x_i. This means that the variation of y at any given x_i, i.e., Var(y | x_i), should be small, as should E[Var(y | x_i)]. From the decomposition of variance Var(y) = Var[E(y | x_i)] + E[Var(y | x_i)], we can see that Var[E(y | x_i)] should then be large, as should S_i. Hence, S_i reflects the importance of the impact of x_i on y. The advantage of this index is that it does not depend on the specific form of y and is applicable to all systems with a square-integrable response. The disadvantage is that it can only describe the importance of a single variable x_i to y, but cannot reflect the joint influence of a group of variables. To remedy this deficiency, Sobol et al. proposed more generalized indexes based on an orthogonal (Hoeffding [37]) decomposition of the objective function [38]. Under the assumption that the x_i are mutually independent and f(x) is square-integrable, the objective function y = f(x) = f(x_1, ..., x_m) can be divided into the sum of functions of increasing dimension: f(x) = f_0 + Σ_i f_i(x_i) + Σ_{i<j} f_{ij}(x_i, x_j) + ... + f_{1,2,...,m}(x_1, ..., x_m), for a total of 2^m terms, including the constant f_0. The unicity of the decomposition is granted by [39] f_0 = E(y), f_i(x_i) = E(y | x_i) − f_0, f_{ij}(x_i, x_j) = E(y | x_i, x_j) − f_i(x_i) − f_j(x_j) − f_0, and similarly for higher orders, where, for an index set M = {k_1, ..., k_s} ⊆ {1, ..., m}, E(y | x_M) denotes the conditional expectation of y given x_M = (x_{k_1}, ..., x_{k_s}). Because of the orthogonality of the decomposition, E[f_M(x_M) f_N(x_N)] = 0 for M ≠ N. Then, we can get the following variance decomposition: D = Var(y) = Σ_M D_M, where D_M = Var(f_M(x_M)).
In this way, the fluctuation of the output variable y is decomposed into the sum of the fluctuations of several terms. Based on this variance decomposition, several indexes have been proposed. Sobol (1993) [39] introduced S_M = D_M / D, known as the Global Sensitivity Index, which reflects the (possibly joint) effect of the group of input variables x_M on the response y. Homma and Saltelli (1996) [40] introduced the total effect index S_Ti = Σ_{M ∈ M_i} S_M, where S_Ti denotes the total effect of x_i on y and M_i denotes all the subsets of {1, 2, ..., m} that include i. These indexes can be used to measure the importance of the variables, and they can identify whether an interaction effect of a group of input variables on the output exists. More importantly, they impose no restrictions on the specific form of the model: the model can be nonlinear, nonmonotonic, and nonadditive. However, due to the high-dimensional integration of complex functions involved in the calculation of expectations and variances, this method is sometimes difficult to carry out. The class of moment-independent importance measures comprises sensitivity measures based on discrepancies between density functions, cumulative distribution functions, and the value of information [41]. The name "moment independent" communicates the intuition that these sensitivity measures take into account the change in the entire distribution (density) of the model output, instead of the variation of one of its particular moments (e.g., the variance). The common idea of these methods is to construct an index which measures the difference between the conditional distribution (obtained by fixing a variable) and the unconditional distribution, to reflect the importance of the variable for the system. The following are some examples. (i) Density-Based Importance Measures: The first class of moment-independent sensitivity measures introduced is the class of density-based sensitivity measures: δ_i = (1/2) E_{x_i}[∫ |p(y) − p(y | x_i)| dy], where the factor 1/2 is inserted for normalization purposes, p(y) denotes the density function of y, and p(y | x_i) denotes the conditional density function obtained by fixing x_i. In particular, δ_i assumes the null value if and only if y is independent of x_i. In fact, if y and x_i are independent, fixing x_i leaves the distribution of y unchanged. The δ sensitivity measure is defined via the L1-norm. Similarly, several other distance measurements, such as the Kullback-Leibler (K-L) divergence, can also be used to define probabilistic sensitivity measures of the form E_{x_i}[b_i(x_i)], where b_i(x_i) = ∫ p(y) log[p(y) / p(y | x_i)] dy represents the K-L divergence between the conditional and unconditional model output distributions. Finally, while full details about estimation cannot be given due to space limitations, we observe that the estimation of any density-based sensitivity measure involves the problem of estimating the empirical density. (ii) Cumulative Distribution-Based Sensitivity Measures: A second class of moment-independent sensitivity measures takes the separation between cumulative distribution functions into account. In general, we write β_i = E_{x_i}[h(F(y), F(y | x_i))], where F(y) is the unconditional model output cumulative distribution function, F(y | x_i) is the cumulative distribution function conditional on fixing the model input x_i, and h(·, ·) is a distance or divergence between cumulative distribution functions, such as a generic Lp-distance metric, the Kolmogorov-Smirnov distance, or Kuiper's distance. These metrics hold different properties, including scale invariance and transformation invariance. In Baucells and Borgonovo [42], the choice of the metric is thoroughly discussed.
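Returning to the variance-based indices defined earlier, the first-order index S_i = Var[E(y | x_i)] / Var(y) can be estimated without computing conditional expectations explicitly, using a pick-freeze Monte Carlo scheme. The sketch below is a minimal Python illustration; the Saltelli-style estimator and the test function are standard textbook choices assumed here, not taken from this review.

```python
import numpy as np

def first_order_sobol(f, sample, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices.

    Uses two independent input matrices A and B and hybrid matrices ABi
    (A with column i replaced by column i of B): the covariance of f(B)
    and f(ABi) isolates Var[E(y | x_i)].
    """
    rng = np.random.default_rng(seed)
    A, B = sample(rng, n), sample(rng, n)
    yA, yB = f(A), f(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.empty(A.shape[1])
    for i in range(A.shape[1]):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S[i] = np.mean(yB * (f(ABi) - yA)) / var_y  # Saltelli-style estimator
    return S

# Test function: y = sin(x1) + 7*sin(x2)^2 with x1, x2 ~ U(-pi, pi);
# analytic first-order indices are 0.5/6.625 ~ 0.075 and 6.125/6.625 ~ 0.925.
sample = lambda rng, n: rng.uniform(-np.pi, np.pi, size=(n, 2))
f = lambda X: np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2
print(first_order_sobol(f, sample))
```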
In many practical problems, the primary interest of the analyst may be focused on a particular mode of failure of the system under consideration, while the detailed spectrum of probabilistic outcomes may be of secondary concern [43]. For such problems, the so-called reliability-based algorithms [41] provide much faster and more economical solutions regarding the particular mode of failure. The concept of "failure" is characterized by a threshold level that is specified mathematically. The reliability algorithms most often used are first-order reliability methods and second-order reliability methods [44]. Both of these methods use optimization algorithms to seek "the most likely failure point" in the space of uncertain parameters, which is defined by the mathematical model and the response function. Once this most likely failure point (referred to as the "design point") has been determined, the probability of failure is approximately evaluated by fitting a first- (or second-) order surface at that point. Reliability algorithms have been applied to a variety of problems including structural safety, offshore oil field design and operation, and multiphase flow and transport in subsurface hydrology. Uncertainty Propagation Uncertainty propagation (UP, or error propagation) [45] aims to measure the impact of disturbances in the input variables on the system output. This information is critical when the system needs to quantify the confidence of the outputs [9]. Through appropriate methods, we can make horizontal comparisons of the uncertainties generated by several different models, or analyze which steps are the key factors in the diffusion of uncertainty in a complex coupled multiphysics system. Unlike SA, which focuses on the importance ranking of the inputs, UP pays more attention to the result of error propagation in the system and its influence on system stability. In general, probabilistic methods are a mature methodology for uncertain problems when sufficient sample information is available to construct accurate probability distributions of the random inputs [46]. However, due to expensive experimental costs, the sample data in many engineering practices are often scarce. In this case, there is not enough information to make an accurate probabilistic representation of the different types of uncertainties existing in the system. Therefore, some other promising uncertainty theories have emerged, for example, the nonprobabilistic interval process [47] and Dempster-Shafer theory [48]. Compared with traditional probability theory, these new representations of uncertainty are able to represent epistemic uncertainty more accurately. Probability Boxes. In many cases, knowledge of the input is incomplete and probability theory is not sufficient to describe the uncertainty. This motivates the introduction of so-called probability boxes (p-boxes), which account for aleatory as well as epistemic uncertainty in the description of a variable [49]. There are two distinct types of p-boxes, namely, free and parametric p-boxes. Mathematically speaking, a free p-box is defined by lower and upper bounds, denoted by F_L(x) and F_U(x), on the cumulative distribution function (CDF) of a variable. A free p-box can be constructed by Kolmogorov-Smirnov confidence bounds [50], Chebyshev's inequalities [51], or the robust Bayes method [52]. This implies that the true CDF can have an arbitrary shape as long as it fulfills the characteristics of a generic CDF and lies within the bounds of the p-box.
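As a minimal illustration of the Kolmogorov-Smirnov confidence-bound construction of a free p-box mentioned above, the following Python sketch builds bounding CDFs F_L and F_U around an empirical CDF using the Dvoretzky-Kiefer-Wolfowitz inequality; the sample data and confidence level are illustrative assumptions.

```python
import numpy as np

def free_pbox_ks(samples, alpha=0.05):
    """Free p-box from a Kolmogorov-Smirnov (DKW) confidence band.

    Returns the sorted data plus lower/upper CDF bounds F_L <= F <= F_U
    that contain the true CDF with probability at least 1 - alpha.
    """
    x = np.sort(samples)
    n = len(x)
    F = np.arange(1, n + 1) / n                      # empirical CDF at x
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))   # DKW critical distance
    return x, np.clip(F - eps, 0.0, 1.0), np.clip(F + eps, 0.0, 1.0)

x, F_L, F_U = free_pbox_ks(np.random.default_rng(2).normal(size=50))
print(F_U - F_L)  # constant band width 2*eps, except where clipped to [0, 1]
```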
Uncertainty Propagation

Uncertainty propagation (UP, or error propagation) [45] aims to measure the impact of disturbances in the input variables on the system output. This information is critical when one needs to quantify the confidence in the outputs of a system [9]. Through appropriate methods, we can make horizontal comparisons of the uncertainties generated by several different models, or analyze which steps are the key drivers of the diffusion of uncertainty in a complex, multiphysics coupled system. Unlike SA, which focuses on the importance ranking of the inputs, UP pays more attention to the result of error propagation in the system and its influence on system stability. In general, the probabilistic method is a mature methodology for uncertain problems when sufficient sample information is available to construct accurate probability distributions of the random inputs [46]. However, due to expensive experimental costs, the sample data in many engineering practices are scarce. In this case, there is not enough information to build an accurate probabilistic representation of the different types of uncertainties existing in the system. Therefore, some other promising uncertainty theories have emerged, for example, the nonprobabilistic interval process [47] and Dempster-Shafer theory [48]. Compared with traditional probability theory, these new representations of uncertainty are able to represent epistemic uncertainty more accurately.

Probability Boxes.

In many cases, knowledge of the input is incomplete and probability theory is not sufficient to describe the uncertainty. This motivates the introduction of so-called probability boxes (p-boxes), which account for aleatory as well as epistemic uncertainty in the description of a variable [49]. There are two distinct types of p-boxes, namely, free and parametric p-boxes. Mathematically speaking, a free p-box is defined by lower and upper bounds, denoted by \underline{F}(x) and \overline{F}(x), on the cumulative distribution function (CDF) of a variable. A free p-box can be constructed by the Kolmogorov-Smirnov confidence bounds [50], Chebyshev's inequality [51], or the robust Bayes method [52]. This implies that the true CDF can have an arbitrary shape as long as it fulfils the characteristics of a generic CDF and lies within the bounds of the p-box. Because the shape of the true CDF is not specified, different types of curves are possible, including nonsmooth ones [49]. Parametric p-boxes (or distributional p-boxes) are defined as distribution function families whose parameters are known only within intervals, {F(x | θ) : θ ∈ D_Θ}, where D_Θ is the interval domain of the distribution parameters. The lower and upper boundary curves of the parametric p-box are obtained by \underline{F}(x) = min_{θ ∈ D_Θ} F(x | θ) and \overline{F}(x) = max_{θ ∈ D_Θ} F(x | θ). A parametric p-box can be generated by determining confidence bounds on the distribution parameters. Parametric p-boxes allow for a clear separation of aleatory and epistemic uncertainty [7]: aleatory uncertainty is represented by the distribution function family, whereas epistemic uncertainty is represented by the intervals of the distribution parameters. However, parametric p-boxes are more restrictive than free p-boxes because they require knowledge of the distribution family.
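As a small, self-contained illustration of a free p-box, the sketch below builds lower and upper CDF bounds from data using the Kolmogorov-Smirnov-type (Dvoretzky-Kiefer-Wolfowitz) confidence band; the sample data and confidence level are arbitrary choices for the example.

import numpy as np

def ks_pbox(samples, alpha=0.05):
    # Free p-box from the DKW/KS confidence band around the empirical CDF.
    # Returns the sorted data grid and the lower/upper CDF bounds.
    x = np.sort(np.asarray(samples, dtype=float))
    n = x.size
    F_hat = np.arange(1, n + 1) / n                    # empirical CDF at sorted points
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))     # DKW band half-width
    F_lower = np.clip(F_hat - eps, 0.0, 1.0)
    F_upper = np.clip(F_hat + eps, 0.0, 1.0)
    return x, F_lower, F_upper

rng = np.random.default_rng(0)
x, lo, hi = ks_pbox(rng.normal(size=200))
# Any CDF lying inside [lo, hi] is consistent with the data at the 95% level.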
Dempster-Shafer Theory.

Dempster-Shafer theory (D-S theory) is also known as evidence theory. In particular, evidence theory considers two measures, i.e., belief and plausibility, for each event E in the event space Ω: Bel(E) = Σ_{J ⊆ E} m(J) and Pl(E) = Σ_{J ∩ E ≠ ∅} m(J), where the belief measure Bel(E) is defined as the minimum amount of likelihood that must be associated with an event E, whereas the plausibility Pl(E) is the maximum amount that could be associated with E, and m(J) is interpreted as the amount of likelihood that is associated with event J but without any specification of how this likelihood might be apportioned. Note that Bel(E) ≤ Pl(E). The advantage of using evidence theory lies in the fact that it can successfully quantify the degree of uncertainty when the amount of available information is small, and it loosens the strict assumption of a single probability measure P made in probability theory. Apart from that, since there is uncertainty in the given information, the evidential measure for the occurrence of an event and that for its negation do not have to sum to unity. However, the combination rule of Dempster-Shafer theory is based on the assumption that the evidence sources are independent. This rule has been subject to some criticism in the sense that it tends to completely ignore the conflicts that exist between the available evidence from different sources.

Interval Process.

The interval process is considered a flexible method for uncertainty propagation because only the variational ranges need to be well defined, without any detailed statistical characteristics of the uncertain parameters [47]. Along with widespread interest over the recent two decades, many interval approaches have been successfully developed [53]. By treating the response bounds as two extreme value models, the optimization-based methods can theoretically obtain accurate results [54]. When the system response is monotonic with respect to the uncertain parameters, the accurate response bounds can be derived by the interval vertex methods, where only the endpoint combinations of the interval parameters are simulated [55]. When the system does not satisfy monotonicity, the response bounds can be directly simulated via substantial sampling processes, but they can sometimes also be obtained via monotonic interval determination with the help of sensitivity analysis, so as to avoid the huge computational cost. In practice, p-boxes and D-S theory provide measures of the uncertainty of the variables; however, in order to perform the uncertainty propagation, we need to combine them with the interval process.

To obtain more details about the response on each of the variables over different domains, and to weaken the presupposition of monotonicity, the slicing algorithm needs to be introduced. The slicing algorithm transforms the propagation of p-boxes into the propagation of a large number of intervals [7]. First, each p-box is discretized into a number of intervals with associated probability masses: for variable x_i, the probability axis [0, 1] is divided into n_{x_i} subintervals with corresponding probability masses. Then, let K be the set of multi-indices k = (k_1, ..., k_m) defining a combination of intervals, one for each input parameter x_i, and let D_k be the hyperrectangle obtained as the Cartesian product of the intervals selected by k. For each D_k, two optimization problems are solved to define the associated bounds of y: min_{x ∈ D_k} f(x) and max_{x ∈ D_k} f(x). When f is expensive and the n_{x_i} become large, this quickly becomes intractable due to the large number of optimizations. Hence, we sometimes need surrogate models to simplify the process.
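The slicing scheme above can be sketched as follows. The interface pbox_inv_bounds[i](p), returning the lower/upper inverse-CDF bounds of input i at probability level p, is an assumption made for the example, as are the test function and slice counts; a production implementation would add surrogate models, as the text suggests.

import numpy as np
from itertools import product
from scipy.optimize import minimize

def propagate_pbox_slicing(f, pbox_inv_bounds, n_slices=4):
    # Each input p-box is cut into n_slices equiprobable slices; the slice over
    # probability level [p_k, p_{k+1}] is the x-interval
    # [F_upper^{-1}(p_k), F_lower^{-1}(p_{k+1})].  Every combination D_k of
    # slices yields output bounds via two optimizations.
    m = len(pbox_inv_bounds)
    probs = np.linspace(0.0, 1.0, n_slices + 1)
    slices = [[(inv(probs[k])[0], inv(probs[k + 1])[1]) for k in range(n_slices)]
              for inv in pbox_inv_bounds]
    y_lo, y_hi, mass = [], [], 1.0 / n_slices ** m
    for combo in product(range(n_slices), repeat=m):
        box = [slices[i][combo[i]] for i in range(m)]
        x0 = np.array([(lo + hi) / 2 for lo, hi in box])
        y_lo.append(minimize(f, x0, bounds=box).fun)
        y_hi.append(-minimize(lambda x: -f(x), x0, bounds=box).fun)
    return np.sort(y_lo), np.sort(y_hi), mass   # discretized output p-box

# Example: two inputs, each uniform on [0,1] with an epistemic shift in [0, 0.1]
inv = lambda p: (p, p + 0.1)                    # (F_upper^{-1}, F_lower^{-1}) at p
lo, hi, mass = propagate_pbox_slicing(lambda x: x[0] + x[1], [inv, inv])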
Model Calibration

Here we focus on the difference between computer simulations and physical experiments. Model calibration aims to reduce the parameter uncertainty and improve the consistency between computer models and physical experiments by adjusting selected tunable parameters through estimation, optimization approaches, and so on. Tunable parameters refer to modeling or calculation parameters that either have no physical meaning, or have physical meaning but suffer from large cognitive deficits, and that have an obvious influence on modeling and simulation (M&S) results. Calibration can be classified as deterministic or statistical calibration [56]. Deterministic calibration merely determines the point estimates of best-fit input parameters that minimize the discrepancies between code output and experimental data [57]. The earliest calibration work was mostly manual, or based on the modeler's experience. For example, for the parameter calibration of the equation of state in a detonation test, Lee et al. [58] put forward an initial guess for the parameters, varied their values in a hydrocode, and then repeatedly recalculated until a satisfactory agreement between calculation and experimental result was obtained. In these research studies, parameters were "guessed" [58] or "fiddled" [59] based on experience. Current deterministic calibration models range from manually tuning parameters [60] to brute-force search methods [61]; all rely on repeated simulation. An underlying challenge in simulation-based calibration is the curse of dimensionality, which characterizes problems in which the search space (and therefore the computation time) increases drastically, often exponentially, with the number of parameters being simultaneously estimated. Hence, simulation-based calibration methods are more suitable for small-scale problems; for large-scale applications, we need to rely on high-precision surrogate models [62]. Moreover, the calibration problem requires locating the minimum of an objective function that measures the fit of the model results to the data (e.g., the sum of squared errors). Unfortunately, the objective function may have many local minima, and the global minimum may reside in a part of the parameter space that the modeler considers undesirable [60]. The statistical methods, however, can produce a more global calibration result. We review the statistical inverse UQ in detail in the next section.

Bayesian Inference

The inference method based on the Bayesian framework is a typical UQ method under the probabilistic representation. It can be used to analyze how the uncertainty is transferred from input to output in a complex multilevel system. It can also be used to quantify the uncertainty of the input parameters through posterior probability analysis, so as to reduce the discrepancy between experiments and numerical simulations.

Hierarchical Bayesian Approach.

Simulation of a complex physical system involves multiple levels of modeling, from material (lowest level) to component to subsystem to system (highest level). Different interdependent physics-based predictive models, and their implementations in computer simulation codes, are developed at each level. The uncertainties in the input variables are propagated through the simulation codes from one level to the next. The challenges are to identify the relationships between the models at the various levels and the uncertainty propagation between the computational models. To quantify the uncertainties in multilevel models, a hierarchical Bayesian approach has been proposed [63], which is a structural equation modeling (SEM) approach with latent variables, supplemented with Bayesian regression and inference methods. The variabilities and uncertainties are quantified at the individual levels, and the probability of meeting requirements may be assessed by model extrapolation from the subsystem to the full system. By using Bayesian estimation, unbiased estimates are derived for the relationships between latent variables at different levels; the measurement and prediction errors are modeled explicitly, while the variabilities of the input variables in the computational model are updated effectively [64]. Multilevel Bayesian methods allow more flexible assumptions regarding both the model parameters and the prediction error probability distributions [65]. The development of hierarchical Bayesian models has brought about many successful applications in different scientific disciplines [66]. In molecular dynamics, hierarchical models have recently been developed for calibrating parametric models and fusing heterogeneous experimental data from different system operating conditions [67]. In structural dynamics, Behmanesh et al. [68] have developed a hierarchical framework to model and account for the variability of modal parameters over dissimilar experiments. This framework has found extensive applications in the uncertainty quantification and propagation of dynamical models. Nagel et al. [69] have proposed a unified multilevel Bayesian framework for calibrating dynamical models in the special case of noise-free vibration measurements. Sedehi et al. [70] introduced a novel Bayesian hierarchical setting, which breaks time-history vibrational responses into several segments so as to capture and identify the variability of the inferred parameters over multiple segments.

Bayesian Inference for Inverse UQ.

Inverse UQ aims to quantify the uncertainty in the input parameters such that the discrepancies between code output and observed experimental data can be reduced [71]. In this sense, it is similar to model calibration or parameter estimation. However, unlike the deterministic calibration methods, the statistical calibration methods also capture the uncertainty of the estimates rather than merely determining point estimates of the best-fit input parameters [19]. Inverse UQ mainly employs Bayesian inference theory and explores the posterior PDF with Monte Carlo sampling.
According to Bayesian inference theory, the posterior PDF of the input parameters x given the observed data y is obtained as p(x | y) ∝ p(y | x) π(x), where π(x) is the prior, p(y | x) is the likelihood function, and p(x | y) is the posterior. In brief, the prior and posterior probabilities represent degrees of belief about the possible values of x before and after observing the data y. A simple model for the likelihood assumes that independent additive errors account for the deviation between the predicted and observed values of y, i.e., y = y(x) + ε, where the components of ε are i.i.d. (independent and identically distributed) random variables. The posterior PDF p(x | y) is the Bayesian solution to the inverse problem. Compared with most deterministic inverse methods, it results not in a single value but in a PDF, from which various moments and marginal densities can be computed. However, the posterior PDF so defined is nonstandard, implicit, and unnormalized, and we need a numerical sampling method to explore it.

Monte Carlo sampling is commonly used to explore the posterior PDF by approximating the integrals of interest with numerical sample averages. Plain Monte Carlo methods are rarely used, due to the difficulty of randomly sampling from complex or high-dimensional distributions. Instead, Markov chain Monte Carlo (MCMC) methods are commonly used, which are a class of sequential sampling strategies in which the next sampled state depends only on the current state [11]. An MCMC algorithm samples from a given distribution by constructing a Markov chain whose stationary distribution coincides with the target distribution; examples include Metropolis-Hastings sampling and Gibbs sampling. A major challenge of MCMC is that it requires a large number of samples to achieve statistical convergence. Typically, the required number of samples ranges from O(10^5) to O(10^6), with the specific number depending on the shape of the posterior distribution and the effectiveness of the sampling. In CFD applications, each evaluation involves a simulation that takes hours or even weeks to run. Clearly, it is impractical to perform a full simulation for each likelihood evaluation in the MCMC sampling. Therefore, surrogate models are commonly used for likelihood evaluation in MCMC-based model uncertainty quantification to alleviate the high computational cost of the simulations [72].
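A minimal random-walk Metropolis sampler illustrates the MCMC exploration of the posterior described above; the toy forward model, noise level, and prior are invented for the example.

import numpy as np

def metropolis_hastings(log_post, x0, n_samples=50_000, step=0.1, rng=None):
    # Random-walk Metropolis for an unnormalized log posterior
    # log_post(x) = log likelihood + log prior (normalization not needed).
    rng = np.random.default_rng(rng)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    lp = log_post(x)
    chain = np.empty((n_samples, x.size))
    for k in range(n_samples):
        x_prop = x + step * rng.standard_normal(x.size)   # symmetric proposal
        lp_prop = log_post(x_prop)
        if np.log(rng.random()) < lp_prop - lp:           # accept/reject
            x, lp = x_prop, lp_prop
        chain[k] = x
    return chain

# Toy inverse problem: y_obs = model(x) + eps, eps ~ N(0, 0.1^2), prior x ~ N(0, 1)
model = lambda x: x[0] ** 3 + x[0]
y_obs = model(np.array([0.5])) + 0.05
def log_post(x):
    return -0.5 * ((y_obs - model(x)) / 0.1) ** 2 - 0.5 * x[0] ** 2
chain = metropolis_hastings(log_post, x0=[0.0])
posterior_mean = chain[len(chain) // 2:].mean()           # discard burn-in half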
Besides, when the exact probability is not critical and only the low-order moments such as the mean and the variance are important, various approximate Bayesian inference methods can be used. These methods use the maximum a posteriori (MAP) probability estimate to obtain the mode (peak) of the posterior rather than the full posterior distribution. The MAP point can be obtained by finding the optimal value of x that maximizes the log posterior (minimizes the negative log posterior), x_MAP = arg max_x [log p(y | x) + log π(x)]. This effectively eliminates the burn-in procedure of an MCMC chain, in which some initial portion of the Markov chain is discarded, as the MCMC chain can instead be initiated from a high-probability starting point: the MAP solution. Further, the MAP estimate can be computed in several other ways, among which the most commonly used are variational methods [73] and ensemble methods [74]. In variational methods, the minimization problem is often solved by using gradient descent, with the gradient obtained with adjoint methods. In contrast, ensemble methods use samples to estimate the covariance of the state vector, which is then used to solve the optimization problem. Variational methods have been the standard in data assimilation and still dominate the field, while ensemble methods such as ensemble Kalman filtering have matured in the past decades [75]. Hybrid approaches combining both are an area of intense research and have been explored in CFD applications. Moreover, mathematicians have performed analyses to shed light on why these methods work well in practice even with theoretical limitations [76].

Experimental Design

In the M&S procedure, the choice of the experimental design, i.e., the set of input samples, is crucial for an accurate representation of the computational model [77]. Various approaches are available, from purely deterministic to fully stochastic sampling techniques. The performance of a sampling strategy and the quality of its resulting samples directly control the efficiency and robustness of any associated sampling-based analysis. The key point is how to estimate the behavior of the computational model with a few representative samples, which should be carefully chosen so that the experimental design covers the entire space of input parameters.

Initial Experimental Design.

Intuitively, the regular grid is a good choice for covering the whole space of input variables in a deterministic way [77]. It is a full factorial design, i.e., all regions are covered regularly with the same density of samples in each subdomain. A drawback of the regular grid is precisely that the design is full factorial. This implies that it involves a large number of samples, N = l^m, where l is the number of levels in each dimension and m is the number of variables (dimensions). Therefore, we need more efficient methods to generate samples. In theory, Monte Carlo sampling is a purely stochastic design of experiments: the samples are generated randomly according to an assumed probability density function. For more efficient sampling strategies, semirandom designs were created, which combine randomness with a deterministic component [77]. Orthogonal experimental design is one such deterministic method. When there are interactions between variables, the workload of a full experiment becomes very large, even difficult to implement. To solve this problem, orthogonal experimental design is undoubtedly a better choice [78]. A highly efficient tool of orthogonal experimental design is the orthogonal table [79]. The experimenter can find the corresponding orthogonal table according to the number of factors, the number of levels of each factor, and whether there are interactions, and then select some representative points from the comprehensive experiment. The orthogonality makes it possible to achieve, with the least number of experiments, a result equivalent to that of a large number of comprehensive experiments. Therefore, orthogonal table design is an efficient, fast, and economical multifactor test design method [80]. Latin hypercube sampling (LHS) is a special case of an orthogonal array [77] and is the most widely used method at present. The property of this experimental design is that the projection onto any axis of the input space results in a uniform distribution [81]. The space of input parameters is partitioned by a regular grid, and the samples are arranged so that there is exactly one sample in each grid interval along every dimension. Inside each cell, the sample coordinates are chosen randomly [82]; variants include the random Latin hypercube (RLHS) and the median Latin hypercube (MLHS). LHS is typically used to save computer processing time when running Monte Carlo simulations.
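The LHS construction described above is available in SciPy; the snippet below draws a Latin hypercube design, rescales it to hypothetical physical bounds, and verifies the one-sample-per-bin projection property.

import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of n points in d dimensions: the projection onto
# every axis hits each of the n equal-probability bins exactly once.
sampler = qmc.LatinHypercube(d=3, seed=42)
u = sampler.random(n=100)                       # points in the unit cube [0,1)^3

# Scale to the physical ranges of the inputs (bounds are made up for the demo)
x = qmc.scale(u, l_bounds=[0.0, 10.0, -1.0], u_bounds=[1.0, 20.0, 1.0])

# Check the LHS property along dimension 0: exactly one sample per bin
counts, _ = np.histogram(u[:, 0], bins=100, range=(0.0, 1.0))
assert np.all(counts == 1)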
Due to the high dimensionality of the input parameters in reactor-physics M&S, the sample size required by conventional sampling methods is very large, and it is impossible to directly estimate the exact sample size that provides converged UQ results. Therefore, a key issue is to reduce and determine the sample size while still ensuring UQ results consistent with those obtained under an infinite sample size and infinitesimal statistical fluctuations. In this context, Sui et al. [83] proposed a covariance-oriented sample transformation (COST) to generate multivariate normal distribution samples for uncertainty analysis. In this method, samples from the standard normal distribution are transformed linearly such that the mean and covariance of the transformed samples are ensured to be equal to those of the input parameter population. In this way, the transformed samples fully describe the uncertainty information of the input parameters. Numerical comparisons show that COST provides consistent uncertainty analysis results with a very small sample size, whereas conventional sampling methods require a very large one.

Adaptive Experimental Design.

In practice, we hope to obtain a "proper sample size" from the initial sampling process for a given simulation model and sampling-based analysis [84]. First, this refers to a sufficiently large number of sample points to ensure the convergence and robustness of the analysis results; then, on the premise of achieving that goal, we hope the sample size is as small as possible. However, the "proper sample size" is typically not known a priori. This means that we may have to supplement the initial samples during the subsequent analysis, according to the information obtained from the initial samples and calculations. One major drawback of traditional LHS and many other sampling strategies is that they generate all sample points at once [84], which is referred to as one-stage or one-shot sampling. This requires users to specify the sample size prior to the associated sampling-based analysis. It is also often the case that the user is not satisfied with the resulting sampling-based analysis (e.g., convergence criteria are not met) and needs to enlarge the sample size and resume the analysis with the updated/new sample. The need to maintain the desired distributional properties while the sample size grows progressively warrants the development and application of multistage or sequential sampling. In this way, sequential sampling allows the user to monitor the performance of the sampling-based analysis and assess the stopping criteria (e.g., convergence criteria) in an online manner. To address these issues, Sheikholeslami et al. [84] introduced a novel strategy, called PLHS (progressive Latin hypercube sampling), for sequentially sampling the input space while progressively maintaining the Latin hypercube properties. The proposed PLHS is composed of a series of smaller slices, generated in such a way that the union of these slices from the beginning to the current stage optimally preserves the desired distributional properties and at the same time achieves maximum space-filling.
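The COST idea described above can be sketched in a few lines: standard normal samples are empirically whitened and then recolored so that the sample mean and covariance match the target population exactly, not merely in expectation. This is an illustrative reconstruction of the idea, not the authors' code.

import numpy as np

def cost_samples(mu, cov, n, rng=None):
    # Covariance-oriented sample transformation (after Sui et al. [83]):
    # the *sample* statistics of the output equal mu and cov exactly.
    rng = np.random.default_rng(rng)
    m = len(mu)
    z = rng.standard_normal((n, m))
    z -= z.mean(axis=0)                         # exact zero sample mean
    L_emp = np.linalg.cholesky(np.cov(z, rowvar=False))
    z = z @ np.linalg.inv(L_emp).T              # exact identity sample covariance
    L = np.linalg.cholesky(cov)
    return mu + z @ L.T                         # sample mean = mu, sample cov = cov

mu = np.array([1.0, 2.0])
cov = np.array([[1.0, 0.3], [0.3, 0.5]])
x = cost_samples(mu, cov, n=50, rng=0)
assert np.allclose(x.mean(axis=0), mu)
assert np.allclose(np.cov(x, rowvar=False), cov)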
In addition, unlike space-filling DoE techniques (e.g., LHS), there is another special kind of adaptive design technique that generates more design points in the areas of interest, for example, areas with high gradients of the response function. It starts by generating an initial design with one of the space-filling techniques and building an approximation based on this initial sample. The adaptive design technique then enriches the sample iteratively by adding the next best design point in the most interesting regions, minimizing the uncertainty of the approximation [85]. The question then becomes one of determining the meaning of "best". In information theory, the mutual information is a measure of the reduction in the uncertainty of one random variable due to knowledge of another [86]. Recast into the context of experimental design, the mutual information represents how much the proposed experiment and resulting observation would reduce the uncertainties in the model parameters. Therefore, given a set of candidate experimental design conditions, the one that maximizes the mutual information is the most desirable. This is the premise that motivates the Bayesian experimental design algorithm implemented in Dakota [85]. Similarly, Sudret [7] introduced an efficient adaptive experimental design strategy that adds multiple samples at each iteration to increase the accuracy of the estimation of the QoI. Further, this algorithm is equipped with a new stopping criterion which monitors the convergence of the QoI better than existing ones, further reducing the total computational resources needed for an accurate estimation.

Surrogate Model

State-of-the-art numerical simulations are often characterized by a vast number of input parameters [87]. That means these applications can require thousands to millions of runs of the high-fidelity (parameterized) computational models, which is not affordable in many practical cases even with high-performance, parallel computing architectures [7]. Hence, the idea of using a simpler surrogate model to represent a complex phenomenon has gained increasing popularity over the past three decades [88]. A surrogate model, also known as a response surface or meta-model, is a set of easy-to-evaluate mathematical functions that approximates the actual simulation model based on pairs of input-output samples [89]. Here, we focus on surrogate models which treat the computational model as a black box and only need the input values and the corresponding output values of the QoI. Among them, several types of models are widely used and have received sustained attention from researchers: polynomial chaos expansions (PCEs), kriging, reduced-order models (ROMs), and artificial neural networks (ANNs).

Polynomial Chaos Expansions.

PCE models approximate the computational model by a linear combination of multivariate polynomials, y ≈ Σ_{α ∈ N^m} a_α Ψ_α(x), where α = (α_1, ..., α_m) are multi-indices with α_i, i = 1, ..., m, denoting the degree of the univariate polynomial in x_i, the Ψ_α(x) are multivariate orthonormal polynomials built in coherency with the distribution of the input random vector x, and the a_α, α ∈ N^m, are the associated deterministic coefficients. If the simulation response function can be assumed to be a continuous and well-behaved function of the input variables, then one may represent the function as a series expansion about some particular environmental condition [90]. However, since the computational model is not explicitly expressed, the coefficients cannot be obtained by derivation, but rather by regression methods such as least squares (LS) and maximum likelihood (ML) [91]. In practice, the use of infinite-dimensional PCEs is not tractable. In the early days, research focused on how to find the orthonormal basis [92,93], and the so-called projection methods were developed (see [94] for a comprehensive review). These findings opened the possibility of representing stochastic processes with different orthogonal polynomials according to the properties of the process. Typically, for smooth functions, a small number of polynomials are able to represent the output of the computational model accurately. Hence, other researchers have paid attention to obtaining approximate representations by means of truncation, introducing truncation schemes to reduce the number of candidate polynomials [95]. The truncated representation is given by y ≈ Σ_{α ∈ A} a_α Ψ_α(x) + ϵ, in which A is a truncation set and ϵ is the truncation-induced error. A classical truncation scheme consists in selecting all polynomials of total degree less than or equal to p, in which case the truncation set reads A = {α ∈ N^m : Σ_i α_i ≤ p}; a hyperbolic variant uses the q-norm, A = {α ∈ N^m : (Σ_i α_i^q)^{1/q} ≤ p}, where 0 < q ≤ 1 is a hyperparameter and a decreasing q leads to a smaller number of interactive polynomials. Later, some variable selection methods from the field of statistics were gradually introduced into the construction of PCE models to screen out the variables that have an important impact on the response. By means of submodel testing, or by adding a penalty term to the likelihood function or to the least-squares objective function [96], the variables are reduced and a sparse model is constructed.
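A minimal PCE sketch for standard-normal inputs: the total-degree truncation set is enumerated, the design matrix of orthonormal (probabilists') Hermite polynomials is assembled, and the coefficients are found by least squares. The test model and degrees are illustrative assumptions.

import numpy as np
from numpy.polynomial.hermite_e import hermeval
from itertools import product
from math import factorial

def pce_fit(X, y, p=3):
    # Least-squares PCE with total-degree truncation |alpha| <= p, using
    # probabilists' Hermite polynomials (orthonormal for standard normal inputs).
    # X: (n, m) standard-normal samples, y: (n,) model outputs.
    n, m = X.shape
    alphas = [a for a in product(range(p + 1), repeat=m) if sum(a) <= p]
    def psi(a, X):
        # product of normalized univariate Hermite polynomials He_{a_i}(x_i)
        out = np.ones(len(X))
        for i, ai in enumerate(a):
            coef = np.zeros(ai + 1); coef[ai] = 1.0
            out *= hermeval(X[:, i], coef) / np.sqrt(factorial(ai))
        return out
    Psi = np.column_stack([psi(a, X) for a in alphas])   # design matrix
    coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    return alphas, coeffs

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 2))
y = np.exp(0.3 * X[:, 0]) + X[:, 0] * X[:, 1]
alphas, a = pce_fit(X, y, p=4)
# By orthonormality, the mean is ~a[0] and the variance is ~sum of the
# squared remaining coefficients, which also yields Sobol indices for free.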
Kriging.

Kriging [97], also known as Gaussian process modeling, assumes that the computational model is a realization of a Gaussian random process. It is a regression algorithm for the spatial modeling and prediction of a stochastic process or random field based on covariance functions [98]. The property of being the best linear unbiased predictor has made it widely applied in various fields related to spatial statistics, such as geostatistics, environmental science, and atmospheric science. The general form of a kriging surrogate can be formulated as the sum of two terms: a mean-prediction trend, defined by known independent basis functions at the specified location, and a zero-mean random error whose distribution implies the correlation between two distinct samples: y(x) ≈ β^⊤ g(x) + σ z(x, ω), where β^⊤ g(x) = Σ_{j=1}^{n_T} β_j g_j(x) is the mean value (a.k.a. trend) of the Gaussian process, β_j is the trend coefficient corresponding to the j-th basis function g_j, σ² is the process variance, and z(x, ω) is a zero-mean, unit-variance stationary Gaussian process. z(x, ω) is characterized by an autocorrelation function between two sample points; various correlation functions can be found in the literature [99], including the linear, exponential, Gaussian (also called squared exponential), and Matérn autocorrelation functions. According to the complexity of the trend function, there are three popular kriging models [100], namely, simple, ordinary, and universal kriging. Simple kriging assumes that the trend has a known constant value, i.e., β^⊤ g(x) = β_0. Ordinary kriging assumes that the trend has a constant but unknown value, i.e., n_T = 1, g_1(x) = 1, and β_1 is unknown. The most general and flexible is universal kriging, which assumes that the trend is composed of a sum of preselected functions g_j(x). However, specifying a trend, or a value for the mean, when the underlying function is unknown may lead to inaccurate predictions. To avoid this problem, blind kriging has been proposed [101], using a Bayesian technique to select the trend model with maximum posterior probability. Overall, kriging focuses on the local behavior of the computational model, resulting in high prediction accuracy close to the sample points of the experimental design, but its global behavior can be poor. To make up for this, Schöbi et al. [77] proposed a novel PC-kriging approach combining kriging with the classical PCE, in which the trend is given by a (sparse) PCE, y(x) ≈ Σ_{α ∈ A} a_α Ψ_α(x) + σ z(x, ω), which provides a good global approximation property. Numerical results show that PC-kriging performs better than the two traditional meta-modeling techniques taken separately [7].
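A bare-bones kriging predictor with a Gaussian autocorrelation and a known zero trend (a simple-kriging flavor of the formulation above); the hyperparameters are fixed by hand here rather than estimated by maximum likelihood, as a real implementation would do.

import numpy as np

def kriging_predict(X, y, X_new, theta=1.0, sigma2=1.0, nugget=1e-10):
    # Gaussian (squared exponential) autocorrelation:
    # R(x, x') = exp(-||x - x'||^2 / (2 theta^2)); zero trend assumed.
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * theta ** 2))
    R = corr(X, X) + nugget * np.eye(len(X))     # regularized correlation matrix
    r = corr(X_new, X)
    w = np.linalg.solve(R, y)                    # BLUP weights
    mean = r @ w
    var = sigma2 * (1.0 - np.einsum('ij,ji->i', r, np.linalg.solve(R, r.T)))
    return mean, np.maximum(var, 0.0)            # prediction and its variance

X = np.linspace(0, 1, 8)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
mean, var = kriging_predict(X, y, np.array([[0.33]]), theta=0.2)
# The predictor interpolates the data: the variance tends to 0 at sample points,
# which is the "high local accuracy" behavior noted in the text.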
Reduced-Order Model.

In the study of many complex physical processes, the solution is usually high-dimensional, which poses many mathematical challenges for traditional statistical methods. For example, despite the tremendous progress seen in the computational fluid dynamics community over the past few decades, numerical tools are still too slow for the simulation of practical flow problems, consuming thousands or even millions of computational core-hours. To enable feasible multidisciplinary analysis and design, the numerical techniques need to be accelerated by orders of magnitude. Reduced-order modeling (ROM) has been considered one promising approach for such purposes. From a mathematical perspective, the problem of dimensionality/order reduction can be formulated as follows: given the m-dimensional vector q = (q_1, ..., q_m)^⊤, find a reduced representation q_r = (q_{r,1}, ..., q_{r,m_r})^⊤ which retains the geometry of the data as much as possible. Here, m_r is the intrinsic dimensionality, and m_r ≪ m is expected. Intrinsic dimensionality means that the original data q lie on or near a manifold of dimensionality m_r embedded in the m-dimensional space. In general, dimensionality reduction techniques can be divided into two groups, linear and nonlinear methods. Linear techniques assume that the data lie on or near a linear subspace of the high-dimensional space, which can be written as q = W q_r, where W ∈ R^{m×m_r} is a linear transformation matrix. For nonlinear methods, the linear transformation is replaced with q = G(q_r), where G denotes a nonlinear mapping function. Here we introduce two popular ROM technologies; for a detailed review of other ROMs, the reader is referred to the work of Benner et al. [102] and Yu et al. [103]. One of the most popular linear techniques is the proper orthogonal decomposition (POD) [103], which is also known as the Karhunen-Loève procedure, principal component analysis (PCA), Hotelling analysis, empirical component analysis, quasiharmonic modes, and empirical eigenfunction decomposition in different fields. POD was originally introduced to fluid problems to help analyze the coherent structures of turbulence. Through singular value decomposition (SVD), it yields an orthonormal basis, the POD basis, which is optimal in the sense that, for an orthonormal basis of size r, it minimizes the least-squares error of the snapshot reconstruction. Due to its broad applicability to linear and nonlinear systems, as well as to parametrically varying systems, POD has become widely used in many different application domains as a method for computing the reduced basis.
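The POD/SVD construction described above fits in a few lines; the snapshot matrix in the demo is synthetic.

import numpy as np

def pod_basis(Q, r):
    # POD of a snapshot matrix Q (m x n_snap): the SVD of the centered
    # snapshots yields the orthonormal POD basis; keeping the r leading modes
    # minimizes the least-squares snapshot reconstruction error.
    Q_mean = Q.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(Q - Q_mean, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)   # captured "energy" per rank
    return U[:, :r], Q_mean, energy               # m x r basis W

# Reduce and reconstruct: q ~= q_mean + W q_r with q_r = W^T (q - q_mean)
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 100)
t = np.linspace(0, 2 * np.pi, 50)
Q = np.outer(np.sin(2 * np.pi * x), np.cos(t)) + 0.01 * rng.standard_normal((100, 50))
W, Q_mean, energy = pod_basis(Q, r=3)
Q_rec = Q_mean + W @ (W.T @ (Q - Q_mean))         # low-rank reconstruction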
Dynamic mode decomposition (DMD) has been designed to decompose time-resolved data into modes, each of which corresponds to a single characteristic frequency and growth/decay rate. In principle, DMD is the eigendecomposition of a best-fit linear operator that approximates the underlying dynamics embedded in the datasets. A number of improved variants of DMD have appeared in the literature recently. Hemati et al. [104] proposed an efficient DMD method for large datasets, and a parallel version of DMD was developed by Belson et al. [105]. Moreover, it should be noted that the original DMD modes are not orthogonal, which is especially undesirable for a reduced-order model; orthogonalized DMD modes can be obtained with a recursive method [106]. For problems with highly intermittent dynamics, the original DMD method shows poor performance, for which the multiresolution [107] and time-delay DMD [108] methods are promising.
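A sketch of the standard (exact) DMD algorithm described above: project the best-fit linear operator onto the leading POD modes of the data, then take its eigendecomposition. The synthetic snapshot data are for illustration only.

import numpy as np

def dmd(Q, r):
    # Exact DMD: given snapshots Q = [q_0 ... q_T], eigendecompose the best-fit
    # linear operator A with q_{k+1} ~= A q_k, rank-reduced to r POD modes.
    X, Y = Q[:, :-1], Q[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh.conj().T[:, :r]
    A_tilde = U.conj().T @ Y @ V / s              # r x r reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ V / s @ W                         # exact DMD modes
    return eigvals, modes  # each eigenvalue encodes one frequency/growth rate

# Example: a single decaying oscillation is recovered as one conjugate mode pair
t = np.linspace(0, 8 * np.pi, 200)
x = np.linspace(0, 1, 64)[:, None]
Q = np.exp(-0.05 * t) * np.sin(2 * np.pi * x) * np.cos(t)
eigvals, modes = dmd(Q, r=4)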
Artificial Neural Network.

The ANN is also a good choice for surrogate modeling due to its proven universal approximation property; that is to say, a standard multilayer feedforward network with a locally bounded piecewise continuous activation function can approximate any continuous function to any degree of accuracy if and only if the network's activation function is not a polynomial [109,110]. ANNs have been widely applied in modern surrogate modeling, with the advantage of consuming trivial computational effort; thus, they can be used as an assistant for CFD calculations [111,112] over a large number of designed geometries, which greatly increases optimization efficiency. Their accurate generalization and parallel computation capabilities in complex engineering design problems are helpful for the rapid investigation of the design space and the search for the optimal solution [113]. For example, ANNs have been used to expedite the decision-making process in the early stages of aircraft design and to select a proper combination of engine thrust, wing area, and aircraft weight without going through the elaborate details of other direct approaches [114]. More often than not, however, in the course of analyzing complex physical systems, the cost of data acquisition is prohibitive, and we are inevitably faced with the challenge of drawing conclusions and making decisions under partial information. In this small-data regime, the vast majority of state-of-the-art ANN techniques lack robustness and fail to provide any guarantees of convergence [115]. Therefore, unsupervised learning methods for complex mathematical models (PDEs) will become a focus, and a difficulty, of future research. Recently, Raissi et al. [111] introduced physics-informed neural networks, which are data-driven algorithms for inferring solutions to general nonlinear partial differential equations. Their results showcase a series of promising outcomes for a diverse collection of problems in computational science, opening a new path for constructing surrogate models for mathematical physics even under zero-sample circumstances.

Model Uncertainty Analysis

Nowadays, model uncertainty has become one of the most important problems in both academia and industry [116]. The modeling of complex environmental stochastic systems is a difficult task. Because of the complexity of reality, the incompleteness of information, and the limitations of cognition, a mathematical model will neglect some influencing factors and can only approximate the real behavior at a certain level of accuracy [7]. While scientists have made substantial progress in gaining better insight into environmental relationships and changes, model uncertainties are still a major obstacle for the predictive capability of simulations [11]. There are many ways in which model uncertainty may arise: the structure of the model may be misspecified a priori, the model may be identified incorrectly from the data, etc. [117]. All of these situations can cause serious problems. For example, in the simulation of turbulent flow, since it is often not possible to know beforehand whether one or more particular flow features will be present in a new flow configuration, predictions based on the Reynolds-averaged Navier-Stokes equations are flawed by a structural (i.e., model-form) uncertainty [11]. Here we introduce three main approaches to remove or reduce structural uncertainty.

Model Correction.

We first consider the fact that the observed quantities differ from the true ones by the experimental (observational) noise, which may be expressed through the relation z = ζ + ϵ, where ζ is the true value of z and ϵ is a random vector representative of the experimental noise. The experimental noise ϵ is often assumed to be independently distributed without spatial correlation, and it is modeled as a Gaussian process with a diagonal covariance matrix. Theoretically, the true value ζ could be obtained as an output of the model y once a suitable set of parameters x has been identified, i.e., ζ = y(x). In practice, however, few models are perfect [11]: even in the absence of parameter uncertainty, the predicted value will not equal the true value of the process [118]. This discrepancy is due to model inadequacy. A general framework including a model inadequacy term in the stochastic model was first proposed by Kennedy and O'Hagan [119]. Model discrepancy can be taken into account by introducing an additional error term into the statistical model above, which can be of additive nature, ζ = y(x) + η, or of multiplicative nature, ζ = y(x) ∘ η, where the symbol ∘ denotes the Hadamard (element-wise) multiplication and η is a random field representative of the model inadequacy. The choice of the model inadequacy formulation largely depends on the nature of, and prior knowledge about, the observed quantity. In several cases, η may involve additional parameters introduced to describe the error behavior, referred to as hyperparameters, which can be estimated based on likelihood maximization criteria or calibrated from the data along with the physical model parameters. When an additive model inadequacy term is used, it becomes difficult to separate its effect from that of the observational error. As for the multiplicative form, if the competing multiplicative statistical models describe the inadequacy term as a Gaussian process, the observations can also be modeled as a Gaussian process based on the δ-method [120]. Although the use of model inadequacy terms helps alleviate parameter overfitting problems, the approach suffers from several limitations: the correction terms are specific to the observed QoI and depend on the spatial distribution of the observed data for the specific scenario [11,121].
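To illustrate the additive model-correction setup, the sketch below replaces the Gaussian-process discrepancy of Kennedy and O'Hagan with a low-order polynomial in the scenario variable, plus a simple penalty to keep the correction small; every name and number in it is illustrative, and it is a stand-in for, not a reproduction of, the referenced framework.

import numpy as np

def calibrate_with_discrepancy(model, s, z, x_grid, lam=1e-2):
    # Additive correction z ~= y(x; s) + eta(s) + eps: for each candidate x,
    # eta is estimated as a quadratic fit to the residuals; the penalty on eta
    # discourages the discrepancy term from absorbing the whole signal
    # (the identifiability issue noted in the text).
    best = None
    for x in x_grid:
        res = z - model(x, s)                     # raw residuals
        eta_coef = np.polyfit(s, res, deg=2)      # smooth discrepancy eta(s)
        eta = np.polyval(eta_coef, s)
        misfit = np.sum((res - eta) ** 2) + lam * np.sum(eta ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, x, eta_coef)
    return best[1], best[2]                       # calibrated x, discrepancy model

# Toy setup: the "true" process has a structural term the model lacks
s = np.linspace(0, 1, 40)
truth = lambda s: 2.0 * s + 0.3 * s ** 2          # quadratic reality
model = lambda x, s: x * s                        # linear (inadequate) model
z = truth(s) + 0.01 * np.random.default_rng(3).standard_normal(s.size)
x_hat, eta_coef = calibrate_with_discrepancy(model, s, z,
                                             x_grid=np.linspace(1.0, 3.0, 81))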
e "model" here should be interpreted in a broader sense, including not only physical models with associated coefficients but also statistical models. Each model M k consists of a family of distributions p(D | x k , M k ) indexed by x k . For such setups, the Bayesian approach provides a natural and general probabilistic framework that simultaneously treats both model and parameter uncertainty. Coupled with the advent of MCMC methods for posterior computation, the development and application of Bayesian methods for model uncertainty have seen remarkable evolution over the past decade [122]. e comprehensive Bayesian approach for multiple model setups proceeds by assigning a prior probability distribution p(x k | M k ) to the parameters of each model and a prior probability p(M k ) to each model. Margining out the parameters x and conditioning on the data D yields the posterior conditional model probabilities: where is the marginal likelihood of M k . Based on these posterior probabilities, pairwise comparison of models is summarized by the posterior odds Insofar as the priors provide an initial representation of model uncertainty, the model posterior p(M k | D) provides a complete representation of postdata model uncertainty that can be used for a variety of inferences and decisions. By treating p(M k | D) as a measure of the "truth" of model M k , a natural and simple strategy for model selection is to choose the most probable M k , the modal model for which p(M k | D) is largest. However, the drawbacks of this approach exist. e selection of one particular model may lead to riskier decisions. In other words, if we choose a wrong model, the consequence will be disastrous. Moral-Benito [123] already pointed out the concern. From a pure empirical viewpoint, model uncertainty represents a concern because estimates may well depend on the particular model considered. erefore, combining multiple models to reduce the model uncertainty is very desirable. Mathematical Problems in Engineering 13 Model Averaging. e difficulty in making predictions with a single calibrated model clearly calls for a framework based on multimodel ensembles. Multimodel approaches have been used in aerodynamics [124] and many other applications [125,126]. Bayesian modeling averaging is among the most widely used multimodel approaches, which combines multiple models enabling researchers to draw conclusions based on the whole universe of candidate models. Rather than choosing a single best model, the analyst may wish to utilize several models that are thought to be plausible a priori, or that seem to provide a reasonable approximation to the given data for the required objective. Prior knowledge is used to select a set of plausible models, and prior probabilities are attached to them. e data are then used to evaluate posterior probabilities for the different models, after which models with low posterior probabilities may be discarded to keep the problem manageable. Finally, a weighted sum of the predictions from the remaining competing models is calculated. ere are two different approaches to model averaging in the literature, including frequentist model averaging (FMA) and Bayesian model averaging (BMA) [116]. Frequentist approaches focus on improving prediction and use weighted mean of estimates from different models while Bayesian approaches focus on the probability that a model is true and consider priors and posteriors for different models. 
Model Averaging.

The difficulty of making predictions with a single calibrated model clearly calls for a framework based on multimodel ensembles. Multimodel approaches have been used in aerodynamics [124] and many other applications [125,126]. Bayesian model averaging is among the most widely used multimodel approaches; it combines multiple models, enabling researchers to draw conclusions based on the whole universe of candidate models. Rather than choosing a single best model, the analyst may wish to utilize several models that are thought to be plausible a priori, or that seem to provide a reasonable approximation to the given data for the required objective. Prior knowledge is used to select a set of plausible models, and prior probabilities are attached to them. The data are then used to evaluate posterior probabilities for the different models, after which models with low posterior probabilities may be discarded to keep the problem manageable. Finally, a weighted sum of the predictions from the remaining competing models is calculated. There are two different approaches to model averaging in the literature: frequentist model averaging (FMA) and Bayesian model averaging (BMA) [116]. Frequentist approaches focus on improving prediction and use a weighted mean of the estimates from different models, while Bayesian approaches focus on the probability that a model is true and consider priors and posteriors over the different models. The FMA approach does not consider priors, so the corresponding estimators depend solely on the data; for its simplicity, the FMA approach has received some attention over the last decade [127]. The BMA approach yields a posterior of the predicted quantity ψ, p(ψ | D) = Σ_{k=1}^{I} p(ψ | M_k, D) p(M_k | D), given the calibration data D and a set of models. In this framework, the posterior of ψ is an average of the I posterior predictive distributions corresponding to the I competing models, weighted by their respective model posteriors. The Bayesian approaches have the advantage of incorporating arbitrary domain knowledge through a proper prior. However, how to set the prior probabilities, and how to deal with priors that are in conflict with each other, are still open problems [128]. The probably approximately correct (PAC)-Bayes theory, first proposed by McAllester [129], combines Bayesian priors with the PAC performance measure; hence, it can make full use of prior information, provide the tightest generalization error bounds for various learning algorithms, and evaluate the generalization performance of learning algorithms [130]. Generally speaking, compared to model selection methods, the result of an averaged model will not be as good as the (a priori unknown) best model, but it will not be as bad as the worst one [116].

Conclusion

At present, in complex physical and engineering problems, M&S research has gradually developed into a mature system but has also encountered new challenges. The uncertainties introduced by many different sources have become a major obstacle for the predictive capability and the reliability of simulations. The research on UQ aims to enable better decisions, reduce the cost of trial and error during code development, and improve the reliability of simulation, by identifying the main sources of uncertainty, analyzing how the uncertainty propagates, searching for stable optimized solutions, and so on. This paper gives a comprehensive review of the goals, ideas, and principal methods for each of the UQ processes. According to the data flow, the research on UQ can be divided into two categories: forward analysis, concerning the propagation of uncertainty from input to output, and backward analysis, concerning how to infer the inputs from experimental data and simulation output. In addition, as aids, experimental design and surrogate models were introduced to improve the efficiency of UQ. In practice, most of the UQ processes are related to and dependent on each other: a good analysis or approximation requires representative samples, while the process of adaptive sampling depends on a good surrogate model and a sufficient understanding of the inputs, which in turn is obtained through sensitivity analysis, Bayesian inference, etc. Last but not least, a new trend is emerging: machine learning methods, which are suitable for the analysis of big data (e.g., parameter selection and high-dimensional approximation), are found to have great potential in UQ research for complex physical processes and will probably become an important research direction in the future. Finally, it should be noted that this paper aims to highlight the main conceptual ideas of the whole system of UQ research and to focus on innovative concepts; hence, some details of the methods are intentionally or unintentionally omitted to avoid impacting the integrity and coherence of the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
15,561
2020-08-31T00:00:00.000
[ "Engineering", "Mathematics" ]
Analysis of Stray Light and Enhancement of SNR in DMD-Based Spectrometers

Due to advantages such as the high efficiency of light utilization, small volume, and vibration resistance, digital micro-mirror device (DMD)-based spectrometers are widely used in ocean investigations, mountain surveys, and other field science research. In order to eliminate the stray light caused by DMDs, the stray light in DMD-based spectrometers was first measured and analyzed. Then, the stray light was classified into wavelength-related components and wavelength-unrelated components. Moreover, the noise caused by the stray light was analyzed from the perspective of the encoding equation, and the de-noising decoding equation was deduced. The results showed that the accuracy range of absorbance was enhanced from [0, 1.9] to [0, 3.1] in single-stripe mode and from [0, 3.8] to [0, 6.3] in Hadamard transform (HT) multiple-stripe mode. A conclusion can be drawn that the de-noising strategy is feasible and effective for enhancing the SNR in DMD-based spectrometers.

The Background of the Enhancement of SNR in DMD-Based Spectrometers

As novel digital transform spectrometers, digital micro-mirror device (DMD)-based Hadamard transform (HT) spectrometers have been widely used in ocean investigation and other field science studies because of advantages such as the high efficiency of light utilization, resistance to vibration, and small volume [1][2][3][4][5]. However, the stray light caused by the micro-mirrors lowers the signal-to-noise ratio (SNR) of the spectrometer [6,7]. Consequently, strategies for eliminating the stray light in DMD-based spectrometers have attracted more and more attention [8,9]. Carrying out a series of stray light measurements, Kenneth D. et al. quantitatively analyzed the impact of stray light on the SNR in DMD-based spectrometers; this study laid the foundation for eliminating stray light and enhancing the SNR in DMD-based spectrometers [10][11][12]. Wang X. et al. measured the diffraction of light in optical systems and enhanced the SNR in DMD-based spectrometers by optimizing the optical system of the spectrometer [13][14][15]. Zhang Zhihai et al. enhanced the SNR in DMD-based spectrometers by changing the Hadamard transform order [16,17]. Rasmussen et al. introduced an absorption pool to reduce the stray light caused by the "off"-state micro-mirrors, but the stray light and background light beyond the absorption pool still existed [18,19]. Quan X. et al. presented a new compound parabolic concentrator system to suppress the stray light of micro-mirrors beyond the acceptance angle; however, stray light still existed within the acceptance angle [20,21].

Our Work

In this paper, based on the measurement and analysis of the stray light, the stray light was classified into the variable stray light related to the wavelength and the intrinsic stray light unrelated to the wavelength. Moreover, the impact of stray light on the encoding equation was analyzed, and the decoding equation eliminating the stray light was deduced. Finally, the absorbance was corrected in both the single-stripe mode and the HT multiple-stripe mode. The results showed that the accurate range of absorbance was enhanced from [0, 1.9] to [0, 3.1] in the single-stripe mode, and the accurate range of corrected absorbance was enhanced from [0, 3.8] to [0, 6.3] in the HT multiple-stripe mode.
The Analysis and Classification of Stray Light in DMD-Based Spectrometers

The stray light in DMD-based spectrometers is mainly divided into the diffraction of the DMD, the reflection light of micro-mirrors in the "off" state, the reflection light caused by the mechanical structure, the background light, and so on. The prototype of the DMD-based spectrometer is shown in Figure 1. The polychromatic light from the fiber is collimated by the collimating lens. The collimated light is split by the grating and focused onto the DMD by the imaging lens. After encoding with the DMD according to the modulation mode, the different spectrum components are concentrated onto a single detector by the converging lens. Eventually, the spectra are decoded by the computer. The DMD consists of 1024 × 768 micro-mirrors with a pixel size of 13.68 µm × 13.68 µm and a tilt angle of ±12°, mounted on a 14.68 µm × 14.68 µm pitch. The dimension of the grating is 12.8 mm × 6.4 mm with a groove density of 300 lines/mm. The light source is a near-infrared lamp with a spectral range covering 1.35 to 2.45 µm; the light is passed through a fiber with a numerical aperture of 0.2. The light source power is 12 W. The detector is an InGaAs detector with an area of 2 mm². The size of the spectrometer is 150 × 150 × 120 mm³. The stray light I_off (when all the micro-mirrors are in the "off" state) and the signal light I_on (when all the micro-mirrors are in the "on" state) were measured by inserting the filters in the sampling pool. Moreover, by changing the light source intensity, the relationships between the stray light I_off and the signal light I_on were derived.

Figure 1. The prototype of stray light measurement in a DMD-based spectrometer: the light from the light source passes through the collimating lens, grating, imaging lens, DMD, and converging lens to the detector. When all the micro-mirrors are in the "on" state and the "off" state, the detected light signals I_on and I_off can be derived, respectively.

The relationships of I_on and I_off at wavelengths of 1.44 µm, 1.72 µm, 1.92 µm, 2.10 µm, and 2.29 µm were measured, as shown in Figure 2. As shown in Figure 2, with the change in light source intensity, the relationship between I_off_i and I_on_i is linear. Here, i represents the serial number of the different wavelengths. The relationship between I_off_i and I_on_i is fitted with a linear function, as shown in Equation (1):

I_off_i = δ_i · I_on_i + ε_i, (1)

where δ_i is the gradient of the fitted linear function, which corresponds to the variable stray light related to the wavelength, and ε_i is the intercept of the fitted linear function at the different wavelengths, which corresponds to the intrinsic stray light unrelated to the wavelength. The fitted linear function is written in matrix form as Equation (2):

I_off = δ ∘ I_on + ε, (2)

where I_off = [I_off_1 I_off_2 · · · I_off_n]^T is the stray light matrix composed of the I_off_i, I_on = [I_on_1 I_on_2 · · · I_on_n]^T is the signal light matrix composed of the I_on_i, δ = [δ_1 δ_2 · · · δ_n]^T is the matrix composed of the ratio coefficients of variable stray light to signal light, ε = [ε_1 ε_2 · · · ε_n]^T is the matrix composed of the intrinsic stray light, and ∘ denotes element-wise multiplication. According to the ratio of variable stray light to signal light at the sampling wavelengths, the fitting curve of the variable stray light ratio is shown in Figure 3; its longitudinal coordinate shows the ratio of variable stray light to signal light, which is represented by "δ" in this paper. Corresponding to the code equation, the ratio of the variable stray light to signal light over the entire spectral band is extended as Equation (3) in matrix form, where N represents the number of measurements, which corresponds to the number of sampling wavelengths. The intrinsic stray light matrix is shown in Equation (4), where ε is the average value of the measured intrinsic stray light.
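The linear fit of Equation (1) is straightforward to reproduce; the intensity values below are made up for the sketch (the paper's Figure 2 contains the measured curves).

import numpy as np

# Illustrative data: detected stray light I_off vs. signal light I_on at one
# wavelength, measured while varying the light source intensity.
I_on = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
I_off = 0.012 * I_on + 0.003 + 1e-4 * np.random.default_rng(5).standard_normal(5)

# Equation (1): I_off_i = delta_i * I_on_i + eps_i -> linear least-squares fit
delta_i, eps_i = np.polyfit(I_on, I_off, deg=1)
# delta_i: wavelength-dependent (variable) stray light ratio (slope)
# eps_i:   wavelength-independent (intrinsic) stray light (intercept)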
The Impact of Stray Light on the Encoding Equation and Decoding Equation

There are two common coding modes: the single-stripe mode and the Hadamard multiple-stripe mode; the single-stripe mode has the advantage of simplicity, while the Hadamard multiple-stripe mode has the advantage of a high SNR. HT spectrometers boost the SNR according to specific encoding patterns: the H-matrix, the S-matrix, and the complementary S-matrix boost the SNR by factors that grow with the Hadamard order N. In this paper, we adopted the S-matrix in the Hadamard transform coding mode.

The Impact of Stray Light in Single-Stripe Mode

There are two types of spectrum acquisition modes in DMD-based spectrometers: the single-stripe mode and the multiple-stripe mode. The single-stripe coding process is shown in Figure 4.

Figure 4. The sketch map of the single-stripe mode: in the single-stripe mode, only one column of micro-mirrors is turned on, corresponding to the single encoding matrix. The first detected light signal is reflected by mask_1. Accordingly, the second detected light signal is reflected by mask_2, and so on. After the last signal is detected with the reflection of mask_N, the spectra can be derived.

Ideally, the single-stripe scanning process is expressed in matrix form, as in Equation (5):

[I_1 I_2 · · · I_N]^T = U [E_1 E_2 · · · E_N]^T, (5)

where E_1-E_N are the spectral intensities unfolded on the DMD and I_1-I_N are the detected light intensities. The encoding matrix is a unit matrix. The number of matrix columns represents the number of measurements, and the number of matrix rows represents the measured wavelengths. The element "1" in the matrix indicates that the micro-mirror is in the "on" state, and the element "0" indicates that the micro-mirror is in the "off" state. The encoding process can be described in the simplified form of Equation (6):

I = U E, (6)

where I is the detected light intensity matrix, E is the spectral intensity matrix, and U is the N-order unit matrix for coding. However, due to the existence of stray light, the elements "0" in the encoding matrix may effectively be greater than 0.
Based on the variable stray light ratio matrix (3), the spectral encoding equation with variable stray light is given by Equation (7), which can be expressed in the simplified form of Equation (8):

I_δ = [U + (O − U) D_δ] E, (8)

where I_δ is the detected light intensity matrix with variable stray light, D_δ is the diagonal matrix with the entries of δ on its diagonal, and O is the N-order square matrix with all elements equal to "1" (so that O − U marks the off-state mirrors). Based on the intrinsic stray light in DMD-based spectrometers, the encoding equation is further modified to Equation (9), with the simplified form of Equation (10):

I_{δ+ε} = [U + (O − U) D_δ] E + ε, (10)

where I_{δ+ε} is the detected light intensity matrix with the two types of stray light. The naively decoded spectral intensity matrix is E_{δ+ε} = U^{-1} I_{δ+ε}, as in Equation (11), where E_{δ+ε} is the spectral intensity matrix with the two types of stray light. Solving Equation (10) for E, the spectral intensity matrix eliminating the two types of stray light can be expressed as Equation (12):

E = U^{-1} [ I_{δ+ε} − (O − U) D_δ E − ε ]. (12)

To simplify the calculation, we substituted E ≈ E_{δ+ε} into Equation (12); the decoding equation is then derived as Equation (13):

E ≈ U^{-1} [ I_{δ+ε} − (O − U) D_δ E_{δ+ε} − ε ]. (13)

The Impact of Stray Light in Hadamard Multiple-Stripe Mode

The Hadamard coding process of DMD-based spectrometers is shown in Figure 5, which sketches the Hadamard multiple-stripe mode: multiple columns of micro-mirrors are turned on simultaneously, corresponding to the Hadamard encoding matrix. The first detected light signal is reflected by mask_1, the second by mask_2, and so on. After the last signal is detected with the reflection of mask_N, the raw data of the spectrum are acquired; the spectra are then derived by decoding on a computer.

Compared with the single-stripe mode, the encoding matrix is changed from U to H. Accordingly, the encoding equation with the two types of stray light is Equation (14):

I_{δ+ε} = [H + (O − H) D_δ] E + ε, (14)

where H is the Hadamard encoding matrix. The decoding equation eliminating the two types of stray light follows as Equation (15), which can be written as Equation (16):

E = H^{-1} [ I_{δ+ε} − (O − H) D_δ E − ε ]. (16)

To simplify the calculation, we substituted E ≈ E_{δ+ε} into Equation (16); the decoding equation is then derived as Equation (17):

E ≈ H^{-1} [ I_{δ+ε} − (O − H) D_δ E_{δ+ε} − ε ]. (17)
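To make the correction logic of Equations (13) and (17) concrete, the following sketch simulates a single-stripe measurement contaminated with both types of stray light and then applies an Equation (13)-style correction. The leak model (off-state mirrors passing a fraction δ_j of the signal) and all numerical values are illustrative assumptions, not the paper's measured data:

```python
import numpy as np

N = 255
x = np.arange(N)
E_true = np.exp(-0.5 * ((x - N / 2) / 20.0) ** 2)   # ideal spectrum (normal curve)

delta = 0.01 + 0.005 * x / N                         # assumed per-wavelength leak ratios
eps = 0.002                                          # assumed intrinsic stray light
U = np.eye(N)
O = np.ones((N, N))
D_delta = np.diag(delta)

# Encoding with stray light, cf. Equation (10): off-state mirrors leak delta_j * E_j
I_meas = (U + (O - U) @ D_delta) @ E_true + eps

# Naive decode (Equation (11)) and corrected decode (Equation (13))
E_raw = np.linalg.inv(U) @ I_meas                    # = I_meas, since U is the identity
E_corr = E_raw - (O - U) @ D_delta @ E_raw - eps

print("max error, raw      :", np.max(np.abs(E_raw - E_true)))
print("max error, corrected:", np.max(np.abs(E_corr - E_true)))
```

Because the substitution E ≈ E_{δ+ε} only leaves second-order stray-light terms, the corrected error is roughly the square of the leak ratio times the raw error.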
Experiments and Methods

To certify the efficiency of our SNR enhancement strategy in DMD-based spectrometers, we carried out a series of experiments and simulations. The process and method are as follows:
• Firstly, we assumed that the light source spectrum was the constant "1", whereas the ideal absorption spectrum was a normal curve. With this assumption, we could obtain the referential spectrum and absorbance.
• Secondly, the Hadamard transform order was set to 255 during the data processing. Based on the stray light detected in the experiment (as shown in Figure 2), the spectrum with stray light noise was calculated and the noisy absorbance was derived in both single-stripe mode and Hadamard multiple-stripe mode. Consequently, we obtained the raw spectrum and absorbance data with the noise. By contrasting the raw spectrum and absorbance with the referential spectrum and absorbance, the effect of the noise could be quantified; defining the absorbance as accurate when its deviation is less than 0.1, the accuracy range could be derived.
• Thirdly, based on the decoding equations eliminating the two types of stray light, the spectrum and absorbance with the stray light noise removed were derived. By contrasting the corrected spectrum and absorbance with the referential spectrum and absorbance, the accuracy range could be derived in the single-stripe mode and the Hadamard multiple-stripe mode, respectively.
• Finally, by contrasting the accuracy range of the raw absorbance with that of the corrected absorbance, the efficiency of this strategy was certified.

Single-Stripe Mode

Figure 6 shows the impact of stray light on the spectrum in the single-stripe mode; its horizontal ordinate shows the wavelength in micrometers, and its longitudinal coordinate shows the spectral intensity. The light source spectrum was assumed to have a constant value of "1", while the ideal absorption spectrum was a normal curve (curve a in Figure 6). The spectrum with wavelength-related (variable) stray light is shown as curve b, and the spectrum with the two kinds of stray light, variable and intrinsic, is shown as curve c. The absorbance is calculated as Equation (18):

AU = lg(1/T), (18)

where T is the transmittance of the spectrum. The absorbance under the different circumstances is shown in Figure 7; its horizontal ordinate shows the wavelength in micrometers, and its longitudinal coordinate shows the absorbance, represented by "AU" in this paper. The ideal absorbance is shown as curve a, the absorbance with variable stray light as curve b, and the absorbance with the two kinds of stray light as curve c. Defining the deviation between the calculated absorbance and the standard absorbance to be less than 0.1, it could be derived that the accurate range was [0, 1.9].
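The accuracy-range computation can be scripted directly from Equation (18). The sketch below uses an assumed sweep of ideal absorbance values and an assumed relative stray-light level (chosen so that the result lands near the paper's [0, 1.9]; neither value is taken from the paper's data):

```python
import numpy as np

AU_ideal = np.linspace(0.0, 4.0, 400)        # assumed sweep of ideal absorbance values
T_ideal = 10.0 ** (-AU_ideal)                # transmittance via Equation (18)

stray = 0.003                                # assumed total relative stray-light level
T_meas = T_ideal + stray                     # stray light adds a floor to the transmittance
AU_meas = np.log10(1.0 / T_meas)             # Equation (18): AU = lg(1/T)

accurate = np.abs(AU_meas - AU_ideal) < 0.1  # accuracy criterion used in the paper
print("accurate up to AU ≈", AU_ideal[accurate].max())  # ≈ 1.9 for this stray level
```

The intuition is visible in the algebra: the deviation is log10(1 + stray/T), so the accuracy range ends where the true transmittance falls to a few times the stray-light floor.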
Based on Equation (13), the corrected spectrum was derived, and the corresponding absorbance is shown in Figure 8; its horizontal ordinate shows the wavelength in micrometers, and its longitudinal coordinate shows the absorbance ("AU"). Curve a shows the ideal absorbance, curve b the absorbance with the two types of stray light (variable and intrinsic), and curve c the corrected absorbance. The results showed that the accuracy range was enhanced to [0, 3.1].

The Hadamard Multiple-Stripe Mode

Figure 9 shows the impact of stray light on the spectrum in the Hadamard mode. The ideal absorption spectrum is a normal curve, shown as curve a; the spectrum with wavelength-related stray light is shown as curve b, and the spectrum with the two kinds of stray light is shown as curve c.
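The SNR advantage of the Hadamard mode comes from measuring multiplexed sums: detector noise is averaged down during decoding. A minimal Monte-Carlo sketch (assuming additive white detector noise, the regime in which S-matrix multiplexing helps) compares the two acquisition modes:

```python
import numpy as np
from scipy.linalg import hadamard

N = 255
H = hadamard(N + 1)
S = (1 - H[1:, 1:]) // 2                       # 0/1 S-matrix of order N
S_inv = (2.0 / (N + 1)) * (2 * S.T - np.ones((N, N)))

x = np.arange(N)
E = np.exp(-0.5 * ((x - N / 2) / 20.0) ** 2)   # toy spectrum
sigma, trials = 0.01, 200                       # assumed detector noise level
rng = np.random.default_rng(1)

err_single, err_hadamard = [], []
for _ in range(trials):
    noise = rng.normal(0, sigma, N)
    err_single.append(np.std((E + noise) - E))  # single-stripe: I = U E + noise
    E_hat = S_inv @ (S @ E + noise)             # Hadamard: I = S E + noise, then decode
    err_hadamard.append(np.std(E_hat - E))

print("rms error single-stripe:", np.mean(err_single))
print("rms error Hadamard     :", np.mean(err_hadamard))
print("gain ≈", np.mean(err_single) / np.mean(err_hadamard))  # ~ (N+1)/(2*sqrt(N)) ≈ 8
```

For N = 255 the simulated gain is close to the classical S-matrix factor of about 8, which is why the Hadamard mode starts from a wider accuracy range than the single-stripe mode below.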
The absorbance values in the different scenarios are shown in Figure 10; its horizontal ordinate shows the wavelength in micrometers, and its longitudinal coordinate shows the absorbance ("AU"). Curve a shows the ideal absorbance, curve b the absorbance with variable stray light, and curve c the absorbance with the two types of stray light, variable and intrinsic. It could be derived that the accurate range was [0, 3.8].

The correction of the absorbance in the Hadamard mode is shown in Figure 11; its horizontal ordinate shows the wavelength in micrometers, and its longitudinal coordinate shows the absorbance ("AU"). Curve a is the standard (ideal) absorbance, and curve b is the absorbance with stray light; compared with the single-stripe acquisition mode, the range of absorbance accuracy in the Hadamard acquisition mode is already [0, 3.8]. Based on the decoding equation (17), the corrected absorbance was calculated as curve c. The range of absorbance accuracy was enhanced from [0, 3.8] to [0, 6.3].

Conclusions

The aim of this study was to eliminate the stray light in DMD-based spectrometers. Based on experiments measuring stray light, it was classified into two types: variable stray light related to the wavelength, and intrinsic stray light unrelated to the wavelength.
Then, the impacts of stray light on the spectrum were analyzed from the perspective of the encoding equation, and the decoding equation eliminating stray light was derived. Finally, the accurate absorbance range was enhanced from [0, 1.9] to [0, 3.1] in the single-stripe mode, and from [0, 3.8] to [0, 6.3] in the Hadamard multiple-stripe mode. It can be concluded that the denoising strategy is feasible and effective for enhancing the SNR in DMD-based spectrometers.

Author Contributions: Conceptualization, X.C. and X.Q.; methodology, X.C. and X.Q.; analysis, X.C.; manuscript writing, X.C. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement: The data presented in this study are available from the corresponding author upon request.
Capturing human categorization of natural images by combining deep networks and cognitive models

Human categorization is one of the most important and successful targets of cognitive modeling, with decades of model development and assessment using simple, low-dimensional artificial stimuli. However, it remains unclear how these findings relate to categorization in more natural settings, involving complex, high-dimensional stimuli. Here, we take a step towards addressing this question by modeling human categorization over a large behavioral dataset, comprising more than 500,000 judgments over 10,000 natural images from ten object categories. We apply a range of machine learning methods to generate candidate representations for these images, and show that combining rich image representations with flexible cognitive models captures human decisions best. We also find that in the high-dimensional representational spaces these methods generate, simple prototype models can perform comparably to the more complex memory-based exemplar models dominant in laboratory settings.

- I'm not quite sure what the authors mean by "complexity" when they say that the exemplar model is more complex than the prototype model. A prototype model requires some form of abstraction at storage, which is arguably a more complex learning process than simply remembering all or a subset of experienced exemplars. An exemplar model does require more items to be stored, but as many have argued (for example Barsalou), being able to create abstractions on the fly using stored exemplars may provide more cognitive flexibility than requiring the right abstraction to be formed during learning. Now if the authors meant complexity in the sense of the work by Myung, Pitt, Navarro, and others: while it is true that exemplar models are somewhat more complex (in a functional-form sense) than prototype models, I seem to recall that the difference in that kind of complexity is not nearly as large as that between prototype and exemplar models and other kinds of models, but it has been a while since I reviewed that work. In a reply to a previous review, the authors note that exemplar models can predict "more complex … categorization boundaries". To be clear, exemplar models do not learn complex or simple categorization boundaries; those boundaries are entirely implicit, based on the nature of the similarities to exemplars from various categories. That's not a sense of "complexity" that to me would disqualify a model, except perhaps in the eye of certain beholders. There seems, perhaps, to be an attempt to appeal to some common-sense notion that "everyone" believes that exemplar models are "more complex" than prototype models, perhaps to embrace the prototype model as the "winner" in a relatively tied race of model comparison. That's not a very compelling argument, statistically or theoretically, at least in my view.

- (lines 48-54) The distinctions between work in categorization and work in object recognition are not now as sharp as the authors suggest, in my opinion. I would have agreed with them 15 years ago, perhaps; not as much today. While it is true that a significant amount of past work used simple, low-dimensional, highly controlled stimuli (in much the same way that work in attention, memory, and other domains used simple, low-dimensional, highly controlled stimuli), a fair amount of more recent work (with objects, with faces) is using more complex real-world stimuli.

- (lines 53-56) I'm not quite sure what's gained by being so critical of the recent work by Nosofsky and colleagues (with rocks). I think that work is complementary to what's being done here (that complementarity is acknowledged in the discussion, not in the introduction). I hope that Nature Communications does not set a bar for publication that requires disparaging other recent work in order to demonstrate novelty. While the present work collects data from more subjects and more stimuli per category, the Nosofsky work has more categories of objects (30) than the present work (10), and the rocks in the Nosofsky work have a hierarchical structure.

- I'm not sure Figure 1 is necessary.
- One dimension of difference between the present work (CIFAR images) and the past work using simple, low-dimensional, highly controlled stimuli is that the latter often examines what is more like subordinate-level (or sub-subordinate) categorization. In the classic work (and its modern instantiations), the stimuli/objects all have the same repertoire of features that differ in shape, or have fairly subtle quantitative differences along particular dimensions (like the angle and spatial frequency of a Gabor patch). It's like telling apart a Stratocaster from a Telecaster, or a Cabernet from a Merlot. Telling apart an airplane from a frog (in CIFAR) is like telling apart a Cabernet from milk. I guess my real point is that it is fine to have this present work stand on its own merits, given the size of the human dataset, the kind of categorization task it represents, and the approach the work takes combining CNN representations and cognitive models. The present work does not increase its potential impact by critiquing past work rather clumsily.

- I would also note that the nature of the difficulty of the CIFAR stimuli is different from the nature of the difficulty in the more classic experiments using simple stimuli. The CIFAR stimuli are 32x32 and are expanded to 160x160; they're highly pixelated and distorted, and the difficulty comes from this low-pass filtering of a sort. By contrast, with an artificial set of Gabors varying in orientation and spatial frequency that belong to two different categories, the difficulty doesn't stem from some filtering of the stimuli or noise masking the stimuli, but from the fact that two stimuli belonging to different categories are visually very similar in their dimensional representation (say, varying in a subtle way by one degree of orientation).

- pp. 11-12: The authors are likely aware of past work showing, both mathematically and using simulations, the conditions under which exemplar and prototype models mimic one another (a fair amount of that by Nosofsky and by Ashby, and one by Rosseel).

- While I am confident that the authors realize that it is quite easy to come up with category structures where prototype and exemplar models perform similarly (for example, a simple family-resemblance structure), much of the work addressing contrasts between prototype and exemplar models using simple stimuli has been aimed at constructing novel category structures where prototype and exemplar models make different predictions, and showing that human behavior is most often more consistent with exemplar than prototype models. I am not sure that a casual reader who is not an expert in this literature would appreciate that important point.

- So I wonder to what extent the CIFAR images (and their categories) are really designed to distinguish prototype from exemplar models (and indeed, they seem to perform quite similarly here). Are there many instances that are near boundaries, creating the kinds of structures that McKinley and Nosofsky studied and that Ashby and Waldron studied, which led both to argue in favor of more exemplar or exemplar-like representations? In part that depends on the instances and the categories used. Images of an ostrich might easily be classified using a single prototype representation for bird when the alternative categories are cat, deer, frog, and the like. But what if there was a dinosaur? Or some other two-legged creatures? Are there El Caminos (cars) and pick-up trucks (trucks)?
I just don't know if the CIFAR dataset has the kind of structure (especially given the diversity of basic-level categories, ranging from airplanes to frogs) to provide the kind of leverage needed to distinguish exemplar models from prototype models convincingly.

- Perhaps more important: to what extent are the representations learned by the CNNs (which have a logistic hyperplane on the outputs) themselves "prototype-like"? Have the complex manifolds of the objects in the CIFAR dataset been untangled over learning (in the sense of the DiCarlo TiCS paper), such that it is simply enough to plop down a linear decision boundary (which is mathematically equivalent to a simple prototype model) and classify with reasonable accuracy? One of the strengths of exemplar models (that isn't highlighted in the paper) is that the same object representations (in a multidimensional psychological space) and the same exemplar representations can be used for object categorization (more abstract), object identification (more specific), category typicality, and recognition memory, albeit with different weights on dimensions depending on their diagnosticity. The CIFAR object representations learned by CNN models may be specific to doing the kinds of categorizations (the 10 categories) those CNNs were trained on.

Response: We are sincerely grateful to Reviewer #1 for their patience and recommendations, and believe the manuscript to be much improved as a result.

Reviewer #1: If a more expert reviewer on those issues supports publication too, I'm very happy to join them. But if those other concerns remain, it's not my place to say whether this revision has addressed them or not.

Response: Reviewer #2, below, is such an expert, and we have endeavored to comprehensively address all of their comments, technical and otherwise, hopefully in a suitably accessible manner.

Reviewer #2: While I think the approach of combining CNN models and cognitive models is an excellent one, I have a number of concerns, questions, and comments that give me some pause in recommending publication.

Response: We thank Reviewer #2 for their time and insight in reviewing the work, and hope we have addressed their comments in a satisfying manner below and in the highlighted changes to the manuscript. In general, we felt these comments fell under two themes: our coverage and portrayal of existing work, and the nature of the categorization task given the set of images for which we have collected data. To address the former, we have added a number of sections and rephrasings to the text that better communicate the literature on categorization models and offer our findings in a more complementary manner. Indeed, this was always our intent: to highlight the changes to existing approaches and debates that necessarily accompany a move to naturalistic stimuli, rather than to claim any single strategy is better than the others. To address the latter, we have mainly added text to the results and discussion that incorporates the comments, as well as giving our justifications below. We really do feel that using these images (and using large numbers of them) has important advantages.

Reviewer #2 (point 1): I'm not quite sure what the authors mean by "complexity" when they say that the exemplar model is more complex than the prototype model.
A prototype model requires some form of abstraction at storage, which is arguably a more complex learning process than simply remembering all or a subset of experienced exemplars. An exemplar model does require more items to be stored, but as many have argued (for example Barsalou), being able to create abstractions on the fly using stored exemplars may provide more cognitive flexibility than requiring the right abstraction to be formed during learning. Now if the authors meant complexity in the sense of the work by Myung, Pitt, Navarro, and others: while it is true that exemplar models are somewhat more complex (in a functional-form sense) than prototype models, I seem to recall that the difference in that kind of complexity is not nearly as large as that between prototype and exemplar models and other kinds of models, but it has been a while since I reviewed that work. In a reply to a previous review, the authors note that exemplar models can predict "more complex … categorization boundaries". To be clear, exemplar models do not learn complex or simple categorization boundaries; those boundaries are entirely implicit, based on the nature of the similarities to exemplars from various categories. That's not a sense of "complexity" that to me would disqualify a model, except perhaps in the eye of certain beholders. There seems, perhaps, to be an attempt to appeal to some common-sense notion that "everyone" believes that exemplar models are "more complex" than prototype models, perhaps to embrace the prototype model as the "winner" in a relatively tied race of model comparison. That's not a very compelling argument, statistically or theoretically, at least in my view.

Response: Thank you for the comment; we appreciate the opportunity to provide clarity here. Our characterization of complexity is based on statistical learning theory, and can be expressed in terms of the size of the space of functions from which a decision boundary is selected based on a sample. Prototype models historically consider only linear functions (i.e., linear boundaries), but some variants can also support quadratic functions (as mentioned in the paper). On the other hand, exemplar models are members of the class of kernel-based models, which are known to be "universal function approximators" (i.e., given enough data, they can approximate any measurable or continuous function up to any desired accuracy). The difference in complexity is therefore the difference between the space of boundaries characterized by linear and quadratic functions and the space of all possible boundaries. We have added a short sentence clarifying this assessment in the results (lines 236-247). Indeed, although not mentioned explicitly in, for example, Myung, Pitt, and Navarro (2007), it is captured implicitly in the functional-form term of their model complexity score, which grows with the number of exemplars. However, it is not our intention to use this kind of complexity to argue that one class of models is better than the other. For example, linear boundaries have less representational capacity (i.e., they simply can't represent complex nonlinear boundaries); however, they require drastically less data to generalize well and suffer much less from the "curse of dimensionality". Exemplar models have the opposite advantages and disadvantages.
Further, there are relevant considerations outside the classic framework of statistical learning (e.g., as the reviewer mentions, keeping exemplars around can be useful for on-the-fly representation construction, etc.). For these reasons, we agree with the reviewer that exemplar models should not be "disqualified" on any such basis; nor would we disqualify prototype models for being fundamentally limited. More generally, we don't see our findings as helping prototype models "win out" over exemplar models (this seems both unlikely and unhelpful), but as an interesting assessment of alternate strategies in a novel context. Finally, we agree that exemplar models represent decision boundaries only implicitly, and this is also true for prototype models. However, implicit or explicit, the correspondence between a density estimator and its class of corresponding boundaries is exact and well understood (e.g., each corresponds to a Bayes-optimal decision boundary, as established by Ashby & Alfonso-Reese, 1995). While space is more limited than we would like, we have revised and added references to the second paragraph of the introduction (lines 36-42), and elsewhere draw attention to the advantages of exemplar models that are not encompassed by the framework discussed above.

Reviewer #2 (point 2): (lines 48-54) The distinctions between work in categorization and work in object recognition are not now as sharp as the authors suggest, in my opinion. I would have agreed with them 15 years ago, perhaps; not as much today. While it is true that a significant amount of past work used simple, low-dimensional, highly controlled stimuli (in much the same way that work in attention, memory, and other domains used simple, low-dimensional, highly controlled stimuli), a fair amount of more recent work (with objects, with faces) is using more complex real-world stimuli.

Response: To clarify, while there is increasing empirical work using more naturalistic stimuli in both of these literatures, the theoretical literature on cognitive models of categorization has almost exclusively focused on experiments with simplistic stimuli. This makes sense: these models require identifying the features of stimuli and cannot simply be applied in pixel space directly. Our goal is to bring theoretical work on cognitive models of categorization into line with the advances that have been made in other areas of cognitive science and neuroscience using naturalistic stimuli. We discuss this point in the third paragraph of the introduction (lines 52-65).
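To make the prototype/exemplar contrast in this exchange concrete, here is a minimal sketch (our illustration, not the authors' code) of the two model classes over a generic feature representation: the prototype rule scores a category by similarity to the class mean, which induces a linear boundary, while the exemplar rule sums similarity to every stored training item, a kernel-style (GCM-like) rule that can trace more complex boundaries:

```python
import numpy as np

def prototype_scores(X_train, y_train, X_test, beta=1.0):
    """Similarity to each class mean (induces a linear decision boundary)."""
    classes = np.unique(y_train)
    protos = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_test[:, None, :] - protos[None, :, :], axis=-1)
    return np.exp(-beta * d)  # Shepard-style exponential similarity

def exemplar_scores(X_train, y_train, X_test, beta=1.0):
    """Summed similarity to all stored exemplars (kernel-style rule)."""
    classes = np.unique(y_train)
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=-1)
    sim = np.exp(-beta * d)
    return np.stack([sim[:, y_train == c].sum(axis=1) for c in classes], axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)); y = (X[:, 0] * X[:, 1] > 0).astype(int)   # XOR-like structure
Xt = rng.normal(size=(500, 2)); yt = (Xt[:, 0] * Xt[:, 1] > 0).astype(int)
for name, f in [("prototype", prototype_scores), ("exemplar", exemplar_scores)]:
    acc = (f(X, y, Xt).argmax(axis=1) == yt).mean()
    print(name, "accuracy:", round(acc, 2))
```

On this deliberately nonlinear (XOR-like) structure the exemplar rule wins easily, since both class means sit near the origin; on a linearly separable structure the two rules perform comparably, which mirrors the paper's finding that prototype models can match exemplar models in rich, high-dimensional representational spaces.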
Point process convergence for symmetric functions of high-dimensional random vectors

The convergence of a sequence of point processes with dependent points, defined by a symmetric function of iid high-dimensional random vectors, to a Poisson random measure is proved. This also implies the convergence of the joint distribution of a fixed number of upper order statistics. As applications of the result, a generalization of maximum convergence to point process convergence is given for simple linear rank statistics, rank-type U-statistics and the entries of sample covariance matrices.

Introduction

In classical extreme value theory the asymptotic distribution of the maximum of random points plays a central role. Maximum-type statistics are popular tests of the dependency structure of high-dimensional data; against sparse alternatives in particular, such tests possess good power properties (see [15,18,34]). Closely related to the maxima of random points are point processes, which play an important role in stochastic geometry and data analysis; they have applications in statistical ecology, astrostatistics and spatial epidemiology [1]. For a sequence (Y_i)_i of real-valued random variables, we set

M̃_p := Σ_{i=1}^p ε_{(i/p, Y_i)},

where ε_x is the Dirac measure in x. Let K := (0,1) × (u, ∞) with u ∈ R. Then M̃_p(K) counts the number of exceedances of the threshold u by the random variables Y_1, ..., Y_p. If Y_(k) denotes the k-th upper order statistic of Y_1, ..., Y_p, it holds that {M̃_p(K) < k} = {Y_(k) ≤ u}, and in particular {M̃_p(K) = 0} = {max_{i=1,...,p} Y_i ≤ u}. Therefore, the weak convergence of a sequence of point processes gives information about the joint asymptotic distribution of a fixed number of upper order statistics. If the sequence (Y_i)_i consists of independent and identically distributed (iid) random variables, maximum convergence and point process convergence are equivalent; but if the random variables exhibit dependence, this equivalence does not necessarily hold anymore. In this sense, point process convergence is a substantial generalization of maximum convergence. Additionally, the time components i/p deliver valuable information about the random times at which a record occurs, i.e., the time points when Y_j > max_{i=1,...,j−1} Y_i. Our main motivation comes from statistical inference for high-dimensional data, where the asymptotic distribution of the maximum of dependent random variables has found several applications in recent years (see for example [6,7,8,9,15,17,18,34]). The objective of this paper is to provide the methodology to extend meaningful results on the convergence of the maximum of dependent random variables to point process convergence.

To this end, we consider dependent points T_i := g_{n,p}(x_{i_1}, x_{i_2}, ..., x_{i_m}), where the index i = (i_1, i_2, ..., i_m) ∈ {1, ..., p}^m. The random vectors x_1, ..., x_p are iid on R^n and g_{n,p} : R^{mn} → R is a measurable, symmetric function. Important examples include U-statistics, simple linear rank statistics, rank-type U-statistics, the entries of sample covariance matrices and interpoint distances.
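As a quick numerical illustration of the identity {M̃_p(K) = 0} = {max_i Y_i ≤ u} (a sanity-check simulation of ours, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
p, u = 1000, 2.5
Y = rng.normal(size=p)

# Points of the process are (i/p, Y_i); M_p(K) counts exceedances of u on K=(0,1)x(u,inf)
M_p_K = np.sum(Y > u)
k = 3
Y_sorted = np.sort(Y)[::-1]                    # upper order statistics Y_(1) >= Y_(2) >= ...
assert (M_p_K < k) == (Y_sorted[k - 1] <= u)   # {M_p(K) < k} = {Y_(k) <= u}
assert (M_p_K == 0) == (Y.max() <= u)          # zero exceedances iff the maximum stays below u
print("exceedances of u:", M_p_K)
```

The asserts hold deterministically for every sample, which is exactly why counting statements about the point process translate into distributional statements about order statistics.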
Additionally, we assume that the dimension n of the points grows with the number of points p. Over the last decades, the environment, and therefore the requirements for statistical methods, have changed fundamentally. Due to the huge improvement of computing power and data-acquisition technologies, one is confronted with large data sets where the dimension of the observations is as large as, or even larger than, the sample size. Such high-dimensional data occur naturally in online networks, genomics, financial engineering, wireless communication and image analysis (see [11,14,21]). Hence, the analysis of high-dimensional data has developed into a meaningful and active research area.

We will show that the corresponding point process of the points T_i converges to a Poisson random measure (PRM) whose mean measure involves the m-dimensional Lebesgue measure and an additional measure µ. If we replaced the points T_i by iid random variables with the same distribution, the (non-degenerate) limiting distribution of the maximum would necessarily be an extreme value distribution of the form exp(−µ(x)), and the convergence of the corresponding point process would be equivalent to the condition

\binom{p}{m} P(g_{n,p}(x_1, x_2, ..., x_m) > x) → µ(x), n → ∞. (1.1)

However, since the random points T_i are not independent, we additionally need the following assumption on the dependence structure:

p^{2m−l} P(g_{n,p}(x_1, x_2, ..., x_m) > x, g_{n,p}(x_{m−l+1}, ..., x_{2m−l}) > x) → 0, (1.2)

where l = 1, ..., m−1.

In the finite-dimensional case where n is fixed, several results about point process convergence are available in similar settings. In [31], Silverman and Brown showed point process convergence for m = 2, n = 2 and g_{2,p}(x_i, x_j) = a_p ∥x_i − x_j∥²_2, where the x_i have a bounded and almost everywhere continuous density, a_p is a suitable scaling sequence and ∥·∥_2 is the Euclidean norm on R². In the Weibull case µ(x) = x^α for x, α > 0, Dehling et al. [12] proved a generalization to points with a fixed dimension and g_{n,p}(x_i, x_j) = a_p h(x_i, x_j), where h is a measurable, symmetric function and a_p is a suitable scaling sequence. Also in the finite-dimensional case, under assumptions similar to (1.1) with µ(x) = βx^α for x, α > 0 and β ∈ R, and under condition (1.2), Schulte and Thäle [29] showed convergence in distribution of point processes towards a Weibull process. The points of these point processes are obtained by applying a symmetric function g_{n,p} to all m-tuples of distinct points of a Poisson process on a standard Borel space. In [30], this result was extended to more general functions µ and to binomial processes, so that other PRMs become possible limit processes. In [13], Decreusefond, Schulte and Thäle provided an upper bound on the Kantorovich-Rubinstein distance between a PRM and the point process induced in the aforementioned way by a Poisson or a binomial process on an abstract state space; notice that convergence in Kantorovich-Rubinstein distance implies convergence in distribution (see [26, Theorem 2.2.1] or [13, p. 2149]). In [10], another point process result in a similar setting is given for the number of nearest-neighbor balls in fixed dimension. Moreover, [4] presents a general framework for Poisson approximation of point processes on Polish spaces.
1.1. Structure of this paper. The remainder of this paper is structured as follows. In Section 2 we prove weak point process convergence for the dependent points T_i in the high-dimensional case, as a tool for the generalization of maximum convergence (Theorem 2.1). We provide popular representations of the limiting process in terms of the transformed points of a homogeneous Poisson process. Moreover, we derive point process convergence for the record times. In Section 3 these tools are applied to study statistics based on relative ranks, such as simple linear rank statistics and rank-type U-statistics. We also prove convergence of the point processes of the off-diagonal entries of large sample covariance matrices. The technical proofs are deferred to Section 4.

1.2. Notation. Convergence in distribution (resp. probability) is denoted by →d (resp. →P), and unless explicitly stated otherwise all limits are for n → ∞. For sequences (a_n)_n and (b_n)_n we write a_n = O(b_n) if a_n/b_n ≤ C for some constant C > 0 and every n ∈ N, and a_n = o(b_n) if lim_{n→∞} a_n/b_n = 0. Additionally, we use the notation a_n ∼ b_n if lim_{n→∞} a_n/b_n = 1, and a_n ≲ b_n if a_n is smaller than or equal to b_n up to a positive universal constant. We further write a ∧ b := min{a, b} for a, b ∈ R, and for a set A we denote by |A| the number of elements of A.

2. Point process convergence

We introduce the model that was briefly described in the introduction. Let x_1, ..., x_p be iid R^n-valued random vectors with x_i = (X_{i1}, ..., X_{in})^⊤, i = 1, ..., p, where p = p_n is some positive integer sequence tending to infinity as n → ∞. We consider the random points

T_i := g_n(x_{i_1}, x_{i_2}, ..., x_{i_m}),

where i = (i_1, i_2, ..., i_m) ∈ {1, ..., p}^m and g_n = g_{n,p} : R^{mn} → R is a measurable and symmetric function; symmetric means g_n(y_1, y_2, ..., y_m) = g_n(y_{π(1)}, y_{π(2)}, ..., y_{π(m)}) for all y_1, y_2, ..., y_m ∈ R^n and all permutations π of {1, 2, ..., m}. We are interested in the convergence of the point processes

M_n := Σ_{1 ≤ i_1 < i_2 < ... < i_m ≤ p} ε_{(i/p, T_i)}, where i/p = (i_1/p, ..., i_m/p),

towards a PRM M whose mean measure η is built from the m-dimensional Lebesgue measure on the time components and the measure generated by µ on the state component. We consider the M_n's and M as random measures on the state space S = S_1 × (v, w), where S_1 denotes the set of ordered time vectors t ∈ (0,1]^m with t_1 < ... < t_m, with values in M(S), the space of point measures on S endowed with the vague topology (see [27]). The following result establishes the convergence M_n →d M, which denotes convergence in distribution in M(S).

Theorem 2.1. Let x_1, ..., x_p be n-dimensional, independent and identically distributed random vectors, where p = p_n is some sequence of positive integers tending to infinity as n → ∞. Additionally, let g = g_n : R^{mn} → (v, w) be a measurable and symmetric function, where v, w ∈ R̄ = R ∪ {∞, −∞} and v < w. Assume that there exists a function µ : (v, w) → R_+ with lim_{x→v} µ(x) = ∞ and lim_{x→w} µ(x) = 0 such that, for x ∈ (v, w) and n → ∞,

(A1) \binom{p}{m} P(g_n(x_1, x_2, ..., x_m) > x) → µ(x), and
(A2) p^{2m−l} P(g_n(x_1, ..., x_m) > x, g_n(x_{m−l+1}, ..., x_{2m−l}) > x) → 0 for l = 1, ..., m−1.

Then M_n →d M as n → ∞.

Note that (A1) ensures the correct specification of the mean measure, while (A2) is an anti-clustering condition; both conditions are standard in extreme value theory. It is worth mentioning that, under (A1),

P(max_i T_i ≤ x) → exp(−µ(x)) =: H(x),

where we use the conventions µ(x) = 0 if x > w, µ(x) = ∞ if x < v, and exp(−∞) = 0. The typical distribution functions H are the Fréchet, Weibull and Gumbel distributions. In these cases, the limiting process M has a representation in terms of the transformed points of a homogeneous Poisson process. Let (U_i)_i be an iid sequence of random vectors uniformly distributed on S_1, and let Γ_i = E_1 + ... + E_i, where (E_i)_i is an iid sequence of standard exponentially distributed random variables, independent of (U_i)_i. It is well known that N_Γ := Σ_{i=1}^∞ ε_{Γ_i} is a homogeneous Poisson process, and hence, for every A ⊂ (0, ∞), N_Γ(A) is Poisson distributed with parameter λ_1(A) (see for example [16, Example 5.1.10]). For the mean measure η of M one checks on products of intervals that η factorizes into the uniform distribution of the U_i on S_1 and the measure generated by µ. We get the following representations for the limiting processes M.
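Before stating the representations, here is a simulation sketch of the idea: the limit points arise as transformations of the Poisson arrival times Γ_i. We assume the standard transformations Γ_i^{−1/α} for the Fréchet case and −log Γ_i for the Gumbel case (our assumption for this illustration, since only the general shape is stated above):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, x, reps, n_points = 2.0, 1.0, 5000, 100

count_f = count_g = 0
for _ in range(reps):
    Gamma = np.cumsum(rng.exponential(size=n_points))  # Poisson arrival times Gamma_i
    count_f += (Gamma ** (-1 / alpha)).max() <= x      # assumed Frechet transformation
    count_g += (-np.log(Gamma)).max() <= x             # assumed Gumbel transformation

# The maximum of the transformed points is attained at Gamma_1, so the two
# Monte-Carlo frequencies should match the Frechet and Gumbel distribution functions.
print("Frechet: MC", count_f / reps, "vs exp(-x^-a) =", np.exp(-x ** -alpha))
print("Gumbel : MC", count_g / reps, "vs exp(-e^-x) =", np.exp(-np.exp(-x)))
```

Under these transformations the largest transformed point has exactly the Fréchet (resp. Gumbel) law, matching the statement P(max_i T_i ≤ x) → exp(−µ(x)).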
A direct consequence of the point process convergence is the convergence of the joint distribution of a fixed number of upper order statistics. In the Fréchet, Weibull and Gumbel cases the limit can be described as the joint distribution function of transformations of the points Γ_i.

Corollary 2.2. Let G_{n,(j)} be the j-th upper order statistic of the random variables (g_n(x_{i_1}, x_{i_2}, ..., x_{i_m})), where 1 ≤ i_1 < i_2 < ... < i_m ≤ p. Under the conditions of Theorem 2.1 and for fixed k ≥ 1, the joint distribution function of (G_{n,(1)}, ..., G_{n,(k)}) converges as n → ∞; in particular, in the Fréchet, Weibull and Gumbel cases, the limit is the joint distribution function of the first k transformed points of (Γ_i). By the representation of the limiting point process in these three cases, the limit in (2.4) equals one of the three distribution functions in the corollary. □

One field where point processes find many applications is stochastic geometry: the paper [29], for example, considers order statistics for Poisson k-flats in R^d, Poisson polytopes on the unit sphere, and random geometric graphs. Setting k = 1 in Corollary 2.2, we obtain the convergence in distribution of the maximum of the points T_i.

Corollary 2.3. Under the conditions of Theorem 2.1 we get

P(max_i T_i ≤ x) → exp(−µ(x)), n → ∞.

Example 2.4 (Interpoint distances). Let x_i = (X_{i1}, ..., X_{in})^⊤, i = 1, ..., p, be n-dimensional random vectors whose components (X_{it})_{i,t≥1} are independent and identically distributed random variables with zero mean and variance 1. We are interested in the asymptotic behavior of the largest interpoint distances ∥x_i − x_j∥_2, 1 ≤ i < j ≤ p, where ∥·∥_2 is the Euclidean norm on R^n. Figure 1 shows the four largest interpoint distances of 500 points in R² with independent standard normal components; note that three of the four largest distances involve the same outlying vector x_i. Under a suitable moment assumption on the entries (the existence of a moment of order s for sufficiently large s) and with scaling and centering sequences (b_n)_n and (c_n)_n chosen as in [19], the conditions (A1) and (A2) in Theorem 2.1 hold for m = 2, so the point process of scaled interpoint distances converges.

Record times. In Theorem 2.1 we showed convergence of point processes including time components. Therefore, we can additionally derive results for the record times L(k), k ≥ 1, of the running maxima of the points T_i = g_n(x_{i_1}, x_{i_2}, ..., x_{i_m}) for i = (i_1, ..., i_m), defined recursively as the successive indices at which the running maximum strictly increases (cf. Sections 5.4.3 and 5.4.4 of [16]). To prove point process convergence for the record times, we need the convergence in distribution of the sequence of processes (Y_n(t), 0 < t ≤ 1), defined as the suitably normalized running maximum of the points with time component at most ⌊pt⌋, where ⌊x⌋ = max{y ∈ Z : y ≤ x} for x ∈ R, towards an extremal process in D(0,1], the space of right-continuous functions on (0,1] with finite limits from the left. We call Y = (Y(t))_{t>0} an extremal process generated by the distribution function H if the finite-dimensional distributions are given by

P(Y(t_1) ≤ x_1, ..., Y(t_k) ≤ x_k) = H^{t_1}(x_1 ∧ ... ∧ x_k) H^{t_2−t_1}(x_2 ∧ ... ∧ x_k) ··· H^{t_k−t_{k−1}}(x_k), (2.5)

for 0 < t_1 < ... < t_k and x_1, ..., x_k ∈ R (see [16, Definition 5.4.3]). To define convergence in distribution in D(0,1], we first need to introduce a metric on D(0,1]. To this end, let

Λ_{[0,1]} := {λ : [0,1] → [0,1] : λ(0) = 0, λ(1) = 1, λ is continuous and strictly increasing}

be the set of time-change homeomorphisms.
Then, for f, g ∈ D[0,1], the Skorohod metric d is defined by (see [5, Section 12])

d(f, g) := inf_{λ ∈ Λ_{[0,1]}} max( sup_{0≤t≤1} |λ(t) − t|, sup_{0≤t≤1} |f̄(t) − ḡ(λ(t))| ),

where f̄ and ḡ are the right-continuous extensions of f and g to [0,1]. The space D[0,1], and therefore D(0,1], is separable under the Skorohod metric but not complete. However, one can find an equivalent metric, i.e., a metric which generates the same Skorohod topology, under which D[0,1] is complete (see [5, Theorem 12.2]). In particular, the Skorohod metric and the equivalent metric generate the same open sets, and thus the σ-algebras of Borel sets generated by these open sets coincide. Therefore, a sequence of probability measures on D(0,1] is relatively compact if and only if it is tight [5, Section 13]; hence, for every tight sequence of probability measures on D(0,1], convergence of the finite-dimensional distributions at all continuity points of the limit distribution implies convergence in distribution [5, Theorem 13.1].

For the PRM M, define the process Y as the running supremum of the state components of the points of M up to time t, where (U_i)_i is the iid sequence of random vectors uniformly distributed on S_1 introduced above. Then the process Y has the finite-dimensional distributions in (2.5) for k ≥ 1, 0 < t_i ≤ 1, x_i ∈ R and 1 ≤ i ≤ k. Therefore, Y is an extremal process generated by H, restricted to the interval (0,1]. For these processes we can show the following invariance principle by an application of the continuous mapping theorem (see [5, Theorem 2.7] or [27, p. 152]).

Proposition 2.5. Under the conditions of Theorem 2.1, and if H(·) = exp(−µ(·)) is an extreme value distribution, then Y_n →d Y in D(0,1].

Since Y is a nondecreasing function which is constant between isolated jumps, it has only countably many discontinuity points. Now let (τ_k)_k be the sequence of these discontinuity points of Y. Notice that by [16, Theorem 5.4.7] the point process Σ_{k=1}^∞ ε_{τ_k} is a PRM with mean measure ν(a, b) = log(b/a) for 0 < a < b ≤ 1. We are ready to state our result for the point process of record times.

Theorem 2.6. Under the conditions of Theorem 2.1, and if H(·) = exp(−µ(·)) is an extreme value distribution, it holds that

Σ_{k≥1} ε_{L(k)/p} →d Σ_{k≥1} ε_{τ_k} in M(0,1].

Corollary 2.7. Based on Theorem 2.6, we can make statements about the time points of the last and second-last record at or before p. Let ζ(p) denote the number of records up to time p. Then the following statements hold for x, y ∈ (0,1] with y ≤ x, as n → ∞:

(1) P(L(ζ(p))/p ≤ x) → x;
(2) P(L(ζ(p))/p ≤ x, L(ζ(p)−1)/p ≤ y) → y + y log(x/y).

Part (1) is a direct consequence of the definitions of ζ and L, since P(J(x,1] = 0) = x for the limiting PRM J of the record times. Part (2) follows, as n → ∞, from P(J(x,1] = 0, J(y,1] ≤ 1) = P(J(x,1] = 0) P(J(y,x] ≤ 1) = y + y log(x/y).

Applications

3.1. Relative ranks. In recent years, maximum-type tests based on the convergence in distribution of the maximum of rank statistics of a data set have gained significant interest for statistical testing [18]. Let y_1, ..., y_n be p-dimensional iid random vectors with y_t = (X_{1t}, ..., X_{pt}) following a continuous distribution (to avoid ties). We write Q_{it} for the rank of X_{it} among X_{i1}, ..., X_{in}. Additionally, let R^{(t)}_{ij} be the relative rank of the j-th entry compared to the i-th entry; that is, we look at the j-th and i-th rows of (Q_{it}), find the location of t in the i-th row, and then choose the value in the j-th row at this location.

Many important statistics are based on (relative) ranks; we consider two classes of such statistics in this section. First, we introduce the so-called simple linear rank statistics, which are of the form

V_{ij} = Σ_{t=1}^n c_{nt} g(R^{(t)}_{ij}/(n+1)),

where g is a Lipschitz function (also called the score function) and the (c_{nt}), with c_{nt} = n^{−1} f(t/(n+1)) for a Lipschitz function f and Σ_{t=1}^n c²_{nt} > 0, are called the regression constants. An example of such a simple linear rank statistic is Spearman's ρ, which will be discussed in detail in Section 3.1.2. For 1 ≤ i < j ≤ p, the relative ranks (R^{(t)}_{ij})_{t=1}^n depend on the vectors x_i and x_j, where x_k = (X_{k1}, ..., X_{kn}) for 1 ≤ k ≤ p. We assume that the vectors x_1, ..., x_p are independent. It is worth mentioning that the ranks (Q_{it}) remain the same if we transform the marginal distributions to the (say) standard uniform distribution. Thus, the joint distribution of (R^{(t)}_{ij})_{t=1}^n, and thereby the distribution of V_{ij}, does not depend on the distribution of x_i or x_j. Therefore, we may assume without loss of generality that the random vectors x_1, ..., x_p are identically distributed.
We can write V_{ij} = g_{n,V}(x_i, x_j) for a measurable function g_{n,V} : R^{2n} → R. Next, we consider rank-type U-statistics of order m < n, i.e., U-statistics U_{ij} with a symmetric kernel h such that U_{ij} depends only on the relative ranks (R^{(t)}_{ij})_{t=1}^n. An important example of a rank-type U-statistic is Kendall's τ, which will be studied in Section 3.1.1; for more examples we refer to [18] and the references therein. As for simple linear rank statistics, we are able to write U_{ij} = g_{n,U}(x_i, x_j), where g_{n,U} : R^{2n} → R is a measurable function and x_1, ..., x_p are iid random vectors.

An interesting property of rank-based statistics is the following pairwise independence. (We also note that they are generally not mutually independent.)

Lemma 3.1 (Lemma C4 in [18]). For 1 ≤ i < j ≤ p, let Ψ_{ij} be a function of the relative ranks {R^{(t)}_{ij}, t = 1, ..., n}. Assume x_1, ..., x_p are independent. Then for any (i, j) ≠ (k, l), i < j, k < l, the random variables Ψ_{ij} and Ψ_{kl} are independent.

As an immediate consequence, we obtain pairwise independence of the (U_{ij}) and the (V_{ij}), respectively.

Lemma 3.2. For any (i, j) ≠ (k, l), i < j, k < l, the random variables V_{ij} and V_{kl} are independent and identically distributed. Moreover, U_{ij} and U_{kl} are independent and identically distributed.

We now want to standardize U_{ij} and V_{ij}. By the independence of the (X_{it}), the mean and variance of V_{ij} can be computed explicitly in terms of ḡ_n = n^{−1} Σ_{t=1}^n g(t/(n+1)), the sample mean of g(Q_{11}/(n+1)), ..., g(Q_{1n}/(n+1)), and c̄_n = Σ_{t=1}^n c_{nt}; the expectation and variance of U_{ij} can also be calculated analytically. We denote the resulting standardized versions of U_{ij} and V_{ij} by Ũ_{ij} and Ṽ_{ij}. It is well known that Ṽ_{ij} and Ũ_{ij} are asymptotically standard normal, and the following lemma provides a complementary large deviation result.

Lemma 3.3 ([23, pp. 404-405]). Suppose that the kernel function h is bounded and non-degenerate. Then, for x = o(n^{1/6}),

P(Ũ_{12} > x) ∼ 1 − Φ(x).

Assume that the score function g is differentiable with bounded Lipschitz constant and that the constants (c_{nt})_t satisfy the regularity condition (3.1) for some constant C (for a discussion of (3.1), see [23, p. 405]). Then, for x = o(n^{1/6}),

P(Ṽ_{12} > x) ∼ 1 − Φ(x).

To proceed, we need suitable scaling and centering sequences for Ṽ_{ij} and Ũ_{ij} such that the conditions of Theorem 2.1 are fulfilled. For an iid standard normal sequence (X_i), it is known that the maximum, centered and scaled with

d_p = (2 log p)^{1/2} − (log log p + log(4π)) / (2 (2 log p)^{1/2}),

converges to the Gumbel distribution; see Embrechts et al. [16, Example 3.3.29]. Since we are dealing with p(p−1)/2 random variables (Ṽ_{ij}) and (Ũ_{ij}), respectively, which are asymptotically standard normal, d̃_p = d_{p(p−1)/2} seems like a reasonable choice for the scaling and centering sequence.

Our main result for rank statistics is the following.

Theorem 3.4. (a) Suppose that the kernel function h is bounded and non-degenerate. If p = exp(o(n^{1/3})), the following point process convergence holds:

N^U_n := Σ_{1≤i<j≤p} ε_{d̃_p(Ũ_{ij} − d̃_p)} →d N := Σ_{i=1}^∞ ε_{−log Γ_i}, (3.2)

where Γ_i = E_1 + ... + E_i and the (E_i) are iid standard exponential; i.e., N is a Poisson random measure with mean measure µ(x, ∞) = e^{−x}, x ∈ R.

(b) Assume that the score function g is differentiable with bounded Lipschitz constant and that the constants (c_{nt})_t satisfy (3.1). Then, if p = exp(o(n^{1/3})), it holds that

N^V_n := Σ_{1≤i<j≤p} ε_{d̃_p(Ṽ_{ij} − d̃_p)} →d N. (3.3)

Proof. We start with the proof of (3.3), for which we will use Theorem 2.1, since x_1, ..., x_p are iid and g_{n,V} is a measurable function.
Therefore, we only have to show that, for x ∈ R,

(1) (p(p−1)/2) P(Ṽ_{12} > x_p) → e^{−x}, and
(2) p³ P(Ṽ_{12} > x_p, Ṽ_{13} > x_p) → 0,

where x_p = x/d̃_p + d̃_p. We begin with the proof of (1). Since x_p ∼ d̃_p = o(n^{1/6}), Lemma 3.3 and Mill's ratio yield, writing p̃ = p(p−1)/2,

p̃ P(Ṽ_{12} > x_p) ∼ p̃ (1 − Φ(x_p)) ∼ p̃ φ(x_p)/x_p → e^{−x}.

Regarding (2), we note that, by Lemma 3.2, Ṽ_{12} and Ṽ_{13} are independent. Thus,

p³ P(Ṽ_{12} > x_p, Ṽ_{13} > x_p) = p³ P(Ṽ_{12} > x_p)² ≲ p³ (1 − Φ(x_p))² ≲ 1/p → 0,

where we used Lemma 3.3 and Mill's ratio in the last two steps. That completes the proof of (3.3); the proof of (3.2) follows by analogous arguments. □

Remark 3.5. Theorem 3.4 is a generalization of Theorems 1 and 2 in [18], which showed, under the conditions of Theorem 3.4 and if p = exp(o(n^{1/3})), that the maxima of the (Ũ_{ij}) and (Ṽ_{ij}), centered and scaled with d̃_p, converge in distribution to the Gumbel distribution.

As in Theorem 2.6, we additionally conclude point process convergence for the record times of the maxima of the V_{ij} and U_{ij}. To this end, we investigate the sequence (max_{1≤i<j≤k} U_{ij})_{k≥1}. This sequence jumps at time k if one of the random variables U_{1k}, ..., U_{k−1,k} is larger than every U_{ij} for 1 ≤ i < j ≤ k−1; between these jump (or record) times the sequence is constant. Let L_U be this sequence of record times, i.e., the successive indices k at which (max_{1≤i<j≤k} U_{ij})_k jumps, and let L_V be constructed analogously.

Theorem 3.6. Under the conditions of Theorem 3.4, it holds that Σ_{k≥1} ε_{L_U(k)/p} →d J in M(0,1], the space of point measures on (0,1], where J is a Poisson random measure with mean measure ν(a, b) = log(b/a) for 0 < a < b ≤ 1; the same holds for L_V.

As in Corollary 2.7, we can draw conclusions about the index of the last and second-last jump before or at p. Let ζ_U(p) be the number of records among max_{1≤i<j≤2} U_{ij}, ..., max_{1≤i<j≤p} U_{ij}. Then, as n → ∞, statements analogous to (1) and (2) of Corollary 2.7 hold for L_U and ζ_U with x, y ∈ (0,1]; in particular, the second of these quantifies how much time elapses between the second-last and the last jump of (max_{1≤i<j≤k} U_{ij})_{k≥1} before or at p.

3.1.1. Kendall's tau. Kendall's tau is an example of a rank-type U-statistic with bounded kernel. For i ≠ j, Kendall's tau τ_{ij} measures the ordinal association between the two sequences (X_{i1}, ..., X_{in}) and (X_{j1}, ..., X_{jn}). It is defined by

τ_{ij} = (2/(n(n−1))) Σ_{1≤s<t≤n} sign(X_{is} − X_{it}) sign(X_{js} − X_{jt}),

where the function sign : R → {1, 0, −1} is given by sign(x) = x/|x| for x ≠ 0 and sign(0) = 0. An interesting property of Kendall's tau is that there exists a representation as a sum of independent random variables. We could not find this representation in the literature; we therefore state it here, and its proof can be found in Section 4.

Proposition 3.7. (n(n−1)/4) τ_{12} has the same distribution as Σ_{i=1}^{n−1} D_i, where the (D_i)_{i≥1} are independent random variables with D_i uniformly distributed on the numbers −i/2, −i/2 + 1, ..., i/2.

Corollary 3.8. Under the conditions of Theorem 3.4, the point process convergence (3.2) holds for the standardized Kendall's τ.

3.1.2. Spearman's rho. An example of a simple linear rank statistic is Spearman's rho, a measure of rank correlation that assesses how well the relationship between two variables can be described by a monotonic function. Recall that Q_{ik} and Q_{jk} are the ranks of X_{ik} and X_{jk} among {X_{i1}, ..., X_{in}} and {X_{j1}, ..., X_{jn}}, respectively, and write q̄_n = (n+1)/2 for the average rank. Then, for 1 ≤ i ≠ j ≤ p, Spearman's rho is defined by

ρ_{ij} = (12/(n(n²−1))) Σ_{k=1}^n (Q_{ik} − q̄_n)(Q_{jk} − q̄_n).

For the mean and variance we get E[ρ_{ij}] = 0 and Var(ρ_{ij}) = 1/(n−1).
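For concreteness, Kendall's τ and Spearman's ρ for one pair (i, j) can be computed directly from the definitions above; a small self-check sketch with synthetic data (our illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x_i, x_j = rng.normal(size=n), rng.normal(size=n)

# Kendall's tau: average sign-concordance over all pairs s < t
s, t = np.triu_indices(n, k=1)
tau = np.mean(np.sign(x_i[s] - x_i[t]) * np.sign(x_j[s] - x_j[t]))

# Spearman's rho: Pearson correlation of the ranks Q_it, Q_jt (no ties a.s.)
Q_i = x_i.argsort().argsort() + 1
Q_j = x_j.argsort().argsort() + 1
rho = np.corrcoef(Q_i, Q_j)[0, 1]

print("tau:", round(tau, 3), " rho:", round(rho, 3))
print("rho / sqrt(Var):", rho * np.sqrt(n - 1))  # Var(rho) = 1/(n-1) under independence
```

Since the ranks of the second row have equal sums of squared deviations, the Pearson correlation of the ranks coincides with the normalized sum displayed above.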
Therefore, we obtain the following corollary of Theorem 3.4.

Corollary 3.9. Under the conditions of Theorem 3.4, the point process convergence (3.3) holds for the standardized Spearman's ρ.

The next auxiliary result allows us to transfer the weak convergence of a sequence of point processes to another sequence of point processes, provided that the maximum distance between their points tends to zero in probability.

Proposition 3.10. For arrays (X_{i,n})_{i,n≥1} and (Y_{i,n})_{i,n≥1} of real-valued random variables, let N^X_n := Σ_i ε_{X_{i,n}} and N^Y_n := Σ_i ε_{Y_{i,n}}. If N^X_n →d N for some point process N and max_{i=1,...,p} |X_{i,n} − Y_{i,n}| →P 0, then N^Y_n →d N.

Example 3.11. It turns out that there is an interesting connection between Spearman's rho and Kendall's tau. By [20, p. 318], Spearman's rho can be decomposed into a linear combination of Kendall's tau and a dominant term r_{ij}, the major part of Spearman's rho, where r_{ij} is a U-statistic of degree three with an asymmetric bounded kernel. We now use Proposition 3.10 and Corollary 3.9 to show that the point process convergence (3.7) also holds for the point process built from the standardized r_{ij}. For this purpose, we consider the difference between the standardized versions of ρ_{ij} and r_{ij}; by (3.4), (3.6) and (3.5), this difference is asymptotically of order 1/n times bounded quantities. Since |τ_{ij}| and |r_{ij}| are bounded above by constants, we deduce that the maximum difference over all pairs tends to zero in probability, which verifies the condition in Proposition 3.10. Since N^ρ_n →d N by Corollary 3.9, we conclude the desired (3.7).

3.2. Sample covariances. An important field of current research is the estimation and testing of high-dimensional covariance structures, with applications in genomics, social science and financial economics; see [8] for a detailed review and further references. Under quite general assumptions, Xiao et al. [33] investigated the maximum off-diagonal entry of a high-dimensional sample covariance matrix. We impose the same model assumptions (compare [33, pp. 2901-2903]), but instead of the maximum we study the point process of the off-diagonal entries.

We start by describing the model and spelling out the required assumptions. Let x_1, ..., x_n be p-dimensional iid random vectors with x_i = (X_{1i}, ..., X_{pi}), where E[X_{ji}] = 0 for 1 ≤ j ≤ p, and set X̄_j := (1/n) Σ_{k=1}^n X_{jk}. Denote by Σ = (σ_{i,j})_{1≤i,j≤p} the covariance matrix of the vector x_1 and assume σ_{i,i} = 1 for 1 ≤ i ≤ p. The empirical covariance matrix (σ̂_{i,j})_{1≤i,j≤p} is given by

σ̂_{i,j} = (1/n) Σ_{k=1}^n (X_{ik} − X̄_i)(X_{jk} − X̄_j).

A fundamental problem in high-dimensional inference is to derive the asymptotic distribution of max_{1≤i<j≤p} |σ̂_{i,j} − σ_{i,j}|. Since the σ̂_{i,j}'s might have different variances, we need to standardize σ̂_{i,j} by θ_{i,j} = Var(X_{i1} X_{j1}), which can be estimated by its empirical analogue θ̂_{i,j}. We are interested in the points

W_{ij} = √n (σ̂_{i,j} − σ_{i,j}) / θ̂_{i,j}^{1/2}, (i, j) ∈ I_n,

where I_n = {(i, j) : 1 ≤ i < j ≤ p} is an index set. The moment and correlation conditions (B1)-(B3) are taken from [33]; in addition, one of the following is assumed:

(B4) for some constants t > 0 and 0 < r ≤ 2, lim sup_{n→∞} K_n(t, r) < ∞; or
(B4′) log p = o(n^{r/(4+3r)}) and lim sup_{n→∞} K_n(t, r) < ∞ for some constants t > 0 and r > 0.

Example 3.14 (Non-stationary linear processes). As in the previous example, x_1, ..., x_n are iid random vectors. Now x_1 = (X_{11}, ..., X_{p1}) is given by the linear processes

X_{i1} = Σ_{t∈Z} f_{i,t} ϵ_t, 1 ≤ i ≤ p,

where (ϵ_i)_{i∈Z} is a sequence of iid random variables with mean zero, variance one and finite fourth moment, and the coefficient sequences (f_{i,t})_{t∈Z} satisfy Σ_{t∈Z} f²_{i,t} = 1. Let κ_4 be the fourth cumulant of ϵ_0 and let h_n be the dependence measure between the rows defined in [33]. If either (i) for some positive sequence k_n with k_n → ∞ as n → ∞ a suitable decay condition holds together with one of the assumptions (B4) and (B4′), or (ii) Σ_{k=1}^p (h_n(k))² = O(p^{1−δ}) for some δ > 0 together with one of the assumptions (B4′) and (B4″) holds, then the point process convergence of the W_{ij} follows. To illustrate these assumptions, we consider the special case x_1 := (ϵ_1, ..., ϵ_p) A_n.
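A quick numerical illustration of the Gumbel-type behavior of the standardized off-diagonal sample covariances (our sketch, with iid standard normal entries so that σ_{i,j} = 0 and θ_{i,j} = 1 for i ≠ j; the centering 4 log p − log log p matches the scaling quoted in the proofs below):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 400, 50, 200

stats = []
for _ in range(reps):
    X = rng.normal(size=(p, n))                    # iid rows: true covariance is the identity
    Xc = X - X.mean(axis=1, keepdims=True)
    C = (Xc @ Xc.T) / n                            # empirical covariance matrix
    W = np.sqrt(n) * C[np.triu_indices(p, k=1)]    # standardized off-diagonal entries
    stats.append(np.max(W ** 2))

# Gumbel-type limit: max W_ij^2 concentrates around 4*log(p) - log(log(p))
print("mean of max W^2:", np.mean(stats), " vs 4 log p =", 4 * np.log(p))
```

The p(p−1)/2 entries behave like weakly dependent standard normals, so the squared maximum grows like 2 log(p²/2) ≈ 4 log p, in line with the centering sequence z_n quoted from [33].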
, \epsilon_p) A_n$, where $A_n \in \mathbb{R}^{p \times p}$ is a deterministic, symmetric matrix with $(A_n)_{i,j} = a_{ij}$ for $1 \le i,j \le p$. We assume that $\sum_{t=1}^{p} a_{it}^2 = 1$ for every $1 \le i \le p$. The covariance matrix of $x_1$ is given by $\operatorname{Cov}(x_1) = A_n A_n^T$ with $(A_n A_n^T)_{ij} = \sum_{t=1}^{p} a_{it} a_{jt}$. Observe that the diagonal entries are equal to 1. To satisfy assumption (3.10) we have to assume that the entries except for the diagonal are asymptotically smaller than 1, i.e. $\max_{1 \le i < j \le p} \sum_{t=1}^{p} a_{it} a_{jt} < 1$. We define $h_n$ as a measure of how close the matrices $A_n$ are to diagonal matrices. For the point process convergence, either (i) or (ii) has to be satisfied for $h_n$.

4.1. Proofs of the results in Section 2.

Proof of Theorem 2.1. We will follow the lines of the proof of Theorem 2.1 in [12]. Since the mean measure $\eta$ has a density, the limit process $M$ is simple and we can apply Kallenberg's Theorem (see for instance [16, p. 233, Theorem 5.2.2] or [22, p. 35, Theorem 4.7]). Therefore, it suffices to prove that for any finite union $R = \bigcup_{k=1}^{q} A_k$ of bounded rectangles $A_k$ it holds that (1) $\lim_{n \to \infty} \eta_n(R) = \eta(R)$ and (2) $\lim_{n \to \infty} \mathbb{P}(M_n(R) = 0) = \mathbb{P}(M(R) = 0)$. Without loss of generality we can assume that the $A_k$'s are chosen to be disjoint. First we will show (1). Set $T := T_{(1,2,\ldots,m)} = g_n(x_1, x_2, \ldots, x_m)$. If $q = 1$ we get the claim directly: since assumption (A1) implies $\frac{p^m}{m!}\, \mathbb{P}(T \in B_1) \to \mu(B_1)$, we obtain the convergence $\eta_n(R) \to \eta(R)$ as $n \to \infty$. The case $q > 1$ follows by additivity over the disjoint $A_k$. To show (2), we let $P_n$ be the probability mass function of the Poisson distribution with mean $\eta_n(R)$. Then we have a decomposition of $|\mathbb{P}(M_n(R) = 0) - \mathbb{P}(M(R) = 0)|$ in which, by (1), $P_n(0) = e^{-\eta_n(R)} \to e^{-\eta(R)} = \mathbb{P}(M(R) = 0)$. Therefore, we only have to estimate $|\mathbb{P}(M_n(R) = 0) - P_n(0)|$. For this we employ the Stein–Chen method (see [3] for a discussion). The Stein equation for the Poisson distribution $P_n$ with mean $\eta_n(R)$ is solved by the function
$$x(0) = 0, \qquad x(j+1) = \frac{j!}{\eta_n(R)^{j+1}}\, e^{\eta_n(R)} \big( P_n(\{0\}) - P_n(\{0\})\, P_n(\{0, \ldots, j\}) \big), \quad j = 0, 1, \ldots$$
By (4.11) we see that (4.12) holds. Therefore, we only have to estimate the right-hand side of (4.12), and to this end we set the corresponding quantities.

We will start by proving the continuity of $V_1$ in the case where $\mu(x) = -\log(H(x))$ and $H$ is the Gumbel distribution. In this case, $N$ a.s. has the following properties for any $0 < s < t < 1$ and $x \in \mathbb{R}$. Therefore, we only have to show continuity at $m \in M(S)$ with these properties. Let $(m_n)_n$ be a sequence of point measures in $M(S)$ which converges vaguely to $m$ ($m_n \stackrel{v}{\to} m$) as $n \to \infty$ (see [27, p. 140]). Since $V_1(m)$ is right continuous, there exists a right continuous extension on $[0,1]$, which we denote by $V_1(m)$. Since $m_n \stackrel{v}{\to} m$, we can conclude from [27, Proposition 3.12] that there exists a $1 \le q < \infty$ such that, for $n$ large enough, the point counts agree. We enumerate and designate the $q$ points as $((t_i, j_i))$, and choose $\delta$ small enough so that the $\delta$-spheres of the distinct points of the set $\{(t_i, j_i)\}$ are disjoint and contained in $S_1 \times [\beta, \infty)$. Pick $n$ so large that every $\delta$-sphere contains only one point of $m_n$. Then define $\lambda_n$ at the points $t_{i,m}$ accordingly, with $\lambda_n$ linearly interpolated elsewhere on $[0,1]$. For this $\lambda_n$ the required bound holds, which finishes the proof. The Fréchet and the Weibull cases follow by similar arguments. □

Proof of Theorem 2.6. We will proceed similarly as in [27, pp. 217–218], using the continuous mapping theorem again. Since $Y$ is the restriction to $(0,1]$ of an extremal process (see [27, Section 4.3]), it is a nondecreasing function which is constant between isolated jumps. Let $D^{\uparrow}(0,1]$ be the subset of $D(0,1]$ that contains all functions with this property. Let $\bar{x}_n$ and $\bar{x}$ be the right continuous extensions of $x_n$ and $x$ on $[0,1]$. We want to prove the corresponding vague convergence, where $\{t_i^n\}$ and $\{t_i\}$ are the discontinuity points of $x_n$ and $x$, respectively. Consider an arbitrary continuous function $f$ on $(0,1]$ with compact support contained in an interval $[a,b]$ with $0 < a < b \le 1$, such that $x$ is continuous at $a$ and $b$. It suffices to show the corresponding limit relation.

Proof of Proposition 3.7. Let $q_n = (q_1, \ldots
, q_n)$ be a permutation of the set $\{1, \ldots, n\}$. If $i < j$ and $q_i > q_j$, we call the pair $(q_i, q_j)$ an inversion of the permutation $q_n$. Since $X_{11}, \ldots, X_{1n}$ are iid, the permutation consisting of the ranks is uniformly distributed on the set of the $n!$ permutations of $\{1, \ldots, n\}$. By $I_n$ we denote the number of inversions of $q_n$. For $s < t$, the value of $\operatorname{sign}(X_{1s} - X_{1t})$ depends only on the ranks; in view of (4.21), this implies the representation of Proposition 3.7. By [24, p. 479] or [25, p. 3] (see also [28]), the moment generating function of $I_n$ is
$$\mathbb{E}\big[e^{t I_n}\big] = \prod_{j=1}^{n} \frac{1 - e^{jt}}{j(1 - e^{t})}, \qquad t \in \mathbb{R}.$$

1.2. Notation. Convergence in distribution (resp. probability) is denoted by $\stackrel{d}{\to}$ (resp. $\stackrel{P}{\to}$), and unless explicitly stated otherwise all limits are for $n \to \infty$. For sequences $(a_n)_n$ and $(b_n)_n$ we write $a_n = O(b_n)$ if $a_n/b_n \le C$ for some constant $C > 0$ and every $n \in \mathbb{N}$, and $a_n = o(b_n)$ if $\lim_{n\to\infty} a_n/b_n = 0$. Additionally, we use the notation $a_n \sim b_n$ if $\lim_{n\to\infty} a_n/b_n = 1$, and $a_n \lesssim b_n$ if $a_n$ is smaller than or equal to $b_n$ up to a positive universal constant. We further write $a \wedge b := \min\{a, b\}$ for $a, b \in \mathbb{R}$, and for a set $A$ we denote by $|A|$ the number of elements of $A$.

Figure 1. Four largest distances between 500 normally distributed points.

Theorem 3.6. Under the conditions of Theorem 3.4, the record-time point processes converge in distribution in $M_p((0,1])$, the space of point measures on $(0,1]$, to $J$, where $J$ is a Poisson random measure with mean measure $\nu(a,b) = \log(b/a)$ for $0 < a < b \le 1$.

Here the sets are $B_k = (r_k, s_k]$ and $\mu$ is defined by $\mu(B_k) = e^{-r_k} - e^{-s_k}$. From the proof of Theorem 2 of [33, pp. 2910, 2913–2914] we know that the conditions of [33, Lemma 6] are satisfied. Furthermore, from the proof of Lemma 6 [33, pp. 2909–2910] we get that, for $z \in \mathbb{R}$ and $z_n = (4\log p - \log\log p - \log 8\pi + 2z)^{1/2}$, the point process of the $W_1$-points converges, $N_n^{(W_1)} \stackrel{d}{\to} N$. By Proposition 3.10 it remains to show $\max_{1 \le i < j \le p} \sum_{t=1}^{p} a_{it} a_{jt} < 1$.

Proof of Proposition 3.10. Our idea is to transfer the convergence of $N_n^X$ onto $N_n^Y$. To this end, it suffices to show (see [22, Theorem 4.2]) that for any continuous function $f$ on $\mathbb{R}$ with compact support,
$$\int f \, dN_n^Y - \int f \, dN_n^X \stackrel{P}{\to} 0, \qquad n \to \infty.$$
Suppose the compact support of $f$ is contained in $[K + \gamma_0, \infty)$ for some $\gamma_0 > 0$ and $K \in \mathbb{R}$. Since $f$ is uniformly continuous, $\omega(\gamma) := \sup\{|f(x) - f(y)| : x, y \in \mathbb{R},\, |x - y| \le \gamma\}$ tends to zero as $\gamma \to 0$. We have to show that for any $\varepsilon > 0$,
$$\lim_{n \to \infty} \mathbb{P}\Big( \Big| \sum_i f(Y_{i,n}) - f(X_{i,n}) \Big| > \varepsilon \Big) = 0. \tag{4.22}$$
On the sets $A_{n,\gamma} = \big\{ \max_{i=1,\ldots,p} |Y_{i,n} - X_{i,n}| \le \gamma \big\}$, $\gamma \in (0, \gamma_0)$, we have the bound $\big| \sum_i f(Y_{i,n}) - f(X_{i,n}) \big| \le \omega(\gamma)\, N_n^X([K, \infty))$ (compare [5, Theorem 12.2]). In particular, the Skorohod metric and the equivalent metric generate the same open sets, and thus the $\sigma$-algebras of the Borel sets, which are generated by these open sets, are the same. Therefore, a sequence of probability measures on $D(0,1]$ is relatively compact if and only if it is tight [5, Section 13]. Hence, for every tight sequence of probability measures on $D(0,1]$, the convergence of the finite-dimensional distributions at all continuity points of the limit distribution implies convergence in distribution [5, Theorem 13.1]. For the PRM M
9,010.4
2023-03-28T00:00:00.000
[ "Mathematics" ]
Fistulotomy and drainage of deep postanal space abscess in the treatment of posterior horseshoe fistula Background Posterior horseshoe fistula with deep postanal space abscess is a complex disease. Most patients have a history of anorectal abscess drainage or surgery for fistula-in-ano. Methods Twenty-five patients who underwent surgery for posterior horseshoe fistula with deep postanal space abscess were analyzed retrospectively with respect to age, gender, previous surgery for fistula-in-ano, number of external openings, diagnostic studies, concordance between preoperative studies and operative findings for the extent of disease, operating time, healing time, complications, and recurrence. Results There were 22 (88%) men and 3 (12%) women with a median age of 37 (range, 25–58) years. The median duration of disease was 13 (range, 3–96) months. There was one external opening in 12 (48%) patients, 2 in 8 (32%), 3 in 4 (16%), and 4 in 1 (4%). Preoperative diagnosis of horseshoe fistula was made by contrast fistulography in 4 (16%) patients, by ultrasound in 3 (12%), by magnetic resonance imaging in 6 (24%), and by physical examination only in the remainder (48%). The mean ± SD operating time was 47 ± 10 min. The mean ± SD healing time was 12 ± 3 weeks. Three of the 25 patients (12%) had diabetes mellitus type II. Nineteen (76%) patients had undergone previous surgery for fistula-in-ano, while five (20%) had only perianal abscess drainage. Neither morbidity nor mortality developed. All patients were followed up for a median of 35 (range, 6–78) months, and no recurrence was observed. Conclusions Fistulotomy of the tracts along the arms of the horseshoe fistula and drainage of the deep postanal space abscess with a posterior midline incision that severs both the lower edge of the internal sphincter and the subcutaneous external sphincter and divides the superficial external sphincter into halves gives excellent results with no recurrence. When necessary, severing the halves of the superficial external sphincter unilaterally or even bilaterally in the same session does not result in anal incontinence. Close follow-up of patients until the wounds have completely healed is essential in the prevention of premature wound closure and recurrence. Background Anorectal abscess fistula disease is most commonly cryptoglandular in origin [1]. However, secondary fistulas may develop due to underlying diseases such as Crohn's disease, hidradenitis suppurativa, tuberculosis or actinomycosis [2]. If the anorectal abscess is not drained spontaneously or surgically, the infection may spread rapidly and may result in extensive tissue loss. Even if the abscess is drained, a fistula-in-ano may develop subsequently. It is most common in people aged between 20 and 50 years, with a four-fold male predominance and an annual incidence of 1 in 10,000 [3]. Anorectal fistulas are divided into four distinct types according to Parks' classification: intersphincteric, transsphincteric, suprasphincteric, and extrasphincteric [4]. These groups can be further subdivided according to the presence and courses of associated secondary tracts. The appropriate type of surgery (simple fistulotomy, fistulectomy, seton placement or advancement flap rotation) is dictated by the course of the fistula tracts. The prognosis of cryptoglandular abscess fistula disease is excellent once the source of infection is identified [2].
On the other hand, patients with a chronic or recurring abscess following adequate surgical drainage often have an undrained deep postanal abscess communicating with the ischiorectal spaces via a horseshoe fistula. Abscesses perforating the external anal sphincter anteriorly or posteriorly enter the deep preanal and postanal spaces and may spread extensively into the ischiorectal spaces [3]. This pattern of spread produces anterior and posterior horseshoe abscesses, which further result in horseshoe or semi-horseshoe fistulas [5]. Chronic abscess fistulas may become quiescent and may recur as an acute abscess with the formation of a new tract and secondary opening [6]. Treatment should involve opening the deep postanal space by a posterior midline incision which separates the superficial external sphincter muscle into halves, severing the subcutaneous external sphincter and the lower edge of the internal sphincter, and unroofing the tracts by fistulotomy, as first described by Hanley [7]. In this report, the outcomes of surgical treatment of patients with posterior horseshoe fistula associated with deep postanal abscess are presented, and the significance of the deep postanal space is discussed with a brief review of the current literature. Methods Twenty-five patients who underwent surgery for posterior horseshoe fistula with deep postanal space abscess between January 1997 and December 2002 were analyzed retrospectively with respect to age, gender, previous surgery for fistula-in-ano, number of external openings, diagnostic studies, concordance between preoperative studies and operative findings for the extent of disease, operating time, healing time, complications, and recurrence. Results There were 22 (88%) men and 3 (12%) women with a median age of 37 (range, 25-58) years. The median duration of disease was 13 (range, 3-96) months. Nineteen (76%) patients had undergone surgery for fistula-in-ano previously, while five (20%) had a history of perianal abscess drainage only. Three of 25 patients (12%) had diabetes mellitus type II. Preoperative diagnosis was established by contrast fistulography in 4 (16%) patients (Figure 1), by ultrasound (US) in 3 (12%) (Figure 2), by magnetic resonance imaging (MRI) in 6 (24%) (Figures 3, 4, 5), and by physical examination only in the remainder (48%). There was one external opening in 12 (48%) patients, 2 in 8 (32%), 3 in 4 (16%), and 4 in only one (4%). Operative findings were in accordance with Goodsall's rule, which indicates the most likely position of an internal opening based on the position of identified external openings in relation to a horizontal line transecting the mid anus, in all but one patient, with an accuracy rate of 96% [3]. Of the five patients who had more than 2 external openings, three underwent preoperative colonoscopy in order to rule out Crohn's disease, while the remaining two underwent this examination later in the postoperative period. In all these patients, the excised tract tissue was sent for histopathologic and microbiologic examination to rule out any underlying infectious disease such as tuberculosis or actinomycosis; fortunately, no associated inflammatory or specific infectious disease was found. Surgical Technique A Fleet enema was used for preoperative bowel preparation in all cases. All operations were performed by the same team of two surgeons. Patients were operated on under general anesthesia in the jackknife position.
Prophylactic antibiotics were not used, except in diabetic patients, who received ciprofloxacin 500 mg and metronidazole 500 mg twice daily for 5 days. The extent of disease was established by cannulating the fistulas with probes. All the incisions and dissections were made by electrocautery. Once all the tracts and the internal opening were identified, they were unroofed. The deep postanal space was reached by a posterior midline incision from the internal opening at the dentate line to the coccyx. The lower edge of the internal sphincter and the subcutaneous external sphincter were severed, while the superficial external sphincter muscle was separated into halves by a vertical incision along the direction of its fibers, a technique known as the Hanley procedure [7]. Once the deep postanal space was opened, the primary tract and its bifurcation in this space were identified, and the extension into the ischiorectal spaces could be seen. When the horseshoe fistula was unilateral or incomplete, the affected arm of the superficial external sphincter was divided along the fistulotomy tract while the other arm was protected. However, in the case of a bilateral or complete horseshoe fistula, both arms of the superficial external sphincter were severed. Then, all the fistula tracts and the floor of the deep postanal space were curetted. Finally, the deep postanal space was packed with povidone iodine-soaked gauze and a dressing was applied.

Figure 1. Simple contrast fistulography. A. Antero-posterior aspect: the contrast agent was given through the external opening at the right side of the anal channel; note the course of the horseshoe fistula tracts; there was a complete arm at the right side (white arrow), while the left-side arms were blind (yellow arrows). B. Oblique aspect: the blind arm of the horseshoe fistula extended into the left gluteal area (arrow). C. Lateral aspect: the connection between the fistula tract and the deep postanal space (arrow) was easily shown.

Figure 2. The connection (yellow arrow) between the deep postanal space (red arrow) and the rectum (blue arrow) was shown by ultrasound in a patient with deep postanal space abscess.

The mean ± SD operating time was 47 ± 10 min. A blind upward extension of the tract from the deep postanal space into the supralevator space was observed in one patient. The opening in the levator ani muscle in this patient was dilated in order to facilitate the drainage of the supralevator abscess through the deep postanal space. Postoperative Course and Follow-up The patients were allowed to eat their regular diet after the first postoperative day. All but two patients were discharged on postoperative day 2.

Figure 3. Sagittal T1-weighted MRI shows a hypointense lesion in the deep postanal space (red arrows). Note that the lesion is located under the levator ani muscle (yellow arrows), which indicates that it is in the deep postanal space.

These two patients were hospitalized for 4 days postoperatively due to excessive pain during wound care, and they received narcotic analgesia and sedation with midazolam during wound dressing in the first 2 days. Stool softeners were prescribed preemptively only in cases with a history of chronic constipation.
During the first 7 postoperative days, changing wound dressings twice a day and applying deep packing with povidone iodine-soaked gauze into the wound following sitz baths were recommended. The patients were also instructed to run shower water directly into the wound twice a day after the first postoperative week and to wear a pad as needed because of the expected minimal drainage during the healing process.

Figure 4. Sagittal T2-weighted MRI shows a well-demarcated abscess (red arrows) in the deep postanal space under the levator ani muscle (yellow arrows).

All patients were followed up weekly until complete wound healing was observed. No premature approximation of the skin edges was observed. The mean ± SD healing time, which was defined as the period from the date of operation to the date of complete healing, was 12 ± 3 weeks. Neither morbidity nor mortality developed. All patients were followed up for a median of 35 (range, 6-78) months, and no recurrence was observed.

Figure 5. Coronal T2-weighted fat-sat MRI after gadolinium injection shows a semi-horseshoe abscess fistula which extends from the deep postanal space (red arrow) into the left ischiorectal space (yellow arrow).

Discussion As a general rule, the etiopathogenesis of a disease must be well understood in order to achieve a satisfactory response to treatment. Contrary to the general belief that the horseshoe fistula is the cause of the posterior deep anal abscess, the fistula follows the development of the deep postanal space abscess as its complication [5]. The natural history and the developmental steps of anorectal abscess fistula disease are well summarized by Malouf et al. [3]. At first, a cryptoglandular abscess develops in the intersphincteric space, which contains the anal glands. Infection may spread via vertical, horizontal or circumferential routes, and this determines the site of the abscess [8]. Circumferential spread may occur in the intersphincteric, ischiorectal or supralevator compartments to form horseshoe fistulas [3]. On the other hand, abscesses perforating the external anal sphincter anteriorly or posteriorly enter the preanal or postanal spaces [3], in which situation the internal opening may be identified on the anterior or posterior midline at the level of the dentate line. If the abscesses are not drained either surgically or spontaneously at this stage, they spread extensively into the ischiorectal spaces [6]. This pattern of spread results in anterior and posterior horseshoe abscesses. An incomplete or semi-horseshoe fistula develops when one arm of the horseshoe abscess spontaneously drains into the skin, while drainage of both arms results in a complete horseshoe fistula. On the other hand, there may be associated fistulas, which are usually transsphincteric. As observed in all the patients presented, the presence of an internal opening on the posterior midline at the level of the dentate line dictates the presence of an associated deep postanal space abscess. If this abscess is not drained, definitive treatment of the fistula cannot be achieved [6]. This is the most important point in the surgical treatment of posterior horseshoe fistulas.
Simple anorectal fistulas are usually diagnosed by physical examination only, in patients suffering intermittent pain and purulent, often bloodstained, perianal discharge, commonly with a history of anorectal abscess drainage. While physical examination is usually sufficient for assessment in uncomplicated abscess fistula disease, imaging studies such as contrast fistulography, US or MRI may be useful in the evaluation of complex or recurrent disease [2]. Maier et al. [9] prospectively compared the diagnostic yield of anal endosonography and MRI in the assessment of perianal fistula and abscess in 39 patients and found MRI superior to anal endosonography. Similarly, Beets-Tan et al. [10] evaluated the accuracy of MRI with a quadrature phased-array coil for the detection of anal fistulas, evaluated the additional clinical value of preoperative MRI as compared with surgery alone, and found its sensitivity and specificity for detecting fistula tracts to be 100% and 86%, horseshoe fistulas 100% and 100%, and internal openings 96% and 90%, respectively. On the other hand, many studies have shown that hydrogen peroxide-enhanced US improves identification of fistula tracts and internal openings, particularly in horseshoe fistulas [11,12]. Ratto et al. [11] reported accuracy rates of clinical examination, endoanal US, and hydrogen peroxide-enhanced US for horseshoe fistulas of 81%, 81%, and 92%, respectively. We do not perform routine radiologic studies in patients with anorectal abscess fistula disease, since the diagnosis can be established preoperatively by physical examination and by intraoperative findings, and the course of the fistula tracts directs the surgeon to choose the appropriate type of operative procedure. However, if patients describe rectal discomfort, fullness or deep pelvic pain, which may indicate the presence of an associated condition, we prefer MRI for diagnosis as well as for demonstration of the extent of the abscess fistula. In the present series, we needed to employ MRI in 6 patients. The other 7 patients who underwent preoperative contrast fistulography or US were either referred or admitted to our institution following diagnostic studies at other centers. In these patients, MRI was not employed, since the previous radiologic documentation was satisfactory. Several methods can be employed to identify fistula tracts intraoperatively. Passage of a probe from both the external and the internal aspects is the most reliable technique to demonstrate the course of the fistulous tract. However, injection of various substances such as methylene blue, indigo carmine, hydrogen peroxide or even milk has been described and widely used [13]. It should be remembered that when stains are injected, the surgeon may have only one opportunity to visualize the internal opening before the surrounding tissue and the operative field are contaminated by the stain. In order to avoid this, milk has been advocated. However, in patients with a stenosis in the fistula tract these staining techniques may fail. Injecting hydrogen peroxide is probably the best means of identifying the internal opening, since the pressure created by the bubbles may be sufficient to penetrate even a stenotic tract [13]. We did not encounter this problem during the operations; therefore, we did not need special manoeuvres in any patient. The presence of an anal papilla guarding the internal opening is another way to locate the internal opening.
We usually prefer cannulating the fistula tract with blind-tip probes instead of staining techniques as an essential step of fistula surgery. Probing not only provides identification of the course of the fistula tracts but also facilitates fistulotomy over the probe. However, probing should be gentle; otherwise, it easily results in the creation of a false route, which may further complicate the operative procedure. The importance of properly identifying internal openings and fistula tracts during the initial surgery was best shown by Sangwan et al. [14]. The authors retrospectively evaluated 461 patients who underwent surgery for simple fistula-in-ano and found that 30 (6.5%) of them developed recurrent fistulas. The cause was a missed internal opening in 16, missed secondary tracts in 6, premature fistulotomy wound closure in 5, and miscellaneous factors in 3. In that report, patients with high transsphincteric fistulas with or without a high blind tract, suprasphincteric, extrasphincteric, and horseshoe fistulas, as well as fistulas associated with inflammatory bowel disease, had been excluded. Therefore, the investigators concluded that all so-called simple fistulas may not have readily detectable primary openings and may behave as complex fistulas due to their secondary tracts. In the present series, six patients were preoperatively considered to have simple fistulas; however, careful intraoperative exploration showed the internal openings at the posterior midline and the tracts of the horseshoe fistula by probing them through the external openings. In addition, the connection with the deep postanal space was demonstrated by probing the internal opening at the posterior midline in these cases. Moreover, a blind upward extension of the fistula tract from the deep postanal space into the supralevator space was observed in one patient. The opening in the levator ani muscle was dilated in order to facilitate the drainage of the supralevator abscess through the deep postanal space. This patient needed an extended hospitalization period due to excessive pain during the wound dressing and could be discharged on postoperative day 4. One of the important points in the management of this case is the prevention of premature closure of the wound, which can be achieved by deep wound packing, especially in the early postoperative period. This type of wound care sometimes requires narcotic analgesic administration prior to the wound dressing. In our patient, the wound was completely healed without any wound complication 18 weeks after surgery. Fistulotomy and the posterior midline incision to reach the deep postanal space can be made with either a traditional knife or electrocautery. We prefer the latter device because it provides better hemostasis. However, Gupta [15] very recently described a new technique for fistulotomy with a radio frequency surgical device in the treatment of fistula-in-ano. His results are promising, since he reported that the procedure, proposed as "sutureless fistulotomy", was significantly less time-consuming and more hemostatic. There is still debate on the use of seton placement in the treatment of horseshoe fistulas. Ustynoski et al. [16] performed primary fistulotomy and counter drainage in 24 patients with horseshoe fistula and reported a recurrence rate of 28.6% with this technique. When they treated 11 patients by seton fistulotomy and counter drainage, they reduced the recurrence rate to 18.1%.
The authors recommend this method as the operative procedure of choice for horseshoe abscess fistula. Similarly, Held et al. [17] treated 69 patients for posterior (n = 59) and anterior (n = 10) horseshoe abscess fistula by different surgical techniques, including incision and drainage, incision and drainage with primary fistulotomy, incision and drainage with primary fistulotomy and counter drainage, and incision and drainage with insertion of a seton. The authors advocated seton placement in the treatment of horseshoe abscess fistula because of its better outcomes. Pezim [18] reported excellent results in 24 patients who underwent unroofing of the deep postanal space with division of the overlying external sphincter muscle by seton for posterior horseshoe fistula. In his series, the success rate was 92%, with a 3.5-month mean healing time. In the present series, severing the halves of the superficial external sphincter instead of seton placement yielded excellent results without anal incontinence. It is not our routine clinical practice to perform postoperative anal manometry, transanal US or transanal MRI in the assessment of the status of the anal sphincters unless the patient describes any symptom suggestive of anal incontinence. None of the patients in our series suffered permanent anal soiling or discharge in the close long-term follow-up. On the other hand, patients undergoing internal sphincterotomy and fistulotomy may experience temporary anal soiling and some degree of drainage from open wounds in the early postoperative period. If anal discharge continues after complete healing of the fistulotomy wounds, investigations for anal incontinence should be performed. All patients were questioned regularly for any symptom of anal incontinence at their routine visits, and none of them complained of permanent anal discharge. Digital anal examination also revealed satisfactory anal tonus. Therefore, no further investigation was needed in the patients of this series. Conclusions Posterior horseshoe fistula with deep postanal space abscess is a complex disease, in which most patients have a history of anorectal abscess drainage or surgery for fistula-in-ano. A posterior midline location of the internal fistula opening indicates the presence of a deep postanal space abscess associated with the horseshoe fistula. Drainage of the deep postanal space abscess is an essential step in the prevention of recurrence. Both the lower edge of the internal sphincter and the subcutaneous external sphincter should be severed during the drainage of the deep postanal space abscess through the posterior midline incision, and the superficial external sphincter should be divided into two halves. Fistulotomy should also be carried out along the tracts of the fistula. The superficial external sphincter can be divided by either sphincterotomy or seton placement, unilaterally or even bilaterally, as appropriate. Although good results following the insertion of a seton for this step have been reported, we advocate sphincterotomy without reservation, because no serious complication such as incontinence developed during the long-term follow-up.
5,076.4
2003-11-26T00:00:00.000
[ "Medicine", "Engineering" ]
Independent Variation of Reynolds Number, Wall Shear Stress and Flow Velocity for Cleaning Experiments: A Geometrically Flexible Parallel Plate Flow Cell For a long time, determining the factors influencing the cleaning of technical surfaces in the food and beverage industry has been of significant interest. In this study, an innovative test setup with a newly designed parallel plate flow cell was implemented to assess the cleaning of soluble molecular fouling materials; it allows for the independent variation of flow parameters such as the Reynolds number, velocity, and wall shear stress. The test setup used fluorescence spectroscopy; it was found to produce reliable measurements of cleaning, and the results were confirmed with the help of another fluorescent tracer. A comparison of cleaning times across the different channel geometries revealed that they tend to follow a geometrically independent power-law relationship with the wall shear stress and velocity, which was therefore used to directly correlate the cleaning times of the soluble fouling material used. The Reynolds number, however, showed a geometric dependence on cleaning times. Nevertheless, on dividing the Reynolds number by the respective channel characteristic lengths, geometric independence was observed, and, therefore, a correlation was obtained. We also suggest that complex fouling materials should still be investigated to better elucidate their cleaning mechanisms and to test for parameter influences on complex cleaning mechanisms. Introduction Cleaning in the food industry is an essential process during food production and handling in all food-processing plants. Reliable and efficient cleaning must be ensured to meet the hygienic standards and expectations of end consumers; inadequate cleaning poses health risks to consumers. In addition, cleaning is a complex process, involving more than one type of mechanism to remove fouling materials; therefore, its validation is complex, and cleaning processes are rarely optimized [1]. The interaction of the fouling material with water is vital, given that water is the most widely used cleaning medium. Consequently, fouling materials are classified into soluble, swellable, emulsifiable, and particulate [1,2]. Different types of fouling models are employed to perform cleaning tests. Examples of widely used soluble fouling materials are malt extract, tomato paste, and riboflavin. These fouling materials can spread evenly over a wide surface and form a crack-free surface upon drying, thereby yielding high reproducibility. Riboflavin is easily detectable because of its self-fluorescence, and it can be used to render other fouling materials, such as tomato paste and malt extract, detectable. A fluorescent or photoluminescent tracer is usually employed to detect removal and determine the progress of cleaning [3][4][5][6][7]. Fluorescence spectroscopy is used in various disciplines [8,9] and can be easily integrated into a closed cleaning unit such as a flow cell. Parallel plate flow cells (PPFCs) are used in performing cleaning experiments because of their simple geometry and easily reproducible, well-developed flow. Researchers have employed PPFCs in cleaning experiments, such as the detection of microbial adhesion [10][11][12], as well as for the comparison of cleaning behaviors of various food biopolymers [13].
In previous research [13], quantification of cleaning with the help of fluorescence spectroscopy and a PPFC was shown, along with a comparison of the cleaning behaviors of different fouling materials, but no work exists in the literature on the comparison of cleaning by independent variation of flow parameters. This is because such a variation requires geometric flexibility in the PPFC. The current study develops a PPFC with geometric flexibility to independently vary the flow parameters (velocity, Reynolds number, and wall shear stress), to determine the parameter influences on the cleaning of soluble fouling materials and to determine whether these experimental results could be used to correlate the cleaning times.

Newly Designed PPFC The flow parameters Reynolds number (Re) and wall shear stress (τ) are both dependent on the geometry through which the fluid flows. The Reynolds number, a dimensionless quantity, is the ratio of inertial to viscous forces acting on the fluid and is calculated as [14,15]

Re = V · D_h / ν,   (1)

where V is the average flow velocity [m/s], D_h is the hydraulic diameter of the flow geometry [m] and ν is the kinematic viscosity of the fluid [m²/s]. The wall shear stress (τ) is calculated as [14,15]

τ = (f/8) · ρ · V²,   (2)

where ρ is the density of the fluid [kg/m³] and f is the Darcy friction factor [14]. The Darcy friction factor for turbulent flow in pipes is evaluated by the Blasius-type correlation (for 2320 < Re < 10⁵) [14,15]

f = 0.3164 / Re^0.25.   (3)

The flow cells used so far are geometrically rigid [7][8][9][10]; therefore, a new design with geometric flexibility is required so that the flow parameters can be varied independently. By setting up a flow cell with a variable height of the flow channel, it is possible to vary the flow parameters independently, as the channel becomes geometrically flexible. Figure 1 shows the newly designed PPFC with exchangeable flow channels. The flow cell in Figure 1 consists of an inlet and outlet (A) that are connected to water hoses via couplings, a bottom (B), and a cover plate (C) made of poly(methyl methacrylate), more commonly known as plexiglas. The use of plexiglas allows for viewing the cleaning during experimentation. The exchangeable flow channel (D), where the groove (E) is built into the surface to host a stainless steel coupon (SSC) with the food soiling, is variable by design. By using three different flow channels (D), the measuring cell can be operated with variable height-to-width ratios. Multiple holes (F) are drilled in the plexiglas cover plate (C) to fasten it to the bottom (B). The new flow cell has three flow channels with heights of 2.5, 5, and 7.5 mm. The length and width of the flow channels are 30 cm and 20 mm, respectively. Since cleaning must take place under a turbulent flow condition, it is paramount to ensure that the flow is fully developed. The hydraulic diameter (D_h) for a rectangular channel with width (w) and depth (d) can be calculated as follows [15]:

D_h = 4wd / (2(w + d)) = 2wd / (w + d).   (4)

The maximum value of the hydraulic diameter for the three different channels is 10.9 mm, in the case of the 7.5 mm channel (Table 1). For pipe flow, by ensuring an inlet length of about 10 × D_h, the flow is fully developed for turbulent flows [15]. An inlet length of 250 mm upstream of the location where the cleaning takes place therefore already ensures a fully developed turbulent flow. Figure 2 shows the fully assembled PPFC. The individual parts of the cell are fastened with 36 screws and sealed with silicone and rubber seals.
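To make the relations above concrete, the following Python sketch evaluates D_h, Re, the friction factor and τ for one of the paper's channels. The water properties at 15 °C and the 0.5 m/s velocity are assumed illustrative values, and the Blasius form of f is our reading of the correlation cited for 2320 < Re < 10⁵; the hydraulic diameter of the 7.5 mm channel reproduces the 10.9 mm stated in the text.

```python
import math

RHO = 999.1   # density of water at 15 degC [kg/m^3] (assumed)
NU = 1.14e-6  # kinematic viscosity of water at 15 degC [m^2/s] (assumed)

def hydraulic_diameter(w: float, d: float) -> float:
    """D_h = 4A/P = 2wd/(w+d) for a rectangular duct of width w and depth d [m]."""
    return 2.0 * w * d / (w + d)

def reynolds(v: float, d_h: float) -> float:
    """Re = V*D_h/nu, Equation (1)."""
    return v * d_h / NU

def darcy_f(re: float) -> float:
    """Blasius correlation for smooth ducts, assumed valid for 2320 < Re < 1e5."""
    return 0.3164 / re**0.25

def wall_shear(v: float, re: float) -> float:
    """tau = (f/8)*rho*V^2, Equation (2)."""
    return darcy_f(re) / 8.0 * RHO * v**2

# The 7.5 mm x 20 mm channel: D_h should come out at ~10.9 mm as in Table 1.
d_h = hydraulic_diameter(0.020, 0.0075)
print(f"D_h = {d_h * 1000:.1f} mm")   # -> 10.9 mm
v = 0.5                               # example mean velocity [m/s] (assumed)
re = reynolds(v, d_h)
print(f"Re = {re:.0f}, tau = {wall_shear(v, re):.2f} Pa")
```

With such helpers, the three channel depths can be swept to tabulate which (V, τ) pairs keep Re fixed, which is exactly the independent-variation idea behind the exchangeable channels.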
The flow channel with a height of 5 mm is shown in Figure 3.

Preparation of the Soiling Material and SSC Matrix SSCs, similar to those in the work of Otto, Zahn et al. [13], were employed for the application of the fouling material. The SSCs (Figure 4) have a length, width, and thickness of 30, 18, and 2 mm, respectively; they have rounded corners with a radius of 2.5 mm. A smooth finish on the surface of the SSCs is used to produce comparable and reproducible test samples. The SSCs fit exactly into the recess of the flow channels of the PPFC and do not affect the flow rate. A mixture of maltose [D(+)-maltose monohydrate, ≥92%; Carl Roth, Karlsruhe, Germany], demineralized water, and the fluorescent tracer uranine AP (AppliChem, Darmstadt, Germany) was used as the molecular fouling material. Similarly, for the validation experiments (Appendix C), the fluorescent tracer eosin Y (Alfa Aesar, Kandel, Germany) was employed in place of uranine AP. For the test, a maltose-uranine mixture was prepared in a 2000:1 ratio.
A solution consisting of 15 g maltose and 6.7 mL demineralized water was prepared at a constant temperature of 110 °C with stirring (MH 15 Rotilabo magnetic stirrer with heating, Carl Roth, Karlsruhe, Germany). For weighing, an AUW220D semi-micro balance (Shimadzu Deutschland GmbH, Duisburg, Germany) was used. Then, 25 mg uranine AP was mixed in 750 µL demineralized water to produce a uranine solution. Finally, 225 µL of the uranine solution was mixed into the maltose solution to obtain the maltose-uranine mixture. The SSCs were cleaned with acetone (>96% v/v), also purchased from Carl Roth, before the application of the maltose-uranine mixture. Approximately 0.5 g of the cooled uranine-maltose mixture was applied to the SSCs with a pipette (Eppendorf, Hamburg, Germany) and distributed. They were dried in a drying cabinet (UF55, Memmert GmbH and Co. KG, Büchenbach, Germany) for 4 min at 100 °C, then removed, and the mixture was carefully distributed to the edges with a spatula. The applied amount was then adjusted by weighing to exactly 0.4 g (±0.002 g). The SSCs were then dried for another 1 h in the drying cabinet. To cool them down, they were placed in a desiccator (Glaswerk Wertheim, Wertheim am Main, Germany) for about 21 h and measured on the fluorescence spectrometer the next day. Before the measurement, the coupons were weighed, and the weight of the applied fouling material was determined.
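As a quick consistency check, the recipe above does reproduce the stated 2000:1 maltose-to-uranine mass ratio. The short sketch below uses only the values given in the text (variable names are ours):

```python
maltose_g = 15.0          # maltose in the solution [g]
uranine_stock_mg = 25.0   # uranine AP dissolved in 750 uL of water [mg]
stock_volume_ul = 750.0
added_volume_ul = 225.0   # volume of uranine solution added to the maltose solution

uranine_added_mg = uranine_stock_mg * added_volume_ul / stock_volume_ul
ratio = (maltose_g * 1000.0) / uranine_added_mg
print(uranine_added_mg, ratio)  # -> 7.5 mg of uranine, i.e. a 2000:1 mass ratio
```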
The samples (Figure 5) had a mean weight of 0.331 g, a residual water content of 0.155 (w/w), and a standard deviation of 0.0069 g, which means the samples deviated from the mean value by approximately 2%. The fouling material falls under type 1 (soluble) as categorized by Fryer et al. [1,16].

Fluorescence Spectrometer Fluorescence spectroscopy was used in this project to perform online measurements of the cleaning of the molecular fouling material. The fluorescence spectrometer used was a "Cary Eclipse" (Figure 6A), obtained from Agilent Technologies, Waldbronn, Germany. A xenon lamp was used to excite the samples in the cuvette. The spectrometer was accompanied by software, also called "Cary Eclipse," which allows continuous measurement of the change in fluorescence intensity of a sample in the cuvette via the program "Kinetic". For single measurements of the fluorescence intensities, the program "Scan" was used. To ensure inline measurements using the fluorescence spectrometer, a specially designed holder for the flow-through cuvette was used (Figure 6C). The resulting calibration curves of the fluorescence spectrometer are shown in Figure A1.

Cleaning Tests Cleaning behavior can be monitored with the help of an experimental setup consisting of a storage tank, flow cell, continuous measuring unit, and pump [13,17,18]. The cleaning tests were conducted in the experimental setup (Figure 7), which is a configuration similar to that proposed by Otto, Zahn et al. [13].
The arrangement was such that the cleaned material was discarded from the flow cell once the cleaning had taken place, and a volume fraction of the flow was diverted, with the help of a peristaltic pump, to the fluorescence spectrometer to measure the cleaning taking place. First, initial experiments were conducted to check the reproducibility of the cleaning tests and to define a calculation method for the evaluation of the results. The fluorescence spectrometer measures the fluorescence intensities in arbitrary units; therefore, a calculation method had to be developed to validate the cleaning experiments. This can be done using the calibration curves previously obtained. The calculation procedures and test protocols were then used to perform the actual cleaning tests to obtain the most influential parameter in the cleaning of the molecular soil used. Since uranine is sensitive to the pH value of the solvent [8,9], demineralized water with a temperature of 15 °C was used as the cleaning medium. The flow in the system was controlled by regulating the mass flow. With the new concept of the flow cell, the height of the channel can be varied (three different heights), so that any of the flow parameters (velocity, wall shear stress, and Reynolds number) can be varied independently with respect to one another.

Determination of the Parameter Influences This study investigates the influence of flow parameters on the cleaning of a molecular fouling material, which can later be used to correlate cleaning times. As explained, cleaning experiments were performed by independent variation of the flow parameters: Reynolds number, velocity, and wall shear stress. Table 1 shows how the wall shear stress and velocity change with the duct depth for the same value of the Reynolds number; Tables A2 and A3 show the other variations used in this study. Each test was performed using a three-fold determination. It was therefore possible to record the cleaning curves for a fixed value of the Reynolds number and changing values of the wall shear stress and velocity (Figure 8), and vice versa. It is observed that the peak value of material removed, observed at around 30 s, decreases with increasing values of channel depth. This is expected, as the values of flow velocity and wall shear stress decrease with increasing channel depth (Table 1). Additionally, the cleaning times increase with increasing channel depth. The cleaning time is characterized by the time at which the fluorescence spectrometer readings go to zero (maltose concentration = 0). In addition, the cleaning times of all cleaning experiments with Reynolds numbers greater than the critical value of 2300 were plotted against the respective values of the flow parameters. Figure 9 shows the cleaning times plotted against the Reynolds number.
It can be observed that, for each channel geometry configuration, the cleaning times decrease with increasing Reynolds number according to a power-law fit. This shows a geometric dependency of the relationship between cleaning time and Reynolds number: cleaning times follow a power law in the Reynolds number, but only within the respective channel geometry. In Figure 10, the cleaning times are plotted against the respective velocity values. Here, too, a power-law relationship between cleaning times and flow velocity is observed; however, no dependence on the channel geometry is observed. That is, irrespective of the channel configuration, the cleaning times decrease with increasing flow velocity following a single power-law fit. Similarly, for wall shear stress, the cleaning times show a power-law relationship with the wall shear stress values for all measured points, independent of the channel geometry (Figure 11). Table 2 shows the correlations of cleaning time obtained for all the flow parameters.
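A power-law correlation of the form t_c = a · τ^b (and likewise for velocity) can be recovered from measured pairs of flow parameter and cleaning time by ordinary least squares in log-log space. The sketch below illustrates only the fitting step; the wall shear stresses and cleaning times in it are made-up numbers, not the data behind Table 2.

```python
import numpy as np

def fit_power_law(x, t):
    """Fit t = a * x**b via linear least squares on log(t) = log(a) + b*log(x)."""
    b, log_a = np.polyfit(np.log(x), np.log(t), 1)
    return np.exp(log_a), b

# Illustrative (fabricated) wall shear stresses [Pa] and cleaning times [s]:
tau = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
t_clean = np.array([520.0, 300.0, 180.0, 105.0, 62.0])

a, b = fit_power_law(tau, t_clean)
print(f"t_c ~ {a:.0f} * tau^{b:.2f}")  # negative exponent: higher shear cleans faster
```

The steepness of the fitted exponent is what distinguishes the parameters: the steeper the correlation curve, the stronger the influence of that parameter on the cleaning time.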
Conclusions In this study, in-line cleaning experiments could be performed reliably with the help of fluorescence spectroscopy and a test setup consisting of a new design of PPFC (Appendix B). The results of the proposed test method are consistent across tracers of different molecular weights and, thus, different diffusive properties (Appendix C). The results of the cleaning experiments demonstrate that, by using a geometrically flexible PPFC, independent variation of the flow parameters is possible to reveal their influences on the cleaning times of a molecular food soiling. A general decrease in cleaning times with increases in the Reynolds number, flow velocity, and wall shear stress was observed. A power-law relationship between cleaning times and the Reynolds number was observed, which is consistent with the review work of Goode et al. [16], as the fouling material used here is a type 1 deposit as classified by Fryer et al. [1]. It was shown that, concerning the flow parameters wall shear stress and velocity, cleaning times show a geometrically independent power-law relationship with the respective flow parameters for the fouling material used in this study. Therefore, they were used to directly correlate the cleaning times (Table 2). However, for the Reynolds number, a geometric dependency was observed. Although velocity and wall shear stress seemed to have more influence on the cleaning times due to their geometric independence, the wall shear stress had the steeper correlation curve in comparison and was the most influential parameter on the cleaning of the molecular fouling used in this study. Therefore, this study provides a better understanding of the cleaning mechanism involved, as well as presenting an innovative strategy of varying the cleaning parameters to determine the cleaning behavior. More research with fouling materials involving complex removal mechanisms, such as dairy cream, is required to elucidate their respective cleaning mechanisms and the effect of flow parameters.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A. Calibration of the Fluorescence Spectrometer To perform the calibration of the fluorescent tracers used in this project (uranine AP and eosin Y), serial dilutions of different concentrations of the tracers were prepared, similar to the works of many other authors [8,9]. The diluted solutions contained maltose and the fluorescent tracer at the same weight ratio (2000:1) as in the fouling material used for the cleaning tests. The fluorescence intensities of the solutions with different concentrations were measured and plotted against the respective concentration values. The calibration experiment was performed on 12 different concentration values for each tracer. The resulting calibration curves are shown in Figure A1. The goal of the calibration experiments is to obtain a trend function with a good coefficient of determination (R² > 0.95); the performed experiments resulted in a coefficient of determination of 0.999.
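Operationally, the calibration amounts to a least-squares fit of intensity against concentration plus an R² check. The sketch below assumes a linear calibration response and uses synthetic standards (12 points, arbitrary units) purely as a stand-in for the measured series:

```python
import numpy as np

def calibrate(conc, intensity):
    """Fit intensity = m*conc + c and return (m, c, R^2)."""
    m, c = np.polyfit(conc, intensity, 1)
    pred = m * conc + c
    ss_res = np.sum((intensity - pred) ** 2)
    ss_tot = np.sum((intensity - intensity.mean()) ** 2)
    return m, c, 1.0 - ss_res / ss_tot

# Synthetic example standing in for the 12 measured standards:
conc = np.linspace(0.1, 1.2, 12)   # tracer concentration (assumed units, e.g. mg/L)
rng = np.random.default_rng(1)
intensity = 650.0 * conc + 5.0 + rng.normal(0.0, 3.0, 12)  # intensity [a.u.]

m, c, r2 = calibrate(conc, intensity)
print(f"slope = {m:.1f}, intercept = {c:.1f}, R^2 = {r2:.4f}")  # R^2 should exceed 0.95
# Inverting the fit converts a measured intensity back into a concentration:
print((400.0 - c) / m)
```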
Appendix B. Initial Cleaning Experiments

The preliminary cleaning tests were performed to show the feasibility of the cleaning experiments. The idea was to determine the difference between the mass measured by the fluorescence spectrometer and the actual mass measured before the cleaning experiments. The fluorescence spectrometer measures the intensity of the fluorescent tracer in the solution in arbitrary units (a.u.), which is then converted to a concentration using the calibration curve obtained previously. Based on the mass fraction of maltose to that of the fluorescent tracer in the prepared fouling layer, the concentration of maltose is calculated. Figure A2 shows the change in mass concentration of maltose (C_m) with time obtained from the measurements of the fluorescence spectrometer. The total mass of maltose removed from the fluid cell during the cleaning experiments can be calculated using the following steps:

1. Finding the area under the curve: A = ∫ C_m dt. (A1)
2. The mass of maltose that passed via the cuvette (m_c) can be calculated using the volume flow through the cuvette (Q_c): m_c = A · Q_c. (A2)
3. The mass of maltose that was removed from the system (m_tot), assuming a homogeneous distribution of the solute in both of the separated flows, can be calculated using the ratio of the volume flows through the flow cell (Q_fc) and through the cuvette: m_tot = m_c · (Q_fc + Q_c)/Q_c. (A3)

Finally, the deviation of the mass measured by the fluorescence spectrometer (m_tot) from the mass measured before the cleaning experiments (m_me) can be calculated from

%error = (m_me − m_tot)/m_me. (A4)

Table A1 shows the error values from the preliminary cleaning tests. The deviations from the measured masses are small, and the experiments can be considered reliable.
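The mass-balance check of Equations (A1)-(A4) can be carried out numerically as sketched below. The variable names and the flow-ratio form of (A3) are our reading of the reconstructed steps above, not code from the study.

```python
import numpy as np

def percent_error(t, c_m, q_cuvette, q_flowcell, m_measured):
    """Mass-balance check following Equations (A1)-(A4).

    t          : sampling times of the spectrometer readings, s
    c_m        : maltose mass concentration at the cuvette, kg/m^3
    q_cuvette  : volume flow through the cuvette, m^3/s
    q_flowcell : volume flow through the flow cell, m^3/s
    m_measured : maltose mass weighed before the cleaning experiment, kg
    """
    # (A1) area under the concentration-time curve (trapezoidal rule)
    area = float(np.sum(0.5 * (c_m[1:] + c_m[:-1]) * np.diff(t)))
    m_c = area * q_cuvette                            # (A2) mass via the cuvette
    m_tot = m_c * (q_flowcell + q_cuvette) / q_cuvette  # (A3) total removed mass
    return (m_measured - m_tot) / m_measured          # (A4) relative deviation
```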
Appendix C. Validation of the Fluorescent Spectrometer Measurements with Eosin Y

To ensure that the current experimental setup with the fluorescence spectrometer measures the cleaning of the fouling material used, and not just the dissolution of uranine AP from the fouling material in the system, another initial cleaning test was conducted using the common fluorescent tracer eosin Y. In this test, the fouling material was prepared in the same way as for uranine AP. Calibration experiments were also performed as for uranine AP (Figure A1). The expectation is that, if the fluorescence spectrometer measures the cleaning of the fouling material used, the test results should not vary significantly when using another common fluorescent tracer, given that both are soluble in water. Figure A3 shows the comparison of the change in maltose concentration over time with the use of eosin Y (blue line) and uranine (red line). The values in Figure A3 are consistent, and the fluorescence spectrometer measurements were, therefore, successfully validated. Flow is controlled by controlling the mass flow in the system.
7,068.6
2021-05-18T00:00:00.000
[ "Engineering", "Environmental Science" ]
Some properties of pre-quasi norm on Orlicz sequence space

In this article, we introduce the concept of a pre-quasi norm on E (an Orlicz sequence space), which is more general than the usual norm, and give the conditions on E equipped with a pre-quasi norm to be a Banach space. We give the necessary and sufficient conditions on E equipped with a pre-quasi norm such that the multiplication operator defined on E is a bounded, approximable, invertible, Fredholm, and closed range operator. The components of the pre-quasi operator ideal formed by the sequence of s-numbers and E are shown to be strictly contained in one another for different Orlicz functions. Furthermore, we give sufficient conditions on E equipped with a pre-modular such that the pre-quasi Banach operator ideal constructed by s-numbers and E is simple and its components are closed. Finally, the pre-quasi operator ideal formed by the sequence of s-numbers and E is shown to be strictly contained in the class of all bounded linear operators whose sequence of eigenvalues belongs to E.

Introduction

Throughout the paper, we denote the space of all bounded linear operators from a Banach space X into a Banach space Y by L(X, Y), and if X = Y, we write L(X); the space of all real sequences is denoted by w, the real numbers by R, the complex numbers by C, N = {0, 1, 2, . . .}, the space of null sequences by c_0, and the space of bounded sequences by ℓ_∞. In operator theory, the multiplication operators on L_p-spaces are related to the composition operators; this means that the properties of composition operators on L_p-spaces can be stated through the properties of multiplication operators. Singh and Kumar [28] proved that a composition operator on L_p(X; C) is compact if and only if the multiplication operator T_α is compact, where α = d(μT^{-1})/dμ is the Radon-Nikodym derivative of the measure μT^{-1} with respect to the measure μ. In Hilbert space theory, every normal operator on a separable Hilbert space is unitarily equivalent to a multiplication operator, and multiplication operators also have their roots in spectral theory. For more details on multiplication operators, see [1,26,27,[29][30][31]. On sequence spaces, Mursaleen and Noman [17,18] studied the compact operators on some difference sequence spaces; Komal and Gupta [10] studied the multiplication operators on Orlicz spaces equipped with the Luxemburg norm, and Komal et al. [11] examined the multiplication operators on Cesáro sequence spaces. The theory of operator ideals possesses a special importance in functional analysis. Some operator ideals in the class of Banach spaces or Hilbert spaces are defined by different scalar sequence spaces. For example, the ideal of compact operators is defined by the space c_0 of null sequences and the Kolmogorov numbers. Pietsch [24] examined the quasi-ideals formed by the approximation numbers and the classical sequence space ℓ_p (0 < p < ∞). He proved that the ideals of nuclear operators and of Hilbert-Schmidt operators between Hilbert spaces are defined by ℓ_1 and ℓ_2, respectively. He proved that the class of all finite rank operators is dense in the Banach quasi-ideal, and that the algebra L(ℓ_p), where 1 ≤ p < ∞, contains one and only one nontrivial closed ideal. Pietsch [23] showed that the quasi Banach operator ideal formed by the sequence of approximation numbers is small.
Makarov and Faried [14] proved that the quasi-operator ideal formed by the sequence of approximation numbers is strictly contained for different powers, i.e., for any infinite dimensional Banach spaces X, Y and for any q > p > 0, it is true that S_p^app(X, Y) ⫋ S_q^app(X, Y) ⫋ L(X, Y). In [8], Faried and Bakery studied the operator ideals constructed by approximation numbers, generalized Cesáro and Orlicz sequence spaces ℓ_M. In [9], Faried and Bakery introduced the concept of pre-quasi operator ideal, which is more general than the usual classes of operator ideals. They studied the operator ideals constructed by s-numbers, generalized Cesáro and Orlicz sequence spaces ℓ_M, and proved that the operator ideal formed by the previous sequence spaces and approximation numbers is small under certain conditions. The aim of this paper is to study the concept of a pre-quasi norm on E (an Orlicz sequence space), which is more general than the usual norm, and give the conditions for E equipped with a pre-quasi norm to be a Banach space. We give the necessary and sufficient conditions on E equipped with a pre-quasi norm such that the multiplication operator defined on E is a bounded, approximable, invertible, Fredholm, and closed range operator. The components of the pre-quasi operator ideal formed by the sequence of s-numbers and E are shown to be strictly contained in one another for different Orlicz functions. Furthermore, we give sufficient conditions on E equipped with a pre-modular such that the pre-quasi Banach operator ideal constructed by s-numbers and E is simple and its components are closed. Finally, the pre-quasi operator ideal formed by the sequence of s-numbers and E is shown to be strictly contained in the class of all bounded linear operators whose sequence of eigenvalues belongs to E.

Definition 2.5 ([24]) A bounded linear operator T on E is compact if T(B_1) has compact closure, where B_1 denotes the closed unit ball of E. The space of all compact operators on E is denoted by L_c(E).

Lindenstrauss and Tzafriri [13] utilized the idea of an Orlicz function M to define the Orlicz sequence space ℓ_M = {x ∈ w : Σ_{k=0}^∞ M(|x_k|/ρ) < ∞ for some ρ > 0}, which is a Banach space with the Luxemburg norm ‖x‖ = inf{ρ > 0 : Σ_{k=0}^∞ M(|x_k|/ρ) ≤ 1}. Every Orlicz sequence space contains a subspace that is isomorphic to c_0 or ℓ_q for some 1 ≤ q < ∞. As of late, different classes of sequences have been studied using Orlicz functions by Et et al. [7], Mursaleen et al. [19][20][21], Alotaibi et al. [2][3][4], and Mohiuddine et al. [15].

(ii) There exists L ≥ 1 such that ρ(βu) ≤ L|β|ρ(u) for all u ∈ E and for any scalar β;

Definition 2.10 ([6]) A class of linear sequence spaces E is called a special space of sequences (sss). The set of all finite sequences is ρ-dense in E; this means that, for each x ∈ E, lim_{n→∞} ρ(x − Σ_{k=0}^n x_k e_k) = 0. A closed ideal means an ideal which contains its limit points. The concept of pre-quasi operator ideal is more general than the usual classes of operator ideals. A function g is said to be a pre-quasi norm on the ideal Ω if the following conditions hold:
(1) For all T ∈ Ω(X, Y), g(T) ≥ 0 and g(T) = 0 if and only if T = 0;
(2) There exists a constant M ≥ 1 such that g(λT) ≤ M|λ|g(T) for all T ∈ Ω(X, Y) and any scalar λ;

Definition 2.17 ([25]) An s-number function is a map defined on L(X, Y) which associates with each operator T ∈ L(X, Y) a nonnegative scalar sequence (s_n(T))_{n=0}^∞ satisfying the following conditions:
(a) Monotonicity: ‖T‖ = s_0(T) ≥ s_1(T) ≥ s_2(T) ≥ · · · ≥ 0 for every T ∈ L(X, Y);
(b) Additivity: s_{m+n}(T_1 + T_2) ≤ s_m(T_1) + s_n(T_2) for all T_1, T_2 ∈ L(X, Y) and m, n ∈ N;
(c) Ideal property: s_n(RTG) ≤ ‖R‖ s_n(T) ‖G‖ for all G ∈ L(X_0, X), T ∈ L(X, Y), and R ∈ L(Y, Y_0), where X_0 and Y_0 are arbitrary Banach spaces;
(d) If G ∈ L(X, Y) and λ ∈ R, we obtain s_n(λG) = |λ| s_n(G);
(e) Rank property: If rank(T) ≤ n, then s_n(T) = 0 for each T ∈ L(X, Y);
(f) Norming property: s_r(I_n) = 0 for r ≥ n and s_r(I_n) = 1 for r < n, where I_n represents the identity operator on the n-dimensional Hilbert space ℓ_2^n.
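As a concrete instance of Definition 2.17, the approximation numbers (a standard example from the s-number literature, added here for illustration) satisfy conditions (a)-(f):

```latex
% Approximation numbers: the standard example of an s-number function,
% added for illustration (the text above uses s-numbers abstractly).
a_n(T) = \inf\{\, \|T - A\| : A \in L(X, Y),\ \operatorname{rank}(A) \le n \,\},
\qquad T \in L(X, Y),\ n \in \mathbb{N}.
% The rank property (e) is immediate: if rank(T) <= n, then A = T is
% admissible in the infimum, so a_n(T) = 0.
```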
Main results

In this part, we give the concept of a pre-quasi norm on the Orlicz sequence space, which is more general than the usual norm, and give the conditions for the Orlicz sequence space equipped with a pre-quasi norm to be a Banach space. A function ρ : E → [0, ∞) is a pre-quasi norm if: (i) ρ(x) ≥ 0 for all x ∈ E and ρ(x) = 0 if and only if x = 0; (ii) there exists L ≥ 1 such that ρ(λx) ≤ L|λ|ρ(x) for all x ∈ E and for any scalar λ; (iii) there exists K ≥ 1 such that ρ(x + y) ≤ K(ρ(x) + ρ(y)) for all x, y ∈ E. The space E with ρ is called a pre-quasi normed (sss) and is denoted by E_ρ, which gives a class more general than the quasi normed space. If the space E is complete with ρ, then E_ρ is called a pre-quasi Banach (sss). For ℓ_M with ρ(x) = Σ_{n=0}^∞ M(|x_n|), conditions (ii) and (iii) are verified as follows.

(ii) Assume λ ∈ R and x ∈ ℓ_M; since M satisfies the Δ₂-condition, we get a number a > 0 such that ρ(λx) ≤ a|λ|ρ(x), so ρ(λx) ≤ L|λ|ρ(x), where L = max{1, a}.

(iii) Let x, y ∈ ℓ_M. Since M is nondecreasing, convex, and satisfies the Δ₂-condition, there exists a number a > 0 such that ρ(x + y) = Σ_n M(|x_n + y_n|) ≤ (a/2) Σ_n M(|x_n|) + (a/2) Σ_n M(|y_n|) ≤ K(ρ(x) + ρ(y)) for some K = max{1, a/2}. Hence (ℓ_M)_ρ is a pre-quasi normed (sss).

Since M is continuous and nondecreasing, M^{-1} exists. To prove that (ℓ_M)_ρ is a pre-quasi Banach (sss), suppose x^n = (x^n_k)_{k=0}^∞ is a Cauchy sequence in (ℓ_M)_ρ; then for every ε > 0 there exists a natural number n_0 ∈ N such that, for all n, m ≥ n_0, one has ρ(x^n − x^m) < ε. So (x^m_k) is a Cauchy sequence in R for fixed k ∈ N, which gives lim_{m→∞} x^m_k = x^0_k for fixed k ∈ N. Hence ρ(x^n − x^0) < ε. Finally, to prove that x^0 ∈ ℓ_M, note that ρ(x^0) ≤ K(ρ(x^0 − x^n) + ρ(x^n)) < ∞. This means that (ℓ_M)_ρ is a pre-quasi Banach (sss).

Multiplication operator on pre-quasi normed (sss)

In this part, we define a multiplication operator on the Orlicz sequence space with a pre-quasi norm, and give the necessary and sufficient conditions on the Orlicz sequence space equipped with a pre-quasi norm such that the multiplication operator defined on it is a bounded, approximable, invertible, Fredholm, and closed range operator.

Definition 4.1 Let α : N → C be a bounded sequence and E be a pre-quasi normed (sss); the multiplication operator T_α : E → E is defined as T_α x = (α_n x_n)_{n=0}^∞ for all x ∈ E. If T_α is continuous, we call it a multiplication operator induced by α.

Theorem 4.2 If α : N → C is a mapping and M is an Orlicz function satisfying the Δ₂-condition, then α ∈ ℓ_∞ if and only if T_α ∈ L((ℓ_M)_ρ).

Proof Let α ∈ ℓ_∞. Then there exists C > 0 such that |α_n| ≤ C for all n ∈ N. For x ∈ (ℓ_M)_ρ, since M is nondecreasing and satisfies the Δ₂-condition, we have ρ(T_α x) = Σ_n M(|α_n x_n|) ≤ Σ_n M(C|x_n|) ≤ D ρ(x), where D is a constant depending on C, which implies that T_α ∈ L((ℓ_M)_ρ). Conversely, suppose that T_α ∈ L((ℓ_M)_ρ). We prove that α ∈ ℓ_∞. For, if α is not a bounded function, then for every n ∈ N there exists some i_n ∈ N such that |α_{i_n}| > n. Since M is nondecreasing, we obtain ρ(T_α e_{i_n}) = M(|α_{i_n}|) ≥ M(n), which grows without bound while ρ(e_{i_n}) = M(1) is fixed. This proves that T_α is not a bounded operator. Hence, α must be a bounded function.

Proof Let |α_n| = 1 for all n ∈ N. Then ρ(T_α x) = Σ_n M(|α_n x_n|) = Σ_n M(|x_n|) = ρ(x) for all x ∈ (ℓ_M)_ρ. Hence T_α is an isometry. Conversely, suppose that |α_{n_0}| < 1 for some n = n_0. Since M is nondecreasing, we have ρ(T_α e_{n_0}) = M(|α_{n_0}|) < M(1) = ρ(e_{n_0}). Similarly, if |α_{n_0}| > 1, then we can show that ρ(T_α e_{n_0}) > ρ(e_{n_0}). In both cases, we get a contradiction. Hence, |α_n| = 1 for all n ∈ N.
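To make the preceding two results concrete, consider a simple example of our own (not taken from the paper):

```latex
% Illustrative example (ours, not from the paper). On (\ell_M)_\rho take
\alpha_n = \frac{n+1}{n+2}, \qquad
T_\alpha x = \Bigl(\tfrac{n+1}{n+2}\, x_n\Bigr)_{n=0}^{\infty}.
% Since |\alpha_n| \le 1 for all n, alpha is bounded and T_\alpha is a bounded
% operator by Theorem 4.2; since |\alpha_n| < 1 for every n, T_\alpha is not
% an isometry by the isometry criterion just proved.
```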
Proof Suppose that T_α is an approximable operator; hence T_α is a compact operator. We show that lim_{n→∞} α_n = 0. If this were not true, then there would exist δ > 0 such that the set B_δ = {n ∈ N : |α_n| ≥ δ} is infinite, and ρ(T_α e_{d_n} − T_α e_{d_m}) ≥ M(δ) > 0 for all distinct d_n, d_m ∈ B_δ. This proves that {e_{d_n} : d_n ∈ B_δ} is a bounded sequence which cannot have a convergent subsequence under T_α. This shows that T_α cannot be compact, hence it is not an approximable operator, which is a contradiction. Hence, lim_{n→∞} α_n = 0. Conversely, suppose lim_{n→∞} α_n = 0. Then, for every δ > 0, the set B_δ = {n ∈ N : |α_n| ≥ δ} is a finite set. Then ((ℓ_M)_ρ)_{B_δ} is a finite dimensional space for each δ > 0. Therefore, T_α|((ℓ_M)_ρ)_{B_δ} is a finite rank operator. For each n ∈ N, define α^n : N → C by α^n_k = α_k for k ∈ B_{1/n} and α^n_k = 0 otherwise. Clearly, T_{α^n} is a finite rank operator, as the space ((ℓ_M)_ρ)_{B_{1/n}} is finite dimensional for each n ∈ N. Now, since M is convex and nondecreasing, we have ρ((T_α − T_{α^n})x) = Σ_{k∉B_{1/n}} M(|α_k x_k|) ≤ (1/n) ρ(x). This proves that ‖T_α − T_{α^n}‖ ≤ 1/n, and that T_α is a limit of finite rank operators; hence, T_α is an approximable operator.

Proof It is easy, so omitted.

Proof Suppose that α is bounded away from zero on ker(α)^c. Then there exists ε > 0 such that |α_n| ≥ ε for all n ∈ ker(α)^c. We have to prove that range(T_α) is closed. Let z be a limit point of range(T_α). Then there exists a sequence T_α x_n in (ℓ_M)_ρ, for all n ∈ N, such that lim_{n→∞} T_α x_n = z. Clearly, the sequence T_α x_n is a Cauchy sequence. Write y_n for the restriction of x_n to ker(α)^c, so that T_α y_n = T_α x_n. Now, since M is nondecreasing and |α_k| ≥ ε on ker(α)^c, it follows that {y_n} is a Cauchy sequence in (ℓ_M)_ρ. But (ℓ_M)_ρ is complete. Therefore, there exists x ∈ (ℓ_M)_ρ such that lim_{n→∞} y_n = x. In view of the continuity of T_α, lim_{n→∞} T_α y_n = T_α x. But lim_{n→∞} T_α x_n = lim_{n→∞} T_α y_n = z. Therefore, T_α x = z. Hence z ∈ range(T_α). This proves that T_α has closed range. Conversely, suppose that T_α has closed range. Then T_α is bounded away from zero on ((ℓ_M)_ρ)_{ker(α)^c}. That is, there exists ε > 0 such that ρ(T_α x) ≥ ε ρ(x) for all x ∈ ((ℓ_M)_ρ)_{ker(α)^c}.

Proof Suppose that the condition is true. Define β : N → C by β_n = 1/α_n. Then T_α and T_β are bounded linear operators in view of Theorem 4.2. Also T_α T_β = T_β T_α = I. Hence, T_β is the inverse of T_α. Conversely, suppose that T_α is invertible. Then range(T_α) = ((ℓ_M)_ρ)_N. Therefore, range(T_α) is closed. Hence, by Theorem 4.7, there exists a > 0 such that |α_n| ≥ a for all n ∈ ker(α)^c. Now ker(α) = ∅; otherwise α_{n_0} = 0 for some n_0 ∈ N, in which case e_{n_0} ∈ ker(T_α), which is a contradiction, since ker(T_α) is trivial. Hence, |α_n| ≥ a for all n ∈ N. Since T_α is bounded, by Theorem 4.2 there exists A > 0 such that |α_n| ≤ A for all n ∈ N. Thus, we have proved that a ≤ |α_n| ≤ A for all n ∈ N.

(ii) |α_n| ≥ ε for all n ∈ ker(α)^c.

Proof Suppose that T_α is Fredholm. If ker(α) is an infinite subset of N, then e_n ∈ ker(T_α) for all n ∈ ker(α). But the e_n's are linearly independent, which shows that ker(T_α) is infinite dimensional, which is a contradiction. Hence, ker(α) must be a finite subset of N. Condition (ii) follows from Theorem 4.7. Conversely, if conditions (i) and (ii) are true, then we prove that T_α is Fredholm. In view of Theorem 4.7, condition (ii) implies that T_α has closed range. Condition (i) implies that ker(T_α) and (range(T_α))^c are finite dimensional. This proves that T_α is Fredholm. If (s_n(T))_{n=0}^∞ ∈ (ℓ_M)_ρ, then T ∈ S_{(ℓ_M)_ρ}(X, Y).

Pre-quasi simple Banach operator ideal

We give here sufficient conditions on the Orlicz sequence space such that the pre-quasi operator ideal formed by the sequence of s-numbers and this sequence space is strictly contained for different Orlicz functions. It is easy to verify that S_{ϕ_2}(X, Y) ⊂ L(X, Y). Next, if we take (s_n(T))_{n=0}^∞ such that ϕ_2(s_n(T)) = 1/(n+1), one can find T ∈ L(X, Y) such that T does not belong to S_{ϕ_2}(X, Y). This completes the proof.

Corollary 6.2 For any infinite dimensional Banach spaces X, Y and 0 < p < q < ∞, we have S_p(X, Y) ⫋ S_q(X, Y) ⫋ L(X, Y).

Proof Suppose that there exists T ∈ L(S_{ϕ_2}, S_{ϕ_1}) which is not approximable. According to Lemma 2.3, we can find X ∈ L(S_{ϕ_2}, S_{ϕ_2}) and B ∈ L(S_{ϕ_1}, S_{ϕ_1}) with B T X I_k = I_k.
Then it follows for all k ∈ N that ‖I_k‖_{S_{ϕ_1}} = Σ_{n=0}^∞ ϕ_1(s_n(I_k)) ≤ ‖B‖ ‖T‖ ‖X‖ ‖I_k‖_{S_{ϕ_2}} ≤ ‖B‖ ‖T‖ ‖X‖ Σ_{n=0}^∞ ϕ_2(s_n(I_k)).
4,082.4
2020-02-28T00:00:00.000
[ "Mathematics" ]
Tsallis Mutual Information for Document Classification

Mutual information is one of the most widely used measures for evaluating image similarity. In this paper, we investigate the application of three different Tsallis-based generalizations of mutual information to analyze the similarity between scanned documents. These three generalizations derive from the Kullback–Leibler distance, the difference between entropy and conditional entropy, and the Jensen–Tsallis divergence, respectively. In addition, the ratio between these measures and the Tsallis joint entropy is analyzed. The performance of all these measures is studied for different entropic indexes in the context of document classification and registration.

Introduction

Based on the capability of scanners to transform a large amount of documents into digital images, the automatic processing of administrative documents is a topic of major interest in many office applications. Some examples are noise removal, image extraction, or background detection. Other processes, such as document clustering or template matching, require the definition of document similarity. Document clustering aims to classify similar documents in groups, and template matching consists in finding the spatial correspondence of a given document with a template in order to identify the relevant fields of the document.

According to [1], the definition of the similarity between documents can be divided into two main groups, based respectively on matching local features, such as the matching of recognized characters [2] or different types of line segments [3], and on extracting global layout information, such as the use of a spatial layout representation [4] or geometric features [5]. In this paper, instead of extracting specific pieces of information or analyzing the document layout, we propose to use global measures to evaluate the similarity between two image documents. The similarity between two images can be computed using numerous distance or similarity measures. In the medical image registration field, mutual information has become a standard image similarity measure [6]. In this paper we investigate three different generalizations of this measure based on Tsallis entropy. As was previously noted in [7], the main motivation for the use of non-extensive measures in image processing is the presence of correlations between pixels of the same object in the image that can be considered as long-range correlations. Although our analysis can be extended to a wide variety of document types, in this paper we focus our attention on invoice classification. In our experiments, we show the good performance of some of the proposed measures using an invoice database composed of colored images.

This paper is organized as follows. Section 2 briefly reviews some previous work on information theory and its use in image registration and document classification. Section 3 presents three generalizations of mutual information that will be applied to document classification. Section 4 presents our general framework for document processing. Section 5 analyzes the obtained results in invoice classification and registration. Finally, Section 6 presents conclusions and future work.

Related Work

In this section, we review some basic concepts on information theory, image registration, and document image analysis.
Information-Theoretic Measures

Let X be a finite set and let X be a random variable taking values x ∈ X with distribution p(x) = Pr[X = x]. Likewise, let Y be a random variable taking values y ∈ Y. The Shannon entropy H(X) of a random variable X is defined by H(X) = −Σ_{x∈X} p(x) log p(x). The Shannon entropy H(X) measures the average uncertainty of random variable X. If the logarithms are taken in base 2, entropy is expressed in bits. The conditional entropy is defined by H(X|Y) = −Σ_{x∈X} Σ_{y∈Y} p(x, y) log p(x|y), where p(x, y) = Pr[X = x, Y = y] is the joint probability and p(x|y) = Pr[X = x|Y = y] is the conditional probability. The conditional entropy H(X|Y) measures the average uncertainty associated with X if we know the outcome of Y. The mutual information (MI) between X and Y is defined by I(X; Y) = H(X) − H(X|Y). MI is a measure of the shared information between X and Y.

An alternative definition of MI can be obtained from the definition of the informational divergence or Kullback-Leibler distance (KL). The distance KL(p, q) between two probability distributions p and q [8,9], defined over the alphabet X, is given by KL(p, q) = Σ_{x∈X} p(x) log(p(x)/q(x)). The conventions that 0 log(0/0) = 0 and a log(a/0) = ∞ if a > 0 are adopted. The informational divergence satisfies the information inequality KL(p, q) ≥ 0, with equality if and only if p = q. The informational divergence is not strictly a metric, since it is not symmetric and does not satisfy the triangle inequality. Mutual information can be obtained from the informational divergence as follows [8]: I(X; Y) = KL(p(x, y), p(x)p(y)). Thus, mutual information can also be seen as the distance between the joint probability distribution p(x, y) and the distribution p(x)p(y), i.e., the distance of the joint distribution to independence. Mutual information can also be expressed as a Jensen-Shannon divergence. Since Shannon entropy is a concave function, from Jensen's inequality we can obtain the Jensen-Shannon inequality [10]: JS(π_1, . . ., π_n; p_1, . . ., p_n) = H(Σ_{i=1}^n π_i p_i) − Σ_{i=1}^n π_i H(p_i) ≥ 0, where JS(π_1, . . ., π_n; p_1, . . ., p_n) is the Jensen-Shannon divergence of the probability distributions p_1, p_2, . . ., p_n with prior probabilities or weights π_1, π_2, . . ., π_n, fulfilling Σ_{i=1}^n π_i = 1. The JS-divergence measures how 'far' the probabilities p_i are from their likely joint source Σ_{i=1}^n π_i p_i, and equals zero if and only if all the p_i are equal. The Jensen-Shannon divergence coincides with I(X; Y) when {π_i} is equal to the marginal probability distribution p(x) and the {p_i} are equal to the rows p(Y|x_i) of the probability conditional matrix of the information channel X → Y. Then, MI can be redefined as I(X; Y) = JS(p(x_1), . . ., p(x_n); p(Y|x_1), . . ., p(Y|x_n)).

A generalization of the Shannon entropy was given by Tsallis in [11]: H^T_α(X) = (1 − Σ_{x∈X} p(x)^α)/(α − 1), where α > 0 and α ≠ 1. H^T_α(X) is a concave function of p for α > 0, and H^T_α(X) = H(X) when α → 1 (if natural logarithms are taken in the definition of the Shannon entropy).
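The following sketch shows how these quantities can be computed from a discrete joint distribution. It is an illustration under our own conventions (base-2 logarithms for the Shannon measures), not code associated with the paper.

```python
import numpy as np

def shannon_mi(p_xy):
    """I(X;Y) in bits from a joint distribution given as a 2D array p(x, y)."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return np.sum(p_xy[mask] * np.log2(p_xy[mask] / (px @ py)[mask]))

def tsallis_entropy(p, alpha):
    """H^T_alpha(p) = (1 - sum_i p_i**alpha) / (alpha - 1), for alpha != 1."""
    p = p[p > 0]
    return (1.0 - np.sum(p**alpha)) / (alpha - 1.0)

# Small sanity check on a 2x2 joint distribution:
p = np.array([[0.4, 0.1], [0.1, 0.4]])
print(shannon_mi(p), tsallis_entropy(p.ravel(), 1.5))
```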
Image Registration

Image registration is treated as an iterative optimization problem with the goal of finding the spatial mapping that will bring two images into alignment. This process is composed of four elements (see Figure 1). As input, we have both the fixed image X and the moving image Y. The transform represents the spatial mapping of points from the fixed image space to points in the moving image space. The interpolator is used to evaluate the moving image intensity at non-grid positions. The metric provides a measure of how well the fixed image is matched by the transformed moving one. This measure forms the quantitative criterion to be optimized by the optimizer over the search space defined by the parameters of the transform.

The crucial point of image registration is the choice of a metric. One of the simplest measures is the sum of squared differences (SSD). For N pixels in the overlap domain Ω_{A,B} of images A and B, this measure is defined as SSD = (1/N) Σ_{i∈Ω_{A,B}} (A(i) − B(i))², where A(i) and B(i) represent the intensity at a pixel i of the images A and B, respectively, and N the number of overlapping pixels. When this measure is applied, we assume that the image values are calibrated to the same scale. This measure is very sensitive to a small number of pixels that have very large intensity differences between images A and B. Another common image similarity measure is the correlation coefficient (CC), which is defined as CC = Σ_{i∈Ω_{A,B}} (A(i) − Ā)(B(i) − B̄) / (Σ_{i∈Ω_{A,B}} (A(i) − Ā)² Σ_{i∈Ω_{A,B}} (B(i) − B̄)²)^{1/2}, where Ā is the mean pixel value in image A|Ω_{A,B} and B̄ is the mean of B|Ω_{A,B}. While the SSD makes the implicit assumption that the images differ only by Gaussian noise, the CC assumes that there is a linear relationship between the intensity values in the images [12].

From the information theory perspective, the registration between two images X and Y (associated with the random variables X and Y, respectively) can be represented by an information channel X → Y, where its marginal and joint probability distributions are obtained by simple normalization of the corresponding intensity histograms of the overlap area of both images [13]. The most successful automatic image registration methods are based on the maximization of MI. This method, almost simultaneously introduced by Maes et al. [13] and Viola et al. [14], is based on the conjecture that the correct registration corresponds to the maximum MI between the overlap areas of the two images. Later, Studholme et al. [15] proposed a normalization of mutual information defined by NMI(X; Y) = (H(X) + H(Y))/H(X, Y), which is more robust than MI due to its greater independence of the overlap area. Another theoretical justification of its good behavior is that NMI is a true distance. Different measures derived from the Tsallis entropy have also been applied to image registration [16][17][18][19].
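A minimal implementation of the three metrics just described, assuming two grayscale images of equal size given as NumPy arrays and a histogram-based estimate of the probability distributions:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences per overlapping pixel."""
    return np.mean((a - b) ** 2)

def cc(a, b):
    """Correlation coefficient between the overlapping intensities."""
    a0, b0 = a - a.mean(), b - b.mean()
    return np.sum(a0 * b0) / np.sqrt(np.sum(a0**2) * np.sum(b0**2))

def nmi(a, b, bins=64):
    """Studholme's NMI = (H(X) + H(Y)) / H(X, Y), from a joint histogram."""
    hist = np.histogram2d(a.ravel(), b.ravel(), bins=bins)[0]
    p_xy = hist / hist.sum()
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return (h(p_xy.sum(axis=1)) + h(p_xy.sum(axis=0))) / h(p_xy)
```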
Document Image Similarity

In the context of document image analysis, image similarity is mainly used for classification purposes in order to index, retrieve, and organize specific document types. Nowadays, this task is especially important because huge volumes of documents are scanned to be processed in an automatic way. Some automatic solutions, based on optical character recognition (OCR), bank check readers, postal address readers, and signature verifiers, have already been proposed, but a lot of work still has to be done to classify other types of documents, such as tabular forms, invoices, bills, and receipts [20]. Chen and Blostein [21] presented an excellent survey on document image classification.

Many automatic classification techniques for image documents are based on the extraction of specific pieces of information from the documents. In particular, OCR software is especially useful to extract relevant information in applications that are restricted to a few specific models where the information can be located precisely [22]. However, many applications require dealing with a great variety of layouts, where relevant information is located in different positions. In this case, it is necessary to recognize the document layout and apply the appropriate reading strategy [23]. Several strategies have been proposed to achieve an accurate document classification based on layout analysis and classification [1,4,5,[23][24][25].

An invoice is a commercial document issued by a seller, containing details about the seller, the buyer, products, quantities, prices, etc., and usually a logo and tables. Hamza et al. [20] identify two main research directions in invoice classification. The first one concerns data-based systems and the second one concerns model-based systems. Data-based systems are usually used in heterogeneous document flows and extract different information from documents, such as tables [26], graphical features such as logos and trademarks [27], or the general layout [23]. On the contrary, model-based systems are used in homogeneous document flows, where similar documents arrive generally one after the other [28][29][30][31].

In this paper, we focus our attention on capturing visual similarity between different document images using global measures that do not require the analysis of the document layout. In the literature of document image classification, different measures of similarity have been used. Appiani et al. [23] design a criterion to compare the structural similarity between trees that represent the structure of a document. Shin and Doermann [24] use a similarity measure that considers spatial and layout structure. This measure quantifies the relatedness between two objects, combining structural and content features. Behera et al. [32] propose to measure the similarity between two images by computing the distance between their respective kernel density estimations of the histograms, using the Minkowski distance or the intersection of the histograms.

Generalized Mutual Information

We review here three different mutual information generalizations, based on the Kullback-Leibler distance, the difference between entropy and conditional entropy, and the Jensen-Tsallis divergence, respectively.

Mutual Information

From Equation (5), we have seen that mutual information can be expressed as the Kullback-Leibler distance between the joint probability distribution p(x, y) and the distribution p(x)p(y). On the other hand, Tsallis [33] generalized the Kullback-Leibler distance in the following form: KL^T_α(p, q) = (1/(α − 1)) (Σ_{x∈X} p(x)^α q(x)^{1−α} − 1). Thus, from Equations (5) and (12), Tsallis mutual information can be defined [33,34] as MI^T_α(X; Y) = KL^T_α(p(x, y), p(x)p(y)). Although a simple substitution of MI by MI^T_α can be used as an absolute similarity measure between two images, we focus our interest on a relative one. Such a decision is motivated by the better behavior of NMI with respect to MI [15]. Then, the generalization of NMI can be given by NMI^T_α(X; Y) = MI^T_α(X; Y)/H^T_α(X, Y). Although NMI^T_α(X; Y) is a normalized measure for α → 1, this is not true for other α values, as NMI^T can take values greater than 1. This measure is always positive and symmetric.
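A sketch of MI^T_α and its normalized version, assuming the Tsallis generalization of the Kullback-Leibler distance in the form (Σ p^α q^{1−α} − 1)/(α − 1) and normalization by the Tsallis joint entropy, as described above:

```python
import numpy as np

def tsallis_mi(p_xy, alpha):
    """MI^T_alpha: Tsallis divergence between p(x, y) and p(x)p(y)."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    q = px @ py                       # independence distribution p(x)p(y)
    mask = p_xy > 0
    s = np.sum(p_xy[mask] ** alpha * q[mask] ** (1.0 - alpha))
    return (s - 1.0) / (alpha - 1.0)

def tsallis_joint_entropy(p_xy, alpha):
    p = p_xy[p_xy > 0]
    return (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)

def normalized_tsallis_mi(p_xy, alpha):
    """NMI^T: the ratio of MI^T to the Tsallis joint entropy."""
    return tsallis_mi(p_xy, alpha) / tsallis_joint_entropy(p_xy, alpha)
```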
Mutual Entropy

Another way of generalizing mutual information is the so-called Tsallis mutual entropy [35]. The Tsallis mutual entropy is defined for α > 1 as ME^T_α(X; Y) = H^T_α(X) − H^T_α(X|Y). This measure is positive and symmetric, and the Tsallis joint entropy H^T_α(X, Y) is an upper bound [35]. Tsallis mutual entropy represents a kind of correlation between X and Y.

As in [35], the normalized Tsallis mutual entropy can be defined as NME^T_α(X; Y) = ME^T_α(X; Y)/H^T_α(X, Y). Normalized mutual entropy takes values in the interval [0..1], taking the value 0 if and only if X and Y are independent and α = 1, and taking the value 1 if and only if X = Y [35].

Jensen-Tsallis Information

Since Tsallis entropy is a concave function for α > 0, the Jensen-Shannon divergence can be extended to define the Jensen-Tsallis divergence: JT_α(π_1, . . ., π_n; p_1, . . ., p_n) = H^T_α(Σ_{i=1}^n π_i p_i) − Σ_{i=1}^n π_i H^T_α(p_i). As we have seen in Equation (7), the Jensen-Shannon divergence coincides with I(X; Y) when {π_1, . . ., π_n} is the marginal probability distribution p(x) and {p_1, . . ., p_n} are the rows p(Y|x) of the probability conditional matrix of the channel. Then, for the channel X → Y, a generalization of mutual information, which we call Jensen-Tsallis Information (JTI^T), can be expressed by JTI^T_α(X → Y) = H^T_α(Σ_x p(x) p(Y|x)) − Σ_x p(x) H^T_α(p(Y|x)). For the reverse channel Y → X, we have JTI^T_α(Y → X) = H^T_α(Σ_y p(y) p(X|y)) − Σ_y p(y) H^T_α(p(X|y)). This measure is positive and, in general, non-symmetric with respect to the reversion of the channel. Thus, JTI^T_α(X → Y) ≠ JTI^T_α(Y → X) in general. An upper bound of this measure is given by the Tsallis joint entropy: JTI^T_α ≤ H^T_α(X, Y). The Jensen-Tsallis divergence and its properties have been studied in [17,36]. Similar to the previous measures, a normalized version of JTI^T_α can be defined as NJTI^T_α(X → Y) = JTI^T_α(X → Y)/H^T_α(X, Y). This measure will also take values in the interval [0, 1].
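The two remaining generalizations can be sketched in the same style. The conditional-entropy weighting by p(y)^α follows Furuichi's definition of Tsallis conditional entropy, which is our assumption about the form used in [35]:

```python
import numpy as np

def h_t(p, alpha):
    """Tsallis entropy of a 1D distribution."""
    p = p[p > 0]
    return (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)

def tsallis_mutual_entropy(p_xy, alpha):
    """ME^T_alpha = H^T_alpha(X) - H^T_alpha(X|Y) for alpha > 1, with the
    conditional term weighted by p(y)**alpha (Furuichi-style, assumed)."""
    px, py = p_xy.sum(axis=1), p_xy.sum(axis=0)
    h_cond = sum(py[j] ** alpha * h_t(p_xy[:, j] / py[j], alpha)
                 for j in range(py.size) if py[j] > 0)
    return h_t(px, alpha) - h_cond

def jensen_tsallis_information(p_xy, alpha):
    """JTI^T(X -> Y) = H^T(p(Y)) - sum_x p(x) H^T(p(Y|x))."""
    px, py = p_xy.sum(axis=1), p_xy.sum(axis=0)
    mixture_term = sum(px[i] * h_t(p_xy[i, :] / px[i], alpha)
                       for i in range(px.size) if px[i] > 0)
    return h_t(py, alpha) - mixture_term
```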
Overview

Large organizations and companies deal with a large amount of documents, such as invoices and receipts, which are usually scanned and stored in a database as image files. Then, some information from these images, such as the seller, the date, or the total amount of the invoice, is integrated in the database via manual editing or OCR techniques.

A critical issue for document analysis is the classification of similar documents. The documents of the same class can share some interesting information, such as the background color, the document layout, the position of the relevant information on the image, or metadata, such as the seller. Once a document is grouped into a class, a specific processing for extracting the desired information can be designed depending on these features [23]. A simple way to define a class consists in taking a representative image. Then, we can create a database with the representative images, and every new entry in the database is grouped into the class whose representative image has the maximum similarity with the new image. A general scheme of our framework is represented in Figure 2. There are two different groups of images. The first one, formed by the reference images, is given by a document set where all documents are different from each other and where each document represents a document type that identifies a class. This group of documents forms the document database. The second one, composed of the input images, is given by a set of documents that we want to use as classifier input, with the aim of finding their corresponding class within the database of reference images. Note that each input image has one, and only one, reference image, and different input images can have the same reference image. The main goal of this paper is to analyze the application of the Tsallis-based generalizations of mutual information presented in the previous section to the document classification process. In the experiments on document classification carried out in this paper, we do not apply any spatial transform to the images, as we assume that they are approximately aligned.

Another objective of this paper is to analyze the performance of the Tsallis-based generalizations of mutual information in aligning two documents. This is also a critical point, since it allows us to find the spatial correspondence between an input document and a template. The registration framework used in this paper is represented in Figure 1.
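The classification step itself is a nearest-template search. A minimal sketch, with `measure` standing for any of the similarity measures above (a hypothetical callable, e.g. NMI^T with a fixed α):

```python
import numpy as np

def similarity_list(input_img, reference_imgs, measure):
    """Rank reference images by decreasing similarity to the input image."""
    scores = np.array([measure(input_img, ref) for ref in reference_imgs])
    return np.argsort(-scores)   # index 0 holds the assigned class

def classify(input_img, reference_imgs, measure):
    """Assign the class of the most similar reference image."""
    return similarity_list(input_img, reference_imgs, measure)[0]
```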
Results and Discussion

To evaluate the similarity between two document images, the similarity measures presented in Section 3 have been implemented in Visual C++ .NET. In our experiments, we have dealt with a color invoice database, where 24 bits per pixel (8 bits for each RGB color channel) are used. These images usually present a complex layout, including pictures, logos, and highlighted areas. The database is composed of 51 reference invoices and 95 input invoices to be classified. It is required that each input invoice has one and only one reference invoice of the same type in the database. This reference invoice is similar (i.e., from the same supplier) but not identical to the input invoice. In our first experiment on invoice classification, we assume that the images to be compared are fairly well aligned.

The reference and input invoices have been preprocessed using the method presented in [37], with the aim of correcting the skew error introduced during the scanning process. Although the skew error is corrected, they still present small translation errors between them. Preliminary experiments have shown that the best classification results are obtained for resolutions with a height between 100 and 200 pixels. Note that this fact greatly speeds up the computation process, as computation time is proportional to image resolution. In our experiments, all images have been scaled from the original scanning resolution (around 2500 × 3500 pixels) to a height of 100 pixels, conveniently adjusting the image width to keep the aspect ratio of the images.

Let us remember that the main objective is to calculate the degree of similarity between each input invoice and all reference invoices. In this way, an ordered list of reference invoices, called the similarity list, can be obtained from the degree of similarity (from the highest to the lowest) between the input and the reference invoices. Thus, we interpret that the first reference invoice of the list is the class assigned to the input invoice.

Next, two performance measures are considered for comparison purposes: the percentage of success and the classification error. The percentage of success is given by the number of correctly classified input invoices (i.e., the corresponding reference image of the input invoice has been set to the first place in the similarity list) over the total number of inputs. Given an input invoice, the classification error is determined by the position of the corresponding reference invoice in the similarity list. If the reference invoice is chosen properly, it will be located at position 0 of the list.

Table 1 shows the two performance values for each measure and different α values. Note that the values for the ME^T and NME^T measures are not shown for α < 1, since these measures are only defined for α > 1. For α = 1, the corresponding Shannon measures are considered in all cases. The first parameter represents the classification success in percentage and the second, in parentheses, represents the mean classification error of the misclassified input invoices. As can be seen, the measures have a different behavior with respect to the α values. While MI^T and NMI^T achieve the best classification success for α values between 0.4 and 1.2, the rest of the measures (ME^T, NME^T, JTI^T, NJTI^T) perform better for α values between 1.0 and 1.4. For these values, the normalized measures classify all the documents correctly. In general, the normalized measures perform much better than the corresponding non-normalized ones. We have also tested the performance of the SSD and CC measures, and we have obtained a classification success of 70.53% and 88.42%, respectively. Note that these results are worse than the ones obtained using the proposed Tsallis-based measures.
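The two performance figures can be computed directly from the similarity lists; a small sketch with hypothetical inputs:

```python
def evaluate(lists, true_refs):
    """Percentage of success and mean classification error of misclassified inputs.

    lists     : one similarity list (ranked reference indices) per input invoice
    true_refs : the correct reference index for each input invoice
    """
    positions = [list(lst).index(ref) for lst, ref in zip(lists, true_refs)]
    success = 100.0 * sum(p == 0 for p in positions) / len(positions)
    misses = [p for p in positions if p > 0]
    mean_error = sum(misses) / len(misses) if misses else 0.0
    return success, mean_error
```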
Table 1. The percentage of classification success and the mean classification error of the misclassified input invoices (in parentheses) for different measures and α values.

The classification error (shown between parentheses in Table 1) allows us to evaluate to what extent the classification is wrong when an invoice is misclassified. If this value is low, the system could suggest a short list of class candidates and the user could select the correct one, while if this value is high that is not advisable. From the results, we can conclude that, for a wide range of α values, the methods identify the correct class in the first positions (in almost all cases the mean classification error is lower than 5). Thus, the short list can be taken into account for the final user interface design. The classification error obtained using the SSD and CC measures is 20.25 and 6.73, respectively. Note also that the Tsallis-based measures clearly outperform the SSD and CC measures.

Our second experiment analyzes the capability of the proposed Tsallis-based measures to align two similar documents in the same spatial coordinates. In this case, two different features, robustness and accuracy, have been studied.

First, the robustness has been evaluated in terms of the partial image overlap. This has been done using the parameter AFA (Area of Function Attraction) introduced by Capek et al. [38]. This parameter evaluates the range of convergence of a registration measure to its global maximum, counting the number of pixels (i.e., x − y translations in image space) from which the global maximum is reached by applying a maximum gradient method. Note that this global maximum may not necessarily be the optimal registration position. The AFA parameter represents the robustness with respect to the different initial positions of the images to be registered and with respect to the convergence to a local maximum of the similarity measure that leads to an incorrect registration. The higher the AFA, the wider the attraction basin of the measure. In this experiment, the images have been scaled to a height of 200 pixels, conveniently adjusting the width to keep the aspect ratio. In Figure 3, the left plot represents the results for the MI^T, ME^T, and JTI^T measures with different α values, and the right plot represents the results for their corresponding normalized measures. As can be seen, the best results are achieved for α values greater than 1 for all the measures, with the mutual entropy reaching the best results. As in the previous experiment, the normalized measures also perform better than the non-normalized ones.
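The AFA computation amounts to counting the translations from which a steepest-ascent walk reaches the global maximum of the similarity surface. The sketch below assumes the surface has already been evaluated over all x-y translations; the 8-neighborhood hill climbing is our reading of the "maximum gradient method":

```python
import numpy as np

def afa(similarity_map):
    """Area of Function Attraction: number of starting translations from
    which steepest ascent reaches the global maximum of the surface."""
    rows, cols = similarity_map.shape
    target = np.unravel_index(np.argmax(similarity_map), similarity_map.shape)
    count = 0
    for i in range(rows):
        for j in range(cols):
            r, c = i, j
            while True:
                # inspect the 8-neighborhood (clipped at the borders)
                window = similarity_map[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
                dr, dc = np.unravel_index(np.argmax(window), window.shape)
                nr, nc = max(r - 1, 0) + dr, max(c - 1, 0) + dc
                if (nr, nc) == (r, c):
                    break            # current position is a local maximum
                r, c = nr, nc
            count += (r, c) == target
    return count
```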
The second feature that we analyze for the alignment experiment is the accuracy. In this case, the general registration scheme of Figure 1 has been applied, where we have used Powell's method as the optimizer [39], a rigid transform (which only considers translation and rotation, but not scaling), and a linear interpolator. The registration process is applied to 18 images of the same class that are aligned with respect to a common template (scaling them to a height of 800 pixels and keeping the aspect ratio). For each image at its original resolution (around 2500 × 3500 pixels), 14 points have been manually identified and converted to the scaled space of a height of 800 pixels. The same process has been done with the template image. In order to quantify the registration accuracy, the points of each image have been moved using the final registration transform. The mean error, given by the average Euclidean distance between these moved points and the corresponding points in the template, has also been computed. In Figure 4, for each measure and each α value, the mean error is plotted. In this case, we cannot derive a general behavior. MI^T performs better for α = 1.6, while NMI^T does so for α = 0.4. In this case, the non-normalized measure performs better than the normalized one. Neither ME^T nor NME^T outperforms the corresponding Shannon measures (α = 1). Finally, the Jensen-Tsallis information has a minimum at α = 0.6, and the accuracy diminishes when the α value increases. Among all measures, the normalized Jensen-Tsallis information achieves the best results, obtaining the minimum error (and thus the maximum accuracy) for α = 0.3. As a conclusion, for document classification, the best results have been obtained by the normalized measures, using α values between 0.4 and 1.2 for NMI^T and between 1 and 1.4 for NME^T and NJTI^T. For document registration, the most robust results have been obtained by NME^T with α = 1.3, and the most accurate ones have been achieved by NJTI^T with α = 0.3.

Conclusions

In this paper, we have analyzed the behavior of different similarity measures based on Tsallis entropy applied to document processing. Three different generalizations of mutual information, based respectively on the Kullback-Leibler distance, the difference between entropy and conditional entropy, and the Jensen-Tsallis divergence, and their ratios with the Tsallis joint entropy, have been tested. Two types of experiments have been carried out. First, the proposed measures have been applied to invoice classification, showing different behavior depending on the measure and the entropic index. Second, document registration has been studied in terms of robustness and accuracy. While the highest robustness is achieved for entropic indices higher than 1, the highest accuracy has been obtained for entropic indices clearly lower than 1. In our future work, we will analyze the performance of the analyzed measures for different typologies of documents, such as scientific papers or journal pages, and further tests will be conducted on larger databases.

Figure 1. Main components of the registration process.
Figure 3. AFA parameter values with respect to the α value for the MI^T, ME^T, and JTI^T measures (left) and the corresponding normalized measures (right). The AFA parameter evaluates the range of convergence of a registration measure to its global maximum.
Figure 4. Mean error at the final registration position for different measures and α values, for the MI^T, ME^T, and JTI^T measures (left) and the corresponding normalized measures (right).
6,162.8
2011-09-14T00:00:00.000
[ "Computer Science" ]
Dosimetric considerations for moldable silicone composites used in radiotherapy applications

Abstract

Due to their many favorable characteristics, moldable silicone (MS) composites have gained popularity in medicine and, recently, in radiotherapy applications. We investigate the dosimetric properties of silicones in radiotherapy beams and determine their suitability as water substitutes for constructing boluses and phantoms. Two types of silicones were assessed (ρ = 1.04 g/cm3 and ρ = 1.07 g/cm3). Various dosimetric properties were characterized, including the relative electron density, the relative mean mass energy-absorption coefficient, and the relative mean mass restricted stopping power. Silicone slabs with thicknesses of 1.5 cm and 5.0 cm were molded to mimic a bolus setup and a phantom setup, respectively. Measurements were conducted for Co-60 and 6 MV photon beams, and 6 MeV electron beams. The doses at 1.5 cm and 5.0 cm depths in MS were measured with solid water (SW) backscatter material (D MS–SW), and with a full MS setup (D MS–MS), then compared with doses at the same depths in a full SW setup (D SW–SW). Relative doses were reported as D MS–SW/D SW–SW and D MS–MS/D SW–SW. Experimental results were verified using Monaco treatment planning system dose calculations and Monte Carlo EGSnrc simulations. Film measurements showed varying dose ratios according to MS and beam types. For photon beams, the bolus setup D MS–SW/D SW–SW exhibited a 5% relative dose reduction. The dose for 6 MV beams was reduced by nearly 2% in a full MS setup. Up to 2% dose increase in both scenarios was observed for electron beams. Compared with the dose in SW, an interface of MS–SW can cause relatively large differences. We conclude that it is important to characterize a particular silicone's properties in a given beam quality prior to clinical use. Because silicone compositions vary between manufacturers and differ from water/SW, accurate dosimetry using these materials requires consideration of the reported differences.

INTRODUCTION

There is a growing demand for solid materials that are moldable and water-equivalent in radiotherapy, particularly for constructing patient-customized boluses and deformable phantoms. As interest grows in adopting silicone composites for these purposes, their dosimetric behavior needs to be characterized. There are several characteristics that bolus materials must fulfill, 4 many of which can be met by certain moldable silicone (MS) composites. 5 These materials can have similar mass densities to water's, and can also be manufactured to have similar tactile properties to human tissue's by modifying silicone formulations. From a chemical point of view, silicones are generally categorized as synthetic polymers with a primary repeating unit of polydimethylsiloxane (PDMS). In addition to PDMS, silicones contain "filler" materials, which act to modify properties such as mechanical durability, hardness, and stickiness. Depending on the application and use, silicone is usually transformed into a stable composition through different chemical reactions. The details of these reactions can be found elsewhere, 6 and the preferred reaction mechanism varies by application. To meet various application requirements, commercial silicones are also available with different formulations and instructions for curing. For practical reasons, such as ease of use, the favored mechanism for molding silicone materials used in medical applications 7,8 is platinum cure.
6 Few experimental studies have investigated the dose attenuation properties and tissue interface effects of silicone boluses in radiotherapy beams. Perhaps the first group to report on this were Dubois and Bice et al. 9 They looked at two different forms of silicone and evaluated their use in 9 MeV electron beams. Compared with solid water (SW), they found that the dose reduction for these materials can be up to 52% at a depth of approximately 3 cm. More recently, and using improved silicone formulations, Canters et al. 10 and Chiu et al. 11 demonstrated the use of 3D printed molds to create patient-specific boluses that offer superior contact with irregular patient surfaces, with customizable shapes and thicknesses, compared with standard synthetic gel-slab bolus. For 6 MeV and 9 MeV beams, Chiu et al. 11 reported that in vivo measurements made with platinum cure silicone bolus were within 5% of the prescribed dose. In addition to the use of silicone composites as bolus, there has been recent interest in employing these materials for constructing radiotherapy anthropomorphic phantoms. For example, the durability and flexibility of these materials make them useful for constructing deformable phantoms for adaptive radiotherapy and magnetic resonance guided radiotherapy. Applications include deformable phantoms for various anatomical sites, such as the thorax, 12 prostate, 13,14 liver, 15 and breast. 5 In these studies, dose measurements were conducted using radiochromic film, 13,15 optically stimulated luminescent dosimeters, 13 ionization chambers (ICs), 5 or scintillators 15 ; however, a thorough investigation of the dosimetric properties of silicone has yet to be reported. The purpose of this work is to investigate the dosimetric properties of MS composites in high-energy photon and electron beams, and to determine their suitability as water substitutes for constructing bolus materials and radiotherapy phantoms.

MATERIALS AND METHODS

In this study, we investigated two types of two-part composite platinum cure MSs using experimental measurements, treatment planning system (TPS) calculations, and Monte Carlo (MC) simulations. Specifically, we sought to answer two questions. The first is: are there differences in high-energy photon and electron radiation beam absorption in MS compared with SW? The second is: how do these differences change when an interface of MS and SW is introduced at different depths? These questions are relevant to consider for bolus and deformable phantom construction. For bolus, the dose at the interface between the silicone material and skin is of concern to clinical dose prescription. For deformable phantom construction, it may be desirable to fix a dosimeter rigidly in SW within a surrounding deformable medium to reduce measurement uncertainty. Ecoflex™ 00-10 (E10) and Ecoflex™ 00-50 (E50) (Smooth-On Inc., PA, USA) MS were used. They are described by the manufacturer as white-translucent silicone rubbers. They both have a low viscosity and are soft, yet durable, and were selected to characterize the extreme ends of this product line's range. These materials are reported to stretch to many times their original size without tearing and to return to their original form without distortion. This is supported by the mechanical properties listed in Table 1.
For any material of interest, it is possible to use stoichiometric data to determine key theoretical physical quantities that are relevant for evaluating the radiation absorption of materials, such as the mass density (ρ), the relative electron density (RED), the effective atomic number (Z_eff), the mean excitation energy, the relative mean mass energy-absorption coefficient ratios (μ_en/ρ)^med_water, and the relative mean mass restricted stopping power ratios for a medium (L/ρ)^med_water. Because the exact formulations of E10 and E50 are considered proprietary information and were not made available by the manufacturer, the formula for generic PDMS was assumed. This assumption was also based on the fact that filler material is usually added in small amounts, as parts per million. Table 2 lists stoichiometric data for this generic form of silicone, SW, and water that were used to determine the aforementioned quantities.

Table 2. Stoichiometric data and fractional weight of each element found in the different media of interest used in this study. Each element is listed with its atomic number, Z, provided in brackets.

In this work, the effective atomic number (Z_eff) values were calculated using the classic Mayneord formula. 16 The mean excitation energy was obtained from NIST's ESTAR database. 17 The RED, the mean mass energy-absorption coefficient ratios (μ_en/ρ)^med_water for Co-60 and 6 MV spectra, as well as the mean restricted stopping power ratios (L/ρ)^MS_water (with a cut-off energy of Δ = 10 keV for Co-60 and 6 MV spectra), were determined using the same method reported by Ho and Paliwal 18 and Cunningham and Schulz, 19 and by using data from the NIST ESTAR 17 and XCOM 20 databases.
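The Mayneord formula and the RED follow directly from the elemental weight fractions. A sketch with water as the worked example; the PDMS, SW, and E10/E50 fractions would come from Table 2, which is not reproduced here:

```python
AVOGADRO = 6.02214076e23  # 1/mol

def electrons_per_gram(composition):
    """N_A * sum_i(w_i * Z_i / A_i); composition: {element: (Z, A, w)}."""
    return AVOGADRO * sum(w * Z / A for Z, A, w in composition.values())

def relative_electron_density(rho, composition, water=None, rho_w=1.0):
    """RED = electrons per cm^3 of the medium over electrons per cm^3 of water."""
    water = water or {"H": (1, 1.008, 0.1119), "O": (8, 15.999, 0.8881)}
    return (rho * electrons_per_gram(composition)) / (rho_w * electrons_per_gram(water))

def z_eff_mayneord(composition):
    """Classic Mayneord effective atomic number with exponent 2.94."""
    electrons = {el: w * Z / A for el, (Z, A, w) in composition.items()}
    total = sum(electrons.values())
    return sum((electrons[el] / total) * composition[el][0] ** 2.94
               for el in composition) ** (1.0 / 2.94)

water = {"H": (1, 1.008, 0.1119), "O": (8, 15.999, 0.8881)}
print(z_eff_mayneord(water))  # ~7.4, a standard check value for water
```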
Description of phantoms

A custom-built acrylic cuboid (with a 15 × 15 cm2 inner base area, 6 mm wall thickness, and 10 cm height) was used as a mold for constructing silicone slabs of variable thickness. This allowed measurements to be performed in a simple, reproducible geometry. Six silicone slabs were constructed: three using E10 and three using E50, with each set of three having different thicknesses. The first slab type was 1.5 cm thick with a 15 × 15 cm2 base area. The second was 5.0 cm thick with a 15 × 15 cm2 base area. The third was also 5.0 cm thick with a 15 × 15 cm2 base area, and had an enclosed embedded slot for securely positioning an Advanced Markus® plane-parallel IC (S/N: 00815, Model TN34045, PTW Freiburg, Germany) flush against one of the slab's surfaces. The slot was created by placing a plastic IC dummy, with the exact dimensions of the Markus IC, at the central axis on the base of the mold. Figure 1 shows the custom-built mold, the Markus IC dummy, and the molded silicone slabs. Both types of silicone were left to cure for a minimum of 4 h. Since silicone is a deformable material, the total uncertainties related to producing and setting up silicone slabs with the stated thicknesses were determined by measuring the dimensions of the cured silicone slabs with a caliper (within 0.1% measurement precision).

Figure 1. The molding process for the silicone slabs included using a custom-built acrylic open-faced cuboid container, which had an optional Markus IC dummy insert that can be added at the base to form a slot for IC placement. The molded E10 and E50 silicone slabs, 1.5 cm and 5.0 cm thick, are shown in the right-hand image.

In addition to the six silicone phantom slabs, four Solid Water® slabs (Gammex-RMI, WI, USA) were used in our experimental measurements. One of these slabs was 1.5 cm thick, two were 5.0 cm thick, and the fourth was also 5.0 cm thick with an embedded slot to fit the Markus IC flush against one of its surfaces.

Experimental setup

This study focused on two main aspects of using silicone: when it is used as a full medium for absorbed dose measurements, or when a certain thickness of silicone is placed on top of another type of medium, creating an interface at the point of dose measurement.

Figure 2. Electron beams were measured using 1.5 cm depth slabs: setups (1), (2), and (3) as shown in (a). IC measurements were conducted in the lower MS slabs that were made to fit the IC flush against their surface. The measurement points (at the interfaces) are identified with the x marker in the illustrations shown in (a) and (b), and evaluated dose ratios are shown in grey boxes. Measurements are compared to Monaco TPS calculations for 6 MV and to EGSnrc Monte Carlo simulations for Co-60 and 6 MV at the same depths. An example of one of the setups used for measurements in the Co-60 beam is provided in (c), in which the sides of the acrylic mold were used as a frame to maintain the silicone slabs in an upright position for a lateral beam orientation. An example of one of the setups used for measurements in the 6 MV beam, using the Markus IC, is provided in (d). Measurements for 6 MeV beams were conducted with a 10 × 10 cm2 electron applicator in a similar setup to that shown in (d).

Co-60 photon measurements were performed using a primary standard Co-60 gamma teletherapy irradiator (GammaBeam X200™, Best Theratronics Ltd., Ottawa, Ontario, Canada). All 6 MV photon measurements were performed using a clinical linear accelerator (Elekta Synergy, Elekta Instrument AB, Stockholm, Sweden). All 6 MeV electron measurements were performed using another clinical linear accelerator (Elekta Infinity, Elekta Instrument AB, Stockholm, Sweden). Details of each process are provided separately below for each beam type and energy, and for radiochromic film and IC measurements.

Radiochromic film

Radiochromic film is known to have a negligible effect on radiation fluence; 21 therefore, EBT3 Gafchromic (Ashland Inc., Wayne, NJ, USA) film measurements were also performed to validate IC measurements acquired at phantom slab interfaces. Films were pre-cut and divided into two pieces. For each irradiation, a larger piece (10.16 cm × 10.16 cm) was used for the film dose measurement, and a smaller piece (2.54 cm × 10.16 cm) was used as a dedicated control piece to account for darkening due to heat and light exposure and to estimate the unirradiated film baseline homogeneity and scan repeatability. Because two separate batches of film were used for the Co-60 measurements and for the linac measurements, two separate calibrations were performed. For Co-60 measurements, film calibration was performed using the Co-60 irradiator. At the time of measurements, it had a nominal dose rate of 48.8 cGy/min at a reference depth of 5.0 cm in water, for a 10 × 10 cm2 field size and 100 cm source-to-surface distance (SSD). For linac measurements, film calibration was performed using the 6 MV beam. Film calibration was performed following a procedure similar to that described by Devic et al. 22 Film orientation was maintained by marking the upper left edge of the film and by using a custom-made template which exactly fits both the measurement film and its control piece. The films were always placed in the same location on the scanner bed for pre-irradiation and post-irradiation scans.
Each film was scanned three times and the scans averaged, with three warm-up scans taken prior to scanning. All films were scanned using the same configuration: transmission mode, 48-bit color, 150 DPI, with an Epson 10000XL scanner (Seiko Epson Corporation, Suwa, Nagano, Japan), using the red channel for Co-60 measurements and the green channel for linac measurements. Film readout was performed using MATLAB (MathWorks, Inc., Natick, MA, USA, v. R2020b), using a 0.3 cm radius region-of-interest at the center of each film. The average of the mean net optical density in each region of interest was used to calculate the average dose for each setup. Measurements in photon and electron beams For each of the configurations shown in Figure 2a,b, dose measurements were performed using film and the Markus IC at the central-axis position. Measurements in all beam types were conducted with the IC protection cap on, then repeated for 6 MV and 6 MeV beams with the IC protection cap off, to distinguish chamber cap-related perturbation effects in the measurement data. IC measurements were conducted using a Keithley electrometer (S/N: 8-8278, Model 35040, Advanced Therapy Dosimeter, Fluke Biomedical, Everett, WA, USA) set at a 300 V bias. Co-60 measurements were performed at depths of 1.5 cm and 5.0 cm, with a 10 × 10 cm2 field size, 100 cm source-to-axis distance (SAD), and an irradiation time of 2.05 min to deliver 100.0 cGy at the measurement point. 6 MV photon beam measurements were performed at depths of 1.5 cm and 5.0 cm, 10 × 10 cm2 field size, 100 cm SAD, and 1000 Monitor Units (MU; see setup in Figure 2a,b). 6 MeV beam measurements were performed at a depth of 1.5 cm, 10 × 10 cm2 electron applicator size, 100 cm SAD, and 1000 MU. The primary standard Co-60 beam utilized a fixed gantry head geometry to irradiate with a highly precise and reproducible lateral beam setup. Because the silicone slabs could sag when positioned on their short side, the acrylic mold was used as a frame to maintain the silicone slabs in a flat, upright position for a lateral beam orientation, as shown in Figure 2c. This kept the beam direction orthogonal to the slab surface. The base of the mold was removed so it would not interfere with the dose readings. The low dose rate of the Co-60 beam required long irradiation times to achieve a sufficient dose level. For this reason, and because of the slab design, the lateral irradiation geometry, and limited access to the Co-60 irradiator, only one film was exposed per experimental setup. This limited access also prevented repeating IC measurements with the protection cap off. Measurements conducted using the linac were performed using a vertical beam orientation (see Figure 2d). In order to reduce the overall uncertainty on film readings 23 for these experiments, four pieces of film were stacked on top of each other and irradiated simultaneously. Three dose readings were obtained for each beam type, beam energy, and setup configuration. Then, dose ratios were determined as illustrated in Figure 2a,b. Alongside these ratios, the total uncertainties for film and IC dose measurements were estimated by considering the film calibration process, the precision of the silicone molding process, dose calibration factors, beam setup, silicone slab thickness variability due to sag during measurements, as well as setup uncertainty and dose/reading reproducibility, where applicable.
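The uncertainty bookkeeping just described amounts to summing the individual components in quadrature and then propagating the per-reading uncertainty through each quotient. A minimal sketch, with invented component values (the actual budgets are in Tables 5 and 6):

    import math

    # Assumed (k=1) components in %, standing in for the budget items listed
    # above: film calibration, molding precision, setup, sag, reproducibility.
    components_pct = [0.8, 0.5, 0.4, 0.3, 0.2]
    u_dose_pct = math.sqrt(sum(u**2 for u in components_pct))

    def ratio_and_uncertainty(d_num, d_den, u_num_pct, u_den_pct):
        # First-order propagation for a quotient of independent readings
        r = d_num / d_den
        return r, r * math.sqrt(u_num_pct**2 + u_den_pct**2) / 100.0

    r, u_r = ratio_and_uncertainty(97.1, 100.0, u_dose_pct, u_dose_pct)
    print(f"{r:.3f} +/- {u_r:.3f}")  # e.g., an invented D(MS-SW)/D(SW-SW)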
D^(MS−SW)_(SW−SW) and D^(MS−MS)_(SW−SW) values and associated uncertainties were then compared to values determined from TPS calculations and MC simulations, as described below. CT imaging and TPS calculations Using the slab orientation for vertical beam irradiation (Figure 2d), CT images of the six configurations shown in Figure 2a,b were acquired with a radiotherapy CT simulator (Brilliance Big Bore, Philips Medical Systems, Cleveland, USA), with an image resolution of 0.4 mm × 0.4 mm × 0.4 mm, 120 kVp, and 350 mAs. In each setup, four pieces of film were placed between the two slabs. The image sets were imported into the Monaco® TPS (v. 5.11.02, Elekta Instrument AB, Stockholm, Sweden), the external contours of each slab were contoured, and a small region of interest (0.3 cm3 volume) centrally located in the film was also contoured.
TABLE 3 Physical quantities related to radiation attenuation and absorption, as reported for generic silicone, and compared to common materials used in radiotherapy dosimetry (namely, solid water and water).
The Monaco® TPS 24 uses a specified CT-to-electron density (ED) table to convert a CT image pixel's Hounsfield Unit (HU) value to an ED value. 25 The HU values for each image pixel in the contoured structure are mapped to RED values using a user-specified CT-to-ED file. This file is based on measurement data obtained with a phantom, such as the Gammex 467 Tissue Characterization Phantom (Gammex Inc., Middleton, WI, USA). These types of phantoms house inserts made of tissue-equivalent materials with standard compositions, 26 such as lung, adipose, water, muscle, cartilage, bone, aluminum, and iron. Once the ED is determined, this value is subsequently used by the dose calculation algorithm to determine the material characteristics required for dose calculation, such as the mass density (ρ), photon mass attenuation coefficient (μ/ρ), (collisional) mass stopping power (S_col/ρ), electron scattering power, etc. Consequently, for accurate dose calculation using the TPS, it is important to use the correct RED value for a particular material. When plastic or silicone materials are used, the CT-to-ED file may not be appropriate to apply directly, since the compositions of these materials can differ from tissues'. To ensure that TPS dose calculations were free of systematic errors resulting from a potential material misrepresentation, the correct RED value of 0.983 (see Table 3) was applied by overriding the silicone slab contours' voxels during calculations. The RED value automatically reported by the TPS was measured at the center of each silicone slab and noted for comparison purposes only. A treatment plan was generated for each of the setup configurations in accordance with the measurement conditions. The plan isocentre (100 cm SAD) was set at the interface of the two slabs. For 6 MV photon beams, both 1.5 cm thick and 5.0 cm thick upper slabs were used. For 6 MeV electron beams, only 1.5 cm thick upper slabs were used, because 5 cm of material is beyond the practical range of 6 MeV electron beams. Two dose calculation algorithms were employed for 6 MV photon beam calculations: Collapsed Cone Convolution (CC) and the X-ray Voxel Monte Carlo (XVMC) 27 MC implementation. The dose-to-medium was calculated using 0.2 cm grid spacing and, for MC, a 0.1% statistical uncertainty. The mean dose for the small film contour was obtained for each plan.
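A sketch of the CT-to-ED lookup and RED override described above. The HU/RED pairs are illustrative, not the Gammex 467 calibration data, and hu_to_red() simply mimics the piecewise-linear table interpolation a TPS performs:

    import numpy as np

    hu_points = np.array([-1000.0, -700.0, 0.0, 300.0, 1200.0, 3000.0])  # assumed
    red_points = np.array([0.00, 0.29, 1.00, 1.15, 1.69, 2.53])          # assumed

    def hu_to_red(hu):
        # Piecewise-linear interpolation of the CT-to-ED table
        return np.interp(hu, hu_points, red_points)

    voxels_hu = np.array([-5.0, 10.0, 25.0])  # HUs sampled in a silicone contour
    red_from_ct = hu_to_red(voxels_hu)        # what the TPS would assign (~1.0)
    red_override = np.full_like(red_from_ct, 0.983)  # corrected value, Table 3
    print(red_from_ct, red_override)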
The beam model used for all calculations was for an Agility MLC linear accelerator (Elekta Instrument AB, Stockholm, Sweden). For 6 MeV, the VMC 28 MC dose calculation algorithm was used to calculate dose-to-medium with 0.2 cm grid spacing and 10^6 histories and, similar to the 6 MV plans, the mean dose for the small film contour was obtained for each plan. In accordance with experiments, for 6 MV and 6 MeV, the values reported from TPS calculations are D^(MS−SW)_(SW−SW) and D^(MS−MS)_(SW−SW) for depths of 1.5 cm (for photon and electron plans) and 5.0 cm (for photon plans). Monte Carlo simulations In order to validate experimental data and TPS calculations with photon beams, MC simulations were carried out using EGSnrc/DOSXYZnrc. 29,30 Voxelized dose calculation geometry files were created to emulate the experimental setups and phantom material geometries described above. MC simulation properties are summarized in Table 4, as recommended by AAPM's Research Committee Task Group 268. 31 Simulations were performed for each energy and for all the configurations shown in Figure 2a,b. RESULTS Table 3 lists the dosimetric quantities determined for generic silicone in comparison with SW and water. While the mass density of all three materials is similar, quantities such as Z_eff, the RED, the mean excitation energy, and (L/ρ)^med_water are significantly different for silicone. Within the energy range investigated, (μ_en/ρ)^med_water is predominantly governed by incoherent scattering (Compton interactions). The combined uncertainties (k = 1) for measurements conducted in all beam types and energies were 0.92% for the IC (refer to Table 5) and, for film, 2.06% for Co-60 and 1.11% for 6 MV and 6 MeV (refer to Table 6). In Tables 5 and 6, Type A uncertainties are evaluated through statistical analysis of measurements (such as the standard deviation and standard error around the mean of results), whereas Type B uncertainties are determined through best scientific judgment based on the literature. 23,32,33 Table 7 provides a comparison of D^(MS−SW)_(SW−SW) and D^(MS−MS)_(SW−SW) values in phantom material at the measurement plane from experimental measurements and MC simulations in the Co-60 photon beam. In addition to the experimental measurement and MC simulation ratios, the ratios obtained from TPS calculations are also listed for the 6 MV photon beam and 6 MeV electron beams in Tables 8 and 9, respectively. In Tables 7-9, experimental data for the two silicone types are provided separately, whereas data from TPS calculations and MC simulations are provided for the generic form of silicone. MC (DOSXYZnrc) data closely matched those obtained experimentally with film at the same depth (see Tables 7 and 8). For all beam energies and depths in phantom material, a visible perturbation is present just beyond 100 cm SAD when an interface of silicone and solid water (MS-SW) is present. DISCUSSION MS composites offer practical advantages for constructing deformable anthropomorphic phantoms. 5,12,13,15,34 With increased utilization of 3D printing in radiotherapy, MSs are also being used to mold custom patient-specific radiotherapy bolus out of 3D printed shells. 10,35 In this paper, we constructed slab phantoms out of two types of commercial silicone composites. The first, referred to as E10, formed a soft and flexible slab, and the second, referred to as E50, formed a harder and more rigid slab.
The molding process demonstrated was simple and provided a reproducible setup for conducting IC and film dose measurements at the interface of two slab phantom planes. Because their mass and electron densities are similar to water's, E10 and E50 were expected to be suitable for applications in MV photon radiotherapy and dosimetry, where Compton scattering interactions dominate. Since film has negligible radiation fluence perturbation effects, and if we consider measurements conducted with Co-60 and 6 MV photon beams using film as being more reliable than the Markus IC measurements, we can conclude that the relative dose ratios resulting from MS-MS or MS-SW setups were up to 5% different from those with a SW-SW setup (see Tables 7 and 8). As an example, for a prescription dose of 200 cGy, this would translate to a delivered dose of 190 cGy at the same depth. In these cases, the differences in dose ratios were more prominent when the phantom setup configuration comprised an interface of two media (MS-SW), as opposed to being fully made of silicone (MS-MS). Indeed, when silicone was used alone (MS-MS), the dose ratios were up to 4% higher and 2% lower in Co-60 and 6 MV photon beams, respectively (or 208 cGy and 198 cGy, respectively, in the example above). That is to say, a phantom made purely of silicone would offer closer dosimetric tissue equivalence in the higher-energy photon beam at measurement depths of 1.5 cm or 5.0 cm. Furthermore, based on its relative dose attenuation in 6 MV photon beams, E10 (which is more deformable than E50 and is mechanically similar to human tissue 5) seems better suited for phantom and bolus applications. The difference between measured dose values obtained in E10 and E50 materials may be related to differences in their chemical composition. As mentioned previously, in addition to the repeating silicone polymers in silicone composites, these materials are manufactured to incorporate small amounts of "filler" material. Filler materials range from carbon to silica, titanium, or barium sulfate. Because the exact formulations of E10 and E50 are proprietary and could not be obtained from the manufacturer, the measured dose differences between the two materials could not be attributed with certainty to differences in filler materials. Only a detailed chemical analysis could offer quantifiable data; however, it is important to note that different silicone composite product lines or different manufacturers can rely on different types and quantities of filler materials to generate variable degrees of hardness or softness, radiopacity, or viscosity, for example. Consequently, due to the predominance of the photoelectric effect at low photon energies, it was expected that the potential presence of higher atomic number elements in E50, inferred from its higher Shore hardness compared to E10, would be reflected as increased dose attenuation when measurements were conducted in the lower-energy photon beam. Indeed, we found that, compared with measurements in SW, using E10 and E50 in 6 MV photon beams caused smaller dose differences than in Co-60 photon beams. Dose discrepancies would likely be even more noticeable in kV photon energy ranges, particularly due to the presence of a high amount of silicon (Z = 14) in silicone composites (see Table 2).
For all photon beam measurements, dose ratios obtained with the IC were lower than those obtained with film (see Tables 7 and 8).
TABLE 6 Uncertainty budget for net optical density readings obtained with EBT3 film in both photon (Co-60 and 6 MV) and 6 MeV electron beams.
MC simulations were employed to investigate these differences. Simulation results, obtained for a generic form of silicone with no distinction between E10 and E50, resembled those obtained with film measurements at both depths in Co-60 (see Table 7) and 6 MV photon beams (see Table 8 and Figure 3). These results again allude to the fact that conducting dose measurements entirely within silicone material will yield results within 2% of those conducted in SW, but that larger differences can be expected if silicone is placed on top of SW to create an interface of the two materials. These results can also be clearly visualized from the MC simulation data shown in Figure 3, where, at both 1.5 cm and 5.0 cm depths, a reduction in the scored dose is observed at the interface (∼2%) and 0.1 cm beyond it (∼4%) in MS-SW phantom configurations. This finding is relevant to consider in applications where silicone composites may be used to mold a bolus for a patient's radiotherapy treatment, 11,36 or when they are used to construct phantoms for radiotherapy applications using multiple materials. 12,13,15,37 TPS calculations were also performed using a generic form of silicone. In this case, a corrected RED value of 0.983 was used instead of the TPS-determined value of 1.055 ± 0.003. Two dose calculation algorithms (TPS-CC and TPS-MC) were applied to establish any potential errors caused by using an algorithm that did not fully account for lateral scatter, such as CC. 38 In the simple geometry used, no observable differences were found when comparing the point dose ratios obtained with the two dose calculation algorithms, as expected in simple geometries 39 such as the configurations tested. For accurate TPS dose calculation in cases where more complicated calculation geometries and material configurations are used, the TPS-MC dose calculation algorithm would offer more reliable results. When an interface of MS-SW was used, results from MC simulations were approximately 2% lower than those from TPS calculations. This is related to the differences in (μ_en/ρ)^silicone_water and (L/ρ)^silicone_water (see Table 3), because the TPS will not accurately model silicone's true dose absorption compared to water. During MV photon dose calculation, the Monaco TPS uses the RED value to determine the associated mass density which, according to Monaco's TPS Dose Calculation Manual, 24 for silicone's RED of 0.983 equates to a mass density of 0.983 g/cm3. Moreover, in the Monaco TPS, the relative mass collisional stopping power for a medium, (S_col/ρ)^med_water, is calculated as a function of mass density using equations applicable over variable ranges of mass densities, where (S_col/ρ)^med_water = 1.000 over the range 0.98 < ρ < 1.02. 24 This is not entirely accurate since, as determined previously, 5 (S_col/ρ)^silicone_water is 0.948 for 6 MV photon beams. Silicone has a mean excitation energy that is ∼25% higher than water's (see data provided in Table 3), which in turn lowers the mass stopping power for silicone relative to water.
This signifies that the TPS does not account for changes in electron fluence when using a medium that is dissimilar to water, and that, for accurate TPS-calculation-to-measurement comparisons, it is necessary to apply a correction to the TPS-determined dose value.
FIGURE 3 In all cases, the dose is presented relative to the dose at 100 cm source-to-axis distance (SAD) for the SW-SW setup at each respective depth and beam energy. The field size and SAD for all simulations were 10 × 10 cm2 and 100 cm, respectively, and all simulations yielded values with uncertainties below 0.3%. Dose ratios from film measurements made with silicone E10 and E50 types are also shown for comparison and are labeled as (film-E10) or (film-E50), for film measurements in each type of silicone material (E10 or E50).
The primary correction to the TPS dose calculation would account for changes in the charged particle fluence, which would be reflected by the difference between the TPS-applied (S_col/ρ)^silicone_water and its actual value. These corrections alone can be on the order of a 1%-2% increase in calculated dose. Contrary to photon measurements, IC and film readings agreed well in 6 MeV electron measurements (refer to Table 9). Here, film results showed that the overall differences in dose ratios were within −2% to +4%, depending on the silicone type and setup. It is also worth noting that the reported dose ratios in electron beams are high (as opposed to MV photon beams, where the reported ratios were generally low), meaning that dose measurements in silicone result in a higher dose value than in SW. This can be attributed to the fact that the collisional mass stopping power for silicone is lower than that of water, and so the magnitude of electron fluence attenuated by silicone would be less than that in water of equal physical thickness. Once again, this finding is relevant to consider in applications where silicone composites may be used to mold a bolus to increase skin dose for a patient's radiotherapy treatment. The choice of the Markus IC was based on practicality and offered a clear benefit for establishing the dose readings at the interface of silicone and SW. It would have been challenging to use a Farmer-type IC in this type of phantom slab geometry due, among other reasons, to the IC slot molding process (see Figure 1).
TABLE 9 D^(MS−SW)_(SW−SW) and D^(MS−MS)_(SW−SW) values at 100 cm SAD and 1.5 cm depth, in a 6 MeV electron beam, from experimental measurements and TPS-MC (using Monaco's MC calculation algorithm). Note that TPS calculations were performed for a generic form of silicone, therefore the same calculation data are provided for both types of silicone (E10 and E50).
IC dose ratios did not significantly differ when the protection cap was used or removed. For 6 MV beams, the range of differences in dose ratios was −0.28% to 1.05% at depths of 1.5 cm and 5.0 cm, respectively, and for 6 MeV beams the range was −1.14% to −1.37% at a depth of 1.5 cm. The IC readings were found to differ from data obtained by film measurements, TPS calculations, and MC simulations. These inconsistencies are related to how parallel-plate ICs are constructed. The Advanced Markus IC is manufactured for absolute dosimetry in high-energy electron beams and is made of poly(methyl methacrylate) (PMMA) with a 0.03 mm thick polyethylene (CH2) entrance foil (2.76 mg/cm2). Its protection cap is also made of PMMA (0.87 mm thickness and 1.19 g/cm3), and it has a small sensitive volume with a radius of 2.5 mm (for a depth of 1.0 mm). 40
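For reference, the quoted sensitive-volume dimensions correspond to a cylindrical collecting volume of about 0.02 cm^3:

    import math

    radius_cm, depth_cm = 0.25, 0.10  # 2.5 mm radius, 1.0 mm depth, as quoted
    volume_cm3 = math.pi * radius_cm**2 * depth_cm
    print(f"{volume_cm3:.3f} cm^3")  # ~0.020 cm^3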
Based on these specifications, it is designed to minimize dose perturbation effects and volume averaging in the depth direction, which was necessary for the measurements conducted in this study. This has been previously validated both experimentally and through MC simulations in Co-60 photon beams, which have shown that the associated correction for attenuation and scatter in the chamber wall (P_wall) is close to unity. 40,41 Nevertheless, an under-response in measured dose was still observed in our experimental results for photon beams, in which no measurable difference in relative dose was found between interfaces made by SW−SW and MS−SW. This is due to the fact that the backplate of the parallel-plate IC is sufficiently thick to be the primary source of the backscattered fluence measured by the chamber. [42][43][44] This effect may be reduced by using a parallel-plate IC which is more robust to backscatter, such as the Roos® (PTW-Freiburg, Germany). Based on our data, for pre-clinical dose verification, radiochromic film offers a more reliable alternative for measuring dose in silicone material, as well as at different material interfaces, in setups similar to those applied in our study. This work investigated the use of silicone in open photon and electron beams only, whereas more modulated radiation beams are often encountered in clinical settings. With intensity modulated beams, the use of multi-leaf collimators can result in low-energy scatter which, due to silicone's higher Z_eff, results in a marked increase in photoelectric interactions. In these situations, it may be interesting to also evaluate how dose distributions measured in silicone composite materials differ from those measured in SW. CONCLUSIONS MS composites offer practical advantages for constructing customized patient bolus and radiotherapy phantoms for use in high-energy photon and electron beams. Silicone compositions differ from SW's, and it is important to consider the associated differences in beam attenuation properties prior to clinical use or phantom applications. This study demonstrated how the dosimetric properties and effects of silicone can be assessed. Experimental, TPS calculation, and MC simulation data showed that, compared with the dose measured in SW, differences in measured dose become relatively large when silicone is used in conjunction with SW to form an interface of two materials. Using silicone alone offers a more tissue-equivalent medium for constructing phantoms for use in absorbed dose measurements under high-energy photon and electron beams.
Genome Stability Is in the Eye of the Beholder: CR1 Retrotransposon Activity Varies Significantly across Avian Diversity Abstract Since the sequencing of the zebra finch genome it has become clear that avian genomes, while largely stable in terms of chromosome number and gene synteny, are more dynamic at an intrachromosomal level. A multitude of intrachromosomal rearrangements and significant variation in transposable element (TE) content have been noted across the avian tree. TEs are a source of genome plasticity, because their high similarity enables chromosomal rearrangements through nonallelic homologous recombination, and they have potential for exaptation as regulatory and coding sequences. Previous studies have investigated the activity of the dominant TE in birds, chicken repeat 1 (CR1) retrotransposons, either focusing on their expansion within single orders or comparing passerines with nonpasserines. Here, we comprehensively investigate and compare the activity of CR1 expansion across orders of birds, finding that levels of CR1 activity vary significantly both between and within orders. We describe high levels of TE expansion in genera which have speciated in the last 10 Myr, including kiwis, geese, and Amazon parrots; low levels of TE expansion in songbirds across their diversification; and near inactivity of TEs in the cassowary and emu for millions of years. CR1s have remained active over long periods of time across most orders of neognaths, with activity at any one time dominated by one or two families of CR1s. Our findings of higher TE activity in species-rich clades and dominant families of TEs within lineages mirror past findings in mammals and indicate that genome evolution in amniotes relies on universal TE-driven processes. Introduction Following rapid radiation during the Cretaceous-Paleogene transition, birds have diversified to be the most species-rich lineage of extant amniotes (Ericson et al. 2006; Jarvis et al. 2014; Wiens 2015). Birds are of particular interest in comparative evolutionary biology because of the convergent evolution of traits seen in mammalian lineages, such as vocal learning in songbirds and parrots (Petkov and Jarvis 2012; Pfenning et al. 2014; Bradbury and Balsby 2016), and potential consciousness in corvids (Nieder et al. 2020). However, in comparison to both mammals and non-avian reptiles, birds have much more compact genomes (Gregory et al. 2007). Within birds, smaller genome sizes correlate with higher metabolic rate and the size of flight muscles (Hughes and Hughes 1995; Wright et al. 2014). However, the decrease in avian genome size occurred in an ancestral dinosaur lineage over 200 Ma, well before the evolution of flight (Organ et al. 2007). A large factor in the smaller genome size of birds in comparison to other amniotes is a substantial reduction in repetitive content (Zhang et al. 2014). The majority of transposable elements (TEs) in the chicken (Gallus gallus) genome are degraded copies of one superfamily of retrotransposons, chicken repeat 1 (CR1) (International Chicken Genome Sequencing Consortium 2004). The chicken has long been used as the model avian species, and typical avian genomes were believed to have been evolutionarily stable due to little variation in chromosome number and chromosomal painting showing little chromosomal rearrangement (Burt et al. 1999; Shetty et al. 1999).
These initial, low-resolution comparisons of genome features, combined with the degraded nature of CR1s in the chicken genome, led to the assumption of a stable avian genome not only in terms of karyotype and synteny but also in terms of little recent repeat expansion (International Chicken Genome Sequencing Consortium 2004; Wicker et al. 2005). The subsequent sequencing of the zebra finch (Taeniopygia guttata) genome supported the concept of a stable avian genome with little CR1 expansion, but revealed many intrachromosomal rearrangements and a significant expansion of endogenous retroviruses (ERVs), a group of long terminal repeat retrotransposons, since divergence from the chicken (Ellegren 2010; Warren et al. 2010). The subsequent sequencing of 48 bird genomes by the Avian Phylogenomics Project confirmed CR1s as the dominant TE in all non-passerine birds, with an expansion of ERVs in oscine passerines following their divergence from suboscine passerines (Zhang et al. 2014). The TE content of most avian genomes has remained between 7% and 10%, not because of a lack of expansion, but due to the loss and decay of repeats and intervening noncoding sequence through nonallelic homologous recombination, canceling out the genome size growth that TE expansion would otherwise have produced (Kapusta et al. 2017). Since then, hundreds of bird species have been sequenced, revealing variation in karyotypes and both intrachromosomal and interchromosomal rearrangements (Hooper and Price 2017; Damas et al. 2018; Feng et al. 2020; Kretschmer, Furo, et al. 2020; Kretschmer, Gunski, et al. 2020). This massive increase in genome sequencing has similarly revealed TEs to be highly active in various lineages of birds. Within the last 10 Myr, ERVs have expanded in multiple lineages of songbirds, with the newly inserted retrotransposons acting as a source of structural variation (Suh et al. 2018; Boman et al. 2019; Weissensteiner et al. 2020). Recent CR1 expansion events have been noted in woodpeckers and hornbills, leading to strikingly more repetitive genomes than the "typical" 7-10%. Between 23% and 30% of woodpecker, hornbill, and hoopoe genomes are CR1s; however, their genome assembly sizes remain similar to those of other birds (Zhang et al. 2014; Manthey et al. 2018; Feng et al. 2020). Although the aforementioned research focusing on the chicken suggested CR1s have not recently been active in birds, research focusing on individual avian lineages has used both recent and ancient expansions of CR1 elements to resolve deep nodes in a wide range of orders, including early bird phylogeny (Suh et al. 2011; Matzke et al. 2012), flamingos and grebes (Suh et al. 2012), landfowl (Kaiser et al. 2007; Kriegs et al. 2007), waterfowl (St John et al. 2005), penguins (Watanabe et al. 2006), ratites (Haddrath and Baker 2012; Baker et al. 2014; Cloutier et al. 2019), and perching birds (Treplin and Tiedemann 2007; Suh et al. 2017). These studies largely exclude terminal branches and, with the exception of a handful of CR1s in grebes (Suh et al. 2012) and geese (St John et al. 2005), the timing of very recent insertions across multiple species remains unaddressed. An understanding of TE expansion and evolution is important, as TEs generate genetic novelty by promoting recombination that leads to gene duplication and deletion, reshuffling of genes, and major structural changes such as inversions and chromosomal translocations (Lim and Simmons 1994; Bailey et al. 2003; Zhou and Mishra 2005; Lee et al. 2008; Chuong et al. 2017; Underwood and Choi 2019).
TEs also have the potential for exaptation as regulatory elements and both coding and noncoding sequences (Warren et al. 2015; Wang et al. 2017; Barth et al. 2020; Cosby et al. 2021). Ab initio annotation of repeats is necessary to gain a true understanding of genomic repetitive content, especially in nonmodel species (Platt et al. 2016). Unfortunately, many papers describing avian genomes (Cornetti et al. 2015; Laine et al. 2016; Jaiswal et al. 2018) only carry out homology-based repeat annotation using the Repbase (Bao et al. 2015) library, compiled from often distantly related model avian genomes (mainly chicken and zebra finch). This lack of ab initio annotation can lead to the erroneous conclusion that TEs are inactive in newly sequenced species (Platt et al. 2016). Expectations of low repeat expansion in birds inferred from two model species, along with a lack of comparative TE analysis between lineages, constitute the knowledge gap we addressed here. As CR1s are the dominant TE lineage in birds and present in all birds (Feng et al. 2020), unlike, for example, CR1-mobilized SINEs, which exist in only some birds (Ottenburghs et al. 2021), we carried out comparative genomic analyses to investigate their diversity and temporal patterns of activity. Identifying Potential CR1 Expansion across Birds From all publicly available avian genomes, we selected 117 representative assemblies not under embargo and with a scaffold N50 above 20,000 bp (available as of July 2019) for analysis (supplementary table 1, Supplementary Material online). To find all CR1s that may have recently expanded in the 117 genomes, we first used the CARP ab initio TE annotation tool. From the output of CARP, we manually identified and curated CR1s with the potential for recent expansion based on the presence of protein domains necessary for retrotransposition, homology to previously described CR1s, and the presence of a distinctive 3′ structure. To retrotranspose and hence expand, CR1s require endonuclease (EN) and reverse transcriptase (RT) domains within a single ORF, and a 3′ structure containing a hairpin and microsatellite which potentially acts as a recognition site for the RT (Suh et al. 2014; Suh 2015). If a CR1 identified from homology contained both protein domains and the distinctive 3′ structure, we classified it as a "full-length" CR1. We next classified a full-length CR1 as an "intact" CR1 if the EN and RT were within a single intact ORF. Using the full-length CR1s and previously described avian and crocodilian CR1s in Repbase as queries (International Chicken Genome Sequencing Consortium 2004; Warren et al. 2010; Green et al. 2014), we performed iterative searches of the 117 genomes to identify divergent, low copy number CR1s which may not have been identified by ab initio annotation. We ensured the protein domains and 3′ structures were present throughout the iterative searches. Assemblies with lower scaffold N50s generally contained fewer full-length CR1s, and none in the lowest quartile contained intact CR1s (fig. 1). Outside of the lowest quartile, assembly quality appeared to have little impact on the proportion of intact, full-length repeats. The correlation of low assembly quality with little to no full-length CR1s was seen both across all species and within orders. Our iterative search identified high numbers of intact CR1s in kiwis, parrots, owls, shorebirds, and waterfowl (figs. 1 and 2).
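Because scaffold N50 is used throughout as the assembly-quality metric (both for the 20,000 bp selection threshold and the quartile comparisons in fig. 1), here is a minimal sketch of its definition, with invented scaffold lengths:

    def n50(scaffold_lengths):
        # Length L such that scaffolds of length >= L hold >= half the assembly
        total = sum(scaffold_lengths)
        running = 0
        for length in sorted(scaffold_lengths, reverse=True):
            running += length
            if 2 * running >= total:
                return length

    print(n50([5_000_000, 3_000_000, 1_000_000, 500_000, 200_000]))  # 5000000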
Only two of the 22 perching bird (Passeriformes) genomes contained intact CR1s, and all contained ten or fewer full-length CR1s. Similarly, of the seven landfowl (Galliformes) genomes, only the chicken contained intact CR1s, and it contained fewer than 20 full-length CR1s. High numbers of full-length and intact repeats were also identified in two woodpeckers, Anna's hummingbird, the chimney swift, and the hoatzin; however, due to a lack of other genome sequences from their respective orders, we were unable to perform further comparative within-order analyses of these species to look for recent TE expansion, that is, within the last 10 Myr. Of all the lineages we examined, only four have high-quality assemblies of genera which have diverged within the last 10 Myr and, based on the number of full-length CR1s identified, the potential for very recent CR1 expansion: ducks (Anas), geese (Anser), Amazon parrots (Amazona), and kiwis (Apteryx) (Mitchell et al. 2014; Silva et al. 2017; Sun et al. 2017). A large number of full-length repeats were also identified in owls; however, we were unable to examine recent expansion in Strigiformes in detail due to the lack of a dated phylogeny. In addition to our genus-scale analyses, we also examined CR1 expansion in parrots (Psittaciformes) overall, perching birds (Passeriformes) and shorebirds (Charadriiformes) since the divergence of each group, and compared the expansion in kiwis and their closest living relatives (Casuariiformes). Order-Specific CR1 Annotations and a Phylogeny of Avian CR1s Reveal Diversity of Candidate Active CR1s in Neognaths In order to perform comparative analyses of activity within orders, we created order-specific CR1 libraries. Instead of consensus sequences, all full-length CR1s identified within an order were clustered and the centroids of the clusters were used as cluster representatives for that avian order. To classify the order-specific centroids, we constructed a CR1 phylogeny from the centroids and full-length avian and crocodilian CR1s from Repbase (fig. 3 and supplementary fig. 1 and data 2, Supplementary Material online). From this tree, we partitioned CR1s into families to determine if groups of elements have been active in species concurrently. We partitioned the tree by eye, based on the phylogenetic position of previously described CR1 families (Vandergon and Reitman 1994; Wicker et al. 2005; Warren et al. 2010; Bao et al. 2015) and long branch lengths rather than a cutoff for divergence, attempting to find the largest monophyletic groups containing as few previously defined CR1 families as possible. We took this "lumping" approach to our classification to avoid paraphyly and excessive splitting, resulting in some previously defined families being grouped together in one family (supplementary table 2, Supplementary Material online). For example, all full-length CR1s identified in songbirds were highly similar to the previously described CR1-K and CR1-L families and were nested deeply within the larger CR1-J family. As a result, CR1-K, CR1-L, and all full-length songbird CR1s were reclassified as subfamilies of the larger CR1-J family. Based on the position of high-confidence nodes with long branch lengths and previously described CR1s in the phylogeny, we defined seven families of avian CR1s, with a new family, CR1-W, which was restricted to shorebirds. Interestingly, the 3′ microsatellite of the CR1-W family is a 10-mer rather than the octamer found in nearly all amniote CR1s (Suh 2015).
With the exception of Palaeognathae (ratites and tinamous), all avian orders that contained large numbers of full-length CR1s also contained full-length CR1s from multiple CR1 families (fig. 3). Variable Timing of Expansion Events across Avian Orders We used the aforementioned order-specific centroid CR1s and avian and crocodilian Repbase sequences to create order-specific libraries. We used these in reciprocal searches to identify and classify 3′-anchored CR1s (3′ ends with homology to both the hairpin sequences and microsatellites) present within all orders in which we had identified full-length repeats. We used all 3′-anchored CR1s identified above (both full-length and truncated) and constructed divergence plots to gain a basic understanding of CR1 expansions within each genome (supplementary data 3 and 4, Supplementary Material online). At high Jukes-Cantor distances, divergence profiles in each order show little difference between species. However, at lower Jukes-Cantor distances, profiles differ significantly between species in some orders. For example, in songbirds at Jukes-Cantor distances higher than 0.1, the overall shape of the divergence plot curves and the proportions of the various CR1 families are nearly identical, whereas at distances lower than 0.1, higher numbers of the CR1-J family are present in Sporophila hypoxantha and T. guttata than in the three other species (supplementary fig. 2a, Supplementary Material online). CR1s most similar to all defined families were present in all orders of Galloanserae and Neoaves examined, with the exception of CR1-W, which was restricted to Charadriiformes. Almost all CR1s identified in Palaeognathae genomes were most similar to CR1-Y, with a small number of truncated and divergent repeats most similar to crocodilian CR1s (supplementary data 3, Supplementary Material online). Divergence plots may not accurately indicate the timing of repeat insertions, as they assume uniform substitution rates across the noncoding portion of the genome. High divergence could be a consequence of either full-length CR1s being absent in a genome or the centroid identified by the clustering algorithm being distant from the CR1s present in a genome. To better determine when CR1 families expanded in avian genomes, we first identified regions orthologous to CR1 insertions sized 100-600 bp in related species (see Materials and Methods). We compared these orthologous regions and approximated the timing of insertion based on the presence or absence of the CR1 insertion in the other species. In most orders, only long-term trends could be estimated, due to long branch lengths (cf. fig. 2) and the highly variable quality of genome assemblies (cf. fig. 1). Therefore, we focused our presence/absence analyses on reconstructing the timing of CR1 insertions in parrots, waterfowl, perching birds, and kiwis (fig. 4). We also applied the method to owls (supplementary fig. 3, Supplementary Material online) and shorebirds (fig. 5); however, due to the lack of order-specific fossil-calibrated phylogenies of owls and the long branch lengths of shorebirds, we could not determine how recent the CR1 expansions were.
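The divergence plots above are built on Jukes-Cantor distances; the correction from an observed per-site mismatch proportion p is d = -(3/4) ln(1 - 4p/3). A minimal sketch:

    import math

    def jukes_cantor(p):
        # JC69 distance; undefined as p approaches the saturation limit of 0.75
        if p >= 0.75:
            raise ValueError("JC69 distance undefined for p >= 0.75")
        return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

    print(round(jukes_cantor(0.02), 4))  # recent copy: 0.0203
    print(round(jukes_cantor(0.20), 4))  # older copy: 0.2326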
FIG. 1.-The impact of genome assembly quality on the identification of full-length and intact CR1s. CR1s containing both endonuclease and reverse transcriptase domains were considered full-length, and those containing both domains within a single ORF were considered intact. Both across all orders and within individual orders, genomes with higher scaffold N50 values (quartiles 2 through 4) had higher numbers of full-length CR1s. FIG. 2.-The number of full-length CR1s varies significantly across the diversity of birds sampled. Minimum, maximum, and mean number of full-length CR1 copies identified in each order of birds, and the number of species surveyed in each order. The largest differences are noticeable between sister clades such as parrots (Psittaciformes) and perching birds (Passeriformes), and landfowl (Galliformes) and waterfowl (Anseriformes). The double helix represents a putative hard polytomy at the root of Neoaves (Suh 2016). Orders in bold contain at least one intact and potentially active CR1 copy, and those highlighted are the orders examined in detail. For coordinates of full-length CR1s within genomes, see supplementary data 1, Supplementary Material online. Tree adapted from Mitchell et al. (2014) and Suh (2016). In analyzing the repeat expansion in the kiwi genomes, we used the closest living relatives, the cassowary and emu (Casuariiformes), as outgroups. Following the divergence of kiwis from Casuariiformes, CR1-Y elements expanded, both before and during the recent speciation of kiwis over the last few Myr. In contrast, there was little CR1 expansion in Casuariiformes, both following their divergence from kiwis and more recently since the emu and cassowary diverged approximately 28 Ma, with only one insertion found in the emu and three in the cassowary since then (supplementary table 3, Supplementary Material online). In the waterfowl species examined, both the CR1-J and CR1-X families expanded greatly in both ducks and geese during the last 2 Myr. Expansion occurred in both examined genera, with greater expansions in the ducks (Anas) than the geese (Anser). Other CR1 families appear to have been active following the two groups' divergence approximately 30 Ma, but have not been active since each genus speciated. Due to the high number of genomes available for passerines, we chose the best-quality representative genomes from major groups sensu Oliveros et al. (2019): New Zealand wrens (Acanthisitta chloris), Suboscines (Manacus vitellinus), Corvides (Corvus brachyrhynchos), Muscicapida (Sturnus vulgaris), Sylvida (Phylloscopus trochilus and Zosterops lateralis), and Passerida (T. guttata, S. hypoxantha, and Zonotrichia albicollis). Between the divergence of Oscines (songbirds) and Suboscines from New Zealand wrens and the divergence of Oscines, there was a large spike in expansion of multiple families of CR1s, predominantly CR1-X. Since their divergence 30 Ma, only CR1-J has remained active in oscines, though the degree of expansion varied between groups. Of all avian orders examined, we found the highest levels of CR1 expansion in parrots. Because most branch lengths on the species tree were long, the timing of recent expansions could only be reconstructed in the genus Amazona. The species of Amazona diverged 5 Ma and seem to vary significantly in their level of CR1 expansion. However, genome assembly quality might be a confounder, as the number of insertions identified in a species of Amazona was highest in the best-quality assembly. Multiple expansions of multiple families of CR1s have occurred in the two shorebird lineages examined: plovers (Charadriidae) and sandpipers (Scolopacidae) (fig. 5).
The diversity of CR1 families that remained active through time was higher than in the other orders investigated, particularly in sandpipers, with four CR1 families showing significant expansion in Calidris pugnax and five in Calidris pygmaea since their divergence. In all other orders examined in detail, CR1 expansions over similar time periods have been dominated by only one or two families, with insertions of fewer than ten CR1s from nondominant families (supplementary table 2, Supplementary Material online). Unfortunately, due to long branch lengths, more precise timing of these expansions is not possible. Finally, CR1s continuously expanded in true owls since their divergence from barn owls, with almost all resolved insertions being CR1-E-like (supplementary fig. 3, Supplementary Material online). However, due to the lack of a genus-level timed phylogeny, the precise timing of these expansions cannot be determined. Combined, our CR1 presence/absence analyses demonstrate that the various CR1 families have expanded at different rates both within and across avian orders. These differences are considerable, ranging from an apparent absence of CR1 expansion in the emu and cassowary, to slow, continued expansion of a single CR1 family in songbirds, to recent rapid expansions of one or two CR1 families in kiwis, Amazon parrots, and waterfowl, as well as a wide variety of CR1 families expanding concurrently in sandpipers. To further examine the relative timing of the expansion of the various CR1 families in relation to each other, we performed transposition in transposition (TinT) analysis in the species we analyzed in detail above (supplementary data 5, Supplementary Material online). The TinT analysis largely confirmed the relative ages of insertions and activity profiles from the divergence and presence/absence analyses. Genome Assembly Quality Impacts Repeat Identification The quality of a genome assembly has a large impact on the number of CR1s identified within it, both full-length and 5′-truncated. This is made clear when comparing the number of insertions identified within species in recently diverged genera. The three Amazona parrot species diverged approximately 2 Ma (Silva et al. 2017), and the scaffold N50s of A. vittata, A. aestiva, and A. collaria are 0.18, 1.3, and 13 Mb, respectively. No full-length CR1s were identified in A. vittata, and only ten in A. aestiva, whereas 1,125 were identified in A. collaria. Similarly, in Amazona the total number of truncated insertions identified increased significantly with higher scaffold N50s. In contrast, the three species of kiwi compared diverged approximately 7 Ma and have similar N50s (between 1.3 and 1.7 Mb). This pattern of higher-quality genome assemblies leading to higher numbers of both full-length and intact CR1s being identified is consistent across most orders examined, and is particularly true of the lowest N50 quartile (fig. 1). The lower number of repeats identified in lower-quality assemblies is likely due to the sequencing technology used. Repeats are notoriously hard to assemble and are often collapsed, particularly when using short-read Illumina sequencing, leading to fragmented assemblies (Alkan et al. 2011; Treangen and Salzberg 2011). The majority of the genomes we have used are of this data type. The recent sequencing of avian genomes using multiplatform approaches has resolved gaps present in short-read assemblies, finding these gaps to be rich in interspersed, simple, and tandem repeats (Li et al. 2021; Peona et al. 2021).
Of particular note, Li et al. (2021) used long-read sequencing to resolve gaps in the assembly of Anas platyrhynchos, which we analyzed here, and found the gaps to be dominated by the two CR1 families that have recently expanded in waterfowl (Anseriformes): CR1-J and CR1-X. Species with low-quality assemblies may have full-length repeats present in their genome, yet the sequencing technology used prevents the assembly, and hence detection, of the repeats. Thus, CR1 activity may be even more widespread in birds than we estimate here. The Origin and Evolution of Avian CR1s Avian CR1s are monophyletic with respect to the other major CR1 lineages found in amniotes (Suh et al. 2014). For comparison, crocodilians contain some CR1 families more similar to those found in testudines and squamates than to other crocodilian families. By searching for truncated copies of previously described CR1s in addition to our order-specific CR1s, we were able to uncover how CR1s have evolved in avian genomes as birds have diverged. CR1-Y is the only family with full-length CR1s present in Palaeognathae, Galloanserae, and Neoaves. The omnipresence of CR1-Y indicates it was present in the ancestor of all birds. A small number of highly divergent truncated copies of CR1s most similar to CR1-Z are found in ratites, and to CR1-J in tinamous (supplementary fig. 2b, Supplementary Material online). This is potentially indicative of an ancestral presence of CR1-J and CR1-Z in the common ancestor of all birds, or of misclassification owing to the high divergence of these CR1 fragments. As mentioned above, we took a lumping approach to CR1 classification to avoid paraphyly, thereby collapsing highly similar families elsewhere considered as separate families. As CR1-C, CR1-E, and CR1-X are present in both Galloanserae and Neoaves but absent from Palaeognathae, we conclude these three families likely originated following the divergence of neognaths from paleognaths, but prior to the divergence of Neoaves and Galloanserae. In addition to having a 10-bp microsatellite instead of the typical 8-bp microsatellite, CR1-W is peculiar as it is unique to Charadriiformes but sister to CR1-J and CR1-X (fig. 3). This implies an origin in the neognath ancestor, followed by retention and activity in measurable numbers only in Charadriiformes. A wide variety of CR1 families has expanded in all orders of neognaths, with many potential expansion events within the past 10 Myr present in many lineages. As mentioned in the results, it is not possible to conclude that insertions are ancient based on divergence plots alone. Some species with low-quality genome assemblies, such as A. vittata, contained very few full-length repeats compared with relatives (supplementary fig. 4, Supplementary Material online). As a result of full-length repeats not being assembled, the divergence of most or all truncated insertions identified in A. vittata would likely be calculated using CR1 centroids identified in A. collaria, leading to higher divergence values than those calculated for A. collaria, and in turn an incorrect assumption of less recent expansion in A. vittata than in A. collaria. In addition to fewer full-length repeats being assembled, fewer truncated repeats also appear to have been assembled in poorer-quality genomes. CR1 Family Expansions within Orders Across all sampled neognaths, recent expansions appear to be largely restricted to one or two families of CR1.
Our presence/absence analyses found this to be the case in waterfowl, parrots, songbirds, and owls, with shorebirds and the early passerine divergences the only exceptions. Similarly, based on the phylogeny of full-length elements, most orders retain full-length CR1s from only two or three families, whereas shorebirds retain full-length CR1s from across all seven families. Our presence/absence analysis revealed likely concurrent expansions of at least four CR1 families in two families of shorebirds: sandpipers of the genus Calidris and plovers of the genus Charadrius. In both genera, four families of CR1s have significantly expanded since their divergence, including the order-specific CR1-W (fig. 5). Although in both genera one family accounts for 40-50% of insertions, the other three families have hundreds of insertions each. This is markedly different from the pattern seen in songbirds and waterfowl which, over a similar time period, have single-digit numbers of insertions from nondominant CR1 families (supplementary table 3, Supplementary Material online). This increase of CR1 diversity in shorebirds could be due to some CR1 families in shorebirds having 3′ inverted repeat and microsatellite motifs which differ from the typical structure (Suh 2015) (supplementary fig., Supplementary Material online). For example, the CR1-W family has an extended 10-bp microsatellite (5′-AAATTCYGTG-3′) rather than the 8-bp microsatellite (5′-ATTCTRTG-3′) seen in nearly all other avian CR1s. When transcribed, the 3′ structure upstream of the microsatellite is hypothesized to form a stable hairpin which acts as a recognition site for the cis-encoded RT (Luan et al. 1993; Suh 2015; Suh et al. 2017). The recently active CR1s we identified in other avian orders have 3′ microsatellites and hairpins which closely resemble those previously described. Although the changes seen in shorebirds are minor, we speculate they could impact CR1 mobilization, allowing more families to remain active than the typical one or two. Rates of CR1 Expansion Can Vary Significantly within Orders Based on the presence/absence of CR1 insertions, divergence plots, and TinT analysis, rates of CR1 expansion within lineages appear to vary even across rather short evolutionary timescales. The expansion of CR1-Y in kiwis appears to be a recent large burst of expansion and accumulation, whereas CR1-J appears to have continued to expand slowly in all families of Passeriformes since their divergence; however, the number of new insertions seen in the American crow is much lower than that seen in the other oscine songbird species surveyed. The expansion of CR1-Y seen in the Psittacula-Melopsittacus lineage of parrots, following their divergence from the lineage leading to Amazona, appears to result from an increase in expansion rate, with little expansion in the period prior to divergence and none observed in other lineages of parrots. CR1s appear to have been highly active in all parrots examined since their divergence; however, due to the less dense sampling, it is not clear whether this has been continuous expansion, as in songbirds, or a burst of activity like that in kiwis. Finally, in sandpipers, CR1s have continued to expand in both species of Calidris since their divergence; however, the much lower number of new insertions in C. pygmaea suggests the rate of expansion differs significantly between the two species.
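Returning to the 3′ motifs quoted above, matching them in sequence requires expanding the IUPAC ambiguity codes (R = A/G, Y = C/T). A small illustrative sketch, with an invented 3′ tail:

    import re

    IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "[AG]", "Y": "[CT]"}

    def motif_to_regex(motif):
        return re.compile("".join(IUPAC[base] for base in motif))

    cr1_w = motif_to_regex("AAATTCYGTG")   # shorebird CR1-W 10-mer
    typical = motif_to_regex("ATTCTRTG")   # typical avian CR1 octamer

    tail = "GGAAATTCTGTGAAATTCTGTG"  # toy tandem-repeat 3' tail
    print(len(cr1_w.findall(tail)), len(typical.findall(tail)))  # 2 2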
All full-length CR1s identified in ratites were CR1-Y, and almost all truncated copies found in ratites were most similar to either CR1-Y or crocodilian CR1s typically not found in birds (Suh et al. 2014). This retention of ancient CR1s and the presence of full-length CR1s in species such as the southern cassowary (Casuarius casuarius) and emu (Dromaius novaehollandiae), yet without recent expansion, reflects the much lower substitution and deletion rates in ratites compared with Neoaves (Zhang et al. 2014; Kapusta et al. 2017). These crocodilian-like CR1s in ratites may be truncated copies of CR1s that were active in the common ancestor of crocodilians and birds (Suh et al. 2014), whereas we hypothesize that these have long since disappeared in Neoaves due to their higher deletion and substitution rates (Zhang et al. 2014; Kapusta et al. 2017). Co-Occurrence of CR1 Expansion with Speciation The recent CR1 expansions in the four genera we examined co-occur with rapid speciation events. Of particular note, kiwis rapidly speciated into five distinct species composed of at least 16 distinct lineages, arising due to significant population bottlenecks caused by Pleistocene glacial expansions (Weir et al. 2016). We speculate that the smaller population sizes might have allowed CR1s to expand as a result of increased genetic drift (Szitenberg et al. 2016). This reflects previous findings of rapid fixation of TEs following population bottlenecks in birds (Matzke et al. 2012). Although we do not see CR1 expansion occurring alongside speciation in passerines, ERVs, which are rare in other birds, have expanded throughout their diversification (Warren et al. 2010; Boman et al. 2019). Investigating the potentially ongoing expansion of CR1s and its relationship to speciation in ducks, geese, and Amazon parrots will require a larger number of genomes from within the same and sister genera to be sequenced, especially in waterfowl, due to the high rates of hybridization even between long-diverged species (Ottenburghs et al. 2015). Comparison to Mammals As mentioned in the introduction, many parallels have been drawn between LINEs in birds and mammals, most notably the expansion of LINEs in both clades being balanced by loss through purifying selection (Kapusta et al. 2017). Here, we have found additional trends in birds previously noted in mammals. The TE expansion during periods of speciation seen in Amazona, Apteryx, and Anas has previously been observed across mammals (Ricci et al. 2018). Similarly, the dominance of one or two CR1 families seen in most orders of birds resembles the activity of L1s in mammals (Ivancevic et al. 2016); however, the general persistence of activity of individual CR1 families seems to be more diverse (Kriegs et al. 2007; Suh et al. 2011). Conclusion: The Avian Genome Is More Dynamic Than Meets the Eye Although early comparisons of avian genomes were restricted to the chicken and zebra finch, where high-level comparisons of synteny and karyotype led to the conclusion that bird genomes were largely stable compared with mammals (Ellegren 2010), the discovery of many intrachromosomal rearrangements across birds (Skinner and Griffin 2012; Zhang et al. 2014; Farré et al. 2016; Hooper and Price 2017) and interchromosomal recombination in falcons, parrots, and sandpipers (O'Connor et al. 2018; Coelho et al. 2019; Pinheiro et al. 2021) has shown that, at a finer resolution of comparison, the avian genome is rather dynamic.
The highly variable rate of TE expansion we have observed across birds extends knowledge from avian orders with "unusual" repeat landscapes, that is, Piciformes (Manthey et al. 2018) and Passeriformes (Warren et al. 2010), and provides further evidence that genome evolution differs significantly between bird orders, and between species within orders, even though synteny is often conserved. In our comprehensive characterization of CR1 diversity across 117 bird genome assemblies, we have identified significant variation in CR1 expansion rates, both within genera such as Calidris and between closely related orders such as the kiwis and the cassowary and emu. As the diversity and quality of avian genomes sequenced continues to grow and whole-genome alignment methods improve (Feng et al. 2020; Rhie et al. 2020), further analysis of genome stability based on repeat expansions at the family and genus level will become possible. Although the chicken and zebra finch are useful model species, models do not necessarily represent the diversity of evolutionary trajectories in nature. Our results indicate that recurrent, similar patterns of TE family expansion are seen across amniotes, and suggest that mechanisms of TE-driven genome evolution can be generalized across tetrapods. Identification and Curation of Potentially Divergent CR1s To identify potentially divergent CR1s, we processed 117 bird genomes downloaded from GenBank (Benson et al. 2015) with CARP (Zeng et al. 2018); see supplementary table, Supplementary Material online, for species names and assembly versions. We used RPSTBlastN (Altschul et al. 1997) with the CDD library (Marchler-Bauer et al. 2017) to identify protein domains present in the consensus sequences from CARP. Consensuses which contained both an EN and an RT domain were classified as potential CR1s. Using CENSOR (Kohany et al. 2006), we confirmed these sequences to be CR1s, removing as necessary any that were more similar to other families of LINEs, such as AviRTEs. Confirmed CR1 CARP consensus sequences were manually curated through a "search, extend, align, trim" method as described in Galbraith et al. (2020) to ensure that the 3' hairpin and microsatellite were intact. Briefly, this curation method involves searching for sequences highly similar to the consensus with BlastN 2.7.1+ (Zhang et al. 2000), extending the coordinates of the sequences found by flanks of 600 bp, aligning these sequences using MAFFT v7.453 (Katoh and Standley 2013), and trimming the discordant regions manually in Geneious Prime v2020.1. The final consensus sequences were generated in Geneious Prime from the trimmed multiple sequence alignments by majority rule. Identification of More Divergent and Low Copy CR1s To identify more divergent or low copy number CR1s which CARP may have failed to identify, we performed an iterative search of all 117 genomes. Beginning with a library of all avian CR1s in Repbase (Bao et al. 2015) (see supplementary table 2, Supplementary Material online, for CR1 names and species names) and the manually curated CARP sequences, we searched the genomes using BlastN (-task dc-megablast -max_target_seqs <number of scaffolds in respective genome>), selecting hits over 2,700 bp that retained the 3' hairpin and microsatellite sequences. Using RPSTBlastN, we then identified the full-length CR1s (those containing both EN and RT domains) and combined them with the previously generated consensus sequences. We clustered these combined sequences using VSEARCH 2.7.1 (Rognes et al. 2016)
(-cluster_fast -id 0.9) and combined the cluster centroids with the Repbase CR1s to use as queries for the subsequent search iteration. This process was repeated until the number of CR1s identified did not increase compared with the previous round. From the output of the final round, order-specific clusters of CR1s were constructed and cluster centroids identified. Tree Construction To construct a tree of CR1s, the centroids of all order-specific CR1s were combined with all full-length avian and two crocodilian CR1s from Repbase and globally aligned using MAFFT (-thread 12 -localpair). We used FastTree 2.1.11 with default nucleotide parameters (Price et al. 2010) to infer a maximum likelihood phylogenetic tree from this alignment, and rooted the tree using the crocodilian CR1s. The crocodilian CR1s were used as an outgroup because all avian CR1s are nested within crocodilian CR1s. This tree was split into different families of CR1 by eye, based on the presence of long branches from high-confidence nodes and the position of the previously described CR1 families from Repbase. To avoid excessive splitting and paraphyly of previously described families, a lumping approach was taken, resulting in some previously distinct families of CR1 from Repbase being treated as members of the families they were nested within (supplementary table 3, Supplementary Material online). Identification and Classification of CR1s within Species To identify, classify, and quantify the divergence of all 3'-anchored CR1s present within species, order-specific libraries were constructed from the order-specific clusters and the full-length avian and crocodilian Repbase CR1s. 3'-anchored CR1s were defined as CR1s retaining the 3' hairpin and microsatellite sequences. Using these libraries as queries, we identified the 3'-anchored CR1s present in assemblies using BlastN. The identified CR1s were then classified using a reciprocal BlastN search against the original query library. Determination of Presence/Absence in Related Species To reconstruct the timing of CR1 expansions, we selected the identified 3'-anchored CR1 copies between 100 and 600 bp in length in a species of interest and at least 600 bp from the end of a contig, extended the coordinates of the sequences by 600 bp to include the flanking region, and extracted the corresponding sequences. If the flanking regions contained more than 25% unresolved nucleotides ("N" nucleotides) they were discarded. Using BlastN, we identified homologous regions in species belonging to the same order as the species being analyzed, and through the following process of elimination identified the regions orthologous to CR1 insertions and their flanks in the related species. At each step of this process of elimination, if an initial query could not be satisfactorily resolved, we classified it as unscorable (unresolved) to reduce the chance of falsely classifying deletions or segmental duplications as new insertion events. First, we classified all hits containing the entire repeat and at least 150 bp of each flank as shared orthologous insertions. Following this, we discarded all hits with outer coordinates less than a set distance (150 bp) from the boundary of the flanks and CR1s, to remove hits to paralogous CR1 insertions.
This distance was chosen by testing the effect of a range of distances from 300 bp down to 50 bp, in increments of 50 bp, on a random selection of CR1s first identified in Anser cygnoides and Corvus brachyrhynchos and searched for in other species within the same order. Requiring outer coordinates to extend further resulted in higher numbers of orthologous regions not being resolved, likely due to insertions or deletions within flanks since divergence. Allowing for boundaries of 50 or 100 bp resulted in many CR1s having multiple potential orthologous regions at 3' flanks, many of which were false hits showing homology only to the target site duplication and additional copies of the 3' microsatellite sequence. Thus, 150 bp was chosen, as it was the shortest distance at which a portion of the flanking sequence was always present. Based on the start and stop coordinates of the remaining hits, we determined the orientation of each hit and discarded any queries without two hits in the same orientation. In addition, any queries with more than one hit to either strand were discarded. From the remaining data, we determined the distance between the two flanks. If the two flanks were within 16 bp of each other in the sister species (whereas in the query species they are separated by approximately the length of the query CR1), the insertion was classified as having occurred since divergence. If the distance between the ends of the flanks was similar in both the original species and the sister species, the insertion was classified as shared. For a pictorial description of this process, including the parameters used, see supplementary figure 5, Supplementary Material online. This process was conducted for the other species in the same order as the original species. Finally, we determined the timing of each CR1 insertion event by reconciling the presence/absence of each CR1 insertion across sampled species with the most parsimonious placement on the species tree (supplementary fig. 6, Supplementary Material online). Further Estimating Recent Activity by Identifying Transpositions in Transpositions To further resolve the timing of the recent expansions of CR1 subfamilies in waterfowl, shorebirds, parrots, kiwis, cassowary, and emu, we performed "transposition in transposition" (TinT) analyses. We masked the relevant genomes using RepeatMasker (Smit 2004) with a library consisting of the centroids of the final output of the reciprocal search described above, combined with all avian and two crocodilian CR1s from Repbase. Using the TinT application (Churakov et al. 2010), we estimated the timing of each CR1 subfamily's expansion relative to other subfamilies in each genome (supplementary data 5, Supplementary Material online). Supplementary Material Supplementary data are available at Genome Biology and Evolution online.
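The process of elimination described above reduces to a small set of decision rules per locus. The following Python sketch illustrates them; the Hit record, the bundling of thresholds, and the tolerance applied to the shared-site test are simplifying assumptions for illustration rather than the study's actual code, which would additionally parse BlastN tabular output and handle minus-strand coordinate arithmetic.

from dataclasses import dataclass

@dataclass
class Hit:
    q_start: int  # alignment start on the query (CR1 plus 600 bp flanks)
    q_end: int    # alignment end on the query
    s_start: int  # alignment start on the related genome
    s_end: int    # alignment end on the related genome
    strand: int   # +1 or -1

FLANK = 600      # flank length added to each side of the CR1
BOUNDARY = 150   # minimum extension of a hit beyond the flank/CR1 boundary
ADJACENT = 16    # flanks this close together indicate an empty site

def score_insertion(hits: list, cr1_len: int) -> str:
    """Score one CR1 locus in a related species: shared, new, or unscorable."""
    left_end = FLANK               # query coordinate where the CR1 begins
    right_start = FLANK + cr1_len  # query coordinate where the CR1 ends

    # Rule 1: a single hit spanning the repeat plus >=150 bp of each flank
    # is a shared orthologous insertion.
    if any(h.q_start <= left_end - BOUNDARY and h.q_end >= right_start + BOUNDARY
           for h in hits):
        return "shared"

    # Rule 2: keep only flank hits extending >=150 bp beyond the flank/CR1
    # boundary; shorter hits are likely matches to paralogous CR1 copies.
    left = [h for h in hits
            if h.q_start <= left_end - BOUNDARY and h.q_end <= right_start]
    right = [h for h in hits
             if h.q_end >= right_start + BOUNDARY and h.q_start >= left_end]

    # Rule 3: require exactly one hit per flank, both in the same orientation.
    if len(left) != 1 or len(right) != 1 or left[0].strand != right[0].strand:
        return "unscorable"

    # Rule 4: compare the spacing of the two flanks in the related genome.
    gap = abs(right[0].s_start - left[0].s_end)
    if gap <= ADJACENT:                   # flanks abut: empty orthologous site
        return "inserted_since_divergence"
    if abs(gap - cr1_len) <= ADJACENT:    # CR1-sized gap (tolerance assumed)
        return "shared"
    return "unscorable"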
9,475.2
2021-04-14T00:00:00.000
[ "Biology" ]
Screening a supplier with fairness preference This paper studies how a firm screens an external supplier who has fairness preferences, using a general screening framework. The supplier with fairness preferences provides products or services to the firm, and the firm designs a contract to screen the supplier's preference type. The supplier's fairness preference is adjusted by the ability difference between types; this paper analyzes how that ability difference affects the optimal contract variables under the standard framework. The results illustrate that a larger ability difference narrows the output difference between the two supplier types. Interacting with the fairness preference, the probability distribution of types can increase or decrease the output difference. Furthermore, different strengths of the fairness preference can amplify or shrink the output difference between the two supplier types. INTRODUCTION A superior external supplier can help a company improve product quality and customer satisfaction, reduce service waiting times and the price of products or services, and enhance the company's core competitiveness [1]. Companies can also obtain information about related production processes from external suppliers to evaluate the efficiency of the company's organizational structure, thereby affecting the organizational structure of the company [2,3]. The selection of a suitable supplier has thus become a very important operational decision [4]. Supplier selection usually goes through the process of determining the criteria of supplier evaluation, selecting which suppliers enter the evaluation according to the criteria, conducting the evaluation, and finally selecting the most suitable supplier [5]. However, there exists information asymmetry between enterprises and potential suppliers: potential suppliers have a better understanding of their production costs, equipment capacity, workers' productivity, team cooperation, financial situation, and so on. Before the evaluation, the enterprise may use public information or request relevant data from potential suppliers, but these potential suppliers may not provide credible data to the public or to the firm that is searching for a supplier. If becoming a supplier is profitable, potential suppliers may have incentives to pretend to be a supplier that suits the needs of the firm. In this sense, the supplier selection problem can be considered a screening or adverse selection problem. In the classical screening problem, the agent has private information. Akerlof's analysis of the lemon market [6] shows that such information asymmetry leads to low efficiency and even the disappearance of the market. To achieve an efficient allocation of resources, the principal needs to design a contract that reveals the private information of the agent. At the same time, for the purpose of revelation, the principal needs to pay a certain information rent to the agent. The principal thus faces a tradeoff between rent extraction and efficiency [7,8]. The key assumption in the classical theory is that all participants in the contract care only about their own pecuniary outcomes. However, a large number of studies [9-11] have shown that decision-makers care not only about their own interests; they also take a fair view of distributional outcomes. Typically, behavioral economists define fairness as inequality aversion based on the distributional outcome [10,11].
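The inequality-aversion concept invoked here is usually written in the Fehr-Schmidt form. As a reference point (the notation below is ours, not this paper's), player i's utility from the monetary payoffs (x_i, x_j) in the two-player case is

U_i(x) = x_i - \alpha_i \max\{x_j - x_i, 0\} - \beta_i \max\{x_i - x_j, 0\}, \qquad 0 \le \beta_i < 1, \; \beta_i \le \alpha_i,

where \alpha_i weights disadvantageous inequality (earning less than the other player) and \beta_i weights advantageous inequality (earning more).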
Receiving a higher distributional outcome than others or a lower one can both lead to negative utility; these cases are called advantageous inequality aversion and disadvantageous inequality aversion, respectively. Another related stream of literature concerns social comparison. Formal study of the impact of social comparison and fairness concerns on labor relations dates back to the 1920s [12]. An individual will compare his wage to another's; if he feels that he has been treated unfairly, his productivity and cooperation may be negatively affected. For example, both [13] and [14] show that workers will reduce their effort provision if they perceive their wages as unfair. In the supply chain relationship, an unfair outcome may lead to coordination failure and efficiency loss in the supply chain [15,16]. [17] shows that when the agent has social preferences, the principal should consider these factors in the contract design; the same authors also demonstrate the feasibility and necessity of incorporating social preferences into contract design through a field experiment [18]. There is an abundant literature on fairness concerns and social comparison, but few papers take agents' ability differences into consideration. In practice, practitioners have already considered the link between the payment scheme and ability differences. For example, Skill-Based Pay is a widely used pay scheme in many big companies. This scheme rewards employees according to a monetary valuation of their skills, regardless of their current jobs or tasks. Such a system reduces the firm's hiring costs and brings flexibility and productivity to the firm [19,20]. In this paper, we build a model to capture how a firm screens external suppliers who have fairness preferences. In our model, the external supplier considers not only differences in revenues but also differences in the cost of products and services. Our results show that the ability-adjusted fairness preference would increase the output difference between the two supplier types, and the fairness concern can amplify or diminish the effect. Background We consider a firm that needs an external supplier to provide it with units of a product or service, from which the firm obtains value. We assume v'(q) > 0 and v''(q) < 0; that is, the marginal value of the products or services is positive and strictly decreasing in the quantity q purchased by the firm. The external supplier has a positive marginal cost, θ_L or θ_H, with θ_H > θ_L. The marginal cost is the external supplier's private information. The supplier receives a transfer t from the firm and provides the quantity q of products/services. Figure 1 shows the timing of the model: the supplier learns her own cost type; the firm then designs a take-it-or-leave-it contract; the supplier accepts or rejects the contract; and the supplier honors the contract upon acceptance. Note that the information asymmetry about the marginal cost exists before the contract is designed. Social comparison and fairness preferences Once the supplier knows her own ability type, she will compare her ability to the other type's. The traditional fair-wage hypothesis supposes only that agents compare their own payoffs to others', ignoring their ability differences. In this paper, we propose that the agents compare not only their transfers but their ability difference as well.
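For reference, the textbook adverse-selection benchmark that this setup extends can be sketched as follows; the notation (marginal costs \theta_L < \theta_H, probability \nu of the high-cost type, value v(q), transfers t_i, and \Delta\theta \equiv \theta_H - \theta_L) is our assumption for illustration. The firm solves

\max_{(q_L,t_L),(q_H,t_H)} \; \nu\,[v(q_H) - t_H] + (1-\nu)\,[v(q_L) - t_L]

subject to participation and incentive constraints

t_i - \theta_i q_i \ge 0 \quad (\mathrm{IR}_i), \qquad t_i - \theta_i q_i \ge t_j - \theta_i q_j \quad (\mathrm{IC}_i), \qquad i, j \in \{L, H\}.

With IR_H and IC_L binding, the second-best outputs absent any fairness term satisfy

v'(q_L^{*}) = \theta_L, \qquad v'(q_H^{*}) = \theta_H + \frac{1-\nu}{\nu}\,\Delta\theta,

so only the high-cost type's output is distorted downward and the low-cost type earns the information rent \Delta\theta\, q_H^{*}. The model below modifies this program by adding an ability-adjusted comparison term to the supplier's utility.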
We define Δθ = θ_H - θ_L and model the supplier's utility function in a reference-dependent way in equation (1). The utility function captures the idea that the supplier receives additional (dis)utility from inequality of transfers, and that the effect of inequality is scaled by the ability difference. Screening model We assume that both the firm and the supplier are risk-neutral. The firm only knows that the supplier has the higher cost θ_H with probability ν and the lower cost θ_L with probability 1 - ν, where ν ∈ (0, 1). The firm receives value v(q) from the supplier's products or services, where v'(q) > 0 and v''(q) < 0. The firm pays t_H and t_L for the supplier's production q_H and q_L, and thus the firm's expected utility function is given in (2). To screen the supplier, the principal has to design a contract which satisfies the participation and incentive constraints in (3). Defining the suppliers' rents, these constraints can be rewritten as in (4). Note that even if we restrict the higher-cost supplier's utility to zero, the lower-cost supplier still receives positive utility Δθ·q_H, which represents the lower-cost agent's information rent. Adding ICL and ICH yields q_L ≥ q_H, so the monotonicity constraint is satisfied. Furthermore, it is easy to verify that ICH is a slack constraint. The firm's optimization problem is (2) subject to (3). We solve it using the standard technique, restricting the higher-cost type to zero utility and relaxing ICH (the slack constraint). We solve for the transfers from the binding constraints and, using (6) to replace them in (5), the firm's optimization problem becomes an unconstrained choice of q_L and q_H. Solving this optimization, the optimal output satisfies the first-order conditions in (8). COMPARATIVE STATICS When we set the fairness parameter β = 0, the optimal solution reduces to v'(q_L*) = θ_L and v'(q_H*) = θ_H + ((1-ν)/ν)Δθ. There is no output distortion for the efficient supplier, but the higher-cost supplier has a downward output distortion. When there is no social comparison, the ability difference only restricts the higher-cost supplier's output, and a higher Δθ leads to a lower q_H, that is, a bigger gap between the two output levels (because q_L is fixed). Furthermore, if only one supplier type exists in the market (ν = 0 or ν = 1, the complete-information situation), the optimal output satisfies v'(q_L) = θ_L or v'(q_H) = θ_H, and the solution degenerates into the standard complete-information form. When there is social comparison (β > 0), the output under incomplete information has the following property: relative to the β = 0 benchmark, we have a lower q_L and a higher q_H. Combining the above two conditions, the proposition is proved. Proposition 1 means that if the suppliers care about the transfer difference associated with the ability difference, the higher-cost supplier will raise his output to a higher level, and the lower-cost supplier will adjust his output downward. (For example, the L-type supplier receives additional positive utility, so she can reduce her effort cost by reducing quantity q_L; the H-type agent receives disutility from social comparison, so she has to increase her output q_H.) Proposition 2: A lower probability ν leads to a higher output difference q_L - q_H, and a higher probability ν leads to a lower output difference q_L - q_H. Proof: The supplier's output level depends on the probability ν. From equation (8), with a lower probability ν we have a lower q_H and a higher q_L, so the output difference is higher; similarly, a higher probability ν leads to a higher q_H and a lower q_L, and hence a lower output difference q_L - q_H. For a better understanding of Proposition 2, let us look at an extreme case.
A lower probability (e.g., ν → 0) means that only the lower-cost supplier effectively exists on the market, and the solution approaches the complete-information output for the low-cost type; similarly, if the probability is very high (e.g., ν → 1), the solution approaches the complete-information output for the high-cost type. Recalling the complete-information benchmark, we can conclude that the probability affects the output difference in this way. Proposition 3: (a) If ν < 1/2 and β > 0, social comparison increases (decreases) the lower (higher) cost agent's output, and the output difference will increase; (b) If ν > 1/2 and β > 0, social comparison decreases (increases) the lower (higher) cost agent's output, and narrows the output difference. (c) If ν = 1/2, the output level is the same as when there is no social comparison, and a higher Δθ leads to a higher output difference q_L - q_H. Proof: Note the signs of the comparison terms in (8). If ν = 1/2, the comparison terms cancel, and we obtain the same output level as in the β = 0, self-regarding situation. That is, if the firm believes that the two cost types are equally probable, then social comparison has no effect on output. Furthermore, combining the signs of the comparison terms with Proposition 2, the proposition is proved. Compared with the complete-information condition, asymmetric information with social comparison distorts the socially optimal output level. The difference here is that the distortion exists for both supplier types, and output can be higher than the social optimum; such upward distortion can be found only when there is social comparison. Proposition 4: (a) If ν < 1/2, a higher β leads to a higher output difference q_L - q_H; (b) If ν > 1/2, a higher β leads to a lower output difference q_L - q_H; (c) If ν = 1/2, β has no effect on the output difference. Proof: Note that if ν = 1/2, the comparison terms in (8) vanish, and the social comparison parameter does not affect the output. Differentiating the output with respect to β gives the comparative statics, and Proposition 4 is proved. The social comparison parameter β represents the strength of the supplier's fairness perception adjusted by the ability difference. Proposition 3 states that the probability distribution has a clear impact on the suppliers' output levels and their difference, with a lower ν leading to a higher output difference. If the suppliers care more about the transfer difference, this effect is amplified or diminished in the same direction. Proposition 5: Social comparison will not change the suppliers' utilities. The supplier's constraints in equation (3) can be rewritten as equation (4). Both the ICL and IRH constraints bind, which means that the higher-cost supplier receives zero utility and the lower-cost supplier receives the information rent Δθ·q_H; Proposition 5 is proved. The proposition means that social comparison has no effect on the suppliers' welfare. The conclusion of the classical model is still valid here: only the efficient agent receives positive information rent. CONCLUSION Social comparison and fairness concerns are pervasive in a large number of areas, such as consumer behavior and labor relations. Previous studies have shown that fairness concerns or social comparison affect the agent's performance, but they have ignored ability differences. This paper presents a general framework to analyze social comparison adjusted by ability level. We consider a screening model where a firm needs an external supplier to offer it products or services. We use the ability difference to adjust the social comparison perception and analyze how social comparison affects the optimal contract variables. Our analysis shows that social comparison can amplify the output difference between the two cost types of supplier.
If there is a higher probability that the more efficient supplier is in the market, then the output difference between the two types of providers will also be higher. When the suppliers care more about social comparison, this effect is correspondingly amplified or diminished.
3,105.2
2021-01-01T00:00:00.000
[ "Business", "Economics" ]
The Essential and Enigmatic Role of ABC Transporters in Bt Resistance of Noctuids and Other Insect Pests of Agriculture Simple Summary The insect family Noctuidae contains some of the most damaging pests of agriculture, including bollworms, budworms, and armyworms. Transgenic cotton and maize expressing Cry-type insecticidal proteins from Bacillus thuringiensis (Bt) are protected from such pests and greatly reduce the need for chemical insecticides. However, evolution of Bt resistance in the insects threatens the sustainability of this environmentally beneficial pest control strategy. Understanding the interaction between Bt toxins and their targets in the insect midgut is necessary to evaluate the risk of resistance evolution. ABC transporters, which in eukaryotes typically expel small molecules from cells, have recently been proposed as a target for the pore-forming Cry toxins. Here we review the literature surrounding this hypothesis in noctuids and other insects. Appreciation of the critical role of ABC transporters will be useful in discovering counterstrategies to resistance, which is already evolving in some field populations of noctuids and other insects. Abstract In the last ten years, ABC transporters have emerged as unexpected yet significant contributors to pest resistance to insecticidal pore-forming proteins from Bacillus thuringiensis (Bt). Evidence includes the presence of mutations in resistant insects, heterologous expression to probe interactions with the three-domain Cry toxins, and CRISPR/Cas9 knockouts. Yet the mechanisms by which ABC transporters facilitate pore formation remain obscure. The three major classes of Cry toxins used in agriculture have been found to target the three major classes of ABC transporters, which requires a mechanistic explanation. Many other families of bacterial pore-forming toxins exhibit conformational changes in their mode of action, which are not yet described for the Cry toxins. Three-dimensional structures of the relevant ABC transporters, the multimeric pore in the membrane, and other proteins that assist in the process are required to test the hypothesis that the ATP-switch mechanism provides a motive force that drives Cry toxins into the membrane. Knowledge of the mechanism of pore insertion will be required to combat the resistance that is now evolving in field populations of insects, including noctuids. Introduction ABC proteins are a huge and ancient superfamily of proteins that are defined by the presence of a domain called the ATP-binding cassette [1]. Binding to ATP causes this domain to change its shape, and this conformational change has been harnessed for many purposes. The ABCE1 protein is required for termination of protein translation, where its lever-like action separates the large and small ribosomal subunits [2]. The Rad50 protein is invoked when chromosomes are damaged, clamping onto DNA like tweezers to bring the strands together for repair [3]. The ABC transporters possess membrane-spanning helices that change orientation to squeeze small molecules across membranes when the ABC domain binds to ATP [4]. The most recently discovered property of ABC proteins, and the subject of this review, is their apparent role as targets of the Bt Cry toxins. [Figure 2 caption: Predictions of membrane topology [10,11] in the lipid bilayer of four ABC transporters from Bombyx mori orthologous to those interacting with the three-domain Cry proteins described in the text (not to scale). The external lumenal surface is on top. Numbers of residues in each predicted external loop are shown. Transmembrane domains: TM0, TM1, and TM2. Nucleotide-binding domains: NBD1, NBD2. The N-terminus and C-terminus of each polypeptide are indicated.]
Initial Discoveries ABC transporters were first discovered as targets of Bt toxins by positional cloning in strains of resistant insects. Despite decades of research on Bt resistance, there was no previous biochemical or physiological evidence that ABC transporters could be involved. The discovery of their role was a surprise, and only the independent efforts of three groups that converged on the same protein have convinced the research community that understanding their role in resistance is important, as described in several recent reviews [5,12-23]. Solving this problem holds the key to developing strategies to combat the increasing problem of Bt resistance. A mutation in the ABCC2 protein was identified in a Cry1Ac-resistant strain of H. virescens by positional cloning using markers from the early versions of the B. mori genome sequence, well before a genome sequence for H. virescens was available [24]. Evidence that this mutation was important for resistance came from mapping, binding studies, and an allele frequency change correlated with the increase of resistance over time [24]. Like most ABC mutations subsequently found in other species, and like the cadherin mutation previously found in H. virescens [25], it introduced a frameshift and prevented expression of the full-length protein in the membrane. This contrasts with many cases of chemical insecticide resistance, where deletion of the target would be lethal. Even stronger evidence came from analysis of a more subtle mutation in the ABCC2 protein in B. mori [26]. Positional cloning using the genome sequence converged on ABCC2, but no incapacitating mutation could be found. Instead, the protein in the Cry1Ab-resistant strain had several amino acid substitutions and an insertion of a tyrosine in an extracellular loop. Germline transformation was used to prove that one copy of the susceptible allele inserted in the resistant genetic background could confer Cry1Ab-susceptibility on the resistant strain [26], and subsequent experiments [27] proved that the inserted tyrosine was necessary and sufficient for resistance. A similar approach using bulked segregant analysis based on cDNA markers identified ABCC2 and its neighboring gene ABCC3 as contributing to Cry1Ac and Cry1Ca resistance in S. exigua [28]. In contrast to the previous mutations, here ABCC2 carried a lesion in an intracellular nucleotide-binding domain (NBD). Suppression of ABCC2 and ABCC3 by RNA interference (RNAi) increased the tolerance of susceptible larvae to Cry1Ac and Cry1Ca.
This independent and unbiased positional cloning approach extended the phenomenon to a third genus, and to a third type of mutation in ABCC2, showing that ABC transporters could no longer be ignored in the mode of action of Cry toxins. Search for ABC Mutants in Resistant Strains from Field and Laboratory These early findings motivated the search for the involvement of ABC transporters in other Bt-resistant strains of noctuids and other Lepidoptera. Comparative linkage mapping with markers previously shown to be linked to ABCC2 in H. virescens was used to identify a mutation that eliminated the last transmembrane domain of ABCC2 in Cry1Ac-resistant P. xylostella from Hawaii [29]. The same study localized Cry1Ac resistance in T. ni from British Columbia to a region containing ABCC2, although a specific mutation was not identified [29]; a different resistance mechanism, altered aminopeptidase expression, was also identified in the same strain of T. ni [30]. Mis-spliced transcripts of ABCC2 generating a truncated protein were found in a Cry1Ac-resistant strain of H. armigera from China [31]. Another comparative linkage mapping approach identified a genomic region containing ABCC2 in a laboratory-selected Cry1F-resistant strain of O. nubilalis from collections in the Corn Belt of the USA [32], although involvement of ABCC2 has not yet been confirmed in field-evolved Cry1F resistance in O. nubilalis from Nova Scotia [33]. In Puerto Rico, the rapid appearance of Cry1F resistance in S. frugiperda stimulated withdrawal of the transgenic maize variety from the market, and was found to be associated with mutations in ABCC2 [34]. Additional mutations in S. frugiperda ABCC2 were associated with Cry1Fa and Cry1A.105 resistance in Puerto Rico [35] and Brazil [36]. Screening for some of these was included in surveys using DNA diagnostics for resistance to chemical insecticides as well as Bt [37,38]. The only member of the ABCB family to be investigated in Lepidoptera as a target of Cry toxins initially came to attention because it was down-regulated in a Cry1Ac-resistant strain of P. xylostella. PxABCB1 expression was found to be lower in other resistant strains, was further reduced by additional Cry1Ac selection, and was suppressed by RNAi in susceptible strains, which increased their tolerance to Cry1Ac [39]. Functional Studies Heterologous expression of ABC transporters in insect cell lines has been extensively used to probe their function. The crucial role of the tyrosine insertion in loop 2 of B. mori ABCC2 in conferring Cry1Ab resistance was convincingly shown by expression in Sf9 cells [27]. The same study demonstrated the synergy of the cadherin BtR175 and ABCC2 for the first time; co-expression of both made the cells much more susceptible to Cry1Ab than either one alone. These results were recapitulated using proteins from H. virescens in Sf9 cells [40], with the added information that synergy was observed only when both proteins were expressed in the same cell, i.e., not in a mixture of cells expressing one or the other, as might be expected from the sequential binding hypothesis [41]. The mechanism of synergy was further probed by comparing the ability of the cadherin from H. armigera or S. litura to synergize Cry1Ac action on H. armigera ABCC2 expressed in Hi5 cells [42]. Although the S. litura cadherin was an ineffective synergist, when cadherin repeat 11 from H. armigera was swapped in, synergistic activity increased.
The authors hypothesized that specific binding sites on the cadherin localized the toxin to a good position for interaction with ABCC2 in a species-specific manner [42]. A similar species-specific synergism of ABCC2 from S. exigua with the cadherin from S. exigua (but not H. armigera) was observed with the Cry1C toxin in Sf9 cells [43]. Domain-swapping between Cry1C and Cry1Ac was used to infer that domains II and III of Cry1Ac have different binding sites on ABCC2 of S. exigua [44]. ABCC2 from S. frugiperda expressed in Hi5 cells conferred sensitivity to Cry1Ab and Cry1Fa, while the cadherin did not, but synergism was not investigated in this study [45]. ABCC3 from S. frugiperda was also found to confer sensitivity to Cry1Ab and Cry1Fa under similar conditions [46]. A wide-ranging study explored the specificity of the toxin-target interaction by expressing ABC transporters from Lepidoptera, Coleoptera, Diptera, and humans in Sf9 cells and testing them with lepidopteran- or coleopteran-active toxins [47]. ABCC2 or ABCC3 from B. mori conferred sensitivity to Cry1Aa, but not Cry1Ca or Cry1Da. The latter two must have different, unknown targets because they are active on caterpillars of some lepidopteran species. The human and dipteran ABC transporters tested did not respond to lepidopteran- or coleopteran-active toxins. D. melanogaster is not normally susceptible to Cry1Ac, but when ABCC2 was expressed in the midgut of transgenic larvae, Cry1Ac in the artificial diet killed them [48]. Moreover, the genome of D. melanogaster lacks the ortholog of the 12-cadherin domain protein found in all Lepidoptera, so the killing mechanism did not rely on the same type of synergism from the cadherin. However, synergism could be observed when the transgenic larvae were fed peptide fragments from lepidopteran cadherins along with Cry1Ac [48]. The most sensitive measurements of the interaction between Cry toxins and their receptors have been made using heterologous expression in Xenopus oocytes [49]. Messenger RNA experimentally injected into these huge cells is translated, and the proteins (e.g., ABCC2 or cadherin) are incorporated into the egg membrane. This technique is often used to investigate the properties of ion channels using the voltage-clamp technique. The current through the channel is measured as a function of the experimentally fixed voltage gradient across the membrane, and the resulting graph characterizes the electrophysiological properties of the channel. In this case, the channel is the Cry toxin pore inserted into the membrane, which allows inward cation flux. The dynamics of current flow depend on the details of pore insertion and structure. Using this sensitive technique, it was shown that expression of the cadherin alone produced almost no current, expression of ABCC2 allowed abundant current, and expression of both produced even more current: the most convincing demonstration of synergism to date. The mechanism of synergism is still obscure, but a number of hypotheses can be envisaged, which are not mutually exclusive (Figure 3). These can be classified into trans-acting mechanisms, where synergism can occur even when the cadherin and ABC transporter are separated from each other, and cis-acting mechanisms, where synergism requires a close physical interaction. According to the sequential binding hypothesis [41], toxin monomers bind to the cadherin and are further processed by cleavage of the N-terminal α1-helix, whereupon they form oligomeric pre-pores in solution (Figure 3A).
However, toxin monomer binding to the cadherin is not an absolute requirement for toxicity; cadherin knockouts can still be killed by higher toxin amounts [24,50-52], and Cry1Ac is still lethal to T. ni despite not being able to bind to the T. ni cadherin [53]. Synergism is due to the presence of the cadherin, which speeds up a process that happens at a slower rate in its absence. Here the pre-pore structure can diffuse away from the cadherin to interact with a remote ABC transporter, so this mechanism is classified as trans-acting. If, however, the cadherin traps the toxin and brings it to the ABC transporter, this would be a cis-acting mechanism (Figure 3B). This could be synergistic if there are many more cadherin molecules in the membrane than ABCC2 molecules. Another previously suggested cis-acting mechanism [40] is shown in Figure 3C, where the cadherin pulls the pre-pore away from the ABC transporter, enabling it to insert into the membrane and freeing up the ABC transporter for the next pre-pore. Extracellular Loops Most of the ABC protein is out of reach of Bt toxins approaching the cell, since the nucleotide-binding domains are entirely cytoplasmic and most of the transmembrane domains are buried within the lipid bilayer (Figure 2). Extracellular loops connecting adjacent transmembrane helices are very short in ABCC proteins, but larger in ABCB and especially ABCA proteins. Detailed analyses of the interaction between extracellular loops of B. mori ABC transporters and various domains of Cry1A toxins have been carried out by the group of Ryoichi Sato in Tokyo. Following the discovery that insertion of a single tyrosine in Loop 2 of ABCC2 conferred resistance to Cry1Ab in larvae, swapping other amino acids for the inserted tyrosine blocked pore formation in cells expressing the transporter, while amino acid substitutions at other positions in the non-inserted loop did not [54].
Thus, the size of the loop, rather than its amino acid composition, was the more important determinant of sensitivity. Domain-swapping within the toxin implicated Domain II as most important in this interaction. Subsequent mutagenesis of Domain II of Cry1Aa revealed a region that bound both to ABCC2 and to another important receptor, the cadherin BtR175 [55]. While ABCC3 had much lower binding affinities to Cry1Aa and Cry1Ab than ABCC2, binding was increased in constructs containing partial loops from ABCC2 [56]. Another group pointed out an amino acid difference within Loop 1 of ABCC2 of S. frugiperda and S. litura that correlated with binding affinity to Cry1Ac, and they also replicated the binding difference by creating two versions of the H. armigera ABCC2, one with each amino acid [57]. Regulatory Changes Regulatory changes involving ABC transporters were also found to confer resistance. In a Cry1Ac-resistant strain of P. xylostella, resistance mapped to the vicinity of ABCC2 but no disruptive mutations in the gene could be found [58]. Instead, the expression level of ABCC2 and ABCC3 was found to be controlled by the mitogen-activated protein kinase (MAPK) signaling pathway, with the MAP4K4 gene located close by on the same genomic scaffold, accounting for the mapping results. Constitutive expression of MAP4K4 in the resistant strain suppressed ABCC2, ABCC3, and alkaline phosphatase, another Cry1Ac-binding protein. RNA interference (RNAi) suppression of MAP4K4 transiently restored susceptibility by upregulating ABCC2 and ABCC3. Thus, resistance in this case was due to higher expression of a negative regulator of ABCC2 transcription. The Forkhead Box Protein A (FOXA) transcription factor was found to stimulate transcription of ABCC2 and ABCC3 in H. armigera, as predicted from FOXA binding sites in the promoters [59]. RNAi silencing of FOXA downregulated ABCC2 and ABCC3 and increased the tolerance of susceptible larvae to Cry1Ac. Parallel results were obtained by expression in Sl-HP cells in the same study. A different study screened several members of the GATA transcription factor family from H. armigera and found that GATAe caused Sf9, QB-Ha-E5, and Hi5 cell lines to increase their expression of ABCC2, conferring greater Cry1Ac sensitivity [60]. If either mechanism were to be found in resistant strains from the field, resistance would be due to lower expression of a positive regulator of ABCC2 transcription. Comparison of the ABCC2 coding sequence across many species of Lepidoptera identified a conserved region targeted by the microRNA miR-998-3p [61]. MicroRNAs bind to messenger RNAs in a sequence-specific fashion and target them for destruction or inhibit their translation. Injection of an agomir (a chemically modified RNA that mimics the effect of the miRNA) into susceptible larvae of P. xylostella, S. exigua, or H. armigera increased their tolerance of Cry1Ac and decreased the abundance of ABCC2 mRNA. Injection of an antagomir (a single-stranded molecule designed to block the effect of the miRNA) into larvae of Cry1Ac-resistant P. xylostella reduced their tolerance of Cry1Ac and increased their ABCC2 mRNA levels. Cell Lines The influential colloid-osmotic lysis theory to explain how Cry toxins kill cells was developed using different cell lines that naturally differed in their susceptibilities to two different toxins [62]. It would be interesting to determine which ABC transporters are naturally expressed by those cells.
Sl-HP cells from S. litura are susceptible to activated Cry1Ac even though S. litura larvae are not. ABCC3 was found to be expressed in this cell line, and RNAi knockdown of ABCC3 decreased Sl-HP sensitivity to Cry1Ac [63]. In another study, comparison of Cry1Ac sensitivities of cell lines from different tissues produced the order midgut > fat body > ovary as expected, but unexpectedly, fat body-derived cells were most susceptible to Cry2Ab toxin [64]. Surveys of heterologous expression of candidate targets in cell lines [65] should also take their native expression patterns into account. Cry2A Toxins An extensive sampling effort in Australia employing the F2 screen [66] yielded strains of H. armigera and H. punctigera with high levels of resistance to the Cry2Ab toxin. Linkage mapping with these strains revealed several different mutations in the ABCA2 gene that prevented expression of the full-length protein [67]. Unlike the ABCC proteins, ABCA2 has two very large ectodomains (Figure 2), and because the mutants are extremely resistant to Cry2Ab, it was speculated that the single ABCA2 protein functions similarly to the sequential binding model for the cadherin and ABCC2 [67]. Linkage mapping in a strain of T. ni that was selected with Dipel in British Columbia greenhouses [68] eventually resulted in the identification of a transposable element in ABCA2 conferring Cry2Ab resistance [69]. Mis-splicing mutants in ABCA2 were associated with Cry2Ab resistance in P. gossypiella in a laboratory-selected strain from Arizona and in field populations from India [70]. Additional crosses confirmed these mutations but suggested that an additional, uncharacterized mechanism was also involved in Cry2Ab resistance in this species [71]. A different member of the ABCC family from H. armigera (GenBank Accession No. KY796050) was also found to bind Cry2Ab, identified by the authors as "HaABCC1" [72], although it is not the ortholog of the ABCC1 (BGIBMGA007737+38) on B. mori Chromosome 15 next to ABCC2 and ABCC3 described previously [24,26,28]. The ortholog of "HaABCC1" in B. mori is part of a small cluster on Chromosome 12 (Figure 1D) of ABCC proteins with 5 additional N-terminal transmembrane domains as well as two very large ectodomains, unlike the Chromosome 15 ABCC1-ABCC2-ABCC3 cluster in B. mori and other Lepidoptera (Figure 2). Although the authors speculated on the role of ABCC proteins in cross-resistance between Cry1Ac and Cry2Ab, no binding or toxicity studies were performed with Cry1Ac, and the strain of H. armigera used was susceptible to both Cry1Ac and Cry2Ab [72]. The use of a name already assigned to another gene has unfortunately created the false impression that the results are relevant to cross-resistance. Here we designate this gene as ABCC6 (Figures 1D and 2). CRISPR/Cas9 Knockouts CRISPR/Cas9 knockouts provide a very useful tool to investigate gene function in non-model organisms. The first use of the technique to knock out ABC transporters in Lepidoptera targeted the half-transporter genes white, scarlet, and ok in Helicoverpa armigera [73]. These are homologs of the well-known pigment transporters white, scarlet, and brown in Drosophila melanogaster, and as expected, the knockouts affected adult eye color and larval skin pigmentation. Homozygotes for the white knockout, however, were embryonic lethal in H. armigera and in the milkweed bug [74], which was unexpected because these knockouts are viable in Drosophila, Aedes, and Tribolium.
Lethality has also complicated the interpretation of some knockouts of Cry toxin targets. Knockouts of ABCA2 in a susceptible strain of H. armigera conferred >100-fold resistance to Cry2Aa and Cry2Ab and eliminated Cry2Ab binding to BBMV, but did not affect resistance or binding to Cry1Ac [75]. After mapping Cry2Ab resistance in T. ni to ABCA2, where a transib mobile element was found to disrupt the gene, either ABCA1 or ABCA2 was knocked out in a susceptible strain, and only ABCA2 was found to affect Cry2Ab tolerance [69]. Knockouts of the ABCA2 gene in B. mori (using the TALEN technique) conferred Cry2A resistance on larvae, and heterologous expression of ABCA2 in HEK-293 cells confirmed the absence of cross-resistance to Cry1A, Cry1Ca, Cry1Da, Cry1Fa, and Cry9Aa toxins [76]. In P. xylostella, a knockout of ABCC2 conferred 724-fold resistance to Cry1Ac and a knockout of ABCC3 conferred 413-fold resistance to the same toxin. Each knockout greatly reduced BBMV binding to Cry1Ac, but the double knockout was not made in this study [77]. Somewhat different results were obtained in another study of the same species [78], in which single knockouts were weakly resistant (~4-fold) and only the double knockout was >8,000-fold resistant to Cry1Ac. So far there is no explanation for the differing results in P. xylostella. A study in H. armigera, however, produced results closer to the second study, in that single knockouts of ABCC2 and ABCC3 were weakly resistant to Cry1Ac, while the double knockout was >15,000-fold resistant [79]. Knocking out ABCC2 in O. furnacalis conferred >300-fold resistance to Cry1Fa but less than 10-fold resistance to Cry1Ab or Cry1Ac and no resistance to Cry1Aa [80]. Knocking out ABCC2 in S. frugiperda increased tolerance to either Cry1Fa or Cry1Ab >120-fold, while knocking out ABCC3 increased tolerance by a lesser amount (>16-fold); in this study the double knockout was reported to be lethal [46]. In S. exigua, an ambitious study created single knockouts of ABCC1, ABCC2, or ABCC3, as well as the cadherin and an aminopeptidase, and examined susceptibility to Cry1Ac, Cry1Fa, and Cry1Ca. Among the 15 pairwise comparisons, ABCC2 had a strong effect and the cadherin a weak effect on Cry1Ac or Cry1Fa tolerance, and ABCC2 also had a weak effect on Cry1Ca tolerance [52]. CRISPR/Cas9 knockout experiments are useful in confirming the role of a given ABC transporter in susceptibility to a given toxin and, when more than one knockout and more than one toxin are compared, in assessing their relative importance. The possibility of non-target effects needs more investigation, in order to reconcile studies where double knockouts are lethal with those where they confer even more resistance, since these studies make diametrically opposed recommendations for resistance management. Targeting the nucleotide-binding domains would increase the probability of non-target effects, since these are more highly conserved across ABC family members. In addition, no knock-ins have yet been reported for ABC transporters, as in the case of another Bt resistance gene, tetraspanin [81]. Negative Cross-Resistance with Chemical Insecticides The intriguing possibility that mutations in ABC transporters could interfere with the insect's ability to use them to rid itself of other toxins has motivated many recent studies.
With the increasing use of Bt sprays and transgenic plants in the 1990s, the issue of cross-resistance between Bt and chemical insecticides had received some attention, but Bt resistance mechanisms had not been characterized at the molecular level. More recently, in 2016, when several chemical insecticides were screened against an ABCC2-mutant strain of H. armigera, abamectin and spinetoram were more toxic to it than to a Cry1Ac-susceptible strain [82]. Measurements of higher abamectin concentrations in mutant larvae and transfected cells were consistent with the bioassay results. RNAi silencing of ABCC2 decreased susceptibility to Cry1Ac and increased susceptibility to abamectin [82]. However, the selective differential exerted by abamectin on the Cry1Ac-resistant versus Cry1Ac-susceptible strains was small, and not all subsequent studies have confirmed the effect. Single and double knockouts of ABCC2 and ABCC3 in H. armigera produced in a different study were not more susceptible to abamectin or spinetoram [79]. The knockout of ABCC2 in O. furnacalis was not more susceptible to abamectin or chlorantraniliprole [80]. On the other hand, single knockouts of ABCC2 or ABCC3 in S. frugiperda were more susceptible to abamectin and spinosad (while the double knockout was reported to be lethal and could not be compared) [46]. Another study on S. frugiperda found that a Cry1F-resistant strain isolated from the field, carrying a frameshift mutation in ABCC2, had lower sensitivity to bifenthrin and higher sensitivity to spinetoram; yet when ABCC2 was knocked out in a different strain, Cry1F resistance increased 25-fold but sensitivity to chlorantraniliprole, bifenthrin, spinetoram, and acephate was unchanged [83]. Knocking out the P-glycoprotein ABCB1 in S. exigua increased, rather than decreased, susceptibility to abamectin and emamectin benzoate [84]; whether this protein is orthologous to the coleopteran ABCB1 (see below) has not been determined. Contradictory results were obtained in a study of ABCC2 in P. xylostella [85]; HEK-293 cells stably transformed with ABCC2 accumulated less avermectin, but down-regulating ABCC2 in vivo with RNAi had no effect on avermectin or chlorfenapyr tolerance. Although the results of some studies are suggestive, the changes in tolerance to the conventional insecticides examined are small and would not be useful in a resistance-breaking approach against ABCC mutations, as mortality of Bt-resistant insects would not be much greater than that of their Bt-susceptible counterparts. Only a few insecticides have been examined so far, and some with a greater effect are likely to be found eventually in a wider screen. ABCB (P-glycoprotein) and Cry3 Toxins in Coleoptera A different family of ABC transporters, the P-glycoproteins (ABCB), is involved in toxicity of the Cry3 toxins in Coleoptera. Linkage mapping in a strain of the poplar leaf beetle Chrysomela tremula (Fabricius) resistant to transgenic Cry3Aa-expressing poplar identified a frameshift in the ABCB1 protein, and heterologous expression of ABCB1 in Sf9 cells conferred susceptibility to Cry3Aa in vitro [86]. In the western corn rootworm D. virgifera virgifera, heterologous expression of the orthologous ABCB1 protein also conferred Cry3A sensitivity on Sf9 and HEK-293 cells in vitro, and an mCry3A-resistant strain was found to have deletions in the ABCB1 gene [87].
Whether the beetle ABCB1 genes are orthologous to PxABCB1 from P. xylostella, mentioned in Section 3 above [39], is not known; Cry3 toxins were not experimentally tested in that study. The authors pointed out structural similarities among Cry1 and Cry3 toxins, and searched for but could not find PxABCB1 in the fragmented Plutella genome sequence. We have found that the Bombyx ortholog maps to Chromosome 15, about 1 Mb away from the ABCC cluster (Figure 1B). More studies are required to confirm the involvement of the P-glycoproteins in Cry1A toxin interactions in Lepidoptera, and to establish the generality of the P-glycoproteins as targets of the beetle-active Cry3 toxins. Cross-resistance studies suggest the existence of additional, different targets in beetles [88]. Other ABC Transporters in Lepidoptera Suppression of the white gene in P. xylostella by RNAi reduced Cry1Ac susceptibility but was not lethal [89]; as pointed out earlier, CRISPR/Cas9 knockouts of white were lethal in H. armigera [73]. Suppression of ABCH1 in P. xylostella caused larval mortality but did not affect Cry1Ac resistance [90]. A gene in O. furnacalis identified as ABCG was downregulated in Cry1Ab- and Cry1Ac-resistant strains [91]; it is evidently not orthologous to either of the two genes in P. xylostella. No mutations in half-transporter ABCG or ABCH family genes have yet been identified in Bt-resistant Lepidoptera. Hypotheses on the Mechanism of Pore Insertion The lack of three-dimensional structures of the ABCC2 or ABCA2 proteins, the Cry toxin pore embedded in the membrane, and the toxin-binding region of the cadherin has inhibited the development of detailed hypotheses on the manner by which ABC transporters facilitate pore insertion. ABC transporters could simply be another binding site on the membrane surface, increasing the local toxin concentration and thereby the rate of pore insertion through a concentration effect. It has been hypothesized that ABCC2 facilitates the formation of the pre-pore oligomer, in a manner similar to the cadherin [92]. It has also been hypothesized that active opening and closing of the ABC transporter channel would be required to pull the pre-formed pore into the membrane [13]. This hypothesis would seem to be refuted by results with a mutant ABCC2 from S. exigua lacking the second nucleotide-binding domain [93], as well as engineered mutants of ABCC2 from B. mori lacking nucleotide-binding domains [94]. As previously pointed out [15], it is difficult to explain how evolution in Bacillus thuringiensis has resulted in Cry2A-type toxins that target ABCA proteins, Cry1A-type toxins that target ABCC proteins, and Cry3B-type toxins that target ABCB proteins, without invoking some fundamental property that unites these very different ABC transporters. If the shared ATP-switch mechanism powering substrate transport [4] is not such a property, then we are left without a mechanistic explanation of the pore-insertion process for the three-domain Cry proteins [13]. Many other bacterial pore-forming toxins enter the membrane with dynamic conformational changes, for example Vip3A [95], the Membrane Attack Complex [96], or the Tc toxin [97]. Whether such a dynamic process is required for Cry toxin pore formation deserves more investigation. A reasonable hypothesis at this point is that the dynamism comes from the toxin-target interaction, not just the toxin.
Future Perspectives Since the first description in 2010, mutations in ABC transporters have emerged as the most important type of mutation causing resistance to the three-domain Cry toxins of Bacillus thuringiensis. Yet mechanistic studies have lagged behind those on other pore-forming toxins with much more complicated structures. Why? Not enough effort has been expended on determining the three-dimensional structures that will be required for a full understanding of how the toxin interacts with membrane proteins to form a membrane pore. Since the first two structures of trypsin-activated monomers of the three-domain Cry toxins revealed their structural similarity [98,99], many more have appeared, confirming that this similarity is fundamental. Recently, a structure of the entire protoxin was determined [100], revealing additional domains that might potentiate pore formation in some way. The low-hanging fruit has been harvested. Lacking is a structure of any ABC transporter known to interact with a Cry toxin. Lacking is a structure of any cadherin known to interact with a Cry toxin. Lacking is a structure of the Cry toxin pore in the membrane. Without these structures, theorizing about how Bt toxins work is fantasy. Recent advances in electron cryo-microscopy (cryo-EM) have made these structures attainable. It is now time to attain them. Funding: Preparation of this review was funded by the Max-Planck-Gesellschaft. The funder had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. Data Availability Statement: NCBI GenBank: www.ncbi.nlm.nih.gov, accessed on 2 April 2021. Kaikobase: kaikobase.dna.affrc.go.jp, accessed on 2 April 2021.
7,800.8
2021-04-28T00:00:00.000
[ "Biology" ]
Hyperleptinemia Is Required for the Development of Leptin Resistance Leptin regulates body weight by signaling to the brain the availability of energy stored as fat. This negative feedback loop becomes disrupted in most obese individuals, resulting in a state known as leptin resistance. The physiological causes of leptin resistance remain poorly understood. Here we test the hypothesis that hyperleptinemia is required for the development of leptin resistance in diet-induced obese mice. We show that mice whose plasma leptin has been clamped to lean levels develop obesity in response to a high-fat diet, and the magnitude of this obesity is indistinguishable from that of wild-type controls. Yet these obese animals with constant low levels of plasma leptin remain highly sensitive to exogenous leptin even after long-term exposure to a high-fat diet. This shows that dietary fats alone are insufficient to block the response to leptin. The data also suggest that hyperleptinemia itself can contribute to leptin resistance by downregulating the cellular response to leptin, as has been shown for other hormones. Introduction Body weight in mammals is controlled by a physiological system that balances energy intake and expenditure over the long term [1]. The core component of this system that signals the availability of body fat is the hormone leptin [2]. Leptin is secreted by adipocytes in proportion to their size and number, such that the concentration of leptin in the blood is proportional to the total amount of adipose tissue [3,4]. Binding of leptin to its target neurons, which are located in the hypothalamus, brain stem, and other brain regions, inhibits feeding and stimulates energy expenditure. Leptin thus functions as the afferent signal in a negative feedback loop that maintains a stable level of body fat reserves. Leptin-deficient mice (ob/ob) and humans are obese and hyperphagic [5], and, in these individuals, leptin replacement therapy induces dramatic weight loss [6,7,8,9]. However, most obesity is associated with elevated plasma leptin levels [3,4], implying resistance to leptin's weight-reducing effects [10]. The contribution of leptin resistance to obesity has also been established by the demonstration that hyperleptinemic animals and humans have a blunted response to exogenous leptin. Despite the importance of delineating the causes of leptin resistance, the cellular and molecular mechanisms responsible remain poorly understood. The diet-induced obese mouse is a well-characterized system for studying the development of leptin resistance and the pathogenesis of obesity. In this model, C57Bl/6J mice fed a high-fat diet (45% to 60% calories from fat) become progressively obese and hyperleptinemic over a span of 4 to 6 months. As these animals become obese, they lose the ability to reduce their food intake and body weight in response to leptin treatment. In the early stages of obesity, mice develop resistance to leptin delivered peripherally, but not centrally; this has been attributed to downregulation or saturation of the system that transports leptin across the blood-brain barrier [11,12]. After long-term exposure to a high-fat diet (>20 weeks), mice become resistant to leptin even when it is directly infused into the brain via the cerebral ventricle [10,12,13,14]. In these animals, the first-order, leptin-responsive neurons have apparently lost the ability to activate the signaling pathways downstream of the leptin receptor. 
How does exposure to a high-fat diet impair the leptin sensitivity of these neurons? Two models have been proposed. The first is that leptin resistance is caused by elevated plasma leptin levels, which result in chronic overstimulation of the leptin receptor and activation of negative feedback pathways that block further leptin signaling. This model is supported by the fact that leptin stimulates the expression of SOCS-3, a protein that directly inhibits leptin signaling [15,16,17], and that ablation of SOCS-3 in neurons enhances leptin sensitivity and protects against diet-induced obesity [18,19,20]. Moreover, targeted expression of a constitutively active form of STAT3, which is a key mediator of leptin signaling, is sufficient to induce leptin resistance in the hypothalamus [21]. This mechanism is analogous to the decrease in insulin receptor signaling that is associated with chronic insulin treatment and believed to result from the activation of negative feedback pathways such as serine phosphorylation of IRS-1 [22]. An alternative explanation for the development of leptin resistance is that dietary fats themselves, rather than hyperleptinemia, are responsible. Fats could either directly block leptin signaling or activate cellular processes, such as endoplasmic reticulum (ER) stress and inflammation, that impair leptin-responsive neurons [23,24,25,26,27,28]. This model is supported by the fact that pharmacological or genetic modulation of fat metabolism in the hypothalamus has been shown to influence energy balance and leptin sensitivity [26,29,30]. Moreover, leptin resistance is known to develop most strongly in the arcuate nucleus of the hypothalamus, which, relative to other regions of the brain, has enhanced access to circulating nutrients [13]. In addition, it has been observed in some [10,31], but not all [32], experimental settings that mice fed a high-fat diet do not consume more calories than mice fed a low-fat diet; this implies that dietary fat itself, rather than increased energy intake, may be responsible for leptin resistance in these animals. In order to distinguish between these two possibilities, we separated the contributions of hyperleptinemia and dietary fat to the development of leptin resistance by (1) using osmotic infusion pumps to clamp the plasma leptin of ob/ob mice to the level found in lean wild-type animals over the long term, and then (2) measuring the leptin sensitivity of these animals after being placed on either a low- or high-fat diet. Results The development of central leptin resistance in C57Bl/6J mice requires exposure to a high-fat diet for 20 weeks [10,14]. To establish the possible contribution of hyperleptinemia versus a high-fat diet itself to the development of leptin resistance, we developed an experimental protocol in which, beginning at weaning, the plasma leptin levels of ob/ob mice could be fixed to the level of lean wild-type mice (~5 ng/mL) for this duration (Figure 1a). We performed extensive dose-response studies infusing leptin into ob/ob mice via osmotic infusion pumps and found that wild-type plasma leptin levels of approximately 5 ng/mL could be achieved in ob/ob mice by delivering leptin at 150 ng/h. This infusion rate could also be stably maintained for longer than six months by replacement of the pumps every 28 days. 
Pumps dispensing 150 ng/h leptin were implanted in male ob/ob mice at four weeks of age (hereafter referred to as "ob-norm"), and leptin treatment normalized the body weight of these animals within three weeks (Figure 1b). Identical pumps dispensing vehicle (PBS) were implanted in a control group of male wild-type mice. At six weeks of age, animals from each group were assigned to either a high-fat diet (60% of calories from fat) or a low-fat chow diet (13% of calories from fat) and maintained on this diet for 20 weeks with monthly pump replacements. The body weight of the ob-norm and wild-type mice on the low-fat diet was similar throughout the course of the experiment (Figure 1b). There was no significant difference in average weekly food intake between the two groups (Figure 1d; 13.5 ± 0.3 kcal/d for ob-norm versus 13.1 ± 0.2 kcal/d for wild-type, p = 0.26), and their body fat percentages were similar when measured at 14 and 18 weeks (Figure 1c). Plasma leptin levels were measured periodically, and there was no significant difference in leptin levels between the two cohorts on the low-fat diet (Figure 1f; p > 0.22). Interestingly, a small age-dependent increase in plasma leptin was observed in wild-type mice on a low-fat diet, and this was also observed in the ob-norm animals (2.6 ± 0.4 ng/mL versus 1.9 ± 0.4 at 18 weeks of age, and 7.0 ± 2.3 versus 6.1 ± 0.9 ng/mL at 26 weeks of age for ob-norm and wild-type, respectively; p < 0.01 for the comparison between 18 and 26 weeks within either cohort). Because the only source of leptin in the ob-norm mice was pumps that delivered a constant dose, this small age-dependent increase likely reflects age-dependent changes in leptin metabolism or excretion. Exposure to a high-fat diet resulted in obesity in both wild-type and ob-norm animals, and the time course of weight gain was indistinguishable in the two groups (Figure 1b). [Figure 1. Energy balance in wild-type and ob-normalized animals on low- and high-fat diets. A. Schematic of experimental strategy. Wild-type and ob/ob animals were implanted with pumps dispensing PBS or leptin, respectively, at 4 weeks of age, and pumps were replaced every four weeks as indicated by red arrows. Leptin sensitivity was assayed at 26 weeks of age. B. Body weight of wild-type (black) and ob-norm (red) mice on a high-fat or low-fat diet. C. Body fat percentage of wild-type and ob-norm mice at 14 and 18 weeks of age that were maintained on a high-fat diet (open bars) or low-fat diet (filled bars). D. Daily food intake for wild-type (black) and ob-norm (red) mice on a low-fat diet. p > 0.05 for wild-type vs. ob-norm at all time points. E. Daily food intake for wild-type (black) and ob-norm (red) mice on a high-fat diet. * indicates p < 0.05. F. Plasma leptin concentrations of wild-type and ob-norm mice at 18 and 26 weeks of age that were maintained on a high-fat diet (open bars) or low-fat diet (filled bars). All error bars are ± SEM. doi:10.1371/journal.pone.0011376.g001] The body weights of wild-type and ob-norm animals were similar after 20 weeks on a high-fat diet (52.6 ± 0.4 g in wild-type versus 52.1 ± 1.8 g in ob-norm, p = 0.79). As expected, the plasma leptin levels of animals in these two groups were very different (Figure 1f). 
Wild-type animals became markedly hyperleptinemic as they became obese (37.4 ± 5.6 ng/mL at 18 weeks of age and 154 ± 6 ng/mL at 26 weeks of age), whereas the plasma leptin levels of ob-norm animals on a high-fat diet were the same as those of low-fat diet, chow-fed controls (e.g., 7.0 ± 2.3 ng/mL on a low-fat diet versus 8.1 ± 2.3 ng/mL on a high-fat diet at 26 weeks of age, p = 0.75). The nearly identical body weight trajectory of wild-type and ob-norm animals on a high-fat diet was somewhat unexpected: unlike wild-type animals, the weight gain of ob-norm mice is not restrained by increases in plasma leptin. We measured weekly food intake in both cohorts after exposure to a high-fat diet, and while the ob-norm animals did eat more food than wild-type animals at a subset of time points (Figure 1e; p < 0.01), this slightly elevated food intake did not translate into increased body weight or adiposity (Figure 1c). These data show that wild-type and ob-norm animals display a very similar progression of diet-induced obesity in response to a high-fat diet, despite the fact that they maintain a ~20-fold difference in plasma leptin levels. Diet-induced obesity causes insulin resistance, whereas leptin improves glucose homeostasis independent of its effects on body weight [33,34,35,36]. We therefore compared glucose metabolism in the four groups of animals in this experiment. Both wild-type and ob-norm animals on a high-fat diet developed hyperglycemia relative to low-fat diet controls (Figure 2a; p < 0.001), and there was no significant difference in fasting blood glucose between cohorts on the same diet (p > 0.15). We measured plasma insulin levels in all four groups at 18 and 26 weeks of age (Figure 2b). Both wild-type and ob-norm animals on a high-fat diet were hyperinsulinemic relative to low-fat diet controls (p < 0.05), confirming that their hyperglycemia was a consequence of insulin resistance. By contrast, there was no significant difference in plasma insulin between cohorts on the same diet (p > 0.24). These data indicate that the differences in blood glucose and insulin levels were determined by diet, not plasma leptin levels. We performed glucose tolerance tests to measure the ability of each cohort to clear a bolus of sugar from the blood (Figure 2c). For mice on a low-fat diet, plasma glucose levels were indistinguishable between wild-type and ob-norm cohorts throughout the test (p > 0.5). This confirms that leptin replacement at physiological levels was sufficient to normalize glucose homeostasis in the lean ob-norm animals. By contrast, both cohorts exposed to a high-fat diet showed delayed glucose clearance (Figure 2c; p < 0.01). The magnitude of this impairment was greater in ob-norm animals relative to wild-type controls at 60 and 120 minutes (Figure 2c; p < 0.01). Thus, while the high-fat fed wild-type and ob-norm animals have similar fasting hyperglycemia and hyperinsulinemia, the ob-norm animals have an additional defect in glucose metabolism that is revealed by glucose challenge, suggesting that their relative leptin deficiency may further impair the glucose metabolism of diet-induced obese (DIO) mice. Leptin-deficient ob/ob mice are hypoactive and have reduced energy expenditure. We therefore performed respirometry and activity measurements to compare the four groups of animals in this experiment (Figure 2). Activity was quantified by measuring beam breaks in three dimensions in animals that had been acclimated in metabolic cages. 
For the groups on a low-fat diet, there was no significant difference in average beam breaks between the wild-type and ob-norm groups (Figure 2d). For animals on a high-fat diet, there was a trend toward decreased activity in the ob-norm group, but this did not reach significance (35.9 ± 4.3 breaks/min in wild-type versus 27.7 ± 2.6 breaks/min in ob-norm, p = 0.12). Similarly, we found that there was no significant difference in oxygen consumption, during either the light or dark phases, between wild-type and ob-norm cohorts on the same diet (Figure 2e). Having established that wild-type and ob-norm animals have a similar body weight and physiology when maintained on the same diet, other than a modest impairment of glucose metabolism among ob-norm animals on a high-fat diet, we next tested whether long-term exposure to a high- or low-fat diet affected the leptin sensitivity of these two groups. We used two assays for this purpose. First, in order to measure functional sensitivity to leptin, we tested whether a short-term leptin infusion could reduce food intake and body weight in each of the four cohorts. We did this by replacing the mini-osmotic pumps in all animals with pumps that delivered leptin at either the same rate ("vehicle") or at a rate 450 ng/h higher. This results in a 4-fold increase in the leptin infusion rate in ob-norm animals (150 ng/h versus 600 ng/h leptin) and has previously been shown to result in a 4-fold increase in plasma leptin in lean wild-type animals [37]. We then measured food intake and body weight for 12 days, at which point the pumps were replaced with pumps dispensing leptin or PBS at the baseline rate. As expected, wild-type mice maintained on a low-fat diet remained sensitive to exogenous leptin, reducing their food intake (15.1 ± 0.6 kcal/d for vehicle versus 12.5 ± 0.5 kcal/d for leptin, p < 0.05) and showing a progressive reduction of body weight throughout the 12-day infusion (Figure 3). By contrast, wild-type mice that had been maintained on a high-fat diet for 20 weeks showed no reduction in food intake or body weight in response to the 450 ng/h leptin infusion (Figure 3). These results are consistent with previous reports showing that prolonged diet-induced obesity induces robust leptin resistance in mice [10]. ob-norm mice that had been maintained on a low-fat diet were sensitive to leptin, showing a reduction of food intake similar to that of their wild-type counterparts (15.2 ± 1.4 kcal/d for vehicle versus 12.1 ± 1.1 kcal/d for leptin, p = 0.08) and losing approximately 10 percent of their body mass over the course of the 12-day infusion (-2.8 ± 1.1% for control versus -11.6 ± 1.0% for leptin, p < 0.05). In contrast, and unlike wild-type controls on a high-fat diet, ob-norm mice that had been exposed to a high-fat diet remained highly sensitive to exogenous leptin. These animals showed significantly reduced food intake (18.3 ± 1.5 kcal/d for vehicle versus 13.8 ± 0.8 kcal/d for leptin, p < 0.05) and body weight (0.6 ± 0.6% for vehicle versus -7.3 ± 1.8% for leptin, p < 0.05) in response to the leptin infusion. Because wild-type and ob-norm animals differed only in their plasma leptin levels, this result confirms that hyperleptinemia is functionally required for the development of leptin resistance after long-term exposure to a high-fat diet. In a second assay of leptin sensitivity, we next tested whether leptin could stimulate acute phosphorylation of STAT3 in neurons of the mediobasal hypothalamus among the four groups of mice [10,13,38]. 
Mice were given an intraperitoneal bolus of either vehicle (PBS) or leptin (2 mg/kg) and sacrificed 30 minutes later by cardiac perfusion. This dose of leptin has previously been shown to increase plasma leptin levels by 40-fold [10]. Brains were dissected, and STAT3 phosphorylation was quantified by immunohistochemistry (Figure 4). Wild-type animals maintained on a low-fat diet showed a robust increase in the number of pSTAT3-positive cells in response to leptin (Figure 4a; p < 0.02). By contrast, wild-type mice maintained on a high-fat diet showed no increase in pSTAT3-positive cells in response to leptin (24.2 ± 5.8 cells for vehicle versus 21.8 ± 6.8 cells for leptin, p = 0.8), confirming that diet-induced obesity blunts the leptin-responsiveness of these first-order neurons. While wild-type mice on a high-fat diet were insensitive to exogenous leptin, these mice did have a modestly increased number of pSTAT3-positive cells at baseline compared to low-fat fed animals (Figure 4c, p < 0.05). This supports prior suggestions that diet-induced obesity results in chronically elevated leptin signaling that cannot be further modulated by additional leptin [39,40,41]. We next tested ob-norm mice in the same assay. Ob-norm mice maintained on a low-fat diet showed increased STAT3 phosphorylation in response to leptin, and the magnitude of this increase was similar to wild-type controls (10.7 ± 4.5 cells for vehicle versus 71 ± 1.7 cells for leptin, p < 0.01). Unlike wild-type animals on a high-fat diet, ob-norm mice that had been maintained on a high-fat diet also showed a robust increase in STAT3 phosphorylation in response to leptin (2.7 ± 0.8 cells for vehicle versus 108 ± 21 cells for leptin, p < 0.01). There was no indication that the leptin responsiveness of these high-fat diet-fed, ob-norm mice was impaired in this assay relative to lean controls. These biochemical data are consistent with the physiological data from the leptin infusion experiments and indicate that even long-term exposure to a high-fat diet is not sufficient to block the leptin-induced activation of JAK/STAT signaling in normoleptic animals. Instead, we conclude that hyperleptinemia is required for the development of leptin resistance in response to diet-induced obesity. Discussion The mechanisms responsible for the development of leptin resistance have been the focus of many studies. As leptin resistance results in obesity and other metabolic diseases, agents that can re-sensitize the obese to leptin would have great therapeutic value. However, the molecular events responsible for the development of leptin resistance remain poorly understood, in part because of the numerous physiological and nutritional differences between lean and diet-induced obese animals. In principle, any of these differences could contribute to impaired leptin sensitivity. In this study we asked whether one physiological parameter, elevated plasma leptin levels, is required for the development of leptin resistance in response to a high-fat diet. We focused on hyperleptinemia because negative feedback in response to excess signaling is a classical mechanism of hormone resistance (including insulin), and because there was already evidence supporting at least one molecular mechanism for negative feedback regulation of leptin (SOCS-3). These experiments were made possible by our having developed a protocol that could normalize the plasma leptin levels of ob/ob mice over the long term using subcutaneous pumps. 
We found, consistent with previous studies, that leptin replacement in ob/ob animals is sufficient to normalize their body weight and other physiologic abnormalities, and furthermore that this normalization can be maintained for 6 months by replacing the pumps monthly. The data clearly show that the hyperleptinemia in mice fed a high-fat diet is required for the development of leptin resistance. These results do not support the claim that a high-fat diet can block the response to leptin in ob/ob animals [42]. Recently, the peptide amylin has been shown to sensitize animals and humans to the effects of leptin [43,44]. An important challenge will be to further understand the cellular basis of leptin resistance and the mechanism by which amylin can improve it. In addition to its role in regulating energy balance in adult animals, leptin has trophic effects on hypothalamic neurons during early post-natal development [45]. We have shown that leptin treatment beginning at 4 weeks of age normalizes the food intake and adiposity of ob/ob mice on a low-fat diet, but it remains possible that neonatal leptin deficiency could selectively influence the development of leptin resistance in response to a high-fat diet. Previous studies have found that leptin treatment of ob/ob mice rapidly normalizes the number of excitatory and inhibitory synaptic inputs to arcuate POMC and NPY neurons [46], suggesting that there is significant plasticity in the adult hypothalamus. Consistent with this, we have found that the leptin sensitivity of ob/ob mice subjected to long-term leptin replacement is indistinguishable from that of lean wild-type controls (in contrast to leptin-naïve ob/ob animals, which are hypersensitive to leptin). Nonetheless, certain hypothalamic projections are impaired in ob/ob mice, and leptin treatment of adult ob/ob animals does not correct these defects [45]. The significance of these anatomical differences for energy balance and leptin sensitivity remains unclear. Recent work on leptin resistance has focused on the role of inflammation in the hypothalamus [47,48]. This is based on the observation that fats, either when supplied in excess in the diet or administered centrally, can induce hypothalamic inflammation, and that genetic or pharmacological blockade of inflammatory signaling in the brain can improve leptin sensitivity [26,27,28,49]. In addition, ER stress is a common byproduct of inflammation, and ER stress has been shown to contribute independently to the development of leptin resistance [23,24,25]. Our data are not inconsistent with a role for inflammation and ER stress in the development of leptin resistance. However, our data do imply that hyperleptinemia is required for any inflammation-mediated mechanism that blocks leptin signaling after exposure to a high-fat diet. In this respect, the fact that upregulation of SOCS-3 is one of the primary mechanisms by which inflammation has been proposed to inhibit leptin signaling suggests that hyperleptinemia and inflammatory cytokines may contribute to leptin resistance by activating a common set of pathways. One unexpected finding from our study was that ob-norm animals, which had a fixed level of plasma leptin, nonetheless gained weight in response to a high-fat diet at a rate that was indistinguishable from wild-type animals on the same diet, and the two groups reached an identical body weight plateau (Figure 1b). 
If incremental changes in leptin significantly restrained body weight in response to excess dietary fat, then the ob-norm animals would be predicted to gain weight more quickly than wild-type controls and to plateau at a higher body weight, if at all. The fact that this was not observed indicates that a leptin-independent mechanism specifies the body weight set point in animals fed a high-fat diet. While the nature of this mechanism is unclear, this observation is consistent with previous reports showing that transgenic mice that overexpress leptin in the liver (and therefore do not experience substantial increases in plasma leptin in response to obesity) nonetheless gain weight at a rate that is indistinguishable from wild-type animals when exposed to a high-fat diet [50]. Likewise, leptin infusion fails to prevent the weight gain that occurs when wild-type animals are exposed to a high-fat diet [51]. However, viral overexpression of leptin in the rat hypothalamus has been reported to potentiate the weight gain caused by a high-fat diet [52]. As body weight is regulated by numerous short- and long-term signals, it will be important to establish how energy balance can be maintained in a leptin-independent manner. In summary, our finding that hyperleptinemia, and therefore excess leptin signaling, is required for the development of leptin resistance reinforces the view that understanding the changes in signal transduction within the sparse neuronal cell types that control energy balance will be essential to unraveling the mechanism of leptin resistance. One of the major challenges in studying leptin resistance has been the difficulty of accessing leptin's key target cells, which are a small subpopulation of neurons located primarily in the hypothalamus and brain stem. The development of BacTrap technology for neuron-specific profiling represents an exciting opportunity to begin to characterize the molecular changes that develop in these cells with acute and chronic hyperleptinemia, and these experiments are underway [53]. Ethics Statement All procedures were carried out in accordance with the National Institutes of Health Guidelines on the Care and Use of Animals and approved by the Rockefeller University Institutional Animal Care and Use Committee (Protocol #09012). Animals: Diet and leptin normalization Wild-type and ob/ob C57BL/6J mice were obtained from Jackson Laboratories (Bar Harbor, ME). At 4 weeks of age, a micro-osmotic pump (model 1004; Durect, Cupertino, CA) dispensing either vehicle (PBS) for wild-type animals or leptin (150 ng/h in PBS) for ob/ob animals was implanted subcutaneously, and mice were individually housed. Recombinant murine leptin was obtained from Amylin Pharmaceuticals (San Diego, CA). At 6 weeks of age, mice from each cohort were assigned either to remain on a low-fat chow diet (Picolab Rodent Diet 20, LabDiet, St. Louis, MO) or to switch to a high-fat diet (D12492, Research Diets, New Brunswick, NJ). The low-fat diet contained 3.41 kcal/g (24.6% calories from protein, 13.2% calories from fat, and 62.1% calories from carbohydrates) and the high-fat diet contained 5.24 kcal/g (20% calories from protein, 60% calories from fat, and 20% calories from carbohydrates). Food intake and body weight were monitored weekly. Micro-osmotic pumps were replaced at week 8 and every 4 weeks thereafter to ensure continuous delivery of leptin or vehicle; model 2004 pumps were used after week 8. 
Body composition was measured by dual-energy x-ray absorptiometry (DEXA) densitometry (Lunar PIXImus 2, GE Medical Systems, Wisconsin). Glucose, Insulin, and Leptin Measurements Glucose tolerance tests were performed on mice that had been fasted for 14 hr beginning at the onset of the dark cycle. The following day, mice were given an intraperitoneal injection of an aqueous solution of 20% glucose (6.25 µL/g body weight), and blood glucose was measured from the tail vein at 0, 15, 30, 60, and 120 min using an Ascensia Elite XL glucometer (Bayer HealthCare, Tarrytown, NY). Plasma hormone levels were measured using ELISA kits for leptin (R&D Systems, Minneapolis, MN) or insulin (Mercodia, Winston-Salem, NC) from blood collected from the tail vein of ad libitum fed animals. Respirometry and Activity Measurements Mice were individually housed in Oxymax metabolic cages (Columbus Instruments, Columbus, OH) with ad libitum access to food and water. Gas consumption and movement were recorded for 3-4 days. Animals that stopped eating or drinking during this interval or that lost significant body weight (>2 g) were excluded from the analysis. Leptin Infusion Assays To measure the sensitivity of animals to short-term leptin infusion, the micro-osmotic pump in each animal was replaced with a 14-day pump (Model 2002, Durect) dispensing leptin at a rate 450 ng/h above baseline. This means that for wild-type animals, pumps delivering PBS were replaced with pumps delivering leptin at 450 ng/h, and for ob/ob animals, pumps delivering leptin at 150 ng/h were replaced with pumps delivering leptin at 600 ng/h. Body weight was recorded daily and food intake every 6 days. After 12 days, pumps were removed and replaced with pumps dispensing leptin at baseline (PBS for wild-type animals and 150 ng/h for ob/ob animals). To quantitate the number of stained cells, a 300 × 300 pixel section was removed from the region representing the arcuate nucleus in each image. The number of positive cells was then counted by an observer blinded to the sample identity.
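The dosing arithmetic behind the glucose tolerance test is easy to check. The short Python sketch below is purely illustrative; the 6.25 µL/g figure follows the Methods as printed (the unit appears to have been garbled in extraction and should be verified against the original protocol).

GLUCOSE_FRACTION = 0.20  # 20% (w/v) glucose solution contains 0.20 g/mL

def gtt_bolus(body_weight_g, dose_ul_per_g=6.25):
    """Return (injection volume in mL, glucose dose in g per kg body weight)."""
    volume_ml = body_weight_g * dose_ul_per_g / 1000.0   # convert uL to mL
    glucose_g = volume_ml * GLUCOSE_FRACTION             # grams of glucose injected
    dose_g_per_kg = glucose_g / (body_weight_g / 1000.0)
    return volume_ml, dose_g_per_kg

# A 40 g diet-induced obese mouse would receive 0.25 mL of 20% glucose,
# i.e., 0.05 g of glucose, corresponding to a conventional 1.25 g/kg dose.
print(gtt_bolus(40.0))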
6,393.8
2010-06-29T00:00:00.000
[ "Biology" ]
Text Mining and Natural Language Processing Approaches for Automatic Categorization of Lay Requests to Web-Based Expert Forums Background Both healthy and sick people increasingly use electronic media to obtain medical information and advice. For example, Internet users may send requests to Web-based expert forums, or so-called “ask the doctor” services. Objective To automatically classify lay requests to an Internet medical expert forum using a combination of different text-mining strategies. Methods We first manually classified a sample of 988 requests directed to an involuntary childlessness forum on the German website “Rund ums Baby” (“Everything about Babies”) into one or more of 38 categories belonging to two dimensions (“subject matter” and “expectations”). After creating start and synonym lists, we calculated the average Cramer’s V statistic for the association of each word with each category. We also used principal component analysis and singular value decomposition as further text-mining strategies. With these measures, we trained regression models and, on the basis of the best regression models, determined for any request the probability of belonging to each of the 38 different categories, with a cutoff of 50%. Recall and precision on a test sample were calculated as measures of the quality of the automatic classification. Results According to the manual classification of 988 documents, 102 (10%) documents fell into the category “in vitro fertilization (IVF),” 81 (8%) into the category “ovulation,” 79 (8%) into “cycle,” and 57 (6%) into “semen analysis.” These were the four most frequent categories in the subject matter dimension (consisting of 32 categories). The expectation dimension comprised six categories; we classified 533 documents (54%) as “general information” and 351 (36%) as a wish for “treatment recommendations.” The generation of indicator variables based on the chi-square analysis and Cramer’s V proved to be the best approach for automatic classification in about half of the categories. In combination with the two other approaches, 100% precision and 100% recall were realized in 18 (47%) out of the 38 categories in the test sample. For 35 (92%) categories, precision and recall were better than 80%. For some categories, the input variables (ie, “words”) also included variables from other categories, most often with a negative sign. For example, absence of words predictive for “menstruation” was a strong indicator for the category “pregnancy test.” Conclusions Our approach suggests a way of automatically classifying and analyzing unstructured information in Internet expert forums. The technique can perform a preliminary categorization of new requests and help Internet medical experts to better handle the mass of information and to give professional feedback. Introduction Both healthy and sick people increasingly use electronic media to obtain medical information and advice [1]. Internet users actively exchange information with others about subjects of interest or send requests to Web-based expert forums, or so-called "ask the doctor" services [2,3]. They want to understand specific diseases, to be informed about new therapies, or to ask for a second opinion before they decide on a treatment [4][5][6]. In addition, these expert forums also serve as seismographs for medical and/or psychological needs that are apparently not met by existing health care systems [5,7]. 
In the past, emails, e-consultations, and requests for medical advice via the Internet have been manually analyzed using quantitative or qualitative methods [1][2][3][4][5][6]. To facilitate the work of medical experts and to make full use of the seismographic function of expert forums, it would be helpful to classify visitors' requests automatically. By doing so, specific requests could be directed to the appropriate expert or even answered semiautomatically, thereby providing comprehensive monitoring. By generating "frequently asked questions (FAQs)," similar patient requests and their corresponding answers could be collated, even before the expert replies. Machine-based analyses could help both the lay public to better handle the mass of information and medical experts to give professional feedback. In addition, this method could be used to help policy makers recognize the health needs of the population [8]. Text mining [9] is a method for the automatic classification of large volumes of documents, which could be applied to the problem at hand. This technique usually consists of a finite sequence of steps, such as parsing a text into separate words, finding terms and reducing them to their basics ("truncation"), followed by analytical procedures such as clustering and classification to derive patterns within the structured data, and finally evaluation and interpretation of the output. Typical text-mining tasks include, among others, text categorization, concept/entity extraction, sentiment analysis, and document summarization. This technique has been successfully applied, for example, in automatic indexing, in ascertaining and classifying consumer complaints, and in handling change-of-address requests sent to companies by email. Text mining is also used in genome analysis, media analysis, and indexing of documents in large databases for retrieval purposes [8][9][10][11]. An automatic classification of lay requests to medical expert Internet forums is a challenge because these requests can be very long and unstructured as a result of mixing, for example, personal experiences with laboratory data. Very often, people simply require psychological help or are looking for emotional reassurance. Such heterogeneous samples of requests appear in the section "Wish for a Child" on the German Rund ums Baby (Everything about Babies) website [12], which provides information for parents, potential parents, and infertile couples. Although involuntary childlessness is not the focus of this paper, some introductory notes on this condition may be helpful. Infertility leading to involuntary childlessness is defined as the inability of a couple to achieve conception or bring a pregnancy to term after a year or more of regular, unprotected sexual intercourse. Infertile couples may pass through different stages of reactions and feelings, which include shock, surprise, anger, helplessness, and loss of control. Feelings of failure, embarrassment, shame, and stigmatization may lead to social isolation and to a breakdown in communication between the couple, including depressive reactions, anxiety, emotional instability, diminished self-confidence, sexual problems, and conflicts [13]. The vast majority of cases of male infertility are due to a low sperm count, often associated with poor motility and a high rate of abnormal sperm. However, in a large number of patients (25% to 30%), it is not possible to determine the cause of the problem. 
The main causes of female infertility are ovarian dysfunctions and disorders of the fallopian tubes and uterus. Frequently, two or even all three causes can be found in one patient. Before 1980, infertility due to low sperm quality was treated by performing insemination with the patient's own sperm or donor sperm. This was followed by in vitro fertilization (IVF) in the early 1980s and intracytoplasmic sperm injection (ICSI) in the early 1990s. ICSI requires only one living sperm cell [14]. Like many other conditions, involuntary childlessness is often not caused by just one factor, nor can it always be cured with a single treatment regimen. Patients and doctors alike are often confronted with the fact that they cannot find a reason for childlessness and that a treatment for a particular case is not helpful for a person or couple with a similar problem [15]. In addition to the cause itself, other factors, such as the age of the woman or problems shared by both partners, might also influence the choice of treatment. It is therefore not surprising that patients/couples suffering from involuntary childlessness use the Internet to obtain information about their infertility [6]. Requests addressed to medical expert forums such as "Wish for a Child" can be classified according to (1) the subject matter or (2) the sender's expectation (eg, to receive a summary of the current treatment options [second opinion], to get general information about a certain disease or biological process, or to ask for advice about where to seek adequate medical help). While the first aspect is of great importance to medical experts, who must understand the contents of requests, the latter is of interest to public health experts for analyzing information needs within the population. We carried out an initial trial to automatically classify these requests using standard text-mining software such as that provided by SAS [16,17]. However, the results of our first trial were rather disappointing since the quality of classification, expressed in terms of precision and recall, did not exceed 60% [18]. To make full use of text mining with complex data, different strategies and combinations of these strategies may be needed to refine automatic classification. The aim of this paper is to present a method for the automatic classification of requests to a medical expert forum and to evaluate its performance quality. A special focus of this method should be its flexibility in allowing precise and content-related input of expert knowledge. Setting and Data The analysis is based on a sample of requests collected from the section "Wish for a Child" on the German Rund ums Baby website [12]. In this section, visitors can participate in a medical expert forum and ask questions about involuntary childlessness. Requests and answers are openly published on the website. The structure of these dialogues resembles, for example, that of The Heart Forum of the Cleveland Clinic Foundation [19]. Visitors to the website ask questions directly to a group of medical experts via a Web-based interface. The expert team consists, at the moment, of eight persons who are experts in gynecology, urology, andrology, and/or embryology. Some of them work in outpatient departments, some in reproductive clinics, and some in university hospitals. 
The expert forum is thus well equipped to give medical advice in difficult situations, to provide help in making the correct decision, to offer a second opinion, or, in some instances, even to meet psychological needs not covered by doctors. The experts work on an honorary (unpaid) basis. To date, more than 12,000 requests have been sent to the expert forum and have been published on the site. From these requests, we selected a random sample of 988 and classified them manually to provide a sound basis for training and evaluation. Manual Classification Similar to Shuyler and Knight [20], who analyzed questions to an orthopedic website in several dimensions (topics, purpose, relationship), we decided to classify the requests into two dimensions. The first dimension ("subject matter") comprised 32 categories (eg, assessment of pregnancy symptoms or information about artificial insemination). The second dimension ("expectations") comprised six different categories that characterize the goals or the purpose of the sender (eg, emotional reassurance or a recommendation about treatment options). From the very beginning of the classification process, it became apparent that many requests belong to one subject matter category but fit into more than one category of the second dimension ("expectations"). For example, a visitor asked the experts to comment on the results of a semen analysis and, at the same time, wanted some advice about whether he or she should change doctors. We decided to assign as many categories per request as appropriate. In the first dimension ("subject matter"), this request could be categorized as "semen analysis," and, in the second dimension ("expectations"), as "discussion of results" as well as "treatment options." Two of the authors (HWM, WH) independently classified the first set of 100 requests manually. Because of a high rate of differing results, we defined the categories more precisely, added and removed some categories, and agreed upon the use of multiple categories. We then classified another 100 requests. This time, strong classification discrepancies, such as each author classifying the text into a different category, occurred in only 12 cases. Some minor discrepancies also occurred, such as agreement in all categories except one additional category that was suggested by one author but not the other. This resulted in a degree of agreement of 0.69 according to the kappa statistic for overlapping categories [21]. Complete agreement was achieved after further discussion and refinement of the categories; HWM then once more manually coded the first 200 and subsequently the remaining 788 requests. The final categories of both dimensions used for classification are shown in Table 2, presented in the Results section. Preparation for Automatic Classification For automatic classification, we created a dataset that contained the text from each request as a separate observation. The text was then parsed into separate words or noun groups. "Parsing" entails several techniques: (1) separation of the text into terms (eg, "uterus") or multi-word terms (eg, "uterus milieu"), (2) normalization of different formats for dates (eg, 26/02/2008; Feb. 26, 2008) and data (eg, various degrees of temperature), (3) recognition of synonyms, and (4) stemming of verbs, nouns, or (in German) adjectives to their root form (eg, "transfer," "transferred," "transferring"). 
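As an illustration only (this is not the SAS Text Miner pipeline used in the study, and the synonym and suffix lists are invented examples), the following Python sketch shows the kind of tokenization, synonym mapping, and crude suffix stemming these parsing steps involve.

import re

SYNONYMS = {"in-vitro-fertilisation": "ivf"}          # invented example mapping
SUFFIXES = ("ungen", "ung", "ing", "ed", "en", "s")    # crude stemming rules

def parse(text):
    """Tokenize, map synonyms, and strip common suffixes (very rough)."""
    tokens = re.findall(r"[a-zäöüß-]+", text.lower())
    terms = []
    for tok in tokens:
        tok = SYNONYMS.get(tok, tok)
        for suffix in SUFFIXES:
            # only strip when a reasonably long root remains
            if tok.endswith(suffix) and len(tok) > len(suffix) + 3:
                tok = tok[: -len(suffix)]
                break
        terms.append(tok)
    return terms

# "transferred" and "transferring" both reduce to the same root, "transferr".
print(parse("Transferred embryos; transferring went well after in-vitro-fertilisation."))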
Programs, such as SAS Text Miner, perform this automatically and provide a complete list of all words, noun groups, and so on appearing in the text. The two authors who first categorized the requests by hand compiled a detailed starting list [16] of about 10,500 relevant terms in order to include all relevant content words, even misspelled words and abbreviations. Since we focused on these words, greetings and function words such as "hello," "the," or "of" are not included and therefore have no effect on the classification. As a next step, we clustered similar terms to create 4109 groups of terms called "parents" (for examples, see Table 1). The final dataset was a large table consisting of 988 rows (one row for each document analyzed) and 4109 columns (one column for each parent). The words in each document were analyzed to register how often each parent was represented in the text. Text-Mining Strategies To reduce the final dataset consisting of 988 rows and 4109 columns, we used three techniques (as different text-mining procedures): (1) indicator variables on the basis of Cramer's V, (2) principal component analysis (PCA), and (3) singular value decomposition (SVD). The first strategy was developed by the authors. The second strategy used the indicator variables from the first strategy as input for PCA. The third strategy made use of a standard SVD procedure from statistical software, SAS Text Miner (SAS Institute, Cary, NC, USA). Cramer's V We calculated the average Cramer's V statistic for the association of each of the 4109 "parents" with each category and subsequently generated indicator variables that sum, for each category, all Cramer's V coefficients over the significant words. Cramer's V is a chi-square-based measure of association between nominal variables, with "1" indicating a complete positive association and "0" indicating no association at all. The coefficients were normalized according to the length of the texts (ie, the number of words). The selection criterion for including a parent term's Cramer's V was the error probability of the corresponding chi-square test. Its significance level was alternatively set at 1%, 2%, 5%, 10%, 20%, 30%, and 40%, leading to seven indicator variables per category. Principal Component Analysis We conducted PCA to reduce the seven indicator variables of varying significance levels per category into five orthogonal dimensions. PCA transforms a number of correlated variables into a few uncorrelated variables [17]. Each principal component is a linear combination of the original variables, with coefficients equal to the eigenvectors of the correlation matrix. PCA can be used for dimensionality reduction in a dataset by retaining those characteristics of the dataset that contribute most to its variance. The data are transformed to a new coordinate system such that the greatest variance by any projection of the data comes to lie on the first coordinate (called the principal component), the second greatest variance on the second coordinate, and so on [22]. Singular Value Decomposition The 500-dimensional SVD was based on the standard settings of the SAS Text Miner software [23]. To understand SVD, the whole text of all requests can be visualized as a document-by-term matrix, as described above. The text from each individual request (rows) is divided into its parent terms (columns) by listing the frequency of each term in a given text. Documents are represented as vectors of length m, where m is the number of unique terms indexed in the text. 
The original document-by-term matrix is transformed, or decomposed, into smaller matrices, thus creating a factor space. An SVD projection is a linear combination of the singular values in a row or column of the term × document frequency matrix. A high number of SVD dimensions usually summarizes the data in a better way but requires significant computing resources [24,25]. Statistical Analyses The sample was split into 75% training data and 25% test data. On the basis of our predictor variables (ie, 38 × 7 Cramer's V indicators, 38 × 5 principal components, and approximately 500 SVDs), we trained logistic regression models to predict the categories. However, if all these predictor variables were used in a single regression model, it would be rather unlikely to detect any significant variables since many of them are highly correlated. Therefore, we chose a more appropriate modelling approach, a stepwise logistic regression. The choice of predictive variables was carried out by an automatic procedure. To assess the most appropriate model for a classification, we used the following selection methods: (1) Akaike Information Criterion, (2) Schwarz Bayesian Criterion, (3) cross-validation misclassification of the training data (leave-one-out), (4) cross-validation error of the training data (leave-one-out), and (5) variable significance based on an individually adjusted variable significance level for the number of positive cases. For a more detailed description of most of these selection criteria, see Beal [26]. For each target category, each selection criterion, and each type of input variable (Cramer's V indicators, principal components, SVDs), we trained one logistic regression. This resulted in 1369 logistic regression models. The detailed notes and the table in the Multimedia Appendix make this procedure more transparent. For the final regression, we used the meta-models that proved best for each of the 38 categories. The complete training process produced an automatic method to evaluate both requests from the training sample and new requests. The corresponding software program is called score code. This score code is a function that generates, for any text (request), the probability of belonging to each of the 38 different categories. To assess the accuracy of our approach, we calculated recall and precision, standard statistics in information retrieval and text mining, for each of the 38 categories. Precision is the percentage of positive predictions that are correct (ie, a sort of specificity), whereas recall is the percentage of documents of a given category that were retrieved (sensitivity). We calculated recall and precision at the maximum F-measure [27]. To determine whether our approach yielded better results for precision and recall in the subject matter dimension or the expectation dimension, we compared the macroaverage values for precision and recall between both dimensions [28]. All statistical analyses were performed with SAS 9.1 (SAS Institute, Cary, NC, USA). 
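To make the indicator-variable construction described above concrete, here is a minimal Python sketch (illustrative, not the SAS code used in the study; it computes a single indicator at one significance level, whereas the study built seven indicators per category). Each parent term is tested for association with membership in one category, and the significant Cramer's V coefficients are summed per document, normalized by document length.

import numpy as np
from scipy.stats import chi2_contingency

def cramers_v_indicator(counts, labels, alpha=0.05):
    """counts: (n_docs, n_terms) parent-term frequency matrix;
    labels: (n_docs,) 0/1 membership in one category.
    Returns one indicator value per document."""
    n_docs, n_terms = counts.shape
    present = counts > 0
    coefs = np.zeros(n_terms)
    for j in range(n_terms):
        # 2x2 table: term present/absent vs. category member/non-member
        table = np.array([
            [np.sum(present[:, j] & (labels == 1)), np.sum(present[:, j] & (labels == 0))],
            [np.sum(~present[:, j] & (labels == 1)), np.sum(~present[:, j] & (labels == 0))],
        ])
        if table.sum(axis=0).min() == 0 or table.sum(axis=1).min() == 0:
            continue  # degenerate table: term or category level entirely absent
        chi2, p, _, _ = chi2_contingency(table)
        if p < alpha:  # keep only significant associations
            coefs[j] = np.sqrt(chi2 / n_docs)  # Cramer's V for a 2x2 table
    doc_len = counts.sum(axis=1).clip(min=1)
    return (present.astype(float) @ coefs) / doc_len  # length-normalized sum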
Results Table 2 shows the results of our manual classification of the 988 documents. A total of 102 (10%) documents fell into the category "in vitro fertilization (IVF)," 81 (8%) into the category "ovulation," 79 (8%) into "cycle," and 57 (6%) into "semen analysis." These were the four most frequent categories in the subject matter dimension (consisting of 32 categories). The expectation dimension comprised six categories; we classified 533 documents (54%) as "general information" and 351 (36%) as a wish for "treatment recommendations." Automatic Classification We used different selection criteria to find the best regression models for training and validation. In about half of the categories, the generation of indicator variables based on the chi-square analysis proved to be the best approach for automatic classification. Other categories were best predicted by using either PCA or SVD. Statistical details are shown in the Multimedia Appendix. Precision and recall of 100% were realized in 18 out of the 38 categories on the validation sample (see Table 2). The lowest rates for precision and recall were 75% and 61%, respectively. The rates for precision and recall were, on average, somewhat lower in the expectations dimension (78.2% and 74.8%, respectively) compared to the subject matter dimension (93.6% and 96.4%, respectively). Table 3 and Table 4 provide exemplary impressions of the power and the limits of the chi-square analysis. Table 3 lists the most significant words in the category "general information." Interestingly, nearly all of the first 50 words for the category "general information" were negatively associated. This means that the word "injection," for example, is a strong indicator that a document containing this word does not belong to this code. The 51st word ("fertile") was the first one with a positive Cramer's V; it represents a typical question about the fertile days of the menstrual cycle. For nearly all other categories, the most predictive words were positively associated with the respective category. Table 4 lists the most significant words for the categories "oviduct" and "examination of the oviduct." These categories have been listed separately because "oviduct" was mainly associated with lay requests about reproductive medicine in general, while "examination of the oviduct" was used in conjunction with questions about specific treatments or treatment options. Some of the predictive words (eg, "tube," "fallopian tube," "level") are the same in both categories. For example, the word "tube" appears in all requests that we categorized by hand as "oviduct" (n = 16), showing a strong predictive value (Cramer's V of 0.44). However, this word also appears in 79% of the requests that were categorized as "examination of the oviduct" (n = 19). Again, Cramer's V was high (0.37), also signalling the strong predictive value of this word for "examination of the oviduct." In this situation, only the sum of the Cramer's V statistics as an indicator variable, and not any single word alone, guaranteed high precision and recall. For some categories, the input variables also included variables from other categories, most often with a negative sign. For example, the meta-model for "pregnancy test" included a sample of words (as an indicator variable) predictive for the category "menstruation" with a negative sign. This means that the absence of words predictive for "menstruation" was a strong indicator for the category "pregnancy test." For other categories, consideration of a sender's expectation also contributed to a better classification of requests. 
For example, the meta-model for "hormones" included a sum of relevant terms (on the basis of Cramer's V) as well as significant terms expressing the expectation to learn more about one's own situation or to have laboratory data interpreted (both with negative signs, meaning that the absence of these expectations was, among other things, an indicator for "hormones"). Exemplary Comparison Between Automatic and Manual Classification To give a more vivid picture of the results of our method, we present some of the visitors' requests, including our own manual classification and the automatic classification with scoring values for the probability of falling into a particular category (see Table 5). The first example is a very short request in which the sender wants to know whether a short cycle could be caused by a particular hormone. The automatic classification did not find the central topic of the request, probably because the term "prolactinspiegel" (prolactin level) was not recognized as "hormones." The subject category with the highest probability was "cycle," with a probability of only 2%, meaning that no classification was automatically assigned. In the two other examples, all our manual codes were recognized by the automatic classification. This was also the case in most other requests, reflecting the high sensitivity of our approach. In several instances, including two of the three examples presented in Table 5, the automatic classification gave a high score not only for the correct subject category (as determined by the authors) but also for additional subject categories. In the second example, there was a high score for "stimulation" (in addition to the correct "IVF"), and the categories "clomifen" and "stimulation" scored highly in the third example (together with the correct category "multiples"). Consequently, precision, which is a measure of specificity, was not always entirely satisfactory. Some of these additional classifications, such as "stimulation" in the second and third examples, are provoked by the word "stimulation" or other misleading words in the request. While the additional categories in the automatic classification are not entirely correct, they are also not completely wrong. In all three of these examples, our classification according to the expectation of the sender was confirmed by the automatic classification with different probabilities. Only in the last example did the automatic classification also select "treatment options," which in fact is not entirely incorrect. (Third example from Table 5. Expert classification: multiples; general information; current treatment. Automatic classification: multiples (98%); clomifen (68%); stimulation (54%); general information (67%); current treatment (98%); treatment options (53%).) Discussion A combination of different text-mining strategies was used to classify requests to a medical expert forum into one or several of 38 categories, representing either the subject matter or the sender's expectations. This combined strategy yielded rates of precision and recall above 80% in nearly all categories. Even in the worst-classified categories, the rates were at least above 60%. Meaning of the Study In order to evaluate these results, the exceptional character of this text-mining task should be considered. The documents to be classified were complex, sometimes rather long, and, most importantly, needed to be classified not only according to their content but also according to their (sometimes subliminal) expectations. 
We were able to show that a combination of different text-mining procedures was superior to a single method. Two factors particularly contributed to this success: (1) an elaborated starting list and (2) a combination of chi-square statistics, PCA, and an SVD method. These factors mirror a recommendation and an experience reported by Balbi and Meglio [29], who built their specific text-mining strategy according to the "nature" of the data. The creation of good starting (or stopping) lists is necessary to obtain valid and useful results, and comprehensive domain knowledge is essential for creating reasonable lists in the first place. The lists described here contain valuable expert knowledge in the field of involuntary childlessness. It seems reasonable to suppose that creating synonym lists in other medical areas could also be a powerful tool for successful text mining in other Internet forums. In their extensive paper on predictive data mining, Bellazzi and Zupan [30] stress the importance of the additional knowledge that domain experts can contribute to the modeling methods.

The starting list demonstrated its full potential when used to generate indicator variables that summed all Cramer's V values for each request and each category over the significant words. This way, we escaped the danger of overestimating the predictive power of single words, especially if words are negated (eg, "I'm not interested in IVF" or "my cycle is not normal"). Nearly all words predictive of the category "general information" were negatively associated in the chi-square statistic. This seems to be a "perfect" finding and evidence for our content-related approach, since any treatment with injections, for example, would belong to the categories "treatment options," "interpretation," or "current treatment" rather than the category "general information." It is precisely the lack of technical terms or results from prior investigations that defines this category.

Experts usually classify requests, such as the ones we analyzed, in a dichotomous way (ie, either they do or do not belong to a respective category). In contrast, automatic classification with a scoring system similar to the one presented in this paper gives a probability for any given request to fall into any of the categories. Especially in the case of complex texts, it seems appropriate to classify them into multiple dimensions and multiple categories. We defined a cutoff of 50% for our scoring system (ie, we defined a request to fall into a category if the respective score was over 50%). At the same time, it is possible to change the cutoff according to the purpose of an analysis. For example, if we are interested in recognizing possible health needs, a 50% cutoff may contribute to a high recall (sensitivity) so that we do not miss relevant requests. If we are interested in high precision (ie, specificity of classification) in order to sort the requests and thus support the experts' work, a higher cutoff may be reasonable. Our analysis procedure permits an easy assignment of different cutoff values.

There is another reason why this scoring procedure seems adequate or even superior to a dichotomous expert classification. When we analyze the sender's expectation, we are usually confronted with a mix of different expectations. In many cases, we classified a request into several expectation dimensions. This is intuitively better represented by a scoring procedure such as the one presented in this paper.
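A hedged sketch of how the indicator variables and the adjustable cutoff described above could be realized (hypothetical names; the regression meta-models themselves are not reproduced here):

```r
# Sketch: collapse the significant words of a category into one indicator
# variable by summing their signed Cramer's V over the words a request
# contains, then apply an adjustable probability cutoff to the model score.
indicator_score <- function(request_words, sig_words, v_signed) {
  sum(v_signed[sig_words %in% request_words])
}

# Raise the cutoff for precision (expert triage), lower it for recall
# (screening for unmet health needs); 0.5 is the default used above.
assign_categories <- function(category_probs, cutoff = 0.5) {
  names(category_probs)[category_probs > cutoff]
}
```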
Even the subject matter classifications that we employed in our manual procedure as separate (disjunct) categories may not be as clear-cut as they seem in many requests. It is rather likely that a given request may fall into more than one subject matter, as demonstrated by the examples in Table 5, so that in these cases a scoring procedure that also permits overlapping categories seems most appropriate [6,20]. In contrast, most studies, even those that have used a multidimensional categorizing scheme such as Shuyler and Knight [20], only permit one category per dimension.

As SVD is a powerful method for automatic classification, it seemed quite logical that this approach proved best for predicting categories in about a quarter of instances. However, there is sometimes reluctance to use SVD-based classification strategies because this process can be controlled only to a limited degree [31]. In other words, text mining based on SVD is a procedure that cannot be consciously monitored: as a sort of black box, it automatically runs in the background, and we have to rely on the validity of the procedure. In contrast, according to Reincke [31], the data-mining process should be mapped into a continuous IT flow that controls the entire information chain from the raw data, through cleaning, aggregation and transformation, analytical modeling, and operative scoring to, last but not least, final deployment. In this sense, our analysis is actually far more transparent, as demonstrated by the predictive words given in Table 3. That is to say, our analysis not only yields good rates for precision and recall, but it also provides us with a complete view of the analytic process and thus contributes to face validity.

In the last decade, the medical profession has witnessed new developments whereby patients have become their own experts, often through the adoption of strategies to empower themselves [32], often supported by the Internet [33,34] and by email consultation services for electronic patient-caregiver communication [35]. A crucial factor in being able to make use of all of this potential information is time. The Internet is a rapid medium, and when questions go unanswered for a few days, users are disappointed and may even resend their queries, as Marco et al [3] experienced in their Internet survey on AIDS and hepatitis. A complex technological solution such as that presented in this paper may effectively help medical experts to assess the information needs of requests in advance and to accelerate response times. Once the information needs have been understood, it will also be possible to find similar previous requests, allowing experts to make efficient use of their earlier answers. This technology can therefore be used both to enable experts to answer requests promptly and to lighten their workload.

As a further advantage of our approach, we would like to emphasize our comprehensive list of categories. To date, analyses of email requests [5,6] have tried to categorize such requests into more or less simple categories, especially to learn more about information needs and the possible workload of experts. In contrast, we have been far more specific in the classification of the information needs, with 32 categories representing the subject matter dimension. This detailed classification is exactly what experts need if machine-based analysis is to support their work.

Limitations of the Study

The classification of requests according to the senders' expectations could be improved.
That this process is not yet optimal may be due to the somewhat vague definition of what exactly constitutes a certain patient's expectation, and this requires improvement if health experts are to draw conclusions about the health needs of a population. However, the overall performance of the subject classification seems sufficient, so much so that semiautomated answers to senders' requests, in this medical area, may be a realistic option for the future.

Future Considerations

We consider there to be three relevant applications of our text-mining procedures in the near future:

1. If our scoring procedure proves successful in further tests, it could be integrated into the Rund ums Baby website to facilitate semiautomated answer proposals to be used by the experts and, in cases where classification accuracy is high, direct automated answers to the patients [36]. A multidimensional classification of texts, as in our approach, may be especially appropriate for this purpose since we recognize not only the plain content (ie, subject matter) but also the sender's expectations, something like a hidden subtext.

2. A retrospective application of the scoring procedure to all accumulated requests would allow their mapping into different categories, thus providing an objective historical seismograph and allowing a better understanding of medical and psychological needs that have yet to be met by the current health care system.

3. The scored database forms the basis for a sophisticated FAQ Internet page that does not address those questions and issues considered by experts to be the most important, as is usually the case, but one that is more oriented to the real needs of visitors and patients.

We are not aware of any studies that have tried to analyze similarly complex texts in Internet forums. Further studies are therefore needed to compare and refine our methodology. It should then also be possible to decide which aspects of our text-mining strategies (the expert-based synonym list or the combination of different strategies) were most important for the success of our automatic classification.

Conclusions

Our analysis suggests a way of classifying and analyzing complex documents to provide a significant as well as valid information source for politicians, administrators, researchers, and/or counselors. In the case of involuntary childlessness, it will be possible not only to fulfill patients' information and health needs with this Internet expert forum, but also to analyze and follow up these needs over long periods of time. These techniques also seem promising for the analysis of large samples of documents from other Internet health forums, chat rooms, or email requests to doctors.
IntEREst: intron-exon retention estimator

Background

In-depth study of the intron retention levels of transcripts provides insights into the mechanisms regulating pre-mRNA splicing efficiency. Additionally, detailed analysis of retained introns can link these introns to post-transcriptional regulation or identify aberrant splicing events in human diseases.

Results

We present IntEREst, Intron-Exon Retention Estimator, an R package that supports rigorous analysis of non-annotated intron retention events (in addition to the ones annotated by RefSeq or similar databases) and supports intra-sample as well as inter-sample comparisons. It accepts binary sequence alignment/map (.bam) files as input and determines genome-wide estimates of intron retention or exon-exon junction levels. Moreover, it includes functions for comparing subsets of user-defined introns (e.g. U12-type vs U2-type), and its plotting functions allow visualization of the distribution of the retention levels of the introns. Statistical methods are adapted from the DESeq2, edgeR and DEXSeq R packages to extract the significantly more or less retained introns. Analyses can be performed either sequentially (on a single core) or in parallel (on multiple cores). We used IntEREst to investigate U12- and U2-type intron retention in human and plant RNAseq datasets with defects in the U12-dependent spliceosome due to mutations in the ZRSR2 component of this spliceosome. Additionally, we compared the retained introns discovered by IntEREst with those of other methods and studies.

Conclusion

IntEREst is an R package for analysis of intron retention and exon-exon junction levels in RNA-seq data. Both the human and plant analyses show that the U12-type introns are retained at a higher level compared to the U2-type introns already in the control samples, but the retention is exacerbated in patient or plant samples carrying a mutated ZRSR2 gene. Intron retention events caused by ZRSR2 mutation that we discovered using IntEREst (DESeq2-based function) show considerable overlap with the retained introns discovered by other methods (e.g. IRFinder and the edgeR-based function of IntEREst). Our results indicate that increases in both the number of biological replicates and the depth of the sequencing library promote the discovery of retained introns, but the effect of library size gradually decreases with more than 35 million reads mapped to the introns. Electronic supplementary material: the online version of this article (10.1186/s12859-018-2122-5) contains supplementary material, which is available to authorized users.

Background

Alternative pre-mRNA splicing is a cellular process in eukaryotes that generates multiple transcripts from a single gene. Of the various types of alternative splicing (reviewed by Hamid and Makeyev [1]), intron retention (IR) events have been less well characterized than the alternative splicing events that are more frequent in mammals, such as exon skipping and the choice of alternative 5' splice sites (5'ss) and 3' splice sites (3'ss). While the best characterized IR events have been detected in humans with diseases caused by mutations in the core pre-mRNA splicing machinery, recent work has established that regulated IR events are also part of the normal regulation of gene expression [2,3] and function in important biological processes such as cellular differentiation [4]. Furthermore, in some taxa such as plants, IR is one of the most prominent mechanisms of alternative splicing [5].
A well-established example of IR involves U12-type introns (also called minor introns), which are spliced less efficiently compared to the U2-type (major) introns [6]. The classification into major U2-type introns and minor U12-type introns derives from the coexistence of two parallel pre-mRNA splicing machineries in the cells of most metazoan species. The majority of metazoan introns are excised by the "major" U2-dependent spliceosome, and are therefore referred to as U2-type or major introns. A small subset of metazoan introns, approximately 0.35% or roughly 700-800 introns in mammals, are excised by a parallel U12-dependent spliceosome, also known as the minor spliceosome [7]. The targets of the U12-dependent spliceosome are minor introns, which feature highly conserved but divergent 5'ss and branch point sequences (BPS), making it possible to identify these introns computationally [8,9]. One of the main characteristics of the minor spliceosome is that it is less efficient compared to the major spliceosome [10][11][12]. As a result of the inefficient splicing, elevated levels of transcripts containing unspliced minor introns are retained in the nucleus and targeted by nuclear RNA decay pathways [6]. Moreover, disease-causing mutations in the snRNA and protein components of the minor spliceosome (e.g. U4atac and U12 snRNAs, U11/U12-65K and ZRSR2 proteins) show, among other splicing defects, a further increase in IR levels of the U12-type introns [13][14][15][16][17][18][19].

Various alternative splicing analysis tools have been developed [20][21][22]; however, few tools exist that focus on extracting novel intron retention (IR) events and performing differential IR analysis [23]. For a robust analysis of retention levels of introns within and between various samples we developed IntEREst (Intron-Exon Retention Estimator), which is based on the intron retention analysis used in Niemelä et al. [6]. IntEREst accepts standard binary sequence alignment/map (.bam) files as input and estimates the genome-wide retention levels of introns using sequencing reads mapping to introns, intron-exon boundaries, or exon-exon junctions. The results are provided both as IR fold changes and as relative PSI or Ψ (percent spliced in) [24] values, and can be further analyzed by any of the several statistical packages included, e.g. a differential intron retention test based on the "exon usage test" provided by DEXSeq [25,26], a differential IR test based on the count data differential analysis tools provided by DESeq2 [27], or the exact test, generalized linear models and quasi-likelihood test adapted from edgeR [28,29]. The statistical tests calculate p-values based on the null hypothesis that IR does not vary across the analyzed sample groups. The resulting p-values estimated for each intron allow subsequent identification of introns that show a statistically significant difference of IR between the sample groups. IntEREst also provides tools for plotting the distribution of retention levels of the introns of interest within single or multiple samples. In addition, large datasets that demand significant computation time can be analyzed in parallel on multiple computing cores. IntEREst is available as a Bioconductor package and, together with its manuals, is accessible through https://bioconductor.org/packages/release/bioc/html/IntEREst.html.
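Assuming a standard Bioconductor setup, the package can be obtained via the usual BiocManager route (generic installation boilerplate, not specific to this paper):

```r
# Standard Bioconductor installation route for the package.
if (!requireNamespace("BiocManager", quietly = TRUE))
  install.packages("BiocManager")
BiocManager::install("IntEREst")
library(IntEREst)
```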
Implementation

IntEREst is an R package that provides functions to measure the retention levels of introns, perform statistical differential intron retention analysis across various samples, and plot the distribution of retention levels of different types of introns across various samples. The main design aim of IntEREst has been to support analysis of relatively low-level IR values (>10%) that are more challenging to handle with the existing software [24] but are typical for the U12-type introns [6] and for U2-type introns in human diseases with a mild defect in spliceosome function. In such cases the commonly used Ψ values, particularly with default cutoffs, may underestimate the extent of IR. Specifically, the advantages of IntEREst are the ability to use multiple test samples and controls; the possibility to define complex experimental designs (incorporating sample annotations such as age and sex) for IR comparisons across samples; parallelization of the computation across multiple nodes/cores; integration into the Bioconductor environment; and the use of both intronic and exon-junction reads, either alone or together, to estimate IR levels. Additionally, besides providing a global IR analysis, IntEREst supports analysis of user-defined subsets of introns, e.g. U12-type and U2-type introns.

The RNAseq read summarization functions (i.e. interest() and interest.sequential()) accept a .bam read alignment file and a reference as inputs, and output the raw (un-normalized) and normalized number of fragments mapping to each exon or intron. The reference includes the coordinates of exons and introns together with their annotations, such as gene and transcript names, and an intron type identifier. The reference can be built using the referencePrepare() function supported by IntEREst. Note that the intron classes used in our analysis are U12- and U2-type introns, but the application of IntEREst is not limited to the comparison of these intron types. Other classifications can be defined by the user, and the retention levels of the introns can be plotted and compared across the user-defined classes. The functions in the IntEREst package that are specific to the comparison of U2- vs U12-type introns, e.g. u12Boxplot(), u12DensityPlot() and u12Index(), start with "u12".

IntEREst features two functions that estimate the raw and normalized intron retention levels: 1) interest(), capable of running in parallel on multiple computing cores, and 2) interest.sequential(), which runs sequentially on a single computing core. These functions use the bpiterate() function from the BiocParallel R Bioconductor package [30] to read and analyze the mapped reads, m reads at a time (by default m = 1 million), to comply with the memory limitations of the running environment. When running interest.sequential(), the mapped reads are analyzed as batches of m reads (or read pairs if the isPaired parameter is set to TRUE) at a time on a single computing core. With interest() it is possible to analyze n batches of m reads (i.e. m×n reads or read pairs) simultaneously, distributed over n computing cores, repeating this process until all reads have been analyzed. The summarization functions interest.sequential() and interest() support two distinct analysis modes: 1) intron-exon junction estimation and 2) exon-exon junction estimation; a hedged sketch of how the two entry points might be called follows.
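The sketch below is illustrative only: the argument names are assumptions based on the description above, not the package's exact signature, so the IntEREst manual (?interest) should be consulted for real use.

```r
# Hedged sketch of the parallel summarization entry point; argument
# names are illustrative (see ?interest for the exact signature).
res <- interest(
  bamFile          = "sample1.bam",
  isPaired         = TRUE,
  reference        = ref,        # built with referencePrepare()
  bamFileYieldSize = 1e6,        # m = 1 million reads per batch
  method           = "IntRet",   # intron-exon mode; "ExEx" for junctions
  outFile          = "sample1_intRet.tsv"
)

# Single-core equivalent, analyzing the same batches sequentially:
res <- interest.sequential(
  bamFile = "sample1.bam", isPaired = TRUE, reference = ref,
  method = "IntRet", outFile = "sample1_intRet_seq.tsv"
)
```

As described above, the two functions differ only in how the read batches are scheduled across computing cores, not in the estimates they produce.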
It is possible to configure the analysis to include only the reads that map to intron-exon or exon-exon junctions; however, with the default settings, reads that map entirely to intronic or exonic regions are also included in the calculation of the retention level estimates. For a typical intron-exon junction estimation analysis, we recommend collapsing the overlapping exonic coordinates across the various splicing isoforms in the reference, to avoid biases in the IR calculation that may be introduced by the read counts of alternative exons or by exonic regions overlapping with sequences annotated as introns in other transcripts. To improve the running time and avoid repetitive processing in the exon-exon junction analysis mode, we recommend using a filtered reference resulting from the unionRefTr() function; this function identifies all repeating exons and uses only a single copy of each. Moreover, because repetitive sequence elements may bias the read mapping and thus affect the IR estimates, the read summarization functions support the possibility to exclude repeat regions and the reads that map to such regions.

The default normalization method applied in the read summarization functions is Fragments Per Kilobase per Million mapped fragments (FPKM); however, it is scaled at the transcript level (formula 1). For every intron i of a gene g with I introns, if the length of the intron is L_ig and the number of fragments mapped to the intron is X_ig, its normalized retention value will be

    FPKM_ig = 10⁹ · X_ig / (L_ig · Σ_{j=1..I} X_jg)    (1)

where the sum in the denominator runs over the fragments mapped to all I introns of the transcript; this is the transcript-level scaling referred to above. IntEREst provides a function lfc() that estimates the log2 fold change of the retention levels across two conditions; moreover, it includes a function psi() to measure the Ψ values, i.e. the percent spliced in, for all studied introns.

We have adapted several statistical tests from multiple sources for intron retention and exon-junction analysis: DESeq2 [27], edgeR [21,22], and DEXSeq [25,26]. All these methods can be used to study intron retention changes across samples at a genome-wide scale. However, the DEXSeq-based method (i.e. the DEXSeqInterest() function) differs from the others, as it uses the differential exon usage method to perform gene-wise comparisons.

Results and discussion

Genome-wide analysis of retention of U2- and U12-type introns

To demonstrate the application of IntEREst in comparing retention levels of various types of introns across several samples, we reanalyzed the RNAseq data from myelodysplastic syndrome (MDS) patients and control subjects included in the Madan et al. [17] study. Specifically, we compared the genome-wide retention levels of U12-type vs U2-type introns across the MDS samples. MDS is associated with mutations in the ZRSR2 gene, which encodes an integral protein component of the minor spliceosome. Moreover, the original analysis of the dataset reported that the ZRSR2 mutations in the patient samples led to increased retention of primarily U12-type introns, while the U2-type introns were reported to be less affected [17]. The dataset represents 16 individuals: 8 were diagnosed with MDS and featured mutations in the ZRSR2 gene (referred to as ZRSR2mut), 4 were diagnosed with MDS but lacked the ZRSR2 mutations (referred to as ZRSR2wt), and 4 were healthy individuals (HEALTHY).

We ran a genome-wide retention comparison of U12-type introns to U2-type introns. To carry out the analysis, we used RefSeq as a reference and identified and annotated 510 U12-type introns using the annotateU12() function, which uses Position Weight Matrices (PWMs) extracted from the U12DB database [9].
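As an aside on the normalization: formula (1), as reconstructed above, amounts to a one-liner in plain R (hypothetical variable names; X holds the fragment counts and L the intron lengths, in bp, for the introns of one transcript):

```r
# Formula (1) in plain R: transcript-level-scaled FPKM for the introns
# of one gene; variable names are hypothetical.
fpkm_transcript_scaled <- function(X, L) {
  1e9 * X / (L * sum(X))
}

# Example: three introns of one transcript.
fpkm_transcript_scaled(X = c(12, 3, 40), L = c(1500, 800, 5200))
```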
Next we performed the differential IR analysis using the DESeq2-based function of IntEREst (comparing the ZRSR2mut samples vs the ZRSR2wt and HEALTHY samples). The DESeq2 test was run considering the results from both the intron retention and exon-exon junction runs of the interest() function. First, using the interestResultIntEx() function, a result object was built that includes information on both the intron retention and exon-exon junction levels (see Additional file 1 for more details).

The results show an increased retention of U12-type introns in the ZRSR2mut samples as opposed to U2-type introns. Specifically, after filtering out low-retention introns and using a 0.01 adjusted p-value cutoff on the DESeq2 results, we identified 1521 introns of either the U12- or U2-type that displayed higher retention levels in the ZRSR2mut samples compared to the controls (i.e. the ZRSR2wt and HEALTHY samples). Of the 510 U12-type introns in the data, 269 (i.e. 52.7% of the U12-type introns) showed significant up-regulation of IR in the ZRSR2mut samples when compared to the controls, while none of the U12-type introns showed a significant reduction in IR (see Fig. 1a). In contrast, only 1252 of the 228,524 U2-type introns analyzed (~0.54%) showed a significant increase of IR, and 89 (~0.03%) showed a significant decrease (see Fig. 1b).

Our analysis also confirmed the earlier observation of increased intron retention levels for U12-type introns compared to U2-type introns [6,11,31], since we observed that the overall FPKM retention values (formula 1) of U12-type introns were higher than those of U2-type introns in all the samples of the MDS study, including the ZRSR2mut, ZRSR2wt and HEALTHY samples (Fig. 2a, b). However, this effect was more prominent in the ZRSR2mut samples, suggesting that the ZRSR2 mutations were exacerbating the IR of the U12-type introns. A similar increase in IR was not observed for the U2-type introns between ZRSR2mut and controls, regardless of whether they were located in genes containing U12-type introns or in other genes, or in close proximity to U12-type introns (immediately up- or downstream). Rather, the median log2 fold-change of the U2-type introns was approximately zero, whereas the median log2 fold-change of the U12-type introns was ~1.5 (see Fig. 2c). Moreover, the Jonckheere trend test [32,33] with 10,000 permutations, under the null hypothesis that the values are similar (and with the alternative that the values for the U12-type introns are higher), returned a highly significant p-value of 0.0001. In line with these results, the median of the ΔΨ values (i.e. the increase in percent spliced in when comparing ZRSR2mut samples to the controls) for all U12-type introns was about 1%, as compared to 0.6% for the U2-type introns (see Fig. 2d). Moreover, the average ΔΨ values for introns showing a significant increase in IR were ~33% and ~23% for U12-type and U2-type introns, respectively.

To further evaluate the validity and generality of our results, we compared the MDS results to similar results that we obtained from analyzing an additional Maize dataset [34] (see Additional file 1 for more details). The Maize data consist of 6 samples (3 roots and 3 shoots, referred to as RGH3mut) that feature mutations in the gene RGH3 (the ortholog of the human ZRSR2 gene) and 6 samples (3 roots and 3 shoots, referred to as RGH3wt) that lack the mutation. The results of the Maize data analysis mirror our findings with the MDS data.
Analogous to the MDS data, the RGH3mut samples showed increased IR for ~46% of the U12-type introns, while only ~0.46% of the U2-type introns showed an increase in IR (see Additional file 1: Figure S7). Together, our results suggest that IntEREst provides reliable quantification of differential IR events. Specifically, our results are not only consistent with the well-documented increased retention levels of U12-type introns [6,11,31], but are also in concordance with the molecular function of the ZRSR2 protein (and its Maize ortholog RGH3) in the recognition of U12-type introns [17,34].

Benchmarking and comparison to other methods

We evaluated the performance of IntEREst in two ways using the MDS benchmark dataset. First, we carried out an internal analysis comparing IntEREst results in conjunction with the different statistical analysis packages implemented in IntEREst. Subsequently, we carried out a comparison with both the published results of the MDS analysis [17] and IRFinder [23], a dedicated software for IR analysis. Note that all comparisons described in the following are based on the introns that were available in both references used by the compared methods.

Differential up- and down-regulated introns in methods implemented in IntEREst

We compared the three methods implemented in IntEREst for differential intron retention analysis, i.e. DESeq2, the GLM function of edgeR, and DEXSeq, referred to hereafter as IntEREst-DESeq2, IntEREst-edgeR and IntEREst-DEXSeq, respectively. DESeq2 and edgeR have previously been reported to produce somewhat dissimilar results in differential gene expression analysis [35]; the DEXSeq method, in contrast, differs in its application (see above). For the IntEREst-DESeq2 and IntEREst-edgeR comparison, we first merged the intron-exon and the exon-exon junction results (obtained by running interest() in its two running modes) using interestResultIntEx(). Subsequently, we used the deseqInterest() and glmInterest() functions (i.e. the IntEREst functions based on DESeq2 and edgeR-GLM) to analyze the change of IR relative to the change of the junction levels of the flanking exons. We used an adjusted p-value (Benjamini and Hochberg [36]) threshold of 0.01 to identify introns that are retained at a significantly higher or lower level in the ZRSR2mut samples compared to controls (see Additional file 1 for more details).

We found a significant overlap of both upregulated and downregulated introns between IntEREst-DESeq2 and IntEREst-edgeR (Fig. 3a, b), with a bias towards upregulated introns. Furthermore, of the introns not shared between the two methods, IntEREst-DESeq2 identified more introns with an increase in IR, while IntEREst-edgeR identified more downregulated IR events. The majority of the IR events not shared by the two methods (specifically those discovered by IntEREst-DESeq2 and missed by IntEREst-edgeR) display a weaker IR fold-change compared to those in the shared intron group (see Additional file 1: Figure S3). The observed differences between the two methods are in line with recent DEG analysis results [35] and are due to the variability of the methods used and an extra filtering step based on Cook's distance, which is used in DESeq2 by default. Comparison of IntEREst-DESeq2 to IntEREst-DEXSeq revealed a considerable overlap between the two methods (Fig. 3c). However, IntEREst-DEXSeq identified a large number of significantly less retained introns not identified by IntEREst-DESeq2 (Fig. 3d).
This outcome reflects the gene-wise method adopted in DEXSeq, where the variation in the retention level of each intron is compared to the relative retention variation of all other introns within the same gene, rather than solely comparing the genome-wide changes of IR levels. This results in a more symmetric distribution of up/down-regulated intron retention signals (Fig. S4). As a consequence, the significantly more and less retained introns discovered by IntEREst-DEXSeq were more than twice as frequently observed in the same genes compared to those identified by IntEREst-DESeq2. Furthermore, IntEREst-DEXSeq only considers the reads that map to either introns or exons (here the intron read counts were used) and does not support the combined use of intron retention and exon-exon junction information.

IntEREst-DESeq2 and IRFinder show extensive overlap

We next compared IntEREst-DESeq2 to IRFinder, a dedicated IR analysis software, which also uses the DESeq2 package in its downstream analysis [27]. Since IntEREst-DESeq2 counts reads that map to the exons, we used the mean of the number of reads mapping to the 5' and 3' flanking exons; in contrast, IRFinder counts the junction reads that map across the flanking exons. Running IRFinder with the default parameters extracted 250 introns showing significantly increased IR in ZRSR2mut samples, most of which (i.e. 235) overlapped with the introns discovered by IntEREst-DESeq2 (Fig. 3e). Note that IntEREst utilized more intron/exon-mapped reads than IRFinder. This was particularly evident for introns with lower retention levels, thus providing better-supported fold-change estimates for such introns (Additional file 1: Figure S5).

Fig. 2 FPKM-scaled retention levels of U12-type and U2-type introns across various samples in the MDS data, excluding transcripts that feature only introns with low average read counts over all samples (i.e. 1 read or less). a Boxplot showing FPKM-scaled retention levels of the U12-type introns (middle) as compared to their upstream and downstream U2-type introns. The thick horizontal lines in the boxplots represent the median values and the whiskers represent 1.5 times the interquartile range; the box extends from the first quartile to the third quartile. b Boxplot showing the distribution of the FPKM-scaled retention levels of U12-type introns compared to the U2-type introns in ZRSR2mut, ZRSR2wt, and HEALTHY samples. c Density plot illustrating the frequency of the fold change (log2) of the retention levels of U12-type introns, randomly picked U2-type introns, and U2-type introns upstream and downstream of the U12-type introns when comparing ZRSR2mut to the control samples of the MDS data. d Density plot illustrating the frequency of the ΔΨ values (increase of percent spliced in) of the U12- and U2-type introns when comparing ZRSR2mut to the control samples. The ΔΨ values are between -1 and 1.

Enhanced discovery of IR events in MDS samples

We further compared our IR results with the original analysis of the MDS dataset by Madan et al. [17]. We found that IntEREst-DESeq2 was able to identify most (i.e. 177 out of 205) of the significant IR events reported by Madan et al. [17], but it also discovered a large number of additional events not reported in the original study (Fig. 3f), representing both U12-type (149) and U2-type (1195) introns.
Conversely, the events that were reported in the original study but missed in our analysis all represent borderline cases featuring low fold-changes and marginal statistical significance (Additional file 1: Figure S6). Together, our results revealed that the different methods implemented in IntEREst identify a highly overlapping set of high-confidence differentially retained introns. Additionally, each method also identified IR events unique to that particular method. This provides the flexibility to select the approach best fitting the particular research question.

Fig. 3 (caption fragment): the published results of Madan et al. [17] are labeled "MDS"; all significantly more/less retained introns were extracted from the unfiltered MDS data, comparing the ZRSR2mut to the control samples.

Sample size and sequencing library size sensitivity

Finally, we studied the effect of the number of biological replicates and of the intron read coverage levels, again using the MDS dataset. To investigate the effect of biological replication, we randomly picked 2 to 8 of the ZRSR2mut and control samples of the MDS dataset for analysis with IntEREst-DESeq2 (i.e. deseqInterest()) and repeated this 10 times. As expected, the results reveal that an increasing number of biological replicates leads to the discovery of an increased number of statistically significant IR events (Fig. 4a, b). This observation is similar to what has been observed earlier in gene expression analyses [35]. A similar trend was also observed when analyzing the effect of the intron/exon read coverage levels. Here we distributed 5-50 million reads according to the relative retention levels of the introns and the exon-exon junction levels (based on the complete data) in each sample, followed by analysis with IntEREst-DESeq2. In these analyses we assumed that the quality and read coverage are equal in all the individual MDS datasets. As a result, we observed that an increase in the sequencing library size leads to the discovery of an increasing number of introns showing statistically significant deviations in IR levels. However, the slope of the increase in the number of discovered IR events decreases and levels off at the highest library sizes (more than 35M; Fig. 4c).
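The replicate-resampling scheme just described can be sketched as follows; the sample vectors and the counting wrapper are hypothetical stand-ins (a real analysis would call deseqInterest() and count introns with adjusted p < 0.01):

```r
set.seed(1)
zrsr2mut_samples <- paste0("MUT", 1:8)   # placeholder sample IDs
control_samples  <- paste0("CTL", 1:8)

count_significant_ir <- function(mut, ctl, padj_cutoff = 0.01) {
  # A real analysis would run the DESeq2-based test on the chosen samples
  # and count introns with adjusted p < padj_cutoff; stubbed out here.
  NA_integer_
}

# Draw k ZRSR2mut and k control samples, test, repeat 10 times per k.
n_signif <- sapply(2:8, function(k) {
  mean(replicate(10, {
    count_significant_ir(sample(zrsr2mut_samples, k),
                         sample(control_samples, k))
  }))
})
```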
Conclusion

Here we present IntEREst, an R package for intron retention and exon-exon junction analysis. Our method is able to extract the significantly retained introns and carry out intra- and inter-sample comparisons of the retention levels of introns and of exon junction levels. We used IntEREst to analyze the publicly available MDS data [17], and our results confirm that mutations in the ZRSR2 gene, a component of the minor spliceosome involved in recognition of the 3' splice site of U12-type introns, lead to increased IR particularly of the U12-type introns. Furthermore, our results show that, compared to the U2-type introns, the IR of U12-type introns is already higher in the control samples, but the mutations in the ZRSR2 gene further exacerbate the IR in the patient cells. These conclusions are further supported by our analysis of the Maize data with a mutation in the plant ortholog of the ZRSR2 gene, which, similarly to the human data, also shows a strong bias towards increased IR of the U12-type, but not the U2-type, introns. The introns showing significantly higher or lower IR in the ZRSR2mut vs control samples of the MDS dataset that we discovered using IntEREst-DESeq2 (Additional file 2) overlap with the introns identified by IRFinder and IntEREst-edgeR. Furthermore, our results not only detect the same IR events reported in the original study by Madan et al. [17], but also include additional significant IR events featuring both U12- and U2-type introns. The resampling analysis of ZRSR2mut vs control samples shows that, by including more biological replicates and considering a larger sequencing library size, an increasing number of significant IR events can be discovered. While the maximum number of biological replicates (eight) used in this study is not sufficient to estimate the optimum required for IR discovery, we note that library sizes with more than 35M mapped reads start to approach the point where the improvements in detecting novel IR events are marginal. In sum, we believe that IntEREst is a reliable tool in the R/Bioconductor environment for detailed intron retention analysis of RNAseq datasets.

Availability and requirements

IntEREst is implemented as an R package freely available from the Bioconductor repository. Project name: IntEREst. Archived version: 1.2.2. Project home page: https://github.com/gacatag/IntEREst/

Fig. 4 The effect of sample size and sequencing library size. a The number of significantly more retained introns in ZRSR2mut samples vs controls, relative to the number of biological replicates. b Similar to panel a but for the significantly less retained introns in the ZRSR2mut samples. c The number of significantly more retained introns in ZRSR2mut samples vs controls, relative to the number of reads mapped to the introns and exons. A p_adj < 0.01 threshold was used for all analyses. The data points on the far right in each panel (8 biological replicates in panels a and b; ~60M reads in panel c) represent the complete MDS dataset used in the analysis. This leads to zero variance in panels a and b, because the resampling size for the complete data is 1 (8 ZRSR2mut vs 8 controls), and to fewer significantly differential IR events compared to the resampled data in panel c due to the variable dataset sizes. The conditions of the resampled data in panel c are idealistic, as in these analyses the overall mapped reads for all samples are assumed to be equal (as opposed to the MDS data, where they vary from 51-75 million); hence the numbers of retained introns are higher compared to the real MDS data.
Nanowire melting modes during the solid-liquid phase transition: theory and molecular dynamics simulations

Molecular dynamics simulations have shown that, after initial surface melting, nanowires can melt via two mechanisms: an interface front moves towards the wire centre, or the growth of instabilities at the interface causes the solid to pinch off and break up. By perturbing a capillary fluctuation model describing the interface kinetics, we show when each mechanism is preferred and compare the results to molecular dynamics simulation. A Plateau-Rayleigh-type instability is found, suggesting that longer nanowires will melt via the instability mechanism, whereas in shorter nanowires the melting front will move closer to the centre before the solid pinch-off can initiate. Simulations support this theory: the wavelengths of the preferred modes that destabilise the interface are proportional to the wire length, with longer nanowires preferring to pinch off and melt; shorter wires have a more stable interface close to their melting temperature, and prefer to melt via an interface front that moves towards the wire centre.

Nanostructured objects have lower stability with respect to their molten phase due to their large surface-area-to-volume ratios [1][2][3]. In the case of nanowires, their stability has been studied at elevated temperatures both experimentally [4][5][6] and theoretically 7,8, indicating the presence of Plateau-Rayleigh (PR) type instabilities which can cause a nanowire to neck and break up into a chain of nanospheres. In fact, PR-like instabilities have been used as a means of self-assembly of chains of nanospheres for several different initial geometries, ranging from rings 9 and wires 10 to thin films 11,12. PR theory generally predicts that the wavelengths of the perturbations which cause a liquid wire to become unstable are proportional to the initial wire circumference (i.e. wires become unstable when λ > 2πR_0). Moreover, linear stability analysis predicts a preferred wavelength that will drive a liquid wire to break up. While much work has been done regarding nanocluster stability during solid-liquid coexistence 13, and the stability of liquid nanojets 14,15, nanocylinders 16, and even cylindrical metal alloys 17, relatively few studies address the stability of the solid core of nanowires close to their melting point. It was found that for finite-sized cylinders during phase coexistence, differences in curvature and fluctuations would lead to the formation of random breaches at the material interface, causing the growth of instabilities which lead to the melting of the solid 18. For finite-sized boxes, different crystal geometries are realised by overcoming nucleation barriers, where a crystal nucleus surrounded by its own fluid could change from a slab geometry to a cylinder, and then to a solid droplet. This suggests that the solid prefers metastable forms as the box approaches the freezing (or melting) density 19, indicating that the stable geometries of a given phase depend on the volume fraction of said phase (or medium) relative to the system volume [20][21][22]. Recent work has studied the breakage of gold nanofilaments connecting two nanoparticles, where the filaments would break apart under Joule heating 23. Moreover, it was observed that the temperature at the breakage point had a strong dependence on the filament width and, in some but not all cases, on the length 23.
The thermally induced breaking of nanowires becomes important when considering the role they play in devices that utilise nanowire networks. Heat can be generated in nanowire networks via current passing through the network, and as such can influence the morphology and breakup of the nanowires making up the network 24,25. This could be a hindrance to device stability, and it is therefore important to understand the limitations of interconnecting materials like nanowires. In this paper, we investigate the stability of metal nanowires as they approach their melting temperature, for copper nanowires of varying lengths and radii. To describe the nanowire stability, we perturb a capillary fluctuation model that describes the kinetics of the solid-liquid interface. The model is then tested against molecular dynamics (MD) simulations, where it is found that longer nanowires are more unstable with respect to the melt.

Results

Capillary fluctuation model. Melting at the nanoscale is thought to initiate at the surface and move from the outside inwards, with the interface consuming the solid as it melts. However, observations in nanowires show that the solid will begin to neck and break up as the nanowire approaches its melting point T_m 8. Figure 1a shows a top-down view of a nanowire at a temperature T that sits between its surface melting temperature T_s and melting temperature T_m (where T_m is the melting temperature of the finite-sized nanowire). Figure 1b shows that as T → T_m the solid is consumed radially as the interface moves towards the wire centre. Figure 1c, d shows the melting instability mechanism. Figure 1c shows a side-on view of a nanowire where T_s < T < T_m. However, as T → T_m, rather than the interface moving towards the centre, a portion of the solid begins to thin out and neck, as seen in Fig. 1d, initiating the solid breakup and causing the remaining solid to be consumed.

We first consider the Gibbs free energy difference per unit length of an infinitely long solid cylinder of radius r surrounded by its own melt close to T_m (see supplementary theories of ref. 18),

    ΔG = 2πr γ_sl − πr² L_v ΔT / T_c    (1)

The value of r that makes this expression stationary represents the equilibrium solid radius for an infinitely long nanowire in the vicinity of T_m,

    r* = γ_sl T_c / (L_v ΔT)    (2)

Here, γ_sl represents the solid-liquid interfacial energy, L_v is the bulk latent heat of melting per unit volume, T_c is the melting temperature of the bulk material, and ΔT = T_c − T is the undercooling.

Now we look at the interface velocity for a cylindrical nucleus 18. For an infinite flat interface there is no curvature undercooling, and if T < T_c, the solid-liquid interface will propagate towards the liquid phase with a velocity V [26][27][28][29],

    V = V_0 [1 − exp(−Q / k_b T)]    (3)

where V_0 represents a maximum velocity that depends on temperature, Q is defined to be a thermodynamic driving force, and k_b is Boltzmann's constant. Q is defined as the free energy difference between the solid and liquid phases per atom, so in the flat-interface limit it can be approximated as Q ≃ L_v ΔT / (N T_c), where N is the number density. Taking Eq. (3), substituting for Q, taking ΔT = T_c − T, and expanding in the small-undercooling limit, the interface velocity can be linearised as

    V ≃ ζ ΔT,  with  ζ = V_0 L_v / (N k_b T_c²)    (4)

where ζ is a kinetic coefficient. This gives the planar interface velocity in the small-undercooling limit. We now consider the interface kinetics by looking at the dynamic behaviour of a cylindrical interface with a profile r(z, t) 18,30,31,

    ∂r/∂t = ζ [ΔT − Γ(1/r − ∂²r/∂z²)] + η(z, t)    (5)

where Γ = (γ_sl + γ″_sl) T_c / L_v, with γ_sl + γ″_sl the interfacial stiffness of the solid-liquid interface 32, and η(z, t) a noise term.
Assuming an isotropic solid-liquid interface and small anisotropy, the noise correlations can be approximated as

    ⟨η(z, t) η(z′, t′)⟩ = C δ(z − z′) δ(t − t′)

where C is a constant, and the delta functions indicate that the noise is uncorrelated in space and time. We perturb the solid-liquid interface by a small parameter ε and express it as the surface r(z, t) = r* + ε e^(ikz + ωt), with k and ω being the wavenumber and instability growth rate, respectively (see Fig. 2). The term (r* + ε e^(ikz + ωt))⁻¹ can be approximated as

    1/r ≃ (1/r*)(1 − (ε/r*) e^(ikz + ωt))    (6)

By substituting r into Eq. (5), using the expression in Eq. (6), solving to O(ε), and using the definition of r* in Eq. (2), we recover

    ω = ζ Γ (1/r*² − k²)    (7)

A PR-type instability can be found by observing that ω > 0 when kr* < 1, bringing us to the familiar solution

    λ > 2πr*    (8)

Combining Eqs. (8) and (2) and taking ΔT = T_c − T_m, the interface will remain stable when ω < 0, which leads to the moving interface front seen in Fig. 1a, b and gives the condition

    λ < 2π γ_sl T_c / (L_v (T_c − T_m))    (9)

If ω > 0 then T_ω > T_m (where T_ω denotes the temperature at which the perturbation growth rate becomes positive), and perturbations will grow to destabilise the solid-liquid interface, giving the scenario in Fig. 1c, d. If λ* ∝ L then T_ω becomes larger than T_m quickly, giving a criterion describing when each melting mode is preferred. Finally, we look at the equation for the bulk melting temperature of a cylindrically symmetric nanowire of radius R_0 from previous work 8 (Eq. (10)), in which κ = (1 − Δγ/γ_sl)/(1 + Δγ/γ_sl) (Δγ is the spreading parameter that determines the wettability of a material 8), ξ represents the correlation length, and I_0 and I_1 represent modified Bessel functions of the first kind. Equation (10) is found by solving a two-parabola Landau-type model for the free energy of a cylindrically symmetric nanowire 8. Combining Eqs. (2) and (10), an equation for r* in terms of R_0 and the interfacial energy ratio κ can be found (Eq. (11)).

Methods

MD simulations were performed using LAMMPS 33 with an embedded-atom-model (EAM) potential for copper 34, which has a stated bulk melting temperature of 1320 K. This potential was chosen because it yielded good results for melting temperatures and dynamics 8. Periodic boundary conditions were used in all directions, with the periodicity along the wire axis effectively simulating an infinitely long wire; this additionally suppresses long-wavelength instabilities that might otherwise cause the wire to break apart before it melts completely. As such, the maximum wavelength allowed by the system is equal to the box size along the axis of the nanowire. The equations of motion were integrated using a Verlet method with a timestep of 2.5 fs. The temperature was controlled with a Langevin thermostat with a damping parameter of 1.0 ps⁻¹; this thermostat effectively simulates Brownian motion and is an appropriate choice here, as it ensures quick equilibration at each stage of the simulation. The simulations were initialised at an initial temperature T_i for 1.0 ns. Afterwards, a production phase for each wire was run from temperature T_i to a temperature T_{i+1}, followed by an equilibration phase around T = T_{i+1}. Each production phase was 0.40 ns, with an equilibration phase of 0.60 ns, creating an effective heating rate of around 1 K/ns. This ensured that at each temperature the wires were sufficiently close to equilibrium. See supplementary materials S2 and S3 for more information on the computational details and methods used in this study.
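As a rough plausibility check of Eq. (2), one can plug in ballpark copper parameters; note that the interfacial energy and latent heat below are literature-order-of-magnitude assumptions, not values reported in this paper:

```r
# Back-of-the-envelope check of Eq. (2) for copper; gamma_sl and L_v are
# assumed ballpark values, not taken from this paper.
gamma_sl <- 0.18     # J m^-2, solid-liquid interfacial energy (approx.)
L_v      <- 1.9e9    # J m^-3, latent heat of melting per unit volume
T_c      <- 1320     # K, bulk melting temperature of the EAM potential
T        <- 1220     # K, near the simulated nanowire melting point
r_star   <- gamma_sl * T_c / (L_v * (T_c - T))   # Eq. (2)
r_star * 1e10        # in Angstrom: ~12.5, i.e. a solid core of ~1 nm
```

The resulting equilibrium core radius of roughly a nanometre is small compared to the initial wire radii studied below, consistent with the interface having to move well inward before equilibrium coexistence is reached.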
We can study the stability of the solid-liquid interface by examining when the solid core begins to pinch off for two wires of the same radius but different lengths, as shown in Fig. 3. As T → T_m, the interface will either move towards the wire centre (Fig. 3a) or begin to pinch off (Fig. 3b). In Fig. 3a the size of the liquid nucleus is large compared to the much longer wire. The presence of solid atoms close to the liquid surface in Fig. 3b can be seen from the 'noisy' interface (red solid line). As we will see in this section, wires with lengths that satisfy L > 2πR_0 pinch off and melt at a temperature consistently lower than wires with lengths L ≤ 2πR_0.

To study the stability of the solid-liquid interface, the Fourier transform of the solid is taken to extract the modes that destabilise the interface (see supplementary details S3). Figure 4 represents a stability diagram in terms of the fastest growing modes k_sol r* against the wire aspect ratio L/R_0, where each k_sol r* is the averaged value of kr* corresponding to the maximum Fourier transform amplitude (see supplementary details, Fig. S3). The modes k_sol r* for each wire aspect ratio are similar, which indicates that the destabilising modes depend strongly on the wire aspect ratio. As nanowires get shorter, the modes that destabilise the interface approach unity, in violation of classical PR theory (see the red dashed line). Also seen are two regimes that identify the preferred melting mode, as seen in Fig. 1. To the left (light pinkish region) we see the regime where T_ω < T_m, which indicates that the solid-liquid interface must move closer to the centre before the pinch-off can initiate. On the right (dark bluish region) we see the regime where T_ω > T_m, which identifies when the pinch-off melting mechanism is favoured.

Observations from MD simulation agree with the theory developed, represented by Eqs. (7), (8) and (9). Longer wires will be more thermodynamically unstable, since their lengths will generally be greater than the circumference of the coexisting solid, as indicated by Eq. (8). Included in Fig. 4 is the scaling relation k_sol r* ∝ 2πR_0/L, which follows the trend observed in the MD simulations, as well as the prediction of classical PR theory, which states k_max R_0 ≃ 0.697. For wires with L < 2πR_0, k_sol r* approaches and exceeds unity, as predicted by the scaling relation, violating classical PR theory and indicating regions of interface stability. Moreover, we see that for higher wire aspect ratios PR theory overpredicts the fastest growing modes in the solid. This has been previously reported when examining the stability of liquid nanojets, implying that at small length scales classical PR theory is not wholly sufficient to predict interface stability 16. This is also the case for liquid nanowires (see supplementary details).

We now examine how the wire length influences the stability of the solid-liquid interface by looking at how the r* obtained from simulation behaves close to the melting point for wires of different radii and lengths. From Eq. (2) we can see that r* ∝ ΔT⁻¹. As observed in Fig. 5, the MD results consistently show that longer wires melt at a lower temperature, since they are prone to the growth of instabilities that initiate the pinch-off. Shorter wires not only melt at a slightly higher temperature, but the value of r* at the point when the pinch-off initiates is consistently smaller, in agreement with the theory developed.
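The aspect-ratio criterion used above can be evaluated numerically; a small sketch (lengths in Angstrom, names illustrative):

```r
# Numeric form of the melting-mode criterion used above: pinch-off is
# preferred when the wire length exceeds 2*pi*R0.
melting_mode <- function(L, R0) {
  if (L > 2 * pi * R0) "instability mode (pinch-off)" else "radial mode"
}
melting_mode(400, 30)   # "instability mode (pinch-off)"; 2*pi*30 ~ 188.5
melting_mode(150, 30)   # "radial mode"

# Observed scaling of the fastest growing mode from Fig. 4:
k_sol_rstar <- function(L, R0) 2 * pi * R0 / L
k_sol_rstar(400, 30)    # ~0.47, safely below the kr* = 1 stability bound
```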
These observations support the idea that there are two melting mechanisms that depend on the wire aspect ratio, as evident in Fig. 4. The simulated results point to the bulk melting temperature of the potential used being between 1220 and 1230 K, rather than the stated 1320 K. In Fig. 6 we see that r* depends not only on the initial wire radius R_0 but also on the wire aspect ratio. This evidence supports the theory and previous claims that as the wire aspect ratio gets smaller, the solid core radius prior to the pinch-off decreases. The ratio r*/R_0 appears to converge in the limit of large L/R_0; the values of r*/R_0 for the aspect ratios studied are similar when L > 2πR_0. Once L ≤ 2πR_0, the differences in r*/R_0 become appreciable. This could indicate that quantities like the interfacial energies and the correlation length (γ_sl and ξ, respectively) become important for small wires, implying that size and curvature play key roles in the observations at small aspect ratios. (See supplementary details for the means and standard deviations of T_m, r* and k_sol r*.)

Figure 4. This figure shows how the wire aspect ratio affects the fastest growing modes. A thick red line plots 2π/(L/R_0), which we assume is proportional to k_sol r*. The light shaded area shows the region where the interface is stable at temperatures close to T_m (radial mode), whereas the darker shaded region shows where the interface is expected to be unstable (instability mode).

Figure 5. Values of r* obtained from simulation against ΔT⁻¹. Yellow circles show r* when L ≤ 2πR_0, and red diamonds represent r* and T when L > 2πR_0. Yellow circles indicate when the radial mode is the preferred melting mechanism, and red diamonds indicate where the instability mode is preferred. The first two points (bottom left) are for wires with R_0 ≈ 22 Å, the middle two for R_0 ≈ 30 Å, and the last two for R_0 ≈ 38 Å. We assume the yellow and red points represent the bulk and instability melting temperatures, T_m and T_ω, respectively.

Discussion

By perturbing the interface of a surface-melted metal nanowire, we can describe the existence of two mechanisms for nanowire melting: an instability mode and a radial mode. The model showed that the fastest growing modes that destabilise the interface are inversely proportional to the wire length, and a PR-type instability for the solid in a surface-melted nanowire is recovered. By using classical nucleation theory and exploring the nanowire stability in the vicinity of T_m, we were able to define the condition that determines the preferred melting mechanism. Moreover, we recovered an expression for the equilibrium solid radius in terms of the initial wire radius and the interplay of the interfacial energies of the nanowire. Simulations show that the fastest growing modes are inversely proportional to the wire length, and in fact that k_sol r* ∝ 2πR_0/L. Additionally, we observe that longer nanowires consistently melt at a lower temperature than shorter wires, in agreement with our developed theory and other recent observations 23. The implication is that shorter nanowires have a more stable interface close to their melting temperature. For nanowires where the instability mechanism is the preferred melting mode, once the pinch-off has initiated the remaining solid will be consumed; this is because the solid core tries to stabilise itself by forming into a sphere, minimising its surface energy.
In some cases it was observed that for the longest, thinnest nanowires, the liquid-vapour interface would begin to neck, driven by surface diffusion, which in turn influenced the breakup of the solid. For slower heating rates or overdamped Brownian dynamics, this feature would become more pronounced. However, due to the quick equilibration at each timestep, this was not an issue. Evidence can also be seen that r* depends on the wire aspect ratio and not just the initial radius. This has been reported in previous work, where for nickel and aluminium nanowires of a single length but increasing radii, the solid core remained stable down to smaller radii 8 . Studies have explored the size dependence of interfacial energies 35 . Our study, however, shows that the surface area of metal nanowires becomes an important factor in the interfacial energies for small wire lengths. Curvature too plays a role in the interfacial energy, where for spherical clusters the solid-liquid interface energy is linear in the inverse radius 36,37 . This size and curvature dependence explains why values of r*/R_0 begin to deviate at low aspect ratios. The ratio of atoms at the surface compared to the bulk becomes far more appreciable for the smallest wires, giving the curvature a greater role in the solid-liquid interface dynamics. Given that the fastest growing modes are inversely proportional to the nanowire length, it is no surprise that the interfacial energies depend on the length too, since the surface area scales with both radius and length. The theory and simulations show that long nanowires are thermodynamically unstable at high temperatures, since the nanowire length will almost always be much greater than its equilibrium solid radius. This has ramifications for the stability of devices that utilise nanowires subjected to heating. We observed that for long, thin nanowires, the liquid-vapour interface can begin to destabilise even before the solid begins to neck. This implies that ultra-long, thin nanowires will be particularly unstable at elevated temperatures, which should be considered when constructing nanowire devices.

Figure 6. The ratio of the equilibrium solid radius to the initial wire radius against the wire aspect ratio. Values of r*/R_0 for each aspect ratio are clustered closely together, and begin to deviate markedly when the initial wire length L < 2πR_0.

Conclusion We studied the stability of the solid in copper nanowires as they approach their melting temperature by perturbing a model describing the interface kinetics and compared the results to MD simulations. The model yields a stability criterion that dictates the preferred melting mode a nanowire will take. We found that longer nanowires are thermodynamically unstable and will preferentially pinch off and melt, indicating a melting mechanism driven by a PR type of instability. In shorter nanowires, the interface front moved radially towards the nanowire centre before the solid would break up, indicating higher interface stability, with MD results in agreement with our model. Moreover, we proposed that the modes that destabilise the solid-liquid interface are dominated by the nanowire length, in contrast to PR theory, which states they are proportional to the circumference. Additionally, it was observed from the MD simulations that longer nanowires consistently have a melting temperature a few degrees below shorter nanowires, indicating that the nanowire aspect ratio influences the preferred melting mode and the solid-liquid interfacial stability.
Data availability The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
5,019
2022-06-14T00:00:00.000
[ "Physics" ]
Solving high-dimensional optimal stopping problems using deep learning Nowadays many financial derivatives, such as American or Bermudan options, are of early exercise type. Often the pricing of early exercise options gives rise to high-dimensional optimal stopping problems, since the dimension corresponds to the number of underlying assets. High-dimensional optimal stopping problems are, however, notoriously difficult to solve due to the well-known curse of dimensionality. In this work, we propose an algorithm for solving such problems, which is based on deep learning and computes, in the context of early exercise option pricing, both approximations of an optimal exercise strategy and the price of the considered option. The proposed algorithm can also be applied to optimal stopping problems that arise in other areas where the underlying stochastic process can be efficiently simulated. We present numerical results for a large number of example problems, which include the pricing of many high-dimensional American and Bermudan options, such as Bermudan max-call options in up to 5000 dimensions. Most of the obtained results are compared to reference values computed by exploiting the specific problem design or, where available, to reference values from the literature. These numerical results suggest that the proposed algorithm is highly effective in the case of many underlyings, in terms of both accuracy and speed. In this work, we propose an algorithm for solving general possibly high-dimensional optimal stopping problems; cf. Framework 3.2 in Subsection 3.2. In spirit, it is similar to the algorithm introduced in [9]. The proposed algorithm is based on deep learning and computes both approximations of an optimal stopping strategy and the optimal expected pay-off associated with the considered optimal stopping problem. In the context of pricing early exercise options, these correspond to approximations of an optimal exercise strategy and the price of the considered option, respectively. The derivation and implementation of the proposed algorithm consist of essentially the following three steps. (I) A neural network architecture for, in an appropriate sense, 'randomised' stopping times (cf. (31) in Subsection 2.4) is established in such a way that varying the neural network parameters leads to different randomised stopping times being expressed. This neural network architecture is used to replace the supremum of the expected pay-off over suitable stopping times (which constitutes the generic optimal stopping problem) by the supremum of a suitable objective function over neural network parameters (cf. (38)- (39) in Subsection 2.5). (II) A stochastic gradient ascent-type optimisation algorithm is employed to compute neural network parameters that approximately maximise the objective function (cf. Subsection 2.6). (III) From these neural network parameters and the corresponding randomised stopping time, a true stopping time is constructed which serves as the approximation of an optimal stopping strategy (cf. (44) and (46) in Subsection 2.7). In addition, an approximation of the optimal expected pay-off is obtained by computing a Monte Carlo approximation of the expected pay-off under this approximately optimal stopping strategy (cf. (45) in Subsection 2.7). It follows from (III) that the proposed algorithm computes a low-biased approximation of the optimal expected pay-off (cf. (48) in Subsection 2.7). Yet, a large number of numerical experiments where a reference value is available (cf. 
Section 4) show that the bias appears to become small quickly during training and that a very satisfying accuracy can be achieved in short computation time, even in high dimensions (cf. the end of this introduction below for a brief overview of the numerical computations that were performed). Moreover, in (I) we resort to randomised stopping times in order to circumvent the discrete nature of stopping times that attain only finitely many different values. As a result, it is possible in (II) to tackle the arising optimisation problem with a stochastic gradient ascent-type algorithm. Furthermore, while the focus in this article lies on American and Bermudan option pricing, the proposed algorithm can also be applied to optimal stopping problems that arise in other areas where the underlying stochastic process can be efficiently simulated. Apart from this, we only rely on the assumption that the stochastic process to be optimally stopped is a Markov process (cf. Subsection 2.4). But this assumption is no substantial restriction since, on the one hand, it is automatically fulfilled in many relevant problems and, on the other hand, a discrete stochastic process that is not a Markov process can be replaced by a Markov process of higher dimension that aggregates all necessary information (cf., e.g., [9,Subsection 4.3] and, e.g., Subsection 4.4.4). Next we compare our algorithm to the one introduced in [9]. The latter splits the original problem into smaller optimal stopping problems at each time step where stopping is permitted and decides to stop at that point in time or later (cf. [9, (4) in Subsection 2.1]). Starting at maturity, these auxiliary problems are solved recursively backwards until the initial time is reached. Thereby, in every new step, neural network parameters are learned for an objective function that depends, in particular, on the parameters found in the previous steps (cf. [9,Subsection 2.3]). In contrast, in (I) a single objective function is designed. This objective function allows to search in (II) for neural network parameters that maximise the expected pay-off simultaneously over (randomised) stopping times which may decide to stop at any of the admissible points in time. Therefore, the algorithm proposed here does not rely on a recursion over the different time points. In addition, the construction of the final approximation of an optimal stopping strategy in (III) differs from a corresponding construction in [9]. We refer to Subsection 4.3.2.1 for a comparison between the two algorithms with respect to performance. The remainder of this article is organised as follows. In Section 2, we present the main ideas from which the proposed algorithm is derived. More specifically, in Subsection 2.1 we illustrate how an optimal stopping problem in the context of American option pricing is typically formulated. Thereafter, a replacement of this continuous-time problem by a corresponding discrete time optimal stopping problem is discussed by means of an example in Subsection 2.2. Subsection 2.3 is devoted to the statement and proof of an elementary but crucial result about factorising general discrete stopping times in terms of compositions of measurable functions (cf. Lemma 2.2), which lies at the heart of the neural network architecture we propose in Subsection 2.4 to approximate general discrete stopping times. 
This construction, in turn, is exploited in Subsection 2.5 to transform the discrete time optimal stopping problem from Subsection 2.2 into the search for a maximum of a suitable objective function (cf. (I) above). In Subsection 2.6, we suggest employing stochastic gradient ascent-type optimisation algorithms to find approximate maximum points of the objective function (cf. (II) above). As a last step, we explain in Subsection 2.7 how we calculate final approximations of both the American option price and an optimal exercise strategy (cf. (III) above). In Section 3, we introduce the proposed algorithm in a concise way, first for a special case for the sake of clarity (cf. Subsection 3.1) and second in more generality so that, in particular, a rigorous description of our implementations is fully covered (cf. Subsections 3.2-3.3). Following this, in Section 4 first a few theoretical results are presented (cf. Subsection 4.1), which are used to design numerical example problems and to provide reference values. Thereafter, we describe in detail a large number of example problems, on which our proposed algorithm was tested, and present numerical results for each of these problems. In particular, the examples include the optimal stopping of Brownian motions (cf. Subsection 4.3 below). Main ideas of the proposed algorithm In this section, we outline the main ideas that lead to the formulation of the proposed algorithm in Subsections 3.1-3.2 by considering the example of pricing an American option. The proposed algorithm in Framework 3.2 in Subsection 3.2 is, however, general enough to also be applied to optimal stopping problems where there are no specific assumptions on the dynamics of the underlying stochastic process, as long as it can be cheaply simulated (cf. Subsection 3.3). Furthermore, often in practice and, in particular, in the case of Bermudan option pricing (cf. many of the examples in Section 4), the optimal stopping problem of interest is not a continuous-time problem but is already formulated in discrete time. In such a situation, there is no need for a time discretisation, as described in Subsection 2.2 below, and the proposed algorithm in Framework 3.2 can be applied directly. The American option pricing problem Let T ∈ (0, ∞), d ∈ N = {1, 2, 3, . . .}, let (Ω, F, P) be a probability space with a filtration F = (F_t)_{t∈[0,T]} that satisfies the usual conditions (cf., e.g., [59, Definition 2.25 in Section 1.2]), let ξ : Ω → R^d be an F_0/B(R^d)-measurable function, let W : [0, T] × Ω → R^d be a standard (Ω, F, P, F)-Brownian motion with continuous sample paths, let μ : R^d → R^d and σ : R^d → R^{d×d} be Lipschitz continuous functions, let X : [0, T] × Ω → R^d be an F-adapted continuous solution process of the stochastic differential equation dX_t = μ(X_t) dt + σ(X_t) dW_t, t ∈ [0, T], X_0 = ξ, (1) let F = (F_t)_{t∈[0,T]} be the filtration generated by X, and let g : [0, T] × R^d → R be a continuous and at most polynomially growing function. We think of X as a model for the price processes of d underlyings (say, d stock prices) under the risk-neutral pricing measure P (cf., e.g., Kallsen [58]) and we are interested in approximatively pricing the American option on the process (X_t)_{t∈[0,T]} with the discounted pay-off function g; that is, we intend to compute the real number sup{E[g(τ, X_τ)] : τ : Ω → [0, T] is an F-stopping time}. (2) In addition to the price of the American option in the model (1), there is also a high demand from the financial engineering industry to compute an approximately optimal exercise strategy, that is, to compute a stopping time which approximately reaches the supremum in (2).
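A minimal Euler-Maruyama sketch (ours; exact simulation schemes are preferable where available, e.g. in the Black-Scholes model) for generating time-discrete paths of (1), which is all the proposed algorithm needs from the model:

import numpy as np

def simulate_sde_paths(xi, mu, sigma, T, N, batch, seed=0):
    # Euler-Maruyama scheme for dX_t = mu(X_t) dt + sigma(X_t) dW_t, X_0 = xi,
    # on the grid t_n = nT/N; returns an array of shape (batch, N + 1, d).
    rng = np.random.default_rng(seed)
    d, dt = xi.shape[-1], T / N
    X = np.empty((batch, N + 1, d))
    X[:, 0] = xi
    for n in range(N):
        dW = rng.normal(0.0, np.sqrt(dt), size=(batch, d))
        X[:, n + 1] = (X[:, n] + mu(X[:, n]) * dt
                       + np.einsum("bij,bj->bi", sigma(X[:, n]), dW))
    return X

# Black-Scholes example with illustrative (not the article's) parameters:
r, delta, beta, d = 0.05, 0.0, 0.2, 5
paths = simulate_sde_paths(np.full(d, 100.0), lambda x: (r - delta) * x,
                           lambda x: beta * x[:, :, None] * np.eye(d),
                           T=3.0, N=9, batch=4096)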
In a very simple example of (1)-(2), we can think of an American put option in the one-dimensional Black-Scholes model, in which there are an interest rate r ∈ R, a dividend yield δ ∈ [0, ∞), a volatility β ∈ (0, ∞), and a strike price K ∈ (0, ∞) such that it holds for all t ∈ [0, T], x ∈ R that μ(x) = (r − δ)x, σ(x) = βx, and g(t, x) = e^{−rt} max{K − x, 0}. Stochastic gradient ascent optimisation algorithms Local/global maxima of the objective function (39) can be approximately reached by maximising the expectation of the random objective function by means of a stochastic gradient ascent-type optimisation algorithm. This yields a sequence of random parameter vectors along which we expect the objective function (39) to increase. More formally, applying, under suitable hypotheses, stochastic gradient ascent-type optimisation algorithms to (39) results in random approximations Θ_m ∈ R^ν, m ∈ {0, 1, 2, . . .}, of the local/global maximum points of the objective function (39), where m ∈ {0, 1, 2, . . .} is the number of steps of the employed stochastic gradient ascent-type optimisation algorithm. Price and optimal exercise time for American-style options The approximation algorithm sketched in Subsection 2.6 above allows us to approximatively compute both the price and an optimal exercise strategy for the American option (cf. Subsection 2.1). Let M ∈ N and consider a realisation Θ_M ∈ R^ν of the random variable Θ_M : Ω → R^ν. Then for sufficiently large N, ν, M ∈ N a candidate for a suitable approximation of the American option price is the real number (42) and a candidate for a suitable approximation of an optimal exercise strategy for the American option is the function (43). Note, however, that in general the function (43) does not take values in {0, 1, . . . , N} and hence is not a proper stopping time. Similarly, note that in general it is not clear whether there exists an exercise strategy such that the number (42) is equal to the expected discounted pay-off under this exercise strategy. For these reasons, we suggest other candidates for suitable approximations of the price and an optimal exercise strategy for the American option. More specifically, for every θ ∈ R^ν let τ_θ : Ω → {0, 1, . . . , N} be the F-stopping time given by (44) (cf. (30) above). Then for sufficiently large N, ν, M ∈ N we use a suitable Monte Carlo approximation of the real number (45) as a suitable implementable approximation of the price of the American option (cf. (2) in Subsection 2.1 above and (59) in Subsection 3.1 below) and we use the random variable (46) as a suitable implementable approximation of an optimal exercise strategy for the American option. Note that (30) ensures that (47) holds. This shows that the exercise strategy τ_{Θ_M} : Ω → {0, 1, . . . , N} exercises at the first time index n ∈ {0, 1, . . . , N} for which the approximate stopping time factor associated with the mesh point t_n is at least as large as the combined approximate stopping time factors associated with all later mesh points. Finally, observe that (48) holds. This implies that Monte Carlo approximations of the number (45) typically are low-biased approximations of the American option price (2). Details of the proposed algorithm 3.1 Formulation of the proposed algorithm in a special case In this subsection, we describe the proposed algorithm in the specific situation where the objective is to solve the American option pricing problem described in Subsection 2.1, where batch normalisation (cf.
Ioffe & Szegedy [53]) is not employed in the proposed algorithm, and where the plain vanilla stochastic gradient ascent approximation method with a constant learning rate γ ∈ (0, ∞) and without mini-batches is the employed stochastic approximation algorithm. The general framework, which includes the setting in this subsection as a special case, can be found in Subsection 3.2 below. and for every n ∈ {0, 1, . . . , N }, θ ∈ R ν let U n,θ : (R d+1 ) n+1 → (0, 1) be the function which satisfies for all z 0 , z 1 , . . . , z n ∈ R d+1 that for every m ∈ N let φ m : R ν × Ω → R be the function which satisfies for all θ ∈ R ν , ω ∈ Ω that let Θ : N 0 × Ω → R ν be a stochastic process which satisfies for all m ∈ N that and for every j ∈ N, θ ∈ R ν let τ j,θ : Ω → {0, 1, . . . , N } be the random variable given by Consider the setting of Framework 3.1, assume that µ and σ are globally Lipschitz continuous, and assume that g is continuous and at most polynomially growing. In the case of sufficiently large N, M, J ∈ N and sufficiently small γ ∈ (0, ∞), we then think of the random number as an approximation of the price of the American option with the discounted pay-off function g and for every j ∈ N we think of the random variable as an approximation of an optimal exercise strategy associated with the underlying timediscrete path (X M +j n ) n∈{0,1,...,N } (cf. Subsection 2.1 above and Section 4 below). Formulation of the proposed algorithm in the general case In this subsection, we extend the framework in Subsection 3.1 above and describe the proposed algorithm in the general case. , g(t n , X m,j ) , for every n ∈ {0, 1, . . . , N }, θ ∈ R ν , s ∈ R ς let u θ,s n : R d+1 → (0, 1) be a function, for every n ∈ {0, 1, . . . , N }, θ ∈ R ν , s ∈ R ς let U θ,s n : (R d+1 ) n+1 → (0, 1) be the function which satisfies for all z 0 , z 1 , . . . , z n ∈ R d+1 that for every j ∈ N, θ ∈ R ν , s ∈ R ς let τ j,θ,s : Ω → {0, 1, . . . , N } be the random variable given by and let P : Ω → R be the random variable which satisfies for all ω ∈ Ω that Consider the setting of Framework 3.2. Under suitable further assumptions, in the case of sufficiently large N, M, ν, J 0 ∈ N, we think of the random number as an approximation of the price of the American option with the discounted pay-off function g and for every j ∈ N we think of the random variable as an approximation of an optimal exercise strategy associated with the underlying timediscrete path (X 0,j n ) n∈{0,1,...,N } (cf. Subsection 2.1 above and Section 4 below). Comments on the proposed algorithm Note that the lack in Framework 3.2 of any assumptions on the dynamics of the stochastic process (X 0,1 n ) n∈{0,1,...,N } allows us to approximatively compute the optimal pay-off as well as an optimal exercise strategy for very general optimal stopping problems where, in particular, the stochastic process under consideration is not necessarily related to the solution of a stochastic differential equation. We only require that (X 0,1 n ) n∈{0,1,...,N } can be simulated efficiently and formally we still rely on the Markov assumption (cf. Subsection 2.4 above). In addition, observe that the choice of the functions u θ,s N : R d+1 → (0, 1), s ∈ R ς , θ ∈ R ν , has no influence on the proposed algorithm (cf. (61)). 
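A short sketch (ours, not the authors' code) making steps (I)-(III) concrete. The displays (30), (31) and (44) are not reproduced in this extraction, so the factorisation U_n = u_n ∏_{k<n} (1 − u_k) of a randomised stopping time and the product form of the "combined factors" rule in (44) are our reading of the surrounding text; the training loop around these functions is omitted.

import numpy as np
import tensorflow as tf

def soft_stopping_probs(u):
    # u: (batch, N+1) soft stopping decisions in (0,1) with u[:, -1] = 1.
    # Assumed factorisation: U_n = u_n * prod_{k<n} (1 - u_k).
    survive = tf.math.cumprod(1.0 - u, axis=1, exclusive=True)
    return u * survive

def training_loss(u, payoffs):
    # payoffs: (batch, N+1) discounted pay-offs g(t_n, X_n). Minimising this
    # loss maximises the expected pay-off over randomised stopping times,
    # the objective attacked by gradient ascent in step (II).
    return -tf.reduce_mean(tf.reduce_sum(soft_stopping_probs(u) * payoffs, axis=1))

def hard_stopping_index(u):
    # Step (III): stop at the first n whose factor u_n is at least as large
    # as the combined (here: product of the) factors of all later dates.
    u = np.asarray(u)
    incl = np.cumprod(u[:, ::-1], axis=1)[:, ::-1]           # prod_{k>=n} u_k
    later = np.concatenate([incl[:, 1:], np.ones((len(u), 1))], axis=1)
    return np.argmax(u >= later, axis=1)                     # first index where true

def monte_carlo_lower_bound(u, payoffs):
    # Monte Carlo estimate of the expected pay-off under the hard stopping
    # rule; typically a low-biased approximation of the option price.
    tau = hard_stopping_index(u)
    payoffs = np.asarray(payoffs)
    return payoffs[np.arange(len(tau)), tau].mean()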
Furthermore, the dynamics in (65) associated with the stochastic processes (Ξ_m)_{m∈N_0} and (Θ_m)_{m∈N_0} allow us to incorporate different stochastic approximation algorithms into the algorithm in Subsection 3.2, such as • plain vanilla stochastic gradient ascent with or without mini-batches (cf. (57) and the beginning of Section 4 below), • stochastic gradient ascent with batch normalisation (cf. Ioffe & Szegedy [53]), in which case we think of (S_m)_{m∈N_0} as a bookkeeping process keeping track of approximatively calculated means and standard deviations as well as of the number of steps m ∈ N_0 of the employed stochastic approximation algorithm, • and the Adam optimiser (cf. [60]) with varying learning rates and with mini-batches (cf. Subsection 4.2 below for a precise description). In the example in Subsection 4.4.4 below, the initial value X^{0,1}_0 is random. Therefore, we use N fully connected feedforward neural networks to model the functions u^{θ,s}_0, . . . , u^{θ,s}_{N−1} : R^{d+1} → (0, 1). Otherwise, it can be decided whether it is better to stop at time 0 or not by comparing the deterministic pay-off g(0, X^{0,1}_0) to a standard Monte Carlo estimate of the expected pay-off generated by the stopping strategy given by u^{θ,s}_0 = 0 and the functions u^{θ,s}_1, . . . , u^{θ,s}_{N−1} : R^{d+1} → (0, 1); cf. [9, Remark 6 in Subsection 2.3]. The standard network architecture we use in this paper consists of a (d+1)-dimensional input layer, two (d+50)-dimensional hidden layers, and a one-dimensional output layer. As non-linear activation functions just in front of the hidden layers, we employ the multidimensional version of the rectifier function R ∋ x ↦ max{x, 0} ∈ [0, ∞), whereas just in front of the output layer we employ the standard logistic function R ∋ x ↦ exp(x)/(exp(x)+1) ∈ (0, 1). In addition, batch normalisation (cf. Ioffe & Szegedy [53]) is applied just before the first linear transformation, just before each of the non-linear activation functions in front of the hidden layers, as well as just before the non-linear activation function in front of the output layer. We use Xavier initialisation (cf. Glorot & Bengio [45]) to initialise all weights in the neural networks. Two hidden layers work well in all our examples. However, the examples in Subsection 4.3 have an underlying one-dimensional structure and, as a consequence, fewer hidden layers yield equally good results; see Tables 2-3 below. On the other hand, the examples in Subsection 4.4 are more complex. In particular, it can be seen from Table 11 that for the max-call option in Subsection 4.4.1.1, two hidden layers give better results than zero or one hidden layer, but more than two hidden layers do not improve the results. All examples presented below were implemented in Python. The corresponding Python source codes (cf. Section 5) were run, unless stated otherwise (cf. Subsection 4.4.1.2 as well as the last sentence in Subsection 4.4.1.3 below), in single precision (float32) on an NVIDIA GeForce RTX 2080 Ti GPU. The underlying system consisted of an AMD Ryzen 9 3950X CPU with 64 GB DDR4 memory running TensorFlow 2.1 on Ubuntu 19.10. We would like to point out that no special emphasis was put on optimising computation speed. In many cases, some of the algorithm parameters could be adjusted in order to obtain similarly accurate results in shorter runtime. Theoretical considerations Before we present the optimal stopping problem examples on which we tested the algorithm of Framework 3.2 (cf. Subsections 4.3-4.4 below), we recall a few theoretical results, which are used to design some of these examples, determine reference values, and provide further insights.
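The standard architecture described above can be sketched in TensorFlow/Keras as follows (our sketch, not the released source code of Section 5; the input convention (X_n, g(t_n, X_n)) for the (d+1)-dimensional input layer is an assumption):

import tensorflow as tf

def stopping_decision_net(d):
    init = tf.keras.initializers.GlorotUniform()        # Xavier initialisation [45]
    inp = tf.keras.Input(shape=(d + 1,))                # e.g. (X_n, g(t_n, X_n)); assumption
    h = tf.keras.layers.BatchNormalization()(inp)       # BN before the first linear map
    for _ in range(2):                                  # two (d+50)-dimensional hidden layers
        h = tf.keras.layers.Dense(d + 50, kernel_initializer=init)(h)
        h = tf.keras.layers.BatchNormalization()(h)     # BN just before each activation
        h = tf.keras.layers.Activation("relu")(h)       # rectifier for the hidden layers
    h = tf.keras.layers.Dense(1, kernel_initializer=init)(h)
    h = tf.keras.layers.BatchNormalization()(h)         # BN before the output activation
    out = tf.keras.layers.Activation("sigmoid")(h)      # standard logistic function
    return tf.keras.Model(inp, out)

One such network models each stopping decision u_n, n < N; in this design the per-date networks are trained jointly against the single objective function of step (I).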
Option prices in the Black-Scholes model The elementary and well-known result in Lemma 4.1 below specifies the distributions of linear combinations of independent and identically distributed centred Gaussian random variables which take values in a separable normed R-vector space. The next elementary and well-known corollary follows directly from Lemma 4.1. The next elementary result, Proposition 4.3, states that the distribution of a product of multiple correlated geometric Brownian motions is equal to the distribution of a single particular geometric Brownian motion. -adapted stochastic process with continuous sample paths, let Y : [0, T ] × Ω → R be an F (2) -adapted stochastic process with continuous sample paths, and assume that for all t ∈ [0, T ] it holds P-a.s. that (ii) it holds that P and G are continuous functions, and Proof of Proposition 4.3. Throughout this proof let γ = (γ 1 , . . . , γ d ) ∈ R d be the vector Observe that for all i ∈ {1, . . . , d}, t ∈ [0, T ] it holds P-a.s. that In addition, note that for all i ∈ {1, . . . , d}, t ∈ [0, T ] it holds P-a.s. that Itô's formula hence shows that for all i ∈ {1, . . . , d}, t ∈ [0, T ] it holds P-a.s. that Combining this and (78) This establishes (i). In the next step note that (ii) is clear. It thus remains to prove (iii). For this observe that (i) establishes that for all t ∈ [0, T ] it holds P-a.s. that Continuity hence implies that it holds P-a.s. that Moreover, note that (i) shows that for all t ∈ [0, T ] it holds P-a.s. that This and continuity establish that it holds P-a.s. that Furthermore, observe that Corollary 4.2 ensures that The fact thatG : The proof of Proposition 4.3 is thus complete. In the next result, Lemma 4.4, we recall the well-known formula for the price of a European call option on a single stock in the Black-Scholes model (cf., e.g., Øksendal [78,Corollary 12.3.8]). 2 y 2 dy, let (Ω, F, P) be a probability space with a filtration F = (F t ) t∈[0,T ] that satisfies the usual conditions, let W : [0, T ] × Ω → R be a standard (Ω, F, P, F )-Brownian motion with continuous sample paths, and let X : [0, T ] × Ω → R be an F -adapted stochastic process with continuous sample paths which satisfies that for all t ∈ [0, T ] it holds P-a.s. that Then it holds for all K ∈ R that Approximating American options with Bermudan options In our numerical simulations, we approximate Bermudan options with a finite number of execution times rather than American options, which theoretically can be executed at infinitely many time points (any time before maturity). However, the following result shows that the prices of American options can be approximated with prices of Bermudan options with equidistant execution times if the number of execution times is sufficiently large. Setting Framework 4.6. Assume Framework 3.2, let ζ 1 = 0.9, assume for all n ∈ {0, 1, . . . , N } that = 2ν, Ξ 0 = 0, and t n = nT N , and assume for all m ∈ N, x = (x 1 , . . . , x ν ), y = (y 1 , . . . , y ν ), η = (η 1 , . . . , η ν ) ∈ R ν that and ψ m (x, y) = Equations (97) • of T as the maturity, • of d as the dimension of the associated optimal stopping problem, • of N as the time discretisation parameter employed, • of M as the total number of training steps employed in the Adam optimiser, • of g as the discounted pay-off function, • of {t 0 , t 1 , . . . 
, t N } as the discrete time grid employed, • of J 0 as the number of Monte Carlo samples employed in the final integration for the price approximation, • of (J m ) m∈N as the sequence of batch sizes employed in the Adam optimiser, • of ζ 1 as the momentum decay factor, of ζ 2 as the second momentum decay factor, and of ε as the regularising factor employed in the Adam optimiser, • of (γ m ) m∈N as the sequence of learning rates employed in the Adam optimiser, • and, where applicable, of X as a continuous-time model for d underlying stock prices with initial prices ξ, drift coefficient function µ, and diffusion coefficient function σ. Moreover, note that for every m ∈ N 0 , j ∈ N the stochastic processes W m,j,(1) = (W Examples with known one-dimensional representation In this subsection, we test the algorithm of Framework 3.2 on different d-dimensional optimal stopping problems that can be represented as one-dimensional optimal stopping problems. This representation allows us to employ a numerical method for the one-dimensional optimal stopping problem to compute reference values for the original d-dimensional optimal stopping problem. We refer to Subsection 4.4 below for more challenging examples where a one-dimensional representation is not known. A Bermudan put-type example with three exercise opportunities In this subsection, we test the algorithm of Framework 3.2 on the example of optimally stopping a correlated Brownian motion under a put option inspired pay-off function with three possible exercise dates. Among other things, we examine the performance of the algorithm for different numbers of hidden layers of the employed neural networks. Assume The random variable P given in (67) provides approximations of the real number sup E g(τ, S W 0,1 τ ) : τ : Ω→{t 0 ,t 1 ,t 2 } is an (Ft) t∈{t 0 ,t 1 ,t 2 } -stopping time . The numbers in Table 1 were obtained with our standard network architecture with two hidden layers. It shows approximations of the mean of P, of the standard deviation of P, and of the relative L 1 -approximation error associated with P, the uncorrected sample standard deviation of the relative approximation error associated with P, and the average runtime in seconds needed for calculating one realisation of P for d ∈ {1, 5, 10, 50, 100, 500, 1000}. For each case, the calculations of the results in Tables 1-3 are based on 10 independent realisations of P, which were obtained from an implementation in Python. Furthermore, in the approximative calculations of the relative approximation error associated with P, the exact number (100) was replaced, independently of the dimension d, by the real number (cf. Corollary 4.2), which, in turn, was replaced by the value 7.894. The latter was computed in Matlab R2017b using the binomial tree method implemented as Matlab's function optstockbycrr with 20,000 nodes. Note that (101) corresponds to the price of a Bermudan put option on a single stock in the Black-Scholes model with initial stock price χ, interest rate r, volatility β, strike price K, maturity T , and N possible exercise dates. Due to the underlying one-dimensional structure, all examples in Subsection 4.3 admit an optimal stopping rule which, at each possible exercise date t n , checks whether the current pay-off is above a threshold c n ∈ R. Therefore, we also apply our algorithm to the example in Subsection 4.3.1.1 using networks with one input neuron and no hidden layers. 
This corresponds to learning the thresholds c_n from simulated pay-offs with one-dimensional logistic regressions. We used the same number of simulations as in Table 1 and batch normalisation before the first linear transformation but no batch normalisation before the logistic function. As can be seen from Table 2, the results have the same accuracy as the ones of Table 1 and, in addition, the computation times are shorter. However, as we will see in Subsection 4.4, it cannot be hoped that good results can be obtained with a simplified network architecture if the stopping problem is more complex. Table 3 shows approximations of (100) for d = 10 obtained with networks with (d + 1)-dimensional input layers and different numbers of hidden layers. Again, it can be seen that in this example hidden layers do not improve the accuracy of the results. An American put-type example In this subsection, we test the algorithm of Framework 3.2 on the example of optimally stopping a standard Brownian motion under a put option inspired pay-off function. Assume Framework 4.6. The random variable P given in (67) provides approximations of the price (103). We report approximations of the mean of P, of the standard deviation of P, and of the relative L1-approximation error associated with P, the uncorrected sample standard deviation of the relative approximation error associated with P, and the average runtime in seconds needed for calculating one realisation of P for d ∈ {1, 5, 10, 50, 100, 500, 1000} in Table 4. For each case, the calculations of the results in Table 4 are based on 10 independent realisations of P, which were obtained from an implementation in Python. Furthermore, in the approximative calculations of the relative approximation error associated with P, the exact number (103) was replaced, independently of the dimension d, by a reference value computed from the corresponding one-dimensional representation. An American geometric average put-type example Assume Framework 4.6, let β = (β_1, . . . , β_d) ∈ R^d, ρ, δ̃, β̃, δ_1, δ_2, . . . , δ_d ∈ R, r = 0.6, K = 95, ξ̃ = 100 satisfy suitable conditions for all i ∈ {1, . . . , d}, and let X : [0, T] × Ω → R^d be an F-adapted stochastic process with continuous sample paths satisfying the corresponding Black-Scholes dynamics P-a.s. for all t ∈ [0, T]. The random variable P given in (67) provides approximations of the price (108). In Table 5, we show approximations of the mean of P, of the standard deviation of P, and of the relative L1-approximation error associated with P, the uncorrected sample standard deviation of the relative approximation error associated with P, and the average runtime in seconds needed for calculating one realisation of P for d ∈ {40, 80, 120, 160, 200}. For each case, the calculations of the results in Table 5 are based on 10 independent realisations of P, which were obtained from an implementation in Python. Furthermore, in the approximative calculations of the relative approximation error associated with P, the exact value of the price (108) was replaced, independently of the dimension d, by the real number (109) (cf. Proposition 4.3), which, in turn, was replaced by the value 6.545. The latter was calculated using the binomial tree method on Smirnov's website [87] with 20,000 nodes. Note that (109) corresponds to the price of an American put option on a single stock in the Black-Scholes model with initial stock price ξ̃, interest rate r, dividend yield δ̃, volatility β̃, strike price K, and maturity T. Table 6: Numerical simulations of the algorithm from [9] for pricing the American geometric average put-type option from the example in Subsection 4.3.2.1. In the approximative calculations of the relative approximation errors, the exact value of the price (108) was again replaced by the value 6.545.
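For reference values of this kind, a binomial-tree pricer along the lines of optstockbycrr and Smirnov's calculator can be sketched as follows (ours; the usage parameters are hypothetical, not those of the examples above):

import numpy as np

def crr_put(S0, K, r, delta, beta, T, steps=20000, exercise_levels=None):
    # Cox-Ross-Rubinstein tree for an American/Bermudan put on one stock.
    # exercise_levels: set of tree levels at which early exercise is allowed
    # (None = exercisable at every level, i.e. an American put).
    dt = T / steps
    u = np.exp(beta * np.sqrt(dt))
    d_ = 1.0 / u
    p = (np.exp((r - delta) * dt) - d_) / (u - d_)   # risk-neutral up-probability
    disc = np.exp(-r * dt)
    j = np.arange(steps + 1)
    S = S0 * u ** (steps - j) * d_ ** j              # terminal stock prices
    V = np.maximum(K - S, 0.0)
    for n in range(steps - 1, -1, -1):
        V = disc * (p * V[:-1] + (1.0 - p) * V[1:])  # continuation values
        if exercise_levels is None or n in exercise_levels:
            j = np.arange(n + 1)
            S = S0 * u ** (n - j) * d_ ** j
            V = np.maximum(V, K - S)                 # early-exercise comparison
    return float(V[0])

# Hypothetical usage with 20,000 tree levels:
print(crr_put(S0=100.0, K=95.0, r=0.06, delta=0.0, beta=0.4, T=1.0))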
In addition, we computed approximations of the price (108) using the algorithm introduced in [9], where the random variable L̃ from [9, Subsection 3.1] plays the role analogous to P. In order to maximise comparability, the hyperparameters and neural network architectures employed for the algorithm from [9] were chosen to be identical to the corresponding ones used for computing realisations of P. Table 6 shows approximations of the mean of L̃ (cf. [9, Subsection 3.1]), of the standard deviation of L̃, and of the relative L1-approximation error associated with L̃, the uncorrected sample standard deviation of the relative approximation error associated with L̃, and the average runtime in seconds needed for calculating one realisation of L̃ for d ∈ {40, 80, 120, 160, 200}. For each case, the calculations of the results in Table 6 are based on 10 independent realisations of L̃, which were obtained from an implementation in Python. In the approximative calculations of the relative approximation error associated with L̃, the exact value of the price (108) was again replaced, independently of the dimension d, by the value 6.545. Comparing Table 5 with Table 6, we note that in the present cases the algorithm of Framework 3.2 and the algorithm from [9] exhibit very similar performance in terms of both accuracy and speed, with a slight runtime advantage for the algorithm of Framework 3.2. An American geometric average call-type example In this subsection, we test the algorithm of Framework 3.2 on the example of pricing an American geometric average call-type option on up to 200 correlated stocks in the Black-Scholes model. This example is taken from Sirignano & Spiliopoulos [86, Subsection 4.3]. Assume Framework 4.6, let r = 0%, δ = 0.02 = 2%, β = 0.25 = 25%, K = ξ = 1, and let X : [0, T] × Ω → R^d be an F-adapted stochastic process with continuous sample paths satisfying the corresponding Black-Scholes dynamics P-a.s. for all t ∈ [0, T]. The random variable P given in (67) provides approximations of the price (113). We report approximations of the mean of P, of the standard deviation of P, and of the relative L1-approximation error associated with P, the uncorrected sample standard deviation of the relative approximation error associated with P, and the average runtime in seconds needed for calculating one realisation of P for d ∈ {3, 20, 100, 200} in Table 7. The approximative calculations of the mean of P, of the standard deviation of P, and of the relative L1-approximation error associated with P, the computations of the uncorrected sample standard deviation of the relative approximation error associated with P as well as the computations of the average runtime for calculating one realisation of P in Table 7 each are based on 10 independent realisations of P, which were obtained from an implementation in Python. Furthermore, in the approximative calculations of the relative approximation error associated with P, the exact value of the price (113) was replaced by the number (114) (cf. Proposition 4.3), which was approximatively calculated using the binomial tree method on Smirnov's website [87] with 20,000 nodes. Note that (114) corresponds to the price of an American call option on a single stock in the Black-Scholes model with initial stock price ξ̃, interest rate r, dividend yield δ̃, volatility β̃, strike price K, and maturity T. In the approximative calculations of the relative approximation errors, the exact value of the price (113) was replaced by the number (114), which was approximatively calculated using the binomial tree method on Smirnov's website [87].
Another American geometric average call-type example In this subsection, we test the algorithm of Framework 3.2 on the example of pricing an American geometric average call-type option on up to 400 distinguishable stocks in the Black-Scholes model. Assume Framework 4.6, assume that d ∈ {40, 80, 120, . . .}, let β = (β_1, . . . , β_d) ∈ R^d, α_1, . . . , α_d ∈ R, r, β̃ ∈ (0, ∞), K = 95, ξ̃ = 100 satisfy suitable conditions for all i ∈ {1, . . . , d}, and let X : [0, T] × Ω → R^d be an F-adapted stochastic process with continuous sample paths satisfying the corresponding Black-Scholes dynamics P-a.s. for all t ∈ [0, T]. The random variable P given in (67) provides approximations of the price sup{E[g(τ, X_τ)] : τ : Ω → [0, T] is an F-stopping time}. (118) In Table 8, we show approximations of the mean of P, of the standard deviation of P, of the real number E[e^{−rT} max{Y_T − K, 0}], and of the relative L1-approximation error associated with P, the uncorrected sample standard deviation of the relative approximation error associated with P, and the average runtime in seconds needed for calculating one realisation of P for d ∈ {40, 80, 120, 160, 200, 400}. The approximative calculations of the mean of P, of the standard deviation of P, and of the relative L1-approximation error associated with P, the computations of the uncorrected sample standard deviation of the relative approximation error associated with P as well as the computations of the average runtime for calculating one realisation of P in Table 8 each are based on 10 independent realisations of P, which were obtained from an implementation in Python. Moreover, in the approximative calculations of the relative approximation error associated with P, the exact value of the price (118) was replaced by the real number sup{E[e^{−rτ} max{Y_τ − K, 0}] : τ : Ω → [0, T] is a stopping time} (120) (cf. Proposition 4.3). It is well known (cf., e.g., Shreve [84, Corollary 8.5.3]) that the number (120) is equal to the number (119), which was approximatively computed in Matlab R2017b using Lemma 4.4 above. Note that (120) corresponds to the price of an American call option on a single stock in the Black-Scholes model with initial stock price ξ̃, interest rate r, volatility β̃, strike price K, and maturity T, while (119) corresponds to the price of a European call option on a single stock in the Black-Scholes model with initial stock price ξ̃, interest rate r, volatility β̃, strike price K, and maturity T. Assume Framework 4.6, let H ∈ N_0, r = 0.05 = 5%, δ = 0.1 = 10%, β = 0.2 = 20%, K = 100, let F = (F_t)_{t∈[0,T]} be the filtration generated by X, assume that each of the employed neural networks has H hidden layers, and assume suitable conditions for all m, j ∈ N, n ∈ {0, 1, . . . , N}. In Table 9, we show approximations of the mean and of the standard deviation of P, binomial approximations as well as 95% confidence intervals for the price (123) according to Andersen & Broadie [3, Table 3 in Subsection 5.3] (where available), and the average runtime in seconds needed for calculating one realisation of P for (d, ξ_1) ∈ {2, 3, 5} × {90, 100, 110} and H = 2. The approximative calculations of the mean and of the standard deviation of P as well as the computations of the average runtime for calculating one realisation of P in Tables 9-11 each are based on 10 independent realisations of P, which were obtained from an implementation in Python (cf. Python code 2 in Subsection 5.2 below).
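The European reference prices computed via Lemma 4.4 can be reproduced with the standard Black-Scholes formula; the following is our sketch of that well-known formula (with a dividend yield added for the examples that use one; the lemma's exact parameterisation is not reproduced in this extraction):

from math import exp, log, sqrt
from statistics import NormalDist

def bs_european_call(S0, K, r, delta, beta, T):
    # Standard Black-Scholes price of a European call with dividend yield
    # delta; set delta = 0 for the dividend-free variant.
    if T <= 0:
        return max(S0 - K, 0.0)
    d1 = (log(S0 / K) + (r - delta + 0.5 * beta ** 2) * T) / (beta * sqrt(T))
    d2 = d1 - beta * sqrt(T)
    Phi = NormalDist().cdf
    return S0 * exp(-delta * T) * Phi(d1) - K * exp(-r * T) * Phi(d2)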
In Table 10, we list approximations of the mean and of the standard deviation of P and the average runtime in seconds needed for calculating one realisation of P for (d, ξ_1) ∈ {10, 20, 30, 50, 100, 200, 500} × {90, 100, 110} and H = 2. To see the impact of the number of hidden layers used in the neural networks, we additionally report in Table 11 approximation results for d = 5, ξ_1 = 100, and H ∈ {0, 1, 2, 3, 4, 5}. We used the same number of simulations as in Tables 9-10, but due to the higher number of hidden layers, we chose M = 5000 and ∀ m ∈ N : γ_m = 5[10^{−2} 1_{[1,1000]}(m) + 10^{−3} 1_{(1000,3000]}(m) + 10^{−4} 1_{(3000,∞)}(m)]. It can be seen that, in this example, two hidden layers yield better results than zero or one hidden layer, but more than two hidden layers do not lead to an improvement. A high-dimensional Bermudan max-call benchmark example In this subsection, we test the algorithm of Framework 3.2 on the example of pricing the Bermudan max-call option from the example in Subsection 4.4.1.1 in a case with 5000 underlying stocks. All Python source codes corresponding to this example were run in single precision (float32) on an NVIDIA Tesla P100 GPU. Assume Framework 4.6 and assume that g(s, x) = e^{−rs} max{max{x_1, . . . , x_d} − K, 0}. For sufficiently large M ∈ N, the random variable P provides approximations of the price sup{E[g(τ, X_τ)] : τ : Ω → [0, T] is an F-stopping time}. (126) In Table 12, we show a realisation of P, a 95% confidence interval for the corresponding realisation of the random variable (127), Ω ∋ w ↦ E[g(τ^{1,Θ_M(w),S_M(w)}, X^{0,1}_{τ^{1,Θ_M(w),S_M(w)}})], the corresponding realisation of the relative approximation error associated with P, and the runtime in seconds needed for calculating the realisation of P for M ∈ {0, 250, 500, . . . , 2000} ∪ {6000}. In addition, Figure 1 depicts a realisation of the relative approximation error associated with P against M ∈ {0, 10, 20, . . . , 2000}. For each case, the 95% confidence interval for the realisation of the random variable (127) in Table 12 was computed based on the corresponding realisation of P, the corresponding sample standard deviation, and the 0.975 quantile of the standard normal distribution (cf., e.g., [9, Subsection 3.3]). Moreover, in the approximative calculations of the realisation of the relative approximation error associated with P, in Table 12 and Figure 1 the exact value of the price (126) was replaced by the value 165.430, which corresponds to a realisation of P with M = 6000 (cf. Table 12). The discounted pay-off function is again given by g(s, x) = e^{−rs} max{max{x_1, . . . , x_d} − K, 0}. Table 13 columns: dimension d, maturity T, strike price K, mean of P, standard deviation of P, European price (131), price in [5], average runtime in sec. for one realisation of P. The random variable P given in (67) provides approximations of the corresponding price; the calculations of the results in Table 14 are each based on 10 independent realisations of P, which were obtained from an implementation in Python (cf. Python code 4 in Subsection 5.4 below). A put basket option in Dupire's local volatility model In this subsection, we test the algorithm of Framework 3.2 on the example of pricing an American put basket option on five stocks in Dupire's local volatility model. This example is taken from Labart & Lelong [70, Subsection 6.3] with the modification that we also consider the case where the underlying stocks do not pay any dividends. Assume Framework 4.6, let L = 10, r = 0.05 = 5%, δ ∈ {0%, 10%}, K = 100, assume for all i ∈ {1, . . . , d}, x ∈ R^d that ξ_i = 100 and μ(x) = (r − δ)x, let β : [0, T] × R → R be the local volatility function, assume for all t ∈ [0, T], x ∈ R^d that σ(t, x) = diag(β(t, x_1), β(t, x_2), . . . , β(t, x_d)), and let S = (S^{(1)}, . . .
, S^{(d)}) : [0, T] × Ω → R^d be an F-adapted stochastic process with continuous sample paths satisfying the corresponding local volatility dynamics P-a.s. for all t ∈ [0, T]. The random variable P given in (67) provides approximations of the corresponding discrete-time price, which, in turn, is an approximation of the price (141). In Table 15, we show approximations of the mean and of the standard deviation of P and the average runtime in seconds needed for calculating one realisation of P for (δ, N) ∈ {0%, 10%} × {5, 10, 50, 100}. For each case, the calculations of the results in Table 15 are based on 10 independent realisations of P, which were obtained from an implementation in Python (cf. Python code 5 in Subsection 5.5 below). According to [70, Subsection 6.3], the value 6.30 is an approximation of the price (141) for δ = 10%. Furthermore, the European put basket option price E[g(T, Y^{0,1}_T)] corresponding to (141) was approximatively calculated using a Monte Carlo approximation based on 10^10 realisations of the random variable Ω ∋ ω ↦ g(T, Y^{0,1}_T(ω)) ∈ R (cf. Python code 5 in Subsection 5.5 below), which resulted in the value 1.741 in the case δ = 0% and in the value 6.304 in the case δ = 10%. A path-dependent financial derivative In this subsection, we test the algorithm of Framework 3.2 on the example of pricing a specific path-dependent financial derivative contingent on prices of a single underlying stock in the Black-Scholes model, which is formulated as a 100-dimensional optimal stopping problem. This example is taken from Tsitsiklis & Van Roy [90, Section IV] with the modification that we consider a finite instead of an infinite time horizon. Assume Framework 4.6, let r = 0.0004 = 0.04%, β = 0.02 = 2%, let W^{m,j} : [0, ∞) × Ω → R, j ∈ N, m ∈ N_0, be independent P-standard Brownian motions with continuous sample paths, and let S^{m,j} : [−100, ∞) × Ω → R, j ∈ N, m ∈ N_0, and Y^{m,j} : N_0 × Ω → R^100, j ∈ N, m ∈ N_0, be the stochastic processes which satisfy for all m, n ∈ N_0, j ∈ N, t ∈ [−100, ∞) that S^{m,j}_t = exp((r − β²/2)(t + 100) + β W^{m,j}_{t+100}) ξ_1 (the 100-dimensional process Y^{m,j} aggregates the trailing underlying prices so that the problem becomes Markovian; cf. Subsection 4.4.4). In Table 16, we show approximations of the mean and of the standard deviation of P and the average runtime in seconds needed for calculating one realisation of P for T ∈ {100, 150, 200, 250, 1000}. For each case, the calculations of the results in Table 16 are based on 10 independent realisations of P, which were obtained from an implementation in Python (cf. Python code 6 in Subsection 5.6 below). Note that in this example time is measured in days and that, roughly speaking, (144) corresponds to the price of a financial derivative which, if the holder decides to exercise, pays off the amount given by the ratio between the current underlying stock price and the underlying stock price 100 days ago (cf. [90, Section IV]). The value 1.282 reported in [90] approximates the price (145), which corresponds to the price (144) in the case of an infinite time horizon. Since the mean of P is a lower bound for the price (144), which, in turn, is a lower bound for the price (145), a higher value indicates a better approximation of the price (145). In addition, observe that the price (144) is non-decreasing in T. While in our numerical simulations the approximate value of the mean of P is less than or equal to 1.282 for comparatively small time horizons, i.e., for T ≤ 150, it is already higher for slightly larger time horizons, i.e., for T ≥ 200 (cf. Table 16).
11,188.8
2019-08-05T00:00:00.000
[ "Computer Science", "Mathematics", "Business" ]
Exploring the neighborhood of 1-layer QAOA with Instantaneous Quantum Polynomial circuits We embed 1-layer QAOA circuits into the larger class of parameterized Instantaneous Quantum Polynomial circuits to produce an improved variational quantum algorithm for solving combinatorial optimization problems. The use of analytic expressions to find optimal parameters classically makes our protocol robust against barren plateaus and hardware noise. The average overlap with the ground state scales as $\mathcal{O}(2^{-0.31 N})$ with the number of qubits $N$ for random Sherrington-Kirkpatrick (SK) Hamiltonians of up to 29 qubits, a polynomial improvement over 1-layer QAOA. Additionally, we observe that performing variational imaginary time evolution on the manifold approximates low-temperature pseudo-Boltzmann states. Our protocol outperforms 1-layer QAOA on the recently released Quantinuum H2 trapped-ion quantum hardware and emulator, where we obtain an average approximation ratio of $0.985$ across 312 random SK instances of 7 to 32 qubits, from which almost $44\%$ are solved optimally using 4 to 1208 shots per instance. I. INTRODUCTION Since its introduction by Farhi et al. [1] in 2014, the Quantum Approximate Optimization Algorithm (QAOA) has been explored in the quantum computing literature as one of the most promising heuristics for achieving quantum advantage on near-term devices [2, 3]. This is only one example of a larger class of variational quantum optimization algorithms, which attempt to produce good solutions to combinatorial optimization problems by sampling a parameterized quantum circuit [4-8]. In the absence of full quantum error correction [9], the required circuits must be sufficiently shallow to withstand noise, yet expressive enough to find states with high overlap onto the ground state. QAOA is a particularly good choice for satisfying these criteria, as it has an adjustable number of layers p. It can be understood as a Trotterized version of the quantum adiabatic algorithm (QAA), for which compelling theoretical evidence of performance exists [10]. Additionally, it was shown that even for small numbers of layers, sampling from the QAOA ansatz is a hard task for classical computers [11]. In this regime of a small number of layers, the form of the Trotterized QAOA operators may not be the best choice. This has motivated [12-15] the addition of extra parameters to the QAOA ansatz so that, instead of evolving the state according to the problem Hamiltonian, each parameter in the ansatz has the freedom to evolve independently. By doing this, an ansatz of the same depth may incorporate corrections that would otherwise require multiple layers.

FIG. 1. Diagrammatic representation of the algorithm. The 1-layer QAOA ansatz is a submanifold of the IQP ansatz and provides a warm start in the optimization protocol. The trajectory between the QAOA optimum and the IQP optimum is defined via the McLachlan variational principle and is computed classically. Color coding the optimization landscape represents the effective temperature of the associated state, with lower temperature states (blue) having a higher chance of sampling the ground state. The quantum computer is only used during the sampling step, which is known to be difficult classically.

In particular, 1-layer QAOA circuits, with and without the additional parameterization, belong to the class
of parameterized quantum circuits known as Weighted Graph States (WGS) used to simulate condensed matter systems [16-22]. For these states, the reduced density matrix in a subsystem of fixed size can be computed classically, allowing the efficient evaluation of local observables on a classical computer. This property permits the derivation of analytic and exact expressions for 1-layer QAOA on arbitrary local Hamiltonians [23] and for extra-parameterized circuits on some restricted local Hamiltonians [12, 13]. Such expressions are used to train the model classically, bypassing typical limitations such as the appearance of barren plateaus [24]. In this manuscript, we explore the embedding of 1-layer QAOA into the broader class of parameterized Instantaneous Quantum Polynomial (IQP) circuits, for which similar hardness-of-sampling theorems exist [25, 26], even in the presence of moderate noise [27]. IQP circuits also belong to the class of WGS, but compared to QAOA and existing extra-parameterized variants our ansatz uses all-to-all two-qubit interactions, making its implementation problem-independent and most natural for trapped-ion quantum computers. We additionally show that analytic and exact expressions can be obtained for arbitrary local Hamiltonians, and use them to train the model via robust classical techniques like the Runge-Kutta method [28]. We emphasize the role of starting the training from the optimal QAOA and finding a nearby local minimum rather than aiming for a global optimum, which avoids the challenging exploration of non-trivial landscapes [29]. This leaves only the key ingredient of sampling from the final quantum state to be performed on the quantum device, as illustrated in Fig. 1. A recent investigation of the states produced by 1-layer QAOA [30] shows that sampling produces a distribution close to a Boltzmann distribution, at temperatures beyond the reach of classical sampling techniques such as Markov Chain Monte Carlo (MCMC) [31]. We improve on this result by lowering the temperature further, using variational quantum imaginary time evolution (VarQITE) [32, 33]. However, the constraint of keeping the state in the variational manifold limits our ability to follow exact imaginary time evolution, distorting the distribution.
The manuscript is structured as follows. Section II provides a brief review of QAOA. Our IQP ansatz is introduced in Section III, where we make the connection to 1-layer QAOA, describe the derivation of analytical expressions and how to use them for classical training, and discuss a previous work [34] that challenges the possibility of quantum advantage with IQP circuits. In Section IV we describe our protocol for approximating thermal distributions and solving combinatorial optimization problems, while Section V presents numerical performance results. First, the average overlap with the ground state obtained with an exact state-vector simulator is polynomially better than for 1-layer QAOA on random Sherrington-Kirkpatrick (SK) Hamiltonians of up to 29 qubits. Second, when approximating thermal distributions we can reach lower temperatures than 1-layer QAOA, but the approximation quality reduces. Third, we demonstrate better performance than 1-layer QAOA at solving random SK Hamiltonians of up to 32 qubits on Quantinuum's recently released trapped-ion H2 quantum hardware and emulator. Using a reduced number of shots, the best solution per instance presents a large approximation ratio and is optimal for a large fraction of instances. Finally, Section VI discusses the methods, results, and future research directions. II. THE QUANTUM APPROXIMATE OPTIMIZATION ALGORITHM The standard implementation of the QAOA [1] attempts to create states with large overlap onto the ground eigenspace of some optimization problem, typically defined through an Ising Hamiltonian H = Σ_{i<j} J_ij Z_i Z_j + Σ_i h_i Z_i, (1) where the Z_i variables can be interpreted as the projections onto the Z-axis of a classical or quantum mechanical ensemble of N spin-1/2 particles. The QAOA ansatz is the state |ψ_p(β, γ)⟩ = ∏_{k=1}^{p} e^{−iβ_k Σ_i X_i} e^{−iγ_k H} |+⟩^{⊗N}, (2) where p is called the level of the QAOA and the sets β, γ of real coefficients β_k, γ_k are used as variational parameters. The most commonly used cost function in the optimization of the ansatz is the expectation value of the problem Hamiltonian, E(β, γ) = ⟨ψ_p(β, γ)|H|ψ_p(β, γ)⟩, (3) although alternative objective functions have been proposed [35, 36]. For the rest of this work, we will only consider the 1-layer QAOA, which is sufficiently shallow to withstand the effects of moderate noise and obtains an average probability of sampling the ground state quadratically larger than random guessing [30], i.e., scaling as 2^{−0.5N}. III. THE INSTANTANEOUS QUANTUM POLYNOMIAL CIRCUIT The IQP is a non-universal model of quantum computation with similar roots to the boson sampling problem, whose aim is to strengthen the general belief that quantum computers are more powerful than classical machines [25, 26]. Under certain widely believed complexity-theoretic assumptions, sampling from the IQP state H^{⊗N} exp(−iH_IQP(θ⃗)) |+⟩^{⊗N} in the computational basis of all qubits is a hard task for a classical computer [26]. Here the IQP Hamiltonian is defined as H_IQP(θ⃗) = Σ_{i<j} θ_ij Z_i Z_j + Σ_i θ_i Z_i. The IQP ansatz employed in this work is a generalization where Hadamard gates are replaced with independent parameterized single-qubit rotations R_x(ϕ) = exp(−iϕX/2), leading to the quantum circuit |Ψ(θ)⟩ = [⊗_i R_x(ϕ_i)] exp(−iH_IQP(θ⃗)) |+⟩^{⊗N}, (4) where θ = (ϕ⃗, θ⃗) are free, real parameters. The IQP state is recovered by setting ϕ_i = π/2 and making the transformation θ_i → θ_i − π/2. Since the IQP state can be brought to this form by modifying the final layer of single qubit rotations, we expect generic states of this form to be difficult to sample classically as well. We also make the important observation that, up to single qubit rotations and energy rescaling, the IQP state in Eq.
(4) is the same as that produced by a 1-layer QAOA designed to solve for the ground state of H_IQP.

This ansatz generalizes the optimization cost function of Eq. (3) to

C(θ) = ⟨Ψ(θ)| H |Ψ(θ)⟩ = Σ_i h_i ⟨Z_i⟩_θ + Σ_{i<j} J_ij ⟨Z_i Z_j⟩_θ,    (5)

which we refer to as the optimization landscape. The task of computing the cost function defined in Eq. (5) is then reduced to estimating the expectation values of the spins ⟨Z_i⟩_θ and correlators ⟨Z_i Z_j⟩_θ in an arbitrary state |Ψ(θ)⟩. In the Supplemental Material we show that the latter expression can be reduced to calculating partition functions of reduced Ising Hamiltonians H_e, where the e's are single- or two-qubit subsets. The reduced generator H_e retains only the terms in H_IQP that anticommute with the operator X_e = Π_{i∈e} X_i. This leads to a highly restricted graph topology, for which partition functions can be evaluated exactly. We generalize this method to show that IQPs have simple analytic expressions for all expectation values of the form ⟨Z_e⟩_θ, with a number of terms that scales like O(2^{|e|}).

These properties of the IQP ansatz make it a good candidate for solving optimization problems, as it is guaranteed to be at least as powerful as 1-layer QAOA and the training can be performed efficiently using only classical resources. The access to exact, analytic expressions for the cost function also means we do not need to worry about finite sampling or device errors during training. Barren plateau issues can also be ruled out, as we can evaluate gradients to arbitrary precision and use adaptive step sizes. Access to a quantum computer is only necessary during the final sampling step, so we expect our protocol to perform well under moderate hardware noise.

As opposed to the standard QAOA ansatz, the IQP is sufficiently flexible to produce all computational states. In particular, this means that, if a classical algorithm were able to find the global optimum of Eq. (5), it would also find the exact ground state of H. In [34], it is shown that the optimization landscapes of IQP ansätze with only polynomially many terms (like our ansatz) are generally non-convex and that computational states other than the solution may form local minima, which we call trivial minima. Converging to such a local minimum would imply that the algorithm does not need access to a quantum computer, as the bits x_i of the solution corresponding to the optimal parameters are given by ⟨Z_i⟩_θ, which can be efficiently computed classically.

We prove that the optimization landscapes can also contain non-trivial minima, and give a minimal example of this in the Supplemental Material. Remarkably, we provide numerical evidence that for the SK model such a local minimum is located in the vicinity of the QAOA parameters, and show that sampling the IQP circuit at this point greatly enhances the chance of finding the ground state compared to QAOA.

IV. METHODS

A remarkable result of [30] is that, for a wide range of optimization problems that can be formulated as in Eq. (1), the 1-layer QAOA is capable of approximating pseudo-Boltzmann states proportional to exp(−βH/2) |+⟩^⊗N, with large inverse temperature β, up to relative phases that do not affect the distribution. This is important because sampling this state produces the same distribution as sampling the mixed thermal state ρ_β = e^{−βH}/Z for classical Hamiltonians, which is useful for a variety of optimization tasks.
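This equivalence is easy to verify numerically. The following minimal sketch (our own illustration, not code from the paper) checks, for a random classical Ising instance, that the Born-rule distribution of the normalized pure state exp(−βH/2)|+⟩^⊗N coincides with the Boltzmann distribution e^{−βE(x)}/Z:

```python
import numpy as np

# For a diagonal classical Hamiltonian H, the pure state exp(-beta*H/2)|+>^N,
# once normalized, is measured in the computational basis with the same
# probabilities as the thermal state rho_beta = exp(-beta*H)/Z.

rng = np.random.default_rng(1)
N, beta = 8, 1.5
dim = 2 ** N

# Random classical Ising energies E(x) for every bitstring x (stand-in for Eq. (1)).
bits = (np.arange(dim)[:, None] >> np.arange(N)) & 1
z = 1 - 2 * bits
J = np.triu(rng.normal(0, 1 / np.sqrt(N), (N, N)), 1)
energies = np.einsum('xi,ij,xj->x', z, J, z)

# Amplitudes of exp(-beta*H/2)|+>^N before normalization: exp(-beta*E/2)/sqrt(2^N).
amp = np.exp(-beta * energies / 2)
p_state = amp ** 2 / np.sum(amp ** 2)   # Born-rule probabilities

# Classical Boltzmann distribution exp(-beta*E)/Z.
w = np.exp(-beta * energies)
p_thermal = w / w.sum()

print(np.allclose(p_state, p_thermal))  # True: identical distributions
```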
In our work, we use this result to justify the QAOA as a good starting point for optimizing the IQP ansatz. Since the 1-layer QAOA ansatz can be recovered by restricting the parameters of the full IQP, we find the optimal QAOA position classically, using the BFGS algorithm [37] on the submanifold. To find a local optimum in the vicinity of this position, it is sufficient to use simple gradient descent. However, we also explore the feasibility of our algorithm for producing low-energy thermal states, which is achieved using a different approach called VarQITE [32,33]. This protocol aims to find the trajectory on the manifold that best approximates the action of exp(−τH) on the state. If the initial state is pseudo-Boltzmann, then applying this operator leads to a decrease in temperature. The parameters in the ansatz are evolved according to the McLachlan variational principle [38],

A(θ) dθ/dτ = −(1/2) ∇_θ ⟨H⟩_θ,

where the coupling matrix A describes the geometry of the variational manifold (i.e., it is the Gram matrix of the tangent vectors corresponding to each parameter) and τ is the imaginary time variable. In the Supplemental Material we show that the coefficients of the Gram matrix can be expressed as expectation values of low-weight Pauli operators in the IQP, for which we find simple analytic expressions. However, this calculation is computationally expensive, so when the focus is on finding a local minimum rather than preserving a thermal profile, we set A = I and perform simple gradient descent.

In both cases, this linear system of ODEs defines a flow on the variational manifold, which we solve numerically using the Runge-Kutta method [28]. We stop this procedure when we arrive at a local minimum, or when A becomes non-invertible. The latter typically happens after a long plateau in the energy profile, which we illustrate in the Supplemental Material. Such an event becomes rare as we increase the number of qubits, but for problems that exhibit this behavior, we choose the optimal parameters in the middle of the plateau. After finding the optimal parameters, we sample the circuit and compute the probability of finding the ground state. We share the code used for implementing this protocol in [39].

We characterize our distributions using an effective inverse temperature β. This is obtained by minimizing the Kullback-Leibler (KL) divergence of the IQP distribution to the family of thermal distributions. Here, we compute the KL divergence exactly, but in practice it would be estimated from samples [40].

V. RESULTS

We test our method on Sherrington-Kirkpatrick (SK) Hamiltonians [41][42][43] of up to N = 29 spins using Qiskit exact state-vector quantum simulators [44]. These Hamiltonians are of the form of Eq. (1) with h_i = 0 and J_ij independent and identically distributed Gaussian random variables of zero mean and standard deviation 1/√N. The unbiased SK model presents a Z_2 symmetry, so the ground state is unique up to flipping all qubits. This is a well-understood spin model with compelling classical solvers [45]. In quantum optimization it is one of the most studied benchmark problems [30,[46][47][48][49][50]].

In Fig. 2, we show how the overlap of the optimized IQP state onto the ground eigenspace varies with the problem size, and how it compares to the overlap achieved by the initial QAOA. Both plots show a clear exponential trend with relatively low and slowly increasing variance. This confirms that our algorithm has a significantly better exponential scaling than 1-layer QAOA.
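As a concrete illustration of the effective-temperature diagnostic described in the Methods, the sketch below (our own illustration; function names are ours) fits β by minimizing KL(p‖q_β) against the thermal family q_β(x) ∝ e^{−βE(x)}:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_to_thermal(beta, p, energies):
    """KL(p || q_beta) = -S(p) + beta*<E>_p + log Z(beta)."""
    logz = np.logaddexp.reduce(-beta * energies)
    logp = np.log(p, where=p > 0, out=np.zeros_like(p))
    entropy = -np.sum(p * logp)
    return -entropy + beta * np.dot(p, energies) + logz

def effective_beta(p, energies):
    """Effective inverse temperature: argmin_beta KL(p || q_beta)."""
    res = minimize_scalar(kl_to_thermal, bounds=(1e-6, 50.0),
                          args=(p, energies), method='bounded')
    return res.x, res.fun

# Sanity check: a genuinely thermal distribution is recovered exactly.
rng = np.random.default_rng(0)
energies = rng.normal(size=1024)
p = np.exp(-2.0 * energies)
p /= p.sum()
beta_fit, kl = effective_beta(p, energies)
print(beta_fit, kl)   # ~2.0 and ~0
```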
We also study how the temperature of the distribution changes as we perform imaginary time evolution on our variational manifold up to time τ = 10, close to convergence. In Fig. 3 we show that the optimal normalized temperatures achieved by the final optimized IQP state are lower than those achieved by the starting QAOA state. However, the KL divergence between the optimized IQP state and the best-fitting thermal state is higher and presents more dispersion across Hamiltonian instances than for QAOA. This indicates that IQP states might be beneficial for the task of sampling low-energy eigenstates, while QAOA provides a better approximation to thermal distributions.

In Fig. 4 we plot example distributions produced by the QAOA and the optimized IQP ansatz. From the qualitative aspect of the IQP distribution, we see that the performance of our algorithm in increasing the ground-state overlap cannot be entirely explained as a consequence of having a lower temperature. The distribution becomes arched, and the probabilities of sampling the low-energy eigenstates rise orders of magnitude above the predictions of the thermal fit. Future theoretical work is necessary to understand how this effect emerges, and whether it is recovered in more general optimization problems.

Our algorithm is also studied in a more realistic setting, where quantum circuits are affected by hardware noise. We use the recently released Quantinuum H2 trapped-ion quantum hardware and emulator [51]. The emulator performs exact state-vector simulation under a noise model that replicates the noisy behavior of the real device. The device presents all-to-all connectivity and high-fidelity parameterized gates of the form exp(−iθZZ), making it ideal for our protocol and for QAOA on densely connected Hamiltonians.

For this analysis, we study biased SK models with the coefficients h_i independently sampled from the same Gaussian distribution as the coefficients J_ij. The presence of the bias breaks the Z_2 symmetry, halving the initial overlap with the ground eigenspace and making the problem slightly more general. The bias adds an additional slope in the vicinity of QAOA that sometimes dissolves the local minimum we exploited in the previous study, leaving no obvious method to pick a point along the gradient-descent trajectory. Our aim in this analysis is, however, to study the performance in the neighborhood of QAOA rather than to provide the most optimized form of our protocol. For this purpose we pick the optimized 1-layer QAOA as the first circuit, and three equally spaced circuits corresponding to three of the first gradient-descent steps. The Supplemental Material describes the criterion we used to pick these circuits.

Figure 5 compares the quality of the best solutions obtained by the corresponding four circuits. Of the 312 instances, we optimally solve 5, 21, 59, and 86, respectively, for the four circuits. The best solution sampled for each instance has an average approximation ratio and standard deviation of (0.87, 0.10), (0.935, 0.083), (0.948, 0.083), and (0.970, 0.060), respectively. When considering for each instance only the best solution obtained from the four circuits as the output of our algorithm, 136 instances are solved optimally (almost 44%) and the distribution has an average approximation ratio and standard deviation of (0.985, 0.029).
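For reference, the following sketch (our own illustration, with made-up stand-in samples rather than actual device shots) shows how the best sampled energy and the approximation ratio quoted above can be computed:

```python
import numpy as np

def ising_energy(z, h, J):
    """Energy of spin configuration z in {-1,+1}^N for H = h.z + z.J.z (i<j)."""
    return float(h @ z + z @ np.triu(J, 1) @ z)

def best_solution(samples, h, J, e_ground):
    """Best sampled energy and approximation ratio E_best / E_ground
    (close to 1 is good when both energies are negative, as for SK)."""
    energies = np.array([ising_energy(z, h, J) for z in samples])
    e_best = energies.min()
    return e_best, e_best / e_ground

rng = np.random.default_rng(0)
N = 12
J = np.triu(rng.normal(0, 1 / np.sqrt(N), (N, N)), 1)
h = rng.normal(0, 1 / np.sqrt(N), N)            # a biased SK instance
samples = 1 - 2 * rng.integers(0, 2, (64, N))   # stand-in for device shots

# Brute-force ground energy, feasible only for this small example.
zs = 1 - 2 * ((np.arange(2 ** N)[:, None] >> np.arange(N)) & 1)
e_ground = min(ising_energy(z, h, J) for z in zs)
print(best_solution(samples, h, J, e_ground))
```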
DISCUSSION

The algorithm we introduce explores the natural connection between the 1-layer QAOA state and IQP circuits. Studying the vicinity of the QAOA in this broader variational manifold leads to a better understanding of its optimality as a shallow-depth quantum heuristic, as well as of how it can be improved.

We show that, for the case of SK Hamiltonians, our approach amplifies the probability of sampling the ground state beyond what can be obtained using classical tools such as MCMC. The hardware implementation is as resource-demanding as it is for 1-layer QAOA, and parameter training can be performed classically in time O(N³). Results on the Quantinuum H2 show the reliability of our protocol for solving large instances with scarce quantum resources.

We leave as future work the development of an optimized strategy to pick points along the gradient-descent trajectory where sampling from the quantum computer might yield even better performance.

The results presented motivate the development of strategies to compare the performance of our protocol against state-of-the-art classical algorithms at the scale of real-world combinatorial optimization problems. For example, access to the analytical expectation value of the problem Hamiltonian, and higher powers of it, might provide an efficient way to estimate the probability of sampling the low-energy tail for large-scale problems.

In this Supplemental Material we present some technical aspects of the work that are not shown in the main text. Section VII contains a full derivation of the analytic expressions for expectation values of operators in the SD-IQP ansatz, as well as the application of this general formula to the particular case of a problem Hamiltonian. In Section VIII we prove that it is possible for an IQP optimization landscape to have local minima that do not correspond to eigenstates of the problem Hamiltonian, by constructing a minimal example. Section IX shows how to obtain analytic expressions for the Gram matrix elements, which are essential to perform VarQITE. In Section X we show examples of the cost-function evolution until convergence and discuss the implications of the choice of sampling at different locations. Section XI discusses a criterion for selecting IQP circuits when the energy profile displays no plateaus.

VII. ANALYTIC EXPRESSION FOR EXPECTATION VALUES IN THE IQP STATE

In this appendix, we derive exact analytic expressions for expectation values of low-weight Pauli operators in the IQP ansatz. A similar derivation is presented in [52] for 1-layer QAOA circuits, which contain only two parameters, so the resulting expressions are a particular case of the ones derived in this appendix. In contrast to the expressions derived in [12] for extra-parameterized 1-layer QAOA circuits, our expressions apply to arbitrary local Hamiltonians. During the write-up of the second version of this manuscript, similar analytical expressions for arbitrary local Hamiltonians were obtained [53].
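As a cross-check of the derivation that follows, the short statevector sketch below (our own illustration; it assumes the one- and two-body generator H_IQP = Σ_i θ_i Z_i + Σ_{i<j} θ_ij Z_i Z_j from the main text) computes ⟨Z_i Z_j⟩_θ by brute force for small N, against which the analytic expressions can be compared:

```python
import numpy as np
from itertools import combinations

N = 6
dim = 2 ** N
rng = np.random.default_rng(0)

bits = (np.arange(dim)[:, None] >> np.arange(N)) & 1
z = 1 - 2 * bits   # z[x, i] = +/-1 value of qubit i in basis state x

theta1 = rng.normal(size=N)
theta2 = {e: rng.normal() for e in combinations(range(N), 2)}
phi = rng.uniform(0, np.pi, size=N)

# Diagonal of H_IQP over the computational basis.
diag = z @ theta1
for (i, j), t in theta2.items():
    diag = diag + t * z[:, i] * z[:, j]

psi = np.exp(-1j * diag) / np.sqrt(dim)   # exp(-i H_IQP) |+>^N

def apply_rx(psi, k, angle):
    """Apply Rx(angle) = exp(-i angle X/2) to qubit k (bit k of the index)."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    psi = psi.reshape(-1, 2, 2 ** k)
    a, b = psi[:, 0, :].copy(), psi[:, 1, :].copy()
    psi[:, 0, :] = c * a - 1j * s * b
    psi[:, 1, :] = c * b - 1j * s * a
    return psi.reshape(-1)

for k in range(N):
    psi = apply_rx(psi, k, phi[k])

# Z_i Z_j is diagonal, so its expectation is a weighted sum over |psi|^2.
prob = np.abs(psi) ** 2
for (i, j) in [(0, 1), (2, 5)]:
    print(i, j, np.dot(prob, z[:, i] * z[:, j]))
```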
Let P_e be some Pauli string that applies non-identity Pauli operators to a subset of qubits e ⊆ N = {1, 2, ..., N}, and let w_e = |e| be the weight of P_e. If we identify Pauli strings that differ only by a phase, we can characterize them using two length-N boolean vectors a, b ∈ Z_2^N through the decomposition P_e = Z_a X_b, where we used the notation Z_a = Π_{i=1}^N Z_i^{a_i}. Denote by e_Z and e_X the subsets of N corresponding to the nonzero elements of a and b respectively, and let w_a = |e_Z| and w_b = |e_X| be the corresponding weights. Our goal is then to compute ⟨Z_a X_b⟩_θ. First, we show how the layer of single-qubit X rotations transforms this operator: since R_x(ϕ_i)† Z_i R_x(ϕ_i) = cos(ϕ_i) Z_i + sin(ϕ_i) Y_i while X_b is left invariant,

(⊗_i R_x(ϕ_i))† Z_a X_b (⊗_i R_x(ϕ_i)) = Π_{i∈e_Z} (cos(ϕ_i) Z_i + sin(ϕ_i) Y_i) X_b.

This can be expanded as a sum of 2^{w_a} Pauli strings, whose expectation values are then to be calculated in the state exp(−iH_IQP) |+⟩^⊗N. To simplify notation we will denote all expectation values in this state by ⟨•⟩, which differs from the expectation value in the full ansatz by omitting the subscript θ. Recycling the previous notation for brevity, we are now interested in computing expectation values of the form ⟨Z_a X_b⟩. Since we are only working with IQPs, we can expand the Hamiltonian as a sum of diagonal Pauli strings, H_IQP = Σ_e θ_e Z_e. The terms in the Hamiltonian that commute with Z_a X_b can be straightforwardly canceled out, while those that anticommute with Z_a X_b can be moved through with a flipped sign. This leaves only the reduced generator H_e in the propagator; in the last equality of the resulting expression we expand the state |+⟩^⊗N as a sum over all spin configurations x_i ∈ {+1, −1} and make use of the fact that the central operator is manifestly diagonal in this basis. We can absorb the Z_a into the propagator by noting that exp(−iπZ/2) = −iZ and using the transformed angles θ̃_i = θ_i − a_i π/2, giving a simple expression that has the interpretation of a partition function over the bipartite graph formed by splitting the set of all qubits N into e_X and its complement. This suggests we should separate the spins corresponding to the different subsets, so we denote by r the configurations of spins in e_X and by s the configurations of the complement. The expression is then rewritten as a sum over the configurations of the spins in e_X only; for simplified notation we introduce the Q function. We can simplify this even further by grouping configurations that differ only by the Z_2 operation of flipping the sign of all spins, where a_P = Σ_{j∉e_X} a_j mod 2. Merging the two complex phases, we obtain the final expression, Eq. (17), which expresses the expectation value as a sum of 2^{w_b − 1} terms and can be computed efficiently when the operators we are interested in have small weight w ≪ N. In particular, to perform the optimization of the ansatz as described in the main text, we only make use of this expression with w_b up to 2. Note that when w_b = 0 we have a vanishing expectation value.

We will now show how Eq. (17) can be used to efficiently compute the expectation of the Hamiltonian in the IQP ansatz,

⟨H⟩_θ = Σ_i h_i ⟨Z_i⟩_θ + Σ_{i<j} J_ij ⟨Z_i Z_j⟩_θ.    (18)

We may now use Eq. (17) to expand each term in this expression. First, we note that expectation values with no X operators simply vanish, so ⟨Z_i⟩ = 0 and ⟨Z_i Z_j⟩ = 0.
Then we can compute the remaining terms individually. If we plug these expressions into Eq. (18), we arrive at the final analytic form for our Hamiltonian expectation value. Note that the computational time for evaluating this expectation value, as well as its gradient in θ, is O(N³) if the problem Hamiltonian has all-to-all connectivity, as is the case for the SK model. It can be reduced to O(DN²) if the problem can be formulated on a graph whose degree is bounded by D. Analogous efficient expressions may be obtained for problem Hamiltonians that include many-body interactions, as long as the weights scale at most like O(log N) with the problem size.

VIII. EXAMPLE OF NON-TRIVIAL MINIMA OF THE OPTIMIZATION LANDSCAPE

In this appendix, we give a minimal example of a problem for which the IQP ansatz leads to non-trivial local minima, where by non-trivial we mean that the state produced is not an eigenstate of the Hamiltonian. Additionally, the state is shown to have an overlap of 0.5 onto the degenerate ground eigenspace, so the problem solution can be recovered by sampling.

Consider the 4-qubit Hamiltonian H = Z_0 Z_1 + Z_0 Z_2 + Z_0 Z_3. We explore this Hamiltonian using the IQP ansatz given by all 1-body and 2-body operators in the X-basis, which is equivalent to the IQP state defined in the main text. We compute the expectation value ⟨Z_0 Z_1⟩ in the IQP state; the other two terms in H are obtained by the cyclic permutations of the qubits 1, 2, and 3. We provide a symbolic implementation of this expression in Python [39]. Using symbolic differentiation, we show that the line given by the equations θ_0 = π/2, θ_1 = π/2, θ_3 = π/2, θ_01 = π/2, θ_02 = π, θ_12 = 0, θ_03 = π/2, θ_13 = π/2, θ_23 = 0 (note that θ_2 is free and parameterizes the line) is a critical line of local minima. This is done by verifying that the gradient is 0 for all θ_2 and that the Hessian is positive semi-definite, with a single null eigenvalue corresponding to the direction along the curve (except at the isolated point θ_2 = π/2, which we exclude from our analysis). In addition, the expected value of the Hamiltonian on this line is ⟨H⟩ = −2, while it is easy to check that all eigenvalues of the Hamiltonian must be odd integers. Therefore, the states created by the ansatz with the specified parameters must be superpositions of eigenstates with different eigenvalues. We claim that this is sufficient to show that all points on the line are non-trivial local minima and give the following proof:

Proof. Consider an optimization space parameterized by the (ϕ, ⃗θ) variables, with the usual parameter range of 0 to 2π. Call the cost function defined on this space J(ϕ, ⃗θ). Assume J is infinitely smooth, so we can freely Taylor expand around all points (this can be verified from the analytic form of J in terms of sums of products of smooth functions). Assume the line of critical points found in the counterexample is defined by the condition ⃗θ = ⃗θ_0 for some constant ⃗θ_0. ϕ can then be considered a parameterization of the critical line (changing its value moves us along the line). On this line, we showed that the cost function is constant and the gradient is 0. At the Hessian level, we find that all second-order derivatives that contain ϕ are zero and that the restriction of the Hessian to the ⃗θ subspace is positive definite on some interval [ϕ_<, ϕ_>] with 0 < ϕ_< < ϕ_> < π/2, i.e., v_i (∂²J/∂θ_i ∂θ_j) v_j > 0 for all nonzero vectors ⃗v and ϕ ∈ [ϕ_<, ϕ_>] (Einstein summation convention is employed). In particular, for all unit vectors ⃗n we have n_i (∂²J/∂θ_i ∂θ_j) n_j ≥ λ_min(ϕ) > 0, with λ_min(ϕ) the smallest eigenvalue of the Hessian at (ϕ, ⃗θ_0)
when restricted to the θ_i variables. Note that the role of ϕ is played by θ_2 in our example; we changed the name for brevity. Suppose we want to focus our attention on the point (ϕ_0, ⃗θ_0) with ϕ_0 ∈ [ϕ_<, ϕ_>] and show that it is indeed a local minimum. The coordinates of every point in the vicinity of this critical point that does not lie on the critical line can be written as (ϕ_0 + ε∆ϕ, ⃗θ_0 + ε∆⃗θ), where ∆⃗θ is a unit vector and ε is some small quantity. We can write the value of the cost function at this point as a second-order Taylor expansion with remainder R(ε), where from the theory of Taylor series we know that R(ε) is continuous and R(ε)/ε² → 0 as ε → 0. The linear term in this expansion has been dropped because, by Eq. (31), all points with ⃗θ = ⃗θ_0 have vanishing gradients. Since the cost function is constant under shifts in ϕ, this reduces to an inequality involving only the ⃗θ directions. Since the minimum eigenvalue is strictly positive in the ⃗θ subspace and R(ε)/ε² is a continuous function that decays to 0 for ε → 0, we conclude that there must exist some ε_0 > 0 such that the RHS of the above inequality is strictly positive for all |ε| < ε_0. Since this implies that the LHS must also be positive for all values of ε in this ball around 0, the value of the cost function in the neighborhood must be strictly larger than its value on the critical line. The above prescription can be applied to all perturbations around the critical point except those with ∆⃗θ = 0, for which we know that the cost function must be constant. This proves that all points on the critical line represent local minima (similar to the minima of the Mexican hat potential).

IX. DERIVATION OF THE GRAM MATRIX

In order to perform VarQITE we must compute the Gram matrix A corresponding to the tangent vectors of our variational manifold. Following its definition in [32], we have A_µν = Re⟨∂_µΨ|∂_νΨ⟩, where we use Greek indices µ, ν that run over all variational parameters in the ansatz. We can now separate this matrix into several blocks based on the type of variational parameter. From the definition given in the main text we can explicitly compute the tangent vectors, where e is a subset of N present in the IQP ansatz. For the θθ part of the matrix we obtain a diagonal block, where we again employ the notation ⟨•⟩, which stands for the expectation value in the state exp(−iH_IQP) |+⟩^⊗N. This result states that varying the parameters θ_e one at a time from the same starting point on the manifold takes us along mutually orthogonal directions. Corresponding expressions follow for the θϕ part with |e| = 1 and with |e| = 2. Since A must be Hermitian, we can obtain the other off-diagonal block as A_ϕθ = (A_θϕ)^T. Finally, analogous expressions hold in the ϕϕ sector.

X. ENERGY LANDSCAPE

In the main text, we claim that, for small problem sizes of the unbiased SK model, we sometimes do not find a non-trivial local minimum and instead the ansatz loses its overlap with the ground state. In this case, we still hit a plateau during the optimization, and we overcome this issue by choosing to sample the IQP ansatz close to the middle of the plateau. This situation is illustrated in Fig. 6(b). In the attached histogram we see that the state obtained at the end of the optimization (orange) has non-zero support only on a small number of states and does not find the ground state, despite having a much lower average energy. Sampling in the middle of the first plateau (blue) results in a wider spread of states, including a high overlap onto the ground state.
In Fig. 7 we show that the cases where the algorithm does not converge to a good local optimum become very unlikely as we increase the number of qubits.

XI. CRITERION TO SELECT IQP CIRCUITS

In the case of biased SK Hamiltonians, the local minimum in the vicinity of QAOA is usually dissolved by the bias, and the average energy does not present an intermediate plateau. This leaves no obvious strategy for picking a point along the gradient-descent trajectory that provides some intuition of good performance. However, rather than optimizing this step of our protocol, the aim of this work is to study the performance in the neighborhood of QAOA. We therefore follow a simple criterion to pick four circuits in this vicinity.

The first circuit is precisely the optimized 1-layer QAOA, which serves as a warm start for our protocol. To pick the other three circuits we observe the evolution of the parameters θ_ij in the two-qubit gates of the ansatz. When these parameters reach values close to 0 or π, the implementation of the corresponding gate can be replaced by single-qubit gates: the identity or two single-qubit π-rotations, respectively. The remaining two-qubit gates define the connectivity graph of the ansatz at every step.

The motivation for removing two-qubit gates with parameters close to 0 or π is that the noise inserted by their hardware implementation can be larger than the error incurred when they are replaced by their approximation as the identity or single-qubit gates, respectively. We decide to remove a two-qubit gate with parameter θ_ij close to 0 (or π) if the effect of the gate is smaller than the reported infidelity p ∼ 10⁻³ of Quantinuum's devices, i.e., if |sin(θ_ij/2)| < p (or |cos(θ_ij/2)| < p).

Our criterion for a fair comparison to 1-layer QAOA is that this graph is at least connected, i.e., formed by a single graph component. This ensures that, as for QAOA, the IQP circuit cannot be split and implemented via separate unconnected circuits.

We pick the fourth circuit as the last circuit in the trajectory where the graph is still connected, and the second and third circuits at equidistant steps in that range. The fraction of entangling gates left in the three IQP circuits compared to QAOA has an average of 0.83 and a standard deviation across problem instances of 0.25. A minimal code sketch of this pruning-and-connectivity criterion is given after the figure captions below.

FIG. 2. Optimization results for 300 randomly generated Sherrington-Kirkpatrick Hamiltonians of up to 29 spins. a) Probability of sampling the ground state configuration in the optimal IQP ansatz. b) Enhancement factor p_IQP/p_QAOA for finding the ground state in the optimized IQP ansatz compared to the original QAOA. The IQP was optimized until convergence using simple gradient descent. Using a linear fit, we find the average probability of sampling the ground state p_IQP ∼ 2^{−αN} with α = 0.31 ± 0.02 and the average enhancement factor p_IQP/p_QAOA ∼ 2^{δN} with δ = 0.23 ± 0.02. The errors indicate the variability in gradient at one standard deviation.

FIG. 3. Normalized effective inverse temperatures β∥J∥ in the QAOA state and the IQP state after VarQITE evolution for a time τ = 10, for 20 randomly generated Sherrington-Kirkpatrick Hamiltonians of each size from 10 to 20 qubits. We also show the averages and standard deviations of the KL divergences for each problem size.

FIG. 4. Overlap of the state produced by our ansatz onto different Hamiltonian eigenvalues as a function of energy, for the QAOA parameters (top) and the optimized IQP parameters (bottom), for a randomly generated 20-qubit Sherrington-Kirkpatrick Hamiltonian. Brighter color indicates higher coarse-grained point density. The red line illustrates the thermal distribution model that minimizes the KL divergence. A red circle marks the location of the ground state.

FIG. 5. Optimization results on the Quantinuum H2 trapped-ion quantum hardware and emulator for randomly generated biased Sherrington-Kirkpatrick Hamiltonians of 7 to 32 qubits: two instances per problem size on the device (stars) and ten instances on the emulator (circles). For each instance we pick four steps along the gradient-descent trajectory, corresponding to the standard 1-layer QAOA and three IQP circuits, and then take ∼ 2^{0.32N} ∈ [4, 1208] shots, equally distributed across the four circuits. Each data point in the figure corresponds to the best solution sampled for each instance. If the best solution is optimal the point is placed in the lower row, while for sub-optimal solutions we place the point in the upper row to visualise the approximation error.

FIG. 6. Energy plot during simple gradient descent, starting from the optimal QAOA parameters, for randomly generated 25-qubit optimization problems. a) A nearby non-trivial local minimum is found and the probability of sampling the ground state is amplified. b) The ansatz becomes degenerate after a long plateau. Orange shows samples collected from the final step and blue shows samples collected in the middle of the first plateau. All histograms are obtained from a total of 200 samples.
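The following minimal sketch (our own illustration; the p ∼ 10⁻³ threshold is the value quoted in Section XI) implements the gate-pruning rule and the connectivity check described above:

```python
import numpy as np

def surviving_edges(theta2, p=1e-3):
    """Keep edge (i, j) unless |sin(theta/2)| < p or |cos(theta/2)| < p,
    i.e. unless the two-qubit gate is effectively the identity or a
    pair of single-qubit pi-rotations."""
    return [e for e, t in theta2.items()
            if abs(np.sin(t / 2)) >= p and abs(np.cos(t / 2)) >= p]

def is_connected(n, edges):
    """Union-find connectivity test on n nodes."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edges:
        parent[find(i)] = find(j)
    return len({find(i) for i in range(n)}) == 1

rng = np.random.default_rng(0)
N = 8
theta2 = {(i, j): rng.uniform(0, np.pi)
          for i in range(N) for j in range(i + 1, N)}
theta2[(0, 1)] = 1e-5   # a gate that acts almost as the identity

edges = surviving_edges(theta2)
print(len(edges), "of", len(theta2), "gates kept; connected:",
      is_connected(N, edges))
```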
Synthesis and Biological Evaluation of 1-(Diarylmethyl)-1H-1,2,4-triazoles and 1-(Diarylmethyl)-1H-imidazoles as a Novel Class of Anti-Mitotic Agent for Activity in Breast Cancer We report the synthesis and biochemical evaluation of compounds that are designed as hybrids of the microtubule targeting benzophenone phenstatin and the aromatase inhibitor letrozole. A preliminary screening in estrogen receptor (ER)-positive MCF-7 breast cancer cells identified 5-((2H-1,2,3-triazol-1-yl)(3,4,5-trimethoxyphenyl)methyl)-2-methoxyphenol 24 as a potent antiproliferative compound with an IC50 value of 52 nM in MCF-7 breast cancer cells (ER+/PR+) and 74 nM in triple-negative MDA-MB-231 breast cancer cells. The compounds demonstrated significant G2/M phase cell cycle arrest and induction of apoptosis in the MCF-7 cell line, inhibited tubulin polymerisation, and were selective for cancer cells when evaluated in non-tumorigenic MCF-10A breast cells. The immunofluorescence staining of MCF-7 cells confirmed that the compounds targeted tubulin and induced multinucleation, which is a recognised sign of mitotic catastrophe. Computational docking studies of compounds 19e, 21l, and 24 in the colchicine binding site of tubulin indicated potential binding conformations for the compounds. Compounds 19e and 21l were also shown to selectively inhibit aromatase. These compounds are promising candidates for development as antiproliferative, aromatase inhibitory, and microtubule-disrupting agents for breast cancer. Introduction Designing single agents that act against multiple biological targets is of increasing interest and prominence in medicinal chemistry [1][2][3][4]. Dual-targeting drugs are designed with the potential to be more potent and efficient and overcome many of the disadvantages of single drugs such as low solubility, side effects [5], and multidrug resistance (MDR). While the molecular mechanisms of resistance to chemotherapeutics have been identified, MDR is known to be a key factor in the failure of breast cancer chemotherapy [6]. Traditionally, drugs have been designed to target a single biological target (protein), aiming for high selectivity and thus avoiding unwanted effects due to off-target events. The interaction of a drug with multiple target proteins has been regarded as potentially associated with adverse side effects. However, for complex diseases such as cancer, it is now recognised that a single-target drug may not achieve the optimum therapeutic effect. Molecules that are effective at more than one target protein may overcome incomplete efficacy and demonstrate an increased safety profile compared to single-targeted ones [2]. Dual-targeting strategies may offer a more favourable outcome of cancer treatment. A possible strategy to improve the outcome for postmenopausal breast cancer patients is to design compounds with dual aromatase and tubulin targeting activities, which may offer the potential benefits of improved efficacy and fewer side effects [7,8]. The objective of our research is to investigate a new series of 1-(diarylmethyl)-1H-1,2,4-triazoles and 1-(diarylmethyl)-1H-imidazoles as a novel class of antimitotic compounds with an interesting biochemical profile particularly as tubulin-targeting agents and aromatase inhibitors for the treatment of breast cancer. 
Breast cancer is the most commonly diagnosed cancer in women; it is estimated that approximately one in eight women will develop breast cancer during their lifetime, and it is the most frequent cause of death for women in the age group 35-55 [9]. There were over two million new cases in 2018 [10], and the number of cases is predicted to rise due to an ageing population [11,12]. Mortality has decreased due to improved screening and early detection together with the use of adjuvant therapy [13]. Approximately 70-80% of breast cancers are hormone-dependent; their growth is stimulated in response to the hormone estrogen, with the majority of these estrogen receptor positive (ER+) cancers also expressing the progesterone receptor (ER+/PR+ cancers). Upregulation of the gene encoding the PR is directly mediated by ER, and PR modulates ERα action in breast cancer [14]. Aromatase (CYP19A1), a member of the cytochrome P-450 enzyme superfamily, catalyses the aromatisation of C-19 androgens to C-18 estrogens in the final step in estrogen biosynthesis, and it is an attractive target for selective inhibition [15][16][17]. Estrogen deprivation is an effective therapeutic intervention for hormone-dependent breast cancer (HDBC) and has been clinically established by the inhibition of the aromatase enzyme. The aromatase inhibitors (AIs), e.g., letrozole 1 [18], anastrozole 2 [19], and exemestane [20] (Figure 1a), prevent the stimulating effects of estrogen in breast tissue [19], and they are approved in the treatment of a wide spectrum of breast cancers [21]. These AIs have demonstrated superior efficacy in postmenopausal women and have few associated risks apart from reduction in bone density [8,[21][22][23], and emerging resistance [24,25]. The selective estrogen receptor modulator (SERM) tamoxifen 3a (Figure 1a) is effective for the treatment of ER+ breast cancer [13]; however, resistance is a clinical problem [26] together with a small increase in incidences of blood clots and endometrial cancers for postmenopausal women [27,28]. The potential advantage of the tamoxifen metabolites endoxifen (3b) and norendoxifen (3c) in endocrine-refractory metastatic breast cancer is reported [29]. Breast cancers that are (ER+/PR+) are likely to respond to hormone therapy such as tamoxifen and anastrozole [23], while the prophylatic use of tamoxifen, raloxifene, or anastrozole is recommended for postmenopausal women at high risk of developing breast cancer [30,31]. Approximately 20% of breast cancers overexpress the human epidermal growth factor receptor 2 (HER2), which promotes the growth of cancer cells. Effective treatments for HER2+ breast cancers include the monoclonal antibody trastuzumab [32], the antibody-drug conjugate ado-trastuzumab emtansine [33], and the dual tyrosine kinase inhibitor lapatinib which targets both the HER/neu and the epidermal growth factor receptor (EGFR) [34]. Breast cancers are classified as triple negative (TNBC) when their growth is not supported by estrogen and progesterone nor by the presence of HER2 receptors. The clinical options for treatment of TNBC are limited due to poor response to hormonal therapy, resulting in low 5-year survival rates [35]. There is extensive diversity among breast cancer patients, and each sub-type of breast cancer has unique characteristics. The identification of sub-type-specific network biomarkers can be useful in predicting the survivability of breast cancer patients [36]. 
FDA-approved drugs for breast cancer in 2019 include the antibody-drug conjugate Fam-trastuzumab deruxtecan [37] (HER2-directed antibody and topoisomerase inhibitor) for the treatment of unresectable or metastatic HER2-positive breast cancer [38], the phosphoinositide-3-kinase (PI3Kα) inhibitor alpelisib [39] for the treatment of HER2negative, PIK3CA-mutated, advanced or metastatic breast cancer [40] and in 2020, tucatinib, an orally bioavailable, small molecule tyrosine kinase inhibitor for patients with HER2-positive metastatic breast cancer [41]. The microtubule-stabilising drugs paclitaxel, docetaxel, and the epothilone ixabepilone were approved for use in patients with metastatic breast cancer (MBC), alongside the microtubule destabilising vinca alkaloid eribulin [42,43]. The FDA recently granted accelerated approval to the antibody-drug (topoisomerase inhibitor) conjugate sacituzumab govitecan (Trodelvy) for previously treated metastatic TNBC [44], while ladiratuzumab vedotin (a LIV-1-targeted antibody linked to the microtubule-disrupting agent monomethyl auristatin E (MMAE)) is in clinical trials for locally advanced or metastatic triple-negative breast cancer [45]. The steroid sulfatase inhibitor (STS) e.g., STX64 (Irosustat) has entered clinical trials for ER+ locally advanced or metastatic breast cancer [46], while inhibitors of mutant p53, e.g., PRIMA-1 and PRIMA-1 MET , overexpressed in TNBC have been demonstrated to be effective in vitro [47]. Since the potent tubulin-inhibiting activity of the 3,4,5-trimethoxyaryl function is very well documented in colchicine-binding site inhibitors [90], the 1,2,4-triazole heterocycle was next reacted with several phenstatin-type 3,4,5-trimethoxyaryl substituted benzhydryl alcohols in order to maximise the potential tubulin activity in the scaffold structures with aromatase-inhibiting action (Series 2). It was decided to retain in most compounds the 3,4,5-trimethoxyaryl group substitution (ring A) and introduce alternative substituents on the second ring (ring B). A modified synthetic procedure allowing access to the desired benzhydryl alcohol intermediates 15a-h and 18a-f is shown in Schemes 2 and 3 (step a) [91]. Scheme 2 shows the alcohols (15a-h) obtained by treatment of the appropriate aryl bromides 14a-h with n-butyllithium followed by reaction with 3,4,5-trimethoxybenzaldehyde (A ring) to afford the alcohols 15a-h in yields of 21-89%. For the preparation of compounds (18a-d) (Scheme 3), the A ring was derived from 3,4,5-trimethoxybromobenzene followed by reaction with the appropriate aldehyde 17a-d. The nitrile-containing compounds 18e,f were similarly obtained from the aldehydes 17e,f and 4-bromobenzonitrile (Scheme 3). The benzhydryl compounds were obtained in good yield after purification via flash column chromatography and the presence of the hydroxyl group was confirmed from IR (ν 3200-3600 cm −1 ). Then, the secondary alcohols 15a-h and 18a-f were reacted with 1,2,4-triazole to afford the hybrid phenstatin/letrozole compounds 16a-h and 19a-d,f,g as racemates, except for 19b, (Schemes 2 and 3, step b). The phenolic compounds 16i, 19e, 19h, and 19i were obtained by hydrogenolysis over palladium hydroxide of the benzyl ethers 16b, 19a, 19f, and 19g respectively. From the 1 H-NMR spectrum of compound 16c, the singlet at 6.62 ppm was assigned the tertiary aliphatic proton. The singlets at 7.91 and 8.01 ppm were assigned to the triazole H-3 and H-5. 
In the 13 C-NMR spectrum, the tertiary CH signal was identified at 67.4 ppm, while the triazole ring C3 and C5 signals were identified at 143.5 and 152.3 ppm, respectively. X-ray crystal structures of the triazole compounds 16e, 16f, and 19c (recrystallised from dichloromethane/n-hexane) are displayed in Figure 2, while the crystal data and structure refinement are displayed in Table 1. The length of the C-N bond between the methine carbon and the triazole N-1 for compounds 16e, 16f, and 19c was measured at 1.470, 1.471, and 1.479 Å, respectively. The N1-N2 bond length was 1.366 Å (16e), 1.363 Å (16f), and 1.365 Å (19c). The N1-C5 bond length of the triazole ring was observed as 1.334 Å (16e), 1.342 Å (16f), and 1.343 Å (19c). The angle between the methine carbon and the two aromatic rings (Ar-C1-Ar) was measured as 112.51 • , 115.08 • , and 113.53 • respectively for compounds 16e, 16f, and 19c. The corresponding value for the letrozole structure is 114.0 • , while the C-N bond between the methine carbon and the triazole N-1 was 1.46 Å [92]. 1-(Diarylmethyl)-1H-imidazoles (Series 3 and 4) A series of related imidazole-containing compounds were also prepared 20a-l (Series 3) and 21a-k (Series 4). The secondary alcohols 12a-h, j, k, and m were coupled to imidazole using CDI (carbonyldiimidazole) [93] to afford products 20a-k, Series 3, (Scheme 4, step a). The associated carbamate derivatives were not isolated in our reactions [94]. The hydrolysis of 20i afforded the amine 20l in 50% yield (Scheme 4, step b). Structures were optimised with variations in electron-releasing and electron-withdrawing substituents on the aryl rings. A further series of compounds containing the ring A type 3,4,5-trimethoxyaryl substituents was prepared by reacting alcohols 15a,c-h, and 18a-d with CDI to afford imidazole products 21a-k, Series 4, (Scheme 5, step a). The benzyl ether 21h was treated with Pd(OH) 2 to afford the phenol 21l as a racemate in 93% yield (Scheme 5, step b). In the 1 H NMR spectrum of compound 21i, the imidazole H4 was observed as a singlet at 6.88 ppm, while the H2 and H5 were observed at 7.44, and 7.12 ppm, respectively. The singlet at 6.38 ppm was assigned to the tertiary aliphatic CH. From the 13 C-NMR spectrum, the aliphatic tertiary CH was identified at 65.2, while the signals at 138.0, 129.4, and 119.4 ppm were assigned to the imidazole C2, C4, and C5, respectively. Single crystal X-ray analysis was obtained for compound 21i (recrystallised from dichloromethane/n-hexane), and the crystal structure is shown in Figure 3. The crystal data and structure refinement for compound 21i are displayed in Table 1. The angle between the methine carbon and the aryl rings (114.16 • ) and also the bond length between the methine carbon and the N-1 imidazole nitrogen (1.471 Å) were similar to the corresponding values obtained for the triazole compounds 16e, 16f, and 19c ( Table 1). The bond angles between the aryl rings and the imidazole ring were determined as 111.36 • and 111.80 • , also similar to the corresponding values of 109.99 • and 112.6 • reported for letrozole [92]. An alternative approach for the preparation of phenstatin and related azole compounds using a Friedel-Crafts acylation with Eaton's reagent was also investigated (Scheme 6) [67]. 
3,4,5-Trimethoxybenzoic acid was reacted with anisole (22a), 1,2-dimethoxybenzene (22b), or compound 22c (prepared by the protection of 2-methoxyphenol with chloroacetyl chloride) using Eaton's reagent (readily prepared from phosphorus pentoxide and methanesulfonic acid) to afford respectively benzophenones 23a, 23b, and 23c (Scheme 6, Step a). Then, these benzophenones were reduced to the benzhydryl alcohols 15c, 15d, and 15i, respectively with sodium borohydride (Scheme 6, step a), with the concomitant removal of the chloroacetyl protecting group of 23c. Although requiring an additional step, this method was followed after the reaction of the aryl bromide with the aldehyde to afford the alcohol as shown in Schemes 2 and 3 was not successful or did not afford a sufficient quantity of product for the next step e.g., for compound 15d, the overall yield increased to 51% compared with 30%. Then, compounds 15c and 15d were treated with CDI azole to afford the imidazole-containing products 21b and 21c (Scheme 6, step e). The phenol 15i was also reacted with 1,2,3-triazole to afford the product 24 in 77% yield, Series 4, (Scheme 6, step d). Compound 24 is the only phenstatin derivative substituted with 1,2,3-triazole synthesised in this project and was investigated for comparison with the 1,3,4-triazole compound series. In the 1 H NMR spectrum of 24, the signal at 6.99 ppm was assigned to the tertiary CH. Interestingly, the two protons of the 1,2,3-triazole ring were observed as a singlet with an integration of 2H at 7.83 ppm, while the signal at 134.9 ppm in the 13 C-NMR spectrum of 24 was assigned to the C4 and C5 of the triazole ring, indicating that alkylation occurred at N2 of the 1,2,3-triazole [95]. The alkylation of 1,2,3-triazoles may result in the formation of regioisomers depending on the reaction conditions e.g., solvent, temperature, and catalyst used [96]. The signal for the tertiary CH was observed at 71.0 ppm. The benzophenone 23c was also used in the preparation of phenstatin 7a [67]; the deprotection of 23c by reaction with sodium acetate afforded 7a in 89% yield (Scheme 6, step b), which was used as a positive control in the cell viability tests. The preparation of a series of benzhydryl derivatives substituted on the tertiary carbon with the heterocycles pyrrolidine, piperidine, and piperazine was next investigated (Series 5, Schemes 7 and 8). These products allow a comparison of biochemical activity with the related imidazole and triazole compounds from Series 1-4. The advantages of incorporating such heterocyclic rings into drugs are well known; i.e., they can increase the lipophilicity, polarity, and aqueous solubility of the drug [97]. In particular, piperazine is ranked 3 rd among the 25 most common heterocycles contained in FDA-approved drugs [98]. In the present work, the corresponding secondary benzhydryl chloride was prepared from the secondary alcohols 12b-12g, 15c, and 18a using thionyl chloride (Schemes 7 and 8, step a) [93]. The intermediate alkyl chlorides were reacted with piperidine to afford products 26a-c (Scheme 7, step c), while reaction with pyrrolidine yielded derivatives 25a-g (Scheme 7, step b). An alternative synthesis of 1-(diarylmethyl)piperidines is reported using a copper(I)catalysed coupling reaction of aryl boronic acids with N,O-acetals and N,N-aminals [99]. All compounds are racemates apart from compound 25e and were obtained in moderate yields (23-93%). 
In the 1 H-NMR spectrum of compound 25b, the multiplets at 1.71-1.80 and 2.35-2.43 ppm were assigned to the pyrrolidine methylene protons at H-3,4 and H-2,5 respectively, while the tertiary CH was observed as a singlet at 4.11 ppm. In the 13 C-NMR spectrum, the pyrrolidine C-3 and C-2 signals were at 23.5 and 53.6 ppm, respectively. The signal at 75.7 ppm was assigned to the tertiary carbon. Single crystal X-ray analysis for compound 26a is shown below in Figure 3 (obtained by crystallisation in dichloromethane/n-hexane). The crystal data and structure refinement for compound 26a are displayed in Table 1. In 26a, the disordered fluorine was modelled in two positions with occupancies of 84% and 16%. The C1-N bond length was observed as 1.473 Å and the central C 14 -C 1 -C 8 and C 14 -C 1 -N 2 angles were observed as 109.28 • and 112.09 • , respectively. The piperidine ring bond lengths were 1.471 Å (N2-C3), 1.474 Å (N2-C7), and 1.514 Å (C3-C4), which differ from the N1-C bond length of the triazole ring 1.334 Å due to unsaturation. As a further extension of this research, a related series of piperazine-containing compounds was prepared by coupling selected secondary alcohols with the appropriate piperazine derivative (Series 5, Scheme 8). The preparation of diarylmethylamines has been reported by Le Gall et al. by reaction of the aldehyde and piperidine derivative to a solution of the organozinc reagent in acetonitrile in a Mannich-type reaction [100,101]. The secondary alcohols 12e, 15c, and 18a were treated with thionyl chloride (Scheme 8, step a), and the resulting alkyl chloride was used immediately for the next reaction step (step b) by addition of the appropriate piperazine (N-phenylpiperazine, N-benzylpiperazine, pmethoxyphenylpiperazine, or N-Boc-piperazine) to afford the products 27a-g in yields up to 80%. For the preparation of compound 27e, Boc-protected piperazine was used to avoid the possible formation of the dimer. In the 1 H-NMR spectrum of compound 27d, the broad signal at 2.47 ppm is assigned to piperazine methylene protons; the singlet at 3.50 ppm is assigned to the benzyl methylene, while the singlet at 4.09 ppm corresponds to the tertiary C-1 proton. The 13 C-NMR spectrum of compound 27d further confirms the proposed structure. The signals at 51.8 and 53.3 ppm were characteristic of the piperazine ring protons, the signals at 63.0 ppm and 75.6 ppm are assigned to the benzyl methylene and tertiary CH, respectively. The deprotection of compound 27e with TFA afforded compound 27h as a yellow oil (42%), (Scheme 8, step c). A palladium-catalysed hydrogenolysis of 27g afforded the phenolic compound 27i in 45% yield (step d), which is the phenylpiperazine derivative of phenstatin. Its formation was confirmed by IR spectroscopy (3475 cm −1 ). When the secondary alcohol 15c was treated with thionyl chloride followed by an excess of piperazine (5 equivalents), the product obtained was a piperazine dimer 28 (Scheme 8). In the 1 H-NMR spectrum of the dimer 28, the broad signal (2.40 ppm) is characteristic of the piperazine methylene protons, while the signal at 4.08 ppm integrating for 2H was assigned to the two tertiary CH protons. In the 13 C-NMR spectrum, the signal at 52.0 ppm was assigned to the piperazine ring carbons; the signal at 75.7 ppm was assigned to the CH, while the duplication of the aromatic signals confirmed the formation of the product. 
Stability Studies

HPLC stability studies were performed on the representative compounds 21l and 24 to establish their stability in different pH systems, which mimic in vivo conditions. Compound 21l was chosen from among the imidazole compounds for HPLC stability studies at three different pH values: acidic pH 4, pH 7.4, and basic pH 9 (acidic conditions are found in the stomach, basic conditions in the intestine, and pH 7.4 in plasma). The degradation of compound 21l was minimal, with 80% of 21l remaining at both pH 7.4 and pH 9 and 90% at pH 4 after 24 h. The 1,2,3-triazole compound 24 was observed to be most stable at pH 4, with 65% remaining after 24 h, compared to 60% at pH 9 and 50% at pH 7.4.

In Vitro Antiproliferative Activity in MCF-7 Breast Cancer Cells

The antiproliferative activity of the panel of hybrid compounds 1-(diarylmethyl)-1H-1,2,4-triazoles (Series 1 and 2) and 1-(diarylmethyl)-1H-imidazoles (Series 3 and 4) was initially evaluated in the MCF-7 human breast cancer cell line using the standard alamarBlue assay. In addition, a number of related compounds containing the aliphatic amines pyrrolidine, piperidine, and piperazine were investigated (Series 5). The MCF-7 human breast cancer cell line is estrogen receptor (ER)-positive, progesterone receptor (PR)-positive, and HER2-negative. Compounds were initially screened at two concentrations (1 and 0.1 µM) for antiproliferative activity in MCF-7 cells to determine the structure-activity relationship for these hybrid compounds and to identify the most potent compounds for further investigation. Compounds that were synthetic intermediates for the final compounds were not screened, as they were not considered potential actives in the study. The results obtained from this preliminary screen are displayed in Figures 4-6. Those compounds showing potential activity (cell viability <50%) were then selected for further evaluation at different concentrations and in other cell lines. CA-4 (4a) (24% viable cells at 1 µM) and phenstatin (7a) (30% viable cells at 1 µM) induced a potent antiproliferative effect and were used as positive controls. Ethanol (1% v/v) was used as the vehicle control (99% cell viability). The preliminary results obtained for these novel compounds (Series 1-5) are discussed below by structural type.

3.1.1. Series 1: 1-(Diarylmethyl)-1H-1,2,4-triazoles 13b-g, l-o

The first class of compounds tested, the 1-(diarylmethyl)-1H-1,2,4-triazoles (13b-g, 13l-o, Figure 4A), were weakly active, with 68-90% viability at the two concentrations tested (1 µM and 0.1 µM). These compounds carry a single substituent at the para position on one or both aryl rings (Cl, F, Br, OH, OCH 3 , CH 3 , etc.), indicating that the triazole ring alone is not sufficient for the induction of antiproliferative activity in MCF-7 cells. The most active compounds were the diphenolic derivative 13o, with 68% viability (1 µM), and the amino compound 13m (72% viability at 1 µM). It appears that specific substituents are required on both the A and B rings of the benzophenone for activity, as also observed for phenstatin and its analogues [67]. Since the potent tubulin-inhibiting activity of the 3,4,5-trimethoxyaryl function is very well documented [90], the preliminary screening in MCF-7 cells of the panel of 1,2,4-triazole-containing compounds (16a, c-i, 19b-e) synthesised with the 3,4,5-trimethoxyphenyl motif (A ring) together with various substituents on the B ring was next investigated (Figure 4B, two concentrations of 1 µM and 0.1 µM).
The most potent compound was identified as 19e having the characteristic 3-hydroxy-4-methoxyaryl B ring as in phenstatin and CA-4 (29% viability at 1 µM), while the ethanol control (1% v/v) resulted in 99% viability. Two compounds with moderate activity were identified as 16c (4-methoxy group in the B ring) with 75% cell viability at 1 µM and 16g (4-fluoro in B ring) with 77% viable cells at 1 µM. The remaining 3,4,5-trimethoxyphenylmethyl-1H-1,2,4-triazole compounds investigated having various substituents on the B ring e.g., 4-F, 4-CN, 4-OH, 4-CH 3 were not as potent as the lead compound with viability >80% at 1 µM, while compounds 19h and 19i were found to be inactive with half maximal inhibitory concentration (IC 50 ) values greater than 100 µM. This result demonstrated that even small changes to the phenstatin scaffold were unfavourable for antiproliferative activity. From the initial screening results, it was concluded that the 1,2,4-triazole heterocycle alone was not sufficient to improve activity in the benzhydryl compounds compared to phenstatin. The IC 50 value for the most potent triazole-phenstatin hybrid compound 19e was determined in MCF-7 as 0.42 ± 0.07 µM at 72 h (Table 2). 19e is a hybrid of phenstatin with the 3,4,5-trimethoxyaryl motif (ring A) and the 3-hydroxy-4-methoxyaryl B ring, but it is also related to the aromatase inhibitor letrozole due to the 1,2,4-triazole heterocycle. The hybrid structure suggests a potential for dual tubulin/aromatase activity, and therefore, this compound was selected for aromatase inhibition assay. Compound 24 is the only example synthesised containing the 1,2,3-triazole heterocycle and is also a direct analogue of phenstatin because of the presence of the 3,4,5trimethoxyaryl motif (ring A) and the 3-hydroxy-4-methoxyaryl B ring. This structure showed excellent activity in MCF-7 cells with 27% cell viability at 1 µM and the IC 50 value for the compound was determined as 52 nM (Table 2), which compares with CA-4 (IC 50 = 3.9 nM) [102,103]. 24 was selected for further studies on different cell lines and for cell cycle analysis. Phenstatin (7a) was synthesised in our laboratory for use as a positive control (IC 50 = 1.61 ± 2.7 nM) [104]. The results obtained from the preliminary screening of the benzhydryl imidazole derivatives, 20b-k and l are shown in Figure 5A. These compounds carry a single substituent at the para position on one or both aryl rings (Cl, F, Br, OH, OCH 3 , CH 3 , etc). This library of compounds did not show any significant activity, with cell viability of 67-90% at concentrations of 1 and 0.1 µM, as observed for the Series 1 1,2,4-triazole derivatives 13b-g and l-o, indicating that the imidazole ring alone is not sufficient for antiproliferative activity. The most active compounds in this panel were identified as the 4-nitro derivative 20b and the 4-fluoro substituted compound 20d (73% and 67% cell viability respectively at 1 µM). Series 4: 1-(Aryl-(3,4,5-Trimethoxyphenyl)Methyl)-1H-Imidazoles 21a-g, i-l The results obtained from the preliminary screening of the panel of phenstatin hybrid compounds carrying imidazole as the heterocyclic ring (21a-g, i-l) in MCF-7 cells are shown in Figure 5B. From the library of 3,4,5-trimethoxydiphenylmethyl-1H-imidazole derivatives (21a-g, i-l), compound 21l was significantly the most active (31% viable cells at 1 µM), confirming the observation that the phenstatin scaffold is required for optimum activity. 
The remaining compounds in the series demonstrated weak activity, with viability >80% at 1 µM. The IC 50 value of the most potent imidazole-containing compound 21l was determined as 0.132 ± 0.007 µM in MCF-7 cells (Table 2). The results of the preliminary evaluation of the panel of pyrrolidine and piperidine derivatives 25a-g and 26a-c in MCF-7 cells are shown in Figure 6A. These compounds were not sufficiently active when compared to the positive controls CA-4 and phenstatin (7a). The most potent examples were identified as the piperidine derivative 26b, showing the lowest percentage of viable cells (78%) at 1 µM and containing the 3,4,5-trimethoxyphenyl (ring A) and 4-methoxyphenyl (ring B) substituents, together with the corresponding pyrrolidine-containing compounds 25g and 25d (82% and 80% viability at 1 µM). The (3,4,5-trimethoxyphenyl)(methyl)piperazine derivatives (27c,d,f,h,i) were screened at three concentrations (10, 1, and 0.1 µM) (Figure 6B). Compound 27f was identified as the most active, with 42% viable cells at 10 µM, 76% at 1 µM, and 84% at 0.1 µM. Benzylpiperazine 27i, which is more closely related in structure to phenstatin, displayed promising antiproliferative activity at 10 µM (48% cell viability).

From the results obtained above, it is interesting to see that inclusion of the 1,2,3-triazole or imidazole heterocycle on the phenstatin scaffold (as in compounds 24 and 21l) results in greater antiproliferative effects in the MCF-7 cell line than the corresponding 1,2,4-triazole compound (19e). By comparison, replacement of the azole with pyrrolidine, piperidine, or piperazine resulted in decreased antiproliferative activity. The antiproliferative activity of the most potent azole compounds 19e, 21l, and 24 may be correlated with their logP values (see Supplementary Information). The 1,2,4-triazole compound 19e has a lower logP (2.41) than the imidazole compound 21l (logP of 2.91) and the 1,2,3-triazole compound 24 (logP 3.50); the antiproliferative activities of compounds 19e, 21l, and 24 in MCF-7 cells were determined as IC 50 = 0.42, 0.13, and 0.052 µM, respectively. In addition, the total polar surface area (TPSA) for these compounds is in the range 74.22-87.86 Å 2 , below the 140 Å 2 threshold. However, compounds with higher logP values, e.g., the piperazine compounds 27f (5.50) and 27d (4.67), display poor activity.

Antiproliferative Activity of Selected Analogues in MDA-MB-231 and HL60 Cell Lines

A number of the more potent compounds synthesised were evaluated in the triple-negative MDA-MB-231 cell line with a 72 h incubation time (see Table 2). For the triazole compound 19e, an IC 50 value of 0.98 µM was obtained in MDA-MB-231 cells, although this is not as potent as observed in the MCF-7 cells (IC 50 = 0.42 µM, Table 2). The lower IC 50 value for the imidazole compound 21l (0.237 µM) indicates that the imidazole heterocycle in 21l contributes to the antiproliferative activity more effectively than the 1,2,4-triazole ring in compound 19e. The novel 1,2,3-triazole compound 24 was the best of all analogues tested in MCF-7 cells (IC 50 = 0.052 µM). The result obtained for 24 in the MDA-MB-231 cell line was also very promising (IC 50 = 0.074 µM, Table 2) and compares very favourably with the reported activity of phenstatin in MDA-MB-231 cells (IC 50 = 1.5 µM [105]), indicating that the 1,2,3-triazole has very potent antiproliferative effects compared to the imidazole or 1,2,4-triazole present in the related compounds 21l and 19e.
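As an illustration of how the logP and TPSA values discussed above are typically computed, the following RDKit sketch uses stand-in structures (letrozole plus a hypothetical trimethoxy benzhydryl-imidazole; the SMILES are not the exact structures of 19e, 21l, or 24):

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

# Illustrative stand-in structures only; the second SMILES is a hypothetical
# hybrid, not a compound from this work.
examples = {
    "letrozole": "N#Cc1ccc(cc1)C(n1cncn1)c1ccc(cc1)C#N",
    "hypothetical_hybrid": "COc1cc(cc(OC)c1OC)C(n1ccnc1)c1ccc(OC)c(O)c1",
}

for name, smi in examples.items():
    mol = Chem.MolFromSmiles(smi)
    # Crippen logP and topological polar surface area, as used for the
    # Lipinski-style profiling discussed in the text (TPSA < 140 A^2).
    print(f"{name}: logP = {Descriptors.MolLogP(mol):.2f}, "
          f"TPSA = {Descriptors.TPSA(mol):.1f} A^2")
```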
Antiproliferative Activity of Selected Analogues in MDA-MB-231 and HL-60 Cell Lines

A number of the more potent compounds synthesised were evaluated in the triple-negative MDA-MB-231 cell line with a 72 h incubation time (see Table 2). For the triazole compound 19e, an IC 50 value of 0.98 µM was obtained in MDA-MB-231 cells, although this is not as potent as observed in the MCF-7 cells (IC 50 = 0.42 µM, Table 2). The lower IC 50 value for the imidazole compound 21l (0.237 µM) indicates that the imidazole heterocycle in 21l contributes to the antiproliferative activity more effectively than the 1,2,4-triazole ring in compound 19e. The novel 1,2,3-triazole compound 24 was the best of all analogues tested in MCF-7 cells (IC 50 = 0.052 µM). The result obtained for 24 in the MDA-MB-231 cell line was also very promising (IC 50 = 0.074 µM, Table 2) and compares very favourably with the reported activity of phenstatin in MDA-MB-231 cells (IC 50 = 1.5 µM [105]), indicating that the 1,2,3-triazole has very potent antiproliferative effects compared to the imidazole or 1,2,4-triazole present in the related compounds 21l and 19e. Since the antiproliferative effects of 1,2,3-triazole-phenstatin hybrid compounds have not previously been investigated, this heterocycle is especially interesting for further development.

In a further study, the antiproliferative effects of the novel imidazole compound 21l, the 1,2,4-triazole compound 19e, and the 1,2,3-triazole compound 24 (structurally related to letrozole and phenstatin) were also evaluated in HL-60 leukaemia cells, used as an in vitro model for acute myeloid leukaemia. Both MCF-7 and HL-60 cell lines are CA-4 sensitive and highly susceptible to the effects of tubulin-targeting compounds [102]. The IC 50 value of 0.156 µM obtained for imidazole compound 21l identifies it as a lead compound for future development. The 1,2,3-triazole compound 24 was also potent in the leukaemia HL-60 cell line, with an IC 50 value of 0.173 µM, while 19e was less potent, IC 50 = 261 µM (IC 50 value for phenstatin = 0.031 µM [106]). This experiment demonstrated the selective effect of interchanging the imidazole, 1,2,4-triazole, and 1,2,3-triazole heterocycles on cell viability in HL-60 cells.

NCI Cell Line Screening for 19e, 21l, 25g, 26b, and 27d

Five novel substituted phenstatin compounds from the present work (19e (Series 2), 21l (Series 4), 25g, 26b and 27d (Series 5)) were selected for evaluation in the NCI 60 cell line screen [107] following initial analysis of the Lipinski (drug-like) properties from the Tier-1 profiling screen, together with predictions of the relevant absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties, e.g., metabolic stability, permeability, blood-brain barrier partition, plasma protein binding, and human intestinal absorption (see Tables S1 and S2, Supplementary Information). The compounds are predicted to be moderately lipophilic-hydrophilic, revealing drug-like pharmacokinetic profiles, and are potentially suitable candidates for further investigation.

The results obtained for the triazole compound 19e in the NCI 60 cancer cell line screening (GI 50 values, five doses) [107] are shown in Table 3 (GI 50 is defined as the concentration for 50% of maximal inhibition of cell proliferation). In general, 19e showed good activity on most of the cell lines, with GI 50 values in the sub-micromolar range. The activity was particularly potent in all of the leukaemia, CNS, and prostate cancer cell lines. The activity in MCF-7 cells (GI 50 = 0.347 µM) was in close agreement with the value obtained from our in-house viability assay of 0.424 µM. The compound displayed significant activity in the TNBC cell lines HS-578T (GI 50 = 0.548 µM) and MDA-MB-468 (GI 50 = 0.371 µM) and in the BT-549 invasive ductal carcinoma cell line (GI 50 = 0.618 µM). Potent anti-cancer activity was also observed against the ovarian cancer cells, e.g., the OVCAR-3 cell line (GI 50 = 0.323 µM), and colon cancer, e.g., chemoresistant HT-29 cells (GI 50 = 0.330 µM). The best activity for 19e among all of the 60 cell lines tested was observed in the melanoma cell line MDA-MB-435, in which the GI 50 value was 0.181 µM. The MID GI 50 was calculated as 0.243 µM over all 60 cell lines. The MID value obtained for TGI (total growth inhibition) was 53.7 µM, and for LC 50 it was 97.7 µM, showing that the lethal concentration of the drug is very high and well above the GI 50 value, which indicates that 19e has low toxicity. The results of the NCI COMPARE analysis for compound 19e are shown in Table 4. Based on the GI 50 mean graph and on the TGI mean graph, the compound with the highest rank was vinblastine sulphate, with r values of 0.586 and 0.737, respectively. Correlation values (r) are Pearson correlation coefficients.
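Two summary operations recur in the NCI analysis above: the MID (the mean of a response parameter over the 60-cell-line panel) and the COMPARE ranking, which correlates the per-cell-line fingerprint of a test compound against reference agents using Pearson coefficients. The sketch below illustrates both; the GI 50 fingerprints are randomly generated placeholders, not NCI data, and taking the MID as the mean of log-transformed values is an assumption about the exact averaging convention.

```python
# Sketch of the MID and COMPARE-style operations described above.
# The fingerprints are synthetic placeholders, not NCI measurements.
import numpy as np

rng = np.random.default_rng(0)
log_gi50_test = rng.normal(-6.6, 0.3, 60)              # log10(GI50, M) per cell line
log_gi50_ref = log_gi50_test + rng.normal(0, 0.2, 60)  # hypothetical reference agent

mid_gi50_uM = 10 ** log_gi50_test.mean() * 1e6         # MID over the 60-line panel
r = np.corrcoef(log_gi50_test, log_gi50_ref)[0, 1]     # COMPARE-style Pearson r
print(f"MID GI50 ~ {mid_gi50_uM:.3f} uM; Pearson r vs reference = {r:.3f}")
```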
The National Cancer Institute (NCI) screening of imidazole compound 21l also demonstrated very good results, showing that the compound is active not only against breast cancer cells but also against other types of cancer (see Table 2). Compound 21l proved active against all of the leukaemia cell lines; in particular, very promising activity was measured in SR cells (GI 50 values, Table 3). From the COMPARE analysis, it was observed that, based on the mean GI 50 value, the activity of 21l is most closely related to paclitaxel (r = 0.587). Based on TGI values, the compound with the highest ranking was maytansine (r = 0.775); both are tubulin-targeting agents. Correlation values (r) are Pearson correlation coefficients, and LC 50 values were all >0.1 mM.

Compounds 25g, 26b, and 27d were also selected for evaluation in the NCI 60 cell line one-dose screen (see Table S4, Supplementary Information). The mean growth percentages for 25g, 26b, and 27d were 73.1%, 34.2%, and 65.5% over the 60 cell line panel at 10 µM. Interestingly, the piperidine compound 26b displayed significant potency in the breast cancer panel, with mean growth of 30.3% over this cell panel and notable potency in the triple-negative breast cancer cell lines HS-578T (16.6% growth) and MDA-MB-468 (9.3% growth). In the leukaemia panel, the mean growth obtained was 23.5%, with notably low growth (4.36%) for the acute myeloid leukaemia HL-60 cell line. Compound 26b also displayed notable potency in the CA-4 resistant colon cancer cell line HT-29, with 7.93% growth recorded.

Evaluation of Toxicity in MCF-10A Cells

The potent phenstatin derivatives 19e, 21l, and 24 were selected for toxicity evaluation in the non-tumorigenic MCF-10A breast epithelial cell line. The human mammary epithelial cell line MCF-10A is widely used as an in vitro model for normal breast cell function and transformation [108]. The viability of the MCF-10A cells was determined after treatment with compounds 19e and 21l at four different concentrations of 10, 1, 0.5, and 0.4 µM for 24 h (Figure 7A,B). It was observed that at the highest concentration (10 µM), compounds 19e and 21l caused cell death of approximately 50%. At 1 µM, compound 19e did not show any loss in cell viability (99% viability), while compound 21l resulted in 73% cell viability; this concentration is still above the IC 50 values of 0.42 µM (19e) and 0.13 µM (21l) determined in MCF-7 cells. When the experiment was repeated with an increased incubation time of 48 h, it was observed that the percentage of viable cells at 10 µM decreased to approximately 30% for compounds 19e and 21l (Figure 7A,B). The percentage of viable cells at 1 µM decreased to 64% for compound 21l, while it did not change significantly for 19e (>94%). For both compounds, viability at 0.5 and 0.4 µM was close to 100%, which means that the compounds are not toxic towards healthy cells at the lower concentrations corresponding to their IC 50 values. The third screening for 19e and 21l was performed at 72 h, the incubation time used throughout all the screenings in MCF-7 (Figure 7A,B). It is interesting to note that as the concentration of the drug decreases from 1 to 0.5 and 0.4 µM, the percentage of viable cells increases significantly, with >80% viable cells at 0.4 µM for all compounds tested. This demonstrates that even at concentrations that are toxic to the MCF-7 cancer cells, the MCF-10A cells are not killed by the drug. Therefore, the compounds selected demonstrate good antiproliferative activity and additionally show good selectivity and low cytotoxicity towards normal cells.
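Viability measured across a concentration series, as in the experiments above, is conventionally summarised as an IC 50 by fitting a four-parameter logistic (Hill) curve. A minimal sketch follows; the concentration/viability pairs are illustrative placeholders, not measurements from this study.

```python
# Sketch of IC50 estimation from viability-vs-concentration data with a
# four-parameter logistic (Hill) fit. The data points below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic: viability falls from 'top' to 'bottom'."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc_uM = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 10.0])
viability_pct = np.array([98.0, 90.0, 75.0, 45.0, 30.0, 12.0])  # placeholder data

popt, _ = curve_fit(four_pl, conc_uM, viability_pct,
                    p0=[100.0, 10.0, 0.3, 1.0], maxfev=10000)
print(f"fitted IC50 ~ {popt[2]:.3f} uM (Hill slope {popt[3]:.2f})")
```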
Compound 24 was evaluated in MCF-10A cells at three different concentrations (10, 1, and 0.1 µM) over 72 h (Figure 7C). The percentage of viable cells at the three concentrations was 61%, 71%, and 96%, respectively, with a higher percentage of cells alive at the lowest concentration. Compound 24 therefore demonstrates good selectivity for cancer cells and low cytotoxicity, even though the percentage of viable cells at 1 µM was slightly lower than the values observed previously for compounds 19e and 21l (>80%) at 72 h. These results are also supported by the low toxicity of the compounds determined from the NCI evaluation. Tubulin-targeting drugs such as taxanes and vinca alkaloids are among the most effective anti-cancer therapeutics in the treatment of castration-resistant prostate cancer and triple-negative breast cancer. However, their use is limited by toxicities including neutropenia and neurotoxicity; additionally, tumour cells can develop resistance to these drugs [109]. Our results demonstrate that the azoles 19e, 21l, and 24 were less toxic to normal human breast cells than to breast cancer cells, providing a potential window of selectivity.

Effects of Compounds 21l and 24 on Cell Cycle Arrest and Apoptosis

To investigate further the mechanism of action of the novel azole compounds synthesised, the effect of the selected potent compounds 21l and 24 was examined in MCF-7 cells by flow cytometry and propidium iodide (PI) staining, allowing the percentage of cells in each phase of the cell cycle to be quantified (Figure 8). For the imidazole compound 21l, three time points were analysed (24, 48, and 72 h), and the values obtained for apoptosis and the G 2 /M phase of the cell cycle were quantified (concentration 1 µM), as shown in Figure 8A. It was observed that the percentage of cells undergoing apoptosis (sub-G 1 ) increased significantly at all three time points, to 15%, 31%, and 37%, respectively, compared to the background level of apoptosis with the vehicle ethanol (2%, 4%, and 2%) at the corresponding time points. It is also interesting that the percentage of cells in the G 2 /M phase for the treated sample (47%, 43%, and 40%) was statistically higher than for the vehicle-treated control (26%, 25%, and 25%) at the corresponding time points. G 2 /M cell cycle arrest is strongly associated with an inhibition of tubulin polymerisation, and CA-4 and related tubulin-targeting compounds cause G 2 /M arrest. Hence, the higher percentage of arrested cells observed upon treatment with 21l may suggest that the mechanism of action is indeed the inhibition of tubulin polymerisation. Values are the average ± SEM of three independent experiments (vehicle: 1% ethanol (v/v)); viability was determined by alamarBlue assay, and statistical analysis was performed using two-way ANOVA (***, p < 0.001).

The 1,2,3-triazole compound 24, which was the most potent compound evaluated in the viability assay, demonstrated the same effects on the relative percentages of cells in apoptosis and the G 2 /M phase, as shown in Figure 8B. Apoptosis increased with time, with a statistically significant difference compared to the vehicle control at 72 h. A high percentage of cells was arrested in the G 2 /M phase (52%, 56%, and 59%) at time points 24, 48, and 72 h, respectively, following treatment with compound 24, with a much lower percentage of cells in the G 2 /M phase for the sample treated with the vehicle (28%, 23%, and 24%) at the same time points.
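The treated-versus-vehicle comparisons across time points above are tested with a two-way ANOVA (treatment × time). The sketch below illustrates that layout; the cell-level means are the sub-G 1 percentages quoted in the text, but the replicate spread is fabricated purely to make the example runnable.

```python
# Sketch of the two-way ANOVA (treatment x time) used for the comparisons
# above. Cell means are the sub-G1 percentages quoted in the text; the
# within-cell replicate variation is a fabricated placeholder.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

means = {"vehicle": {24: 2, 48: 4, 72: 2},
         "21l_1uM": {24: 15, 48: 31, 72: 37}}

rows = []
for treatment, by_time in means.items():
    for time_h, mean in by_time.items():
        for rep in range(3):  # three independent experiments
            rows.append({"treatment": treatment, "time_h": time_h,
                         "apoptosis_pct": mean + (rep - 1) * 0.5})

df = pd.DataFrame(rows)
model = ols("apoptosis_pct ~ C(treatment) * C(time_h)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```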
Phenstatin 7a was used as a positive control throughout all the biological experiments. Cell cycle analysis of MCF-7 cells treated with phenstatin at time points 24, 48, and 72 h and a concentration of 1 µM showed a very low percentage of cells undergoing apoptosis at 24 and 48 h, as shown in Figure 8C. Apoptosis increased to 18% between 48 and 72 h, while the percentage of cells in the G 2 /M phase was correspondingly high (65%, 49%, and 51% at 24, 48, and 72 h, respectively). This pattern was also observed for the compounds 21l and 24 tested, but the percentage of cells in apoptosis was always higher than for phenstatin, possibly suggesting differences in the effects of these compounds on tubulin arising from the presence of the azole in the modified structures.

The results of the Annexin V/PI apoptosis analysis in MCF-7 cells are shown in Figure 9A. When MCF-7 cells were treated with 21l (0.1, 0.5, and 1 µM), the average proportion of Annexin V-stained positive cells (total apoptotic cells) increased from 0.9% in control cells to 14.1%, 17.5%, and 19.3%, respectively. These results suggested that compound 21l induces the apoptosis of MCF-7 cells in a dose-dependent manner. In MDA-MB-231 cells, the percentage of cells observed in apoptosis following treatment with 21l was significantly lower, at 3.6%, 5.9%, and 6.9% for 0.1, 0.5, and 1.0 µM, respectively, as shown in Figure 9B. In contrast, for phenstatin, the Annexin V-stained positive cells (total apoptotic cells) were determined as 36.1% and 46% in MCF-7 cells at 0.1 µM and 0.5 µM, respectively, as shown in Figure 9A. The total apoptotic MDA-MB-231 cells were determined as 16.6% and 17.9% following treatment with phenstatin (0.1 and 0.5 µM), respectively, as shown in Figure 9B.

Tubulin Polymerisation

Compound 21l was selected for further analysis using a tubulin polymerisation assay. Its promising antiproliferative activity (IC 50 = 0.132 µM in MCF-7 cells), combined with structural features related to phenstatin 7a and CA-4, indicated that the mechanism of action of this compound could be the inhibition of tubulin polymerisation. The assay is based on the capacity of microtubules to scatter light proportionally to their concentration. The imidazole compound 21l (red in Figure 10A) showed good inhibition of tubulin polymerisation after 60 min (V max value 2.84 ± 0.10 mOD/min at 10 µM), corresponding to a 1.34-fold reduction of the polymer mass compared to the vehicle. Paclitaxel (green in Figure 10A) was used as a positive control, as it stabilises polymerised tubulin. Phenstatin 7a is a potent inhibitor of tubulin polymerisation comparable to CA-4 [66], as shown in Figure 10B. Incubation with either imidazole 21l or phenstatin resulted in a significant inhibition of tubulin polymerisation and assembly.

Following the experiment above, the in vitro effects of compounds 19e and 21l on the microtubule structure of MCF-7 breast cancer cells were examined by confocal microscopy using anti-tubulin antibodies. Paclitaxel and phenstatin, a known polymeriser and depolymeriser of tubulin, respectively, were used as controls. In Figure 11A, a well-organised microtubule network (stained green) is clearly seen for the vehicle control, together with the MCF-7 cell nuclei (stained blue).
Hyperpolymerisation of tubulin was demonstrated in the paclitaxel-treated sample (Figure 11B), whereas the phenstatin-treated sample (Figure 11C) shows extensive depolymerisation of tubulin. Cells treated with the azoles 19e (Figure 11D) and 21l (Figure 11E) displayed disorganised microtubule networks with similar effects to phenstatin, together with multinucleation (formation of multiple micronuclei), which is a recognised sign of mitotic catastrophe [110], previously observed by us and others upon treatment with tubulin-targeting agents such as CA-4 and related compounds in non-small cell lung cancer cells and breast cancer MCF-7 cells [111,112].

Effects of Compounds 21l and 24 on Expression Levels of Apoptosis-Associated Proteins

Some of the novel compounds synthesised during the project were selected for further investigation of their mechanism of action as pro-apoptotic agents, based on their effect on the expression of proteins that can regulate apoptosis or proteins involved in the regulation of DNA repair. The effects of compounds 21l and 24 on apoptosis were evaluated by Western blotting. The apoptosis-regulating proteins Bcl-2 and Mcl-1 were investigated along with PARP. PARP (poly ADP-ribose polymerase) is involved in the repair of DNA single-strand breaks in response to environmental stress [113], and PARP cleavage is considered a hallmark of apoptosis. Bcl-2 is an anti-apoptotic protein that prevents apoptosis by sequestering caspases (apoptosis promoters) or by preventing the release of pro-apoptotic cytochrome c and AIF (apoptosis-inducing factor) from the mitochondria into the cytoplasm [114]. The Mcl-1 protein belongs to the Bcl-2 family; it is also an anti-apoptotic protein, localised in the mitochondrial outer membrane, that acts at a very early stage in the cascade leading to the release of cytochrome c [115]. Pro- and anti-apoptotic members of the Bcl-2 family can heterodimerise and titrate each other's functions. If the expression levels of Mcl-1 and Bcl-2 are reduced (by drug treatment), apoptosis may be triggered. From the results obtained, no change in the expression levels of the two anti-apoptotic proteins was observed, indicating that Bcl-2 and Mcl-1 may not play a critical role in the pro-apoptotic mechanism of action of the compounds (Figure 12). A significant reduction in the expression of full-length PARP (116 kDa) between the vehicle-treated and the treated MCF-7 cells was observed (Figure 12), suggesting that 21l and 24 cause PARP cleavage. PARP enzymes play a crucial role in DNA repair, and PARP cleavage is mediated by caspase 3 activity. PARP enzymes are found in the cell nucleus and are activated by DNA single-strand damage; the inhibition of DNA repair in cancer cells therefore represents an attractive strategy in cancer therapy [116]. In conclusion, the proposed mechanism of action of these compounds as pro-apoptotic drugs is supported by the observed increase in the percentage of cells in sub-G 1 in the cell cycle profile, by the flow cytometric analysis of Annexin V/PI-stained cells, and also by PARP cleavage.

Aromatase Inhibition

An objective of this research was the design of dual-acting tubulin/aromatase inhibitors, so the aromatase inhibitory activity of the most potent compounds prepared was next evaluated. Three compounds of the phenstatin hybrid panel (21l, 24, and 19e) were selected for evaluation against two members of the cytochrome P450 family: CYP19 and CYP1A1.
CYP19 is the aromatase cytochrome directly responsible for the synthesis of estradiol by the aromatisation of its steroid precursors testosterone and androstenedione, while CYP1A1 is involved in the metabolism of estrogen. The specificity of aromatase inhibition was evaluated by an assay carried out with the xenobiotic-metabolising cytochrome P450 enzyme CYP1A1. Aromatase and CYP1A1 inhibition were quantified by measuring the fluorescence intensity of fluorescein, the hydrolysis product of dibenzylfluorescein (DBF) [117], as previously described [118,119]. Naringenin was used as a positive control, yielding an IC 50 value of 4.9 µM. The test was initially conducted at one concentration (20 µg/mL), and further experiments to determine the IC 50 value were performed if the compound caused greater than 90% inhibition at 20 µg/mL. The results are presented in Table 5. Of these, 1,2,3-triazole 24 was inactive, as it did not show any inhibition of the enzyme at 20 µg/mL (approximately 0.05 mM; 0.01% inhibition for CYP19 and 12.81% for CYP1A1), whereas imidazole 21l and 1,2,4-triazole 19e (both at approximately 0.05 mM) were active in the first screen against CYP19 (Table 5). The inhibition for imidazole 21l, although potent, was not concentration-dependent, and the IC 50 could not be determined. 1,2,4-Triazole 19e inhibited aromatase in a concentration-dependent manner, and its IC 50 was determined as 29 µM. Of all the tested compounds (21l, 24, and 19e), none showed significant inhibition of CYP1A1, yielding IC 50 values above 53 µM, which is regarded as inactive [119,120]. From the results obtained, we can suggest that the 1,2,4-triazole heterocycle is required for aromatase inhibition in the phenstatin-related compound 19e. Therefore, the 1,2,4-triazole compound 19e could be identified as a potential dual-acting drug for the treatment of breast cancer, targeting both aromatase and tubulin polymerisation.

Molecular Docking of Phenstatin Hybrids 19e, 21l, and 24

Compounds 19e, 21l, and 24 were next examined in tubulin molecular docking experiments to rationalise the observed biochemical activities. These three molecules contain a 3-hydroxy-4-methoxy substituted aromatic ring and a 3,4,5-trimethoxyphenyl ring, and differ in the nitrogen heterocycle that is substituted on the benzhydryl linkage. The compounds phenstatin 7a and N-deacetyl-N-(2-mercaptoacetyl)colchicine (DAMA-colchicine) were used as reference compounds in the docking experiments. Since the compounds 19e, 21l, and 24 were synthesised as racemates, both R and S enantiomers of each compound were docked in the crystallised tubulin structure 1SA0 [121] and ranked based on the substituent and enantiomer giving the best binding results, as illustrated in Figure 13. The co-crystallised tubulin DAMA-colchicine structure 1SA0 [121] was used for this study, as it has been demonstrated that both CA-4 4a and phenstatin 7a interact at the colchicine-binding site of tubulin. Figure 13A-C shows the binding of the S enantiomers; the ranking for the binding of the three compounds was, in order: S-21l, S-24, and S-19e. All three compounds demonstrate a strong interaction with the same amino acid residue, Lys352. Compound S-21l forms a hydrogen bond acceptor interaction between an imidazole nitrogen and Ser178. The imidazole also forms a π-CH interaction with Leu248.
Compounds S-24 and S-19e show very similar behaviour; they do not bind Ser178 but still have the same interaction with Leu248. In the R-enantiomer series, the heterocycle is directed differently, and very different binding poses and less favourable binding interactions between the ligands and the tubulin binding site are predicted for these compounds (Figure 13D-F). In order to maintain the A- and C-ring overlays, the heterocycle would clash with binding site amino acids, so for the three R-enantiomers, the heterocycle overlays with either the A- or C-ring, and the 3,4,5-trimethoxyphenyl mapping is either no longer possible or less than ideal. Compound S-21l was the highest ranked compound in the series; therefore, it would be of interest to obtain in vitro results for the enantiomerically pure compound. Phenstatin 7a also maps well to the colchicine binding pose, with the 3,4,5-trimethoxyaryl residues overlaying effectively and the B-ring 4-methoxy group positioned to form a hydrogen bond with Lys352 (Figure 13G). The results provide a rationalisation of the observed biochemical experiments, in which cell cycle effects and tubulin binding were confirmed, indicating that these compounds are pro-apoptotic and tubulin-depolymerising agents. (In Figure 13, ligands are rendered as tubes and amino acids as lines; tubulin amino acids and DAMA-colchicine are coloured by atom type, with the three heterocycles coloured green; atoms are coloured by element (carbon grey, hydrogen white, oxygen red, nitrogen blue, sulphur yellow); key amino acid residues are labelled, and some residues are hidden to enable a clearer view.)

Chemistry

All reagents were commercially available and were used without further purification unless otherwise indicated. Anhydrous solvents were purchased from Sigma. Uncorrected melting points were measured on a Gallenkamp apparatus. Infrared (IR) spectra were recorded on a Perkin Elmer FT-IR Paragon 1000 spectrometer. 1 H and 13 C nuclear magnetic resonance (NMR) spectra were recorded at 27 °C on a Bruker DPX 400 spectrometer (400.13 MHz, 1 H; 100.61 MHz, 13 C) in CDCl 3 (internal standard tetramethylsilane (TMS)). For CDCl 3 , 1 H NMR spectra were assigned relative to the TMS peak at 0.00 ppm, and 13 C NMR spectra were assigned relative to the middle CDCl 3 peak at 77.0 ppm. Electrospray ionisation mass spectrometry (ESI-MS) was performed in the positive ion mode on a liquid chromatography time-of-flight mass spectrometer (Micromass LCT, Waters Ltd., Manchester, UK). The samples were introduced to the ion source by an LC system (Waters Alliance 2795, Waters Corporation, Milford, MA, USA) in acetonitrile/water (60:40% v/v) at 200 µL/min. The capillary voltage of the mass spectrometer was set at 3 kV. The sample cone (de-clustering) voltage was set at 40 V. For exact mass determination, the instrument was externally calibrated for the mass range m/z 100 to 1000. A lock (reference) mass (m/z 556.2771) was used. Mass measurement accuracies of < ±5 ppm were obtained. Thin-layer chromatography (TLC) was performed using Merck Silica gel 60 TLC aluminium sheets with fluorescent indicator, visualised with UV light at 254 nm. Flash chromatography was carried out using standard silica gel 60 (230-400 mesh) obtained from Merck. All products isolated were homogeneous on TLC. The purity of the tested compounds was determined by HPLC. Analytical high-performance liquid chromatography (HPLC) was performed using a Waters 2487 Dual Wavelength Absorbance detector, a Waters 1525 binary HPLC pump, and a Waters 717 plus Autosampler. The column used was a Varian Pursuit XRs C18 reverse-phase 150 × 4.6 mm chromatography column. Samples were detected using a wavelength of 254 nm. All samples were analysed using acetonitrile (60%)/water (40%) over 10 min at a flow rate of 1 mL/min.
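HPLC purity of this kind is typically reported as percent area: the product peak area divided by the total integrated area at the detection wavelength. A minimal sketch follows; the peak areas are hypothetical numbers chosen only to illustrate the calculation.

```python
# Sketch of a percent-area purity calculation from integrated HPLC peak
# areas at 254 nm. The peak areas below are hypothetical placeholders.
peak_areas = {"product": 9_870_000, "impurity_1": 54_000, "impurity_2": 76_000}

total_area = sum(peak_areas.values())
purity_pct = 100.0 * peak_areas["product"] / total_area
print(f"purity ~ {purity_pct:.1f}% (area%)")
```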
Microwave experiments were carried out using a Biotage Discover CEM microwave synthesiser on a standard power setting (maximum power 300 watts) unless otherwise stated. Details of the synthesis and characterisation of intermediate compounds and target azole products are available in the Supporting Information.

General Method A: Preparation of Alcohols

To a solution of the benzophenone in methanol (25 mL), NaBH 4 (1 eq) was added in small portions. The solution was stirred at 0 °C until the reaction was complete by TLC. Dilute HCl (10%) was added, and the solvent was removed on the rotary evaporator. The product was then dissolved in ethyl acetate (30 mL) and washed with water (20 mL) and brine (10 mL), dried over sodium sulphate, filtered, and concentrated. Purification via flash column chromatography (eluent: n-hexane/ethyl acetate 1:1) afforded the product.

General Method E: Preparation of Diarylmethylpyrrolidines, Diarylmethylpiperidines and Diarylmethylpiperazines

The benzhydryl alcohol (1 eq) was reacted with thionyl chloride (5 eq) in dry DCM (30 mL) for 12 h. The reaction mixture was concentrated under reduced pressure, and the crude product was used in the next step without any further purification. The resulting benzhydryl chloride was reacted with pyrrolidine or piperidine (5 eq) in dry ACN (30 mL) and refluxed for 12 h. The solvent was removed, and the residue was dissolved in DCM (50 mL) and washed with 1 M NaOH (30 mL). The organic phase was dried over sodium sulphate, filtered, and concentrated. The crude product was then purified via flash chromatography (eluent: n-hexane/ethyl acetate).

Stability Study of Compounds 21l and 24

Stability studies for compounds 21l and 24 were performed by analytical HPLC using a Symmetry® column (C18, 5 µm, 4.6 × 150 mm), a Waters 2487 Dual Wavelength Absorbance detector, a Waters 1525 binary HPLC pump, and a Waters 717 plus Autosampler (Waters Corporation, Milford, MA, USA). Samples were detected at λ 254 nm using acetonitrile (70%)/water (30%) as the mobile phase over 15 min at a flow rate of 1 mL/min. Stock solutions of the compounds were prepared using 10 mg of compounds 21l and 24 in 10 mL of mobile phase (1 mg/mL). Phosphate buffers at the desired pH values (4, 7.4, and 9) were prepared following the British Pharmacopoeia monograph 2020. Then, 30 µL of stock solution was diluted with 1 mL of the appropriate buffer, shaken, and injected immediately. Samples were withdrawn and analysed at time intervals of t = 0 min, 5 min, 30 min, 60 min, and hourly for 24 h.

X-ray Crystallography

Data for samples 16e, 16f, 19c, 21e, and 26a were collected on a Bruker APEX DUO using Mo Kα and Cu Kα radiation (λ = 0.71073 and 1.54178 Å). Each sample was mounted on a MiTeGen cryoloop, and data were collected at 100(2) K using an Oxford Cobra cryosystem. Bruker APEX [123] software was used to collect and reduce data, determine the space group, and solve and refine the structures. Absorption corrections were applied using SADABS 2014 [124].
Structures were solved with the XT structure solution program [125] using Intrinsic Phasing and refined with the XL refinement package [126] using Least Squares minimisation. All non-hydrogen atoms were refined anisotropically. Hydrogen atoms were assigned to calculated positions using a riding model with appropriately fixed isotropic thermal parameters. Molecular graphics were generated using OLEX2 [127]. All structures are racemates. In 26a, the disordered fluorine was modelled in two positions with occupancies of 84% and 16%. Geometric restraints (SADI) were used to model the C-F bond lengths. Crystallographic data for the structures in this paper have been deposited with the Cambridge Crystallographic Data Centre as supplementary publication nos. 201543, 2015432, 2015433, 2015434, and 2015435. Copies of the data can be obtained, free of charge, on application to CCDC, 12 Union Road, Cambridge CB2 1EZ, UK (fax: +44-(0)1223-336033; e-mail: deposit@ccdc.cam.ac.uk).

Biochemical Evaluation of Activity

All biochemical assays were performed in triplicate and on at least three independent occasions for the determination of the mean values reported.

Cell Cycle Analysis

Cells were seeded at a density of 1 ×

Tubulin Polymerisation Assay

The assembly of purified bovine tubulin was monitored using a kit, BK006, purchased from Cytoskeleton Inc. (Denver, CO, USA). The assay was carried out in accordance with the manufacturer's instructions using the standard assay conditions [128]. Briefly, purified (>99%) bovine brain tubulin (3 mg/mL) in a buffer consisting of 80 mM piperazine-N,N′-bis(2-ethanesulfonic acid) (PIPES) (pH 6.9), 0.5 mM ethylene glycol tetraacetic acid (EGTA), 2 mM MgCl 2 , 1 mM guanosine-5′-triphosphate (GTP), and 10% glycerol was incubated at 37 °C in the presence of either vehicle (2% (v/v) ddH 2 O), paclitaxel, phenstatin (7a), or 21l (all at 10 µM). Light is scattered proportionally to the concentration of polymerised microtubules in the assay; therefore, tubulin assembly was monitored turbidimetrically at 340 nm in a Spectramax 340 PC spectrophotometer (Molecular Devices, Sunnyvale, CA, USA). The absorbance was measured at 30 s intervals for 60 min.

Cytochrome P450 Assays (CYP19 (Aromatase) and CYP1A1)

The substrate DBF (dibenzylfluorescein) was obtained from Gentest Corporation (Woburn, MA, USA). All human recombinant cytochrome P450 enzymes were purchased from BD Biosciences (San Jose, CA, USA). Aromatase and CYP1A1 inhibition were quantified by measuring the fluorescence intensity of fluorescein, the hydrolysis product of DBF, as previously described [118,119]. In brief, the test substance (10 µL) was pre-incubated with a NADPH regenerating system (90 µL of 2.6 mM NADP + , 7.6 mM glucose 6-phosphate, 0.8 U/mL glucose 6-phosphate dehydrogenase, 13.9 mM MgCl 2 , and 1 mg/mL albumin in 50 mM potassium phosphate, pH 7.4) for 10 min at 37 °C, before 100 µL of the enzyme and substrate (E/S) mixture were added (4.0 pmol/well of CYP19/0.4 µM DBF; 5.0 pmol/well of CYP2C8/2.0 µM DBF; 5.0 pmol/well of CYP3A4/2.0 µM DBF; and 0.5 pmol/well of CYP1A1/2.0 µM DBF). The reaction mixtures were incubated for 30 min (excepting CYP1A1, 25 min) at 37 °C to allow the generation of product, quenched with 75 µL of 2 N NaOH, shaken for 5 min, and incubated for 2 h at 37 °C to enhance the signal/background ratio. Finally, fluorescence was measured at 485 nm (excitation) and 530 nm (emission). Three independent experiments were performed, each in triplicate, and the average values were used to construct dose-response curves. At least four concentrations of the test substance were used, and the IC 50 value was calculated (TableCurve 2D, AISN Software, USA, 1996). Naringenin was used as a positive control, yielding an IC 50 value of 4.9 µM. Compounds 19e, 21l, and 24 were dissolved in dimethyl sulfoxide (DMSO) and diluted to the final concentrations. An equivalent volume of DMSO was added to control wells, and this had no measurable effect on cultured cells or enzymes. Compounds were considered for further experiments when showing inhibition greater than 90%.
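The fluorescence readings from this assay are converted into percent inhibition relative to an uninhibited enzyme control before dose-response fitting. The sketch below shows one common blank-subtracted form of that calculation; the readings and the exact normalisation scheme are assumptions for illustration, not details taken from the protocol above.

```python
# Sketch of converting fluorescein fluorescence readings into percent
# inhibition for a CYP19/CYP1A1 assay. Readings are hypothetical RFU values,
# and the blank-subtraction scheme is an assumed convention.
def percent_inhibition(f_test, f_control, f_blank):
    """Inhibition relative to an uninhibited enzyme control."""
    return 100.0 * (1.0 - (f_test - f_blank) / (f_control - f_blank))

f_blank = 120.0     # background without enzyme (hypothetical)
f_control = 5200.0  # enzyme + DBF, no inhibitor (hypothetical)
f_test = 860.0      # enzyme + DBF + test compound (hypothetical)

print(f"inhibition ~ {percent_inhibition(f_test, f_control, f_blank):.1f}%")
```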
Molecular Modelling and Docking Study

The X-ray structure of bovine tubulin co-crystallised with N-deacetyl-N-(2-mercaptoacetyl)colchicine (DAMA-colchicine), 1SA0 [121], was downloaded from the PDB website. A UniProt Align analysis confirmed a 100% sequence identity between human and bovine β tubulin. The crystal structure was prepared using the QuickPrep (minimised to a gradient of 0.001 kcal/mol/Å), Protonate 3D, Residue pKa and Partial Charges protocols in MOE 2015 with the MMFF94x force field [129]. Both enantiomers of the selected compounds 19e, 24, and 21l were drawn in ChemBioDraw 13.0, saved as mol files, and opened in MOE. For both enantiomers of each compound, MMFF94x partial charges were calculated, and each was minimised to a gradient of 0.001 kcal/mol/Å. Default parameters were used for docking, except that 300 poses were sampled for each enantiomer, and the top 50 docked poses were retained for subsequent analysis.

Conclusions

In this work, a novel series of heterocyclic phenstatin-based compounds has been designed and synthesised as tubulin-targeting agents. The structural modifications introduced on the phenstatin moiety included the nitrogen heterocycles 1,2,4-triazole, 1,2,3-triazole, and imidazole, affording hybrid structures of the vascular-targeting agent phenstatin and the aromatase inhibitor letrozole, which contains a 1,2,4-triazole heterocycle. The introduction of aliphatic amines such as pyrrolidine, piperazine, and various piperidine derivatives was also achieved. The resulting compounds were investigated for potential dual activity as tubulin and aromatase inhibitors. All novel compounds were initially evaluated in the MCF-7 breast cancer cell line, and of particular interest were compounds 19e, 21l, and 24, which displayed antiproliferative activity in the nanomolar range: 19e (IC 50 = 424 nM), 21l (IC 50 = 132 nM), and 24 (IC 50 = 52 nM). They were selected for further studies to provide a better understanding of their mechanism of action in breast cancer cells. The most potent compounds, 21l and 24, were evaluated for cytotoxicity in MCF-10A cells (normal breast epithelial cells). Minimal cell death was observed on treatment at concentrations similar to the IC 50 values of the compounds in MCF-7 cells, indicating that the compounds were selective towards cancer cells. The compounds showed impressive antiproliferative activity at nanomolar levels against a range of susceptible human cancer cell lines when tested in the 60 cancer cell line panel of the NCI. Cell cycle analysis showed that compounds 21l and 24 caused an increase in G 2 /M arrest and apoptotic cell death in MCF-7 cells. Flow cytometric analysis of Annexin V/PI-stained cells indicated that compound 21l induces the apoptosis of MCF-7 cells in a dose-dependent manner.
Compounds 21l and 24 were also shown to promote PARP cleavage and to inhibit tubulin polymerisation. The tubulin effects were confirmed when MCF-7 cells treated with the azoles 19e and 21l displayed disorganised microtubule networks with similar effects to phenstatin, together with multinucleation. The molecular docking of selected compounds indicated possible binding to the colchicine-binding site of tubulin and a preference for the S enantiomer. The results showed that the azoles 1,2,4-triazole, 1,2,3-triazole, and imidazole can be efficiently introduced on the phenstatin scaffold while retaining antiproliferative effects. The selective inhibition of aromatase is an important tool to select compounds that act as chemopreventative agents for hormone-dependent cancer [130]. The aromatase inhibition of the most potent antiproliferative compounds 19e, 21l, and 24 was evaluated, and compound 19e was identified as the most potent, with over 85% inhibition of CYP19 at 20 µg/mL and an IC 50 of 29 µM. We can conclude that the 1,2,4-triazole heterocycle is essential for aromatase inhibition in these compounds, and its activity was optimised when included in a phenstatin-related scaffold such as 19e. On the basis of the structural modifications of phenstatin described in this work, e.g., the introduction of the azoles 1,2,4-triazole, 1,2,3-triazole, and imidazole on the phenstatin scaffold, we have developed lead compounds that exhibit promising anti-cancer properties with potential for further development. The investigation of the stereoselective effects of the compounds, together with the optimisation of the dual aromatase-antiproliferative action of compound 19e, is in progress.

Acknowledgments: The Trinity Biomedical Sciences Institute (TBSI) is supported by a capital infrastructure investment from Cycle 5 of the Irish Higher Education Authority's Programme for Research in Third Level Institutions (PRTLI). This study was also co-funded under the European Regional Development Fund. We thank Susan McDonnell, School of Chemical and Bioprocess Engineering, University College Dublin, for the kind gift of MCF-10A cells, Gavin McManus for assistance with confocal microscopy, and Barry Moran for flow cytometry. Synthetic contributions from Rebecca Hirschberger and Ayat Sherif are also appreciated. We thank John O'Brien and Manuel Ruether for NMR spectra. DF thanks the software vendors for their continuing support of academic research efforts, in particular the contributions of the Chemical Computing Group, Biovia, and OpenEye Scientific. The support and provisions of Dell Ireland, the Trinity Centre for High Performance Computing (TCHPC), and the Irish Centre for High-End Computing (ICHEC) are also gratefully acknowledged.

Conflicts of Interest: The authors declare no conflict of interest.
How Reasons Make Law

Abstract

According to legal anti-positivism, legal duties are just a subset of our moral duties. Not every moral duty, though, is legal. So what else is needed? This article develops a theory of how moral duties come to be law, which I call the constitutive reasons account. Among our moral reasons are legal reasons, and those reasons make moral duties into legal duties. So the law consists of moral duties which have, as one of their underlying reasons, a legal reason. Such legal reasons arise from a relationship with the body whose law it is. The legal reasons in America, then, are the moral reasons flowing from a relationship with the United States. These reasons include consent, democracy, association and fair play. They are law's constitutive reasons. By looking for them, we can better explain why some moral duties form part of the law, while others do not.

Introduction

Some say our legal duties are also moral duties. 1 The law is therefore part of the general moral landscape. It forms a subset of the broader moral picture. I shall call this view anti-positivism.

Whatever its merits, anti-positivism faces a serious problem. Note what anti-positivists do not say: they do not implausibly claim all moral duties are legal. All legal duties might be moral, but not the other way around. Hence, we can intelligibly say 'this is not yet illegal, but it should be because it is a grave moral wrong'. This brings up a demarcation problem, for we need to know which moral duties are legal and which are not. The key question is this: what additional feature must moral duties possess before they are law?

Some anti-positivists are sceptical of the stakes of this question. 2 They already think legal duties are moral duties. What, then, motivates the search for a strict line between law and the rest of morality?

The motivation, I think, lies in our experience. Often we refer to our rights and duties under law. In doing so, we rely on our intuitive grasp of law as a distinct set of normative incidents. It is something legislators can make, judges can apply, lawyers can argue about and students can study. But not every moral duty is legislated, judicially applied, relied on by lawyers or studied in law schools. Drawing this distinction is a fundamental feature of legal practice. To explain it, anti-positivists need to show what sets law apart. 3

Currently, there are two main proposals. Greenberg offers one: that legal duties are those moral duties which legal institutions cause in the legally proper way. 4 This is a causal pedigree approach to demarcation. The law consists of changes which legal institutions make to our moral situation.

At least on some readings, Dworkin disagrees. 5
He says, instead, that legal duties are those moral duties which are enforceable in court. This is a judicial enforceability approach to demarcation. Sometimes we can require the court, on demand, to wield its coercive power to ensure the satisfaction of our moral rights. The law consists of these circumstances.

2 Such scepticism travels under the label of eliminativism. The trouble is the label masks the diversity of views it describes, for there are some important differences in what those views seek to eliminate. As a preliminary matter, the label could refer to the elimination of a distinctively legal normativity, ie the sense in which a legal duty could bind without being morally binding: see Scott Hershovitz, 'The End of Jurisprudence' (2015) 124 Yale LJ 1160, 1193. But this version of eliminativism is just another way to describe anti-positivism. So let us put that view to one side. There remains a range of different possible eliminativist views. We could imagine a view which denies the existence of a discrete category of legal duties, even if that category forms part of a broader class of moral duties. Call this category eliminativism. Another view seeks to eliminate talk of law in legal practice: Lewis A Kornhauser, 'Doing Without the Concept of Law' (2015) NYU School of Law Public Law and Legal Theory Research Paper Series Working Paper 15-33. Call this discourse eliminativism. Still others are sceptical of whether the concept of law has an essential nature: Hilary Nye, 'Does Law "Exist"? Eliminativism in Legal Philosophy' (2022) 15 Washington University Jurisprudence Review 29. Call this concept eliminativism. Only category eliminativism, the denial of a discrete set of legal duties, poses a challenge, for my account is entirely consistent with the thought that, as a practical matter, it would be better for lawyers to revise their practices to avoid talk of law (ie discourse eliminativism). I am concerned with the practices we have, not whether or how they should be changed. It is also consistent with the thought that the concept of law, understood as an idea which picks out some practices as law, lacks a single nature (ie concept eliminativism). Such concepts of law are what Dworkin described as 'sociological': Ronald Dworkin, Justice in Robes (Harvard UP 2006) 2-4. However, my concern is with the grounds of legal propositions, not the essential features of what makes some social structures count as a legal system: cf Ronald Dworkin, Law's Empire (Harvard UP 1986) 4.

3 To be clear, this is not meant as a knock-down argument against category eliminativism, though I do think it struggles with this feature of legal practice. For instance, a common eliminativist strategy is to disambiguate 'the law' into, among other things, genuine moral duties and predictions of what officials are likely to do. If so, 'the law' would be a disjunctive combination of these two possibilities (among others). One problem with this view is its inability to account for the thought, internal to legal practice, that predictive claims just are not law. But I will not pursue that line here, for there is a more serious problem. Lawyers make claims about legal content all the time, and it is implausible to suppose they ever mean to refer to an entirely open set of moral duties. Nobody thinks the law consists of all genuine moral duties. So a further disambiguation is required. You could read this article as defending a possible way to achieve this further disambiguation. That is, on those occasions when 'the law' refers to genuine moral duties, it refers only to those duties picked out by my account.

4 Mark Greenberg, 'The Moral Impact Theory of Law' (2014) 123 Yale LJ 1288, 1320-3.
5 Ronald Dworkin, Justice for Hedgehogs (Harvard UP 2011) 404-6. I say 'on some readings' given an interpretive disagreement. The difference lies in where Dworkin stands in relation to the causal pedigree thesis. According to that thesis, all legal rights are causally traceable to the acts of political institutions. This corresponds to two possible readings of Dworkin. The requirement for judicial enforceability could be in addition to causal pedigree: see Nicos Stavropoulos, 'Why Principles?' (2007) Oxford Legal Studies Research Paper No 28. This reads Dworkin as proposing a possible way to flesh out the moral impact theory. If this reading is correct, Dworkin would only disagree with Greenberg to the extent the latter assumes that causal pedigree suffices for a duty's inclusion in law. Alternatively, the requirement of judicial enforceability could be a substitute for causal pedigree: Greenberg, 'Moral Impact Theory' (n 4) 1299-30, fn 18. On this reading, Dworkin and Greenberg fundamentally disagree. None of my core arguments turn on this interpretive disagreement; I leave open which reading best reflects Dworkin's thinking.

We can put the difference between these two approaches in temporal terms. Greenberg looks to the past. He tells a historical story of how the change to our moral duties arose. Conversely, Dworkin looks to the future. He tells a prospective story of what should occur if we invoke our moral rights. 6

Take the legal duty to drive on the left side of the road. For an anti-positivist, this is a moral duty. Driving on the right would be morally wrong. But why is it a legal duty? Greenberg says it is because of how the traffic duty came about. It arose because the legislature caused us to converge on a set of expectations. Now, we expect the other cars to drive in the left lane. And that gives us a duty to drive in the left lane, too. Since we can causally trace the duty back to what the legislature did, it is law. Dworkin gives a different answer. He says it is because of what will occur if we breach the duty. Such a breach would be morally wrong. But not just that: a court could justifiably impose coercive sanctions in response. This possibility is why the duty is law.

Both approaches have serious flaws, which I address later. For now, I want to float a third possibility. As we saw, these approaches look to either the past or future. But what about the present? That is, we could start with the nature of the legal duties themselves, not their origin story or their significance in the courtroom.

Let us return to the traffic example, and this time take a closer look at what underlies the duty to drive in the left lane. Recall that, for anti-positivists, it is a moral duty. Here I assume moral duties consist of reasons against φ-ing, which together render φ-ing impermissible. If so, moral duties are constituted, at least partly, of reasons.

This allows for an alternative approach. Legal duties are those moral duties partly composed of a legal reason. Driving in the right lane may be morally impermissible for many reasons. Among those reasons, however, is a legal reason, and that makes the traffic duty form part of the law. Call this the constitutive reasons account.
For it to succeed, we must work out the legal reasons. I think those reasons are moral considerations, so they cannot be in contrast to morality. Hence, my account is anti-positivist. But now it seems we have traded one problem for another. Previously, we wanted to know why the law consists of some moral duties, but not all. This approach, however, throws up a different problem: what makes some moral reasons, but only some, legal reasons?

Here is my answer. Legal reasons are those moral reasons which arise from, and apply within, a relationship to the relevant law-having body. This is ecumenical in two ways. First, it allows a plural set of moral considerations to count as a legal reason. Any reason we only owe within a relationship is, in principle, a possible legal reason. Second, it allows for different kinds of law. Municipal law concerns the reasons which flow from a relationship to the state, canon law from a relationship to the church, international law from a relationship to the world community, and so on.

To bring things together, let us return to the moral duty to drive in the left lane. Among the ways its breach would be wrong is this legal reason: the consideration of fair play which flows from our participation in a state-run system of traffic co-ordination for mutual benefit. Since this reason only applies within our relationship with an aspect of the state, it is a legal reason. So its presence makes the moral duty a legal duty, too. 7

The article is laid out as follows. The second section engages critically with Greenberg's causal pedigree approach. The third section does the same for Dworkin's judicial enforceability approach. Together, they reveal issues which an alternative approach could address. The fourth and fifth sections introduce the constitutive reasons account. The sixth section shows why it is more attractive than the alternatives. The seventh section responds to objections. A brief conclusion follows.

The Causal Pedigree Approach

For Greenberg, the law consists of changes which legal institutions cause to our moral situation. So law is the moral impact of legal institutions. Not every change, however, is part of the law. As Greenberg recognises, this would be overinclusive. Suppose a legal institution acts in an evil way. For instance, the executive targets specific groups for detention in concentration camps. This causes a significant change in the state of affairs. The change generates a moral duty to resist. We ought to do what we can to hinder the executive's ability to carry out this plan. But this moral duty is not plausibly a legal duty. Certainly, the moral duty to rescue the Nazi regime's victims was not part of Nazi law.

In response, Greenberg restricts his theory to those changes legal institutions cause in the legally proper way. 8 For him, legal institutions which worsen the moral situation do not pass this test. The causality is deviant because legal institutions exist to improve the moral situation. When legal institutions worsen the profile, thereby causing duties to resist, the directionality is flipped. Compliance aims to halt, indeed reverse, the change to the moral situation.

Here is the problem. 9
As it turns out, it is not so easy to know whether the ensuing moral duty is one of resistance. An intuitive strategy is to rely on the legal institution's intention, or alternatively the content of what that legal institution produced. That is, we might learn the legislature intended a grave moral wrong. Or what the legislature produced, the statute, might, on its face, require a grave moral wrong. Either possibility allows us to identify the moral wrong with what the legal institution tried to do. Only then can we say the moral duty to prevent that wrong from arising is one of opposition to the legal institution's actions.

But Greenberg disclaims resort to either intention or the linguistic meaning of legislation. 10 How, then, can we identify which moral duties are of resistance? A possible view is that such duties arise when we ought to resist the law. But this option is not available to Greenberg, for, as an anti-positivist, he is committed to the view that legal duties are a subset of moral duties. And here the legal institution acts so wrongly that not only does it fail to require us to comply, but we are morally required to engage in active resistance. So the thing we must resist cannot, for Greenberg, be a legal duty. Nor, as we saw, can it be what the legal institution intends, or what the statutory meaning requires.

What is left? Recall we need to ascribe a possible state of affairs, which we are morally required to prevent from ever occurring, to what the legislature did. To do so, it seems Greenberg must retreat to a probabilistic view. That is, a legal institution is causally responsible for a moral wrong when it, by acting, increases the probability of that wrong arising. For instance, by enacting a statute which calls for concentration camps, the legislature makes it more likely the moral wrong of herding people to such camps occurs. We have a moral duty to prevent this from occurring. The duty is one of opposition to the legislature since the legislature made it more likely the wrong will occur. This allows Greenberg to deny legal status to this moral duty.

But the probabilistic view is implausible. Suppose the legislature declares war on Country X. To that end, it seeks to conscript able-bodied adults into the military. This, let us say, worsens the moral profile. It would be better had the legislature not declared war. Indeed, the case for war is so weak that, in ordinary circumstances, the legislature would fail to impose a duty on us to join the military. However, by declaring war, the legislature makes it more probable that Country X will commit serious moral atrocities. To stop it, able-bodied adults are thereby under a duty to fight. Intuitively, this duty is both moral and legal. And so, by joining the military, able-bodied adults comply with their legal duty. But the probabilistic account cannot explain this. From its perspective, this is a duty to resist the grave moral wrong of Country X committing atrocities. And that is a wrong which the legislature's act made more likely to occur. It therefore arises in a legally improper way. Hence, there is no legal duty to join the military.

10 Mark Greenberg, 'The Moral Impact Theory, the Dependence View, and Natural Law' in George Duke and Robert P George (eds), The Cambridge Companion to Natural Law Jurisprudence (CUP 2017) 275, 289-91. To motivate this disavowal, Greenberg offers this example. Suppose the legislature enacts a statute. It clearly designates a particular scheme. Call this scheme A. But things go awry. For whatever reason, the legislature fails to cause a convergent expectation on scheme A. Perhaps this is because a critical player accidentally misinterprets the statute, or because of a pervasive psychological bias, or so on. In any event, the population converges on scheme B, not A. So it is scheme B which becomes morally salient. We might then have a moral duty to participate in scheme B. And that, Greenberg says, is a legal duty. Yet the legislature, on any plausible view of intention, sought to require scheme A. And the statute, on any plausible view of linguistic meaning, requires scheme A.

Greenberg faces other problems, too. Consider a legislature which acts to impose a tax. Because of this, my friends need to fill out complicated paperwork. Since they are struggling, I promise to help. I thereby come under a moral, promissory duty. It is caused, in part, by what the legislature did. But it is not a legal duty.

Once again, Greenberg says the duty does not arise in the legally proper way, 11 for these duties are too far downstream (that is, too remote) from the legal institution's act. So the causal process is legally improper. But this, like his attempt to exclude duties to resist from law, does not work. To see this, consider what remoteness might mean.

There are two intuitive possibilities. First, consequences are too remote insofar as they are unforeseeable. But it may well be readily foreseeable that, by imposing a tax, people will promise others to help with the paperwork necessary to pay that tax. Second, consequences are too remote insofar as the acts of others 'break the chain of causation'. On this view, the legal institution may have led to my having a promissory duty to assist. Since I chose to make that promise, however, it is too remote, for my free choice has broken the causal chain. But this is of no help, for we could simply change the facts to make my duty to assist non-consensual. Suppose my duty to help fill out the tax form is owed not to my friends by virtue of a promise, but to my parents by virtue of our relationship. On these facts, I do not choose to come under the moral duty to assist. So I have not broken the chain. Nonetheless, my duty is not legal.

To sum up, Greenberg seeks to address these potential cases of overinclusion by adding a caveat to his theory. He says law only consists of changes which legal institutions cause in the 'legally proper way'. The worry is this formulation simply reflects an intuition that these duties are not law. 12 If that is so, the caveat is just an empty label. It offers no positive explanation for the intuition.

The Judicial Enforceability Approach

Suppose, for instance, a victim has waited too long to bring a claim. Here there is a strong intuition, supported by legal practice, that the victim continues to have a right as a matter of substantive law. The victim is just procedurally barred from enforcement. Similarly, the rules of evidence could stop a plaintiff from proving the allegations in court. In practice, this leads to unenforceability. But suppose the allegation is factually true. It just cannot be proven in court. Here, you might, once again, think the plaintiff possesses a substantive legal right.
If so, the plaintiff possesses a legal right as a first-order matter. But the court, as a second-order matter, lacks the power to adjudicate. So the legal right is unenforceable. Yet dismissal has nothing to do with the merits, that is, with whether the plaintiff has the legal right she asserts.

Given this, you may wish to revise the view. Perhaps a moral duty is legal if it is enforceable in court, or if it is unenforceable because a separate legal rule either requires or permits a court not to enforce it. Since procedural limits and the justiciability doctrines are legal rules, this revised approach can correctly identify the duties they render unenforceable as law.

But this remains seriously overinclusive. For instance, English courts cannot recognise new criminal offences at common law.[15] This is a legal rule which prevents courts from enforcing moral duties not found in legislation. Yet we would not describe all these unenforceable moral duties as legal duties. In response, you could say this legal rule just reflects the moral position. With or without the rule, it would be wrong for courts to recognise new common law offences. But this cannot help Dworkin, for we could say the same thing about non-justiciability. Given the lack of judicial capacity, it would be wrong for courts to enforce a duty in a non-justiciable area. Those non-justiciable duties can still be legal duties, however.

The Constitutive Reasons Approach

We have dwelt for too long on the shortcomings of other approaches. It is now time to see whether we can do better.

I think we can, by focusing on the reasons which make up duties. Take our duty not to steal. What explains this duty? Well, theft is wrong for all sorts of reasons. It deprives the victim of a valuable interest. It manifests disrespect towards her. It causes her distress. And so on. These are all plausible considerations which count against stealing. Together, the force of those reasons renders theft impermissible.

Suppose a state which prohibits theft adds to the reasons not to steal. Afterwards, there would just be one more reason, among many, against it. The presence of this additional reason sets this moral duty apart from others. This raises a tempting possibility. We might say a legal duty is just a moral duty which has, as one of its underlying reasons, a legal reason.[16]

Such a view merely requires the presence of this reason. It says nothing about its relative importance. There are two possibilities. First, the legal reason could be decisive for the duty to arise. The duty to pay taxes is a good example. Without the law, the duty would not arise. Second, the legal reason could be unnecessary for the duty. Indeed, the force of some moral duties is so overwhelming that the addition of further reasons, including legal reasons, seems insignificant.[17] Take, for instance, the duty not to murder. So long as one of the reasons composing that duty is legal, however, the duty forms part of the law.[18]

This shifts much of the explanatory burden from the duty to the reasons underlying that duty. A moral duty is legal when one of the reasons for complying with it is of the right sort. So everything hinges on the following question: what makes a reason a legal reason? That is what the next section seeks to answer.
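Before turning to that question, it may help to fix the shape of the view just stated. Schematically, and merely as a restatement of the last two paragraphs, write Reasons(d) for the set of reasons which compose a duty d. Then:

$$\mathrm{Legal}(d) \iff \mathrm{Moral}(d) \;\wedge\; \exists r\,\bigl(r \in \mathrm{Reasons}(d) \wedge \mathrm{LegalReason}(r)\bigr)$$

On this rendering, the two possibilities just noted are the case where the legal reason r is necessary for d to arise at all (the duty to pay taxes) and the case where r is redundant to d's force (the duty not to murder). Either way, r's membership in Reasons(d) suffices for legal status.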
Legal Reasons

Consider the duty to drive in the left lane. Two distinct sets of considerations could support this duty. First, because it promotes road safety: needlessly injuring another is wrong, and sticking to the left lane reduces the risk of that occurring. Second, because we agreed to drive on the left. Or because it is what a democratically elected body chose. Or because it is a fair price to pay for the benefits we obtain from co-operatively driving on the road. That latter set encompasses a diverse range of considerations, including consent, democracy and fair play.[19] Among others, they are what I call 'legal reasons'.

[16] For the avoidance of doubt, I think legal reasons must be moral reasons. True, for a reason to be legal, something must distinguish it from all the other reasons. But that distinction need not lie between moral and non-moral reasons. It may, instead, be a distinction between moral reasons.

[17] Joseph Raz, Ethics in the Public Domain (OUP 1994) 342-3.

[18] What about entirely trivial reasons? By this I mean reasons which play no part in the explanation of the content of the duty. This occurs when the legal reason makes no difference to what the duty requires. When this is the case, removing the reason does not alter the scope of the duty in any way; the duty prohibits precisely the same set of acts as before. Such reasons fail to compose the relevant duty, and so cannot make that duty a legal duty. This is an issue of composition: when can reasons properly be said to partly compose a duty? I cannot give a complete answer to this mereological issue here, but there is one possibility I find especially promising. A reason composes a duty when it forms a necessary aspect of a set of reasons which together suffice to ground the precise content of that legal duty. This bears some similarity to a leading account of causation: see Richard Wright, 'Causation in Tort Law' (1985) 73 Cal L Rev 1735, 1788-1803. Yet I say, unlike Greenberg, that my account is non-causal. What gives? Notice how, on my account, the determination is not something we would typically describe as causal. It is not a relation between an event and a state of affairs. It is, rather, a relation between duties and the reasons that underlie them. This is metaphysical, not causal, determination. Both, however, are one-directional determination relations (ie X grounds/causes Y, but not the other way around). So it is unsurprising that similar ideas apply across both domains.

[19] Recall that, on my account, a duty is legal so far as one legal reason (like consent, democracy or fair play) composes the duty. Why is this important? Because any given legal reason, taken in isolation, cannot plausibly explain the entire content of the law. Consider consent. The effort to explain legal duties as consensual is subject to well-known difficulties. The most pressing is that many people do not, in fact, consent to the rule of the state: see John A Simmons, Moral Principles and Political Obligations (Princeton UP 1979) 83-100; Ronald Dworkin, Law's Empire (Harvard UP 1986) 192-3. Nonetheless, consent can still help explain the legal duties of officials (who choose to stay in office) and short-term tourists (who choose to visit).
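The composition test floated in note 18 can also be put compactly. On one natural reading it parallels the 'necessary element of a sufficient set' (NESS) structure of the causal account it cites: writing Grounds(S, d) for 'the reasons in the set S together suffice to ground the precise content of duty d', a reason r composes d just in case

$$\exists S\,\bigl( r \in S \;\wedge\; \mathrm{Grounds}(S, d) \;\wedge\; \neg\,\mathrm{Grounds}(S \setminus \{r\}, d) \bigr).$$

A trivial reason, in the sense of note 18, belongs to no such set, and so cannot confer legal status.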
Now, you might doubt whether these considerations all, in fact, support the traffic duty. Little turns on this. The traffic example is only illustrative. What matters is two things. First, these considerations sometimes count in favour of a legal duty, even if you reject their salience in the traffic context. Second, something unites these reasons as a single category, which then distinguishes them from our general reason not to injure others.

To be clear, I will not offer a sustained defence of how these particular considerations support legal duties. Their precise grounds, that is, how they support the moral duties we have, so far as they do, is not something I address. All I want to establish is that these considerations are plausible candidates for the set of legal reasons. You may disagree with the precise picture, but I hope to show how they could support legal duties.

A. Fair Play

At this point, I want to focus on a particular candidate for a legal reason: fair play. With this discussion I hope to clarify the kind of reasons I have in mind, and how they might bear on the legal duties we possess.

To achieve a common benefit, we sometimes need to work together. An obvious example might be a football team, which must co-ordinate to win. And winning, let us say, is beneficial to all members of that team. Being in a position to win, though, is not easy. It requires hard work, like gruelling practices. This gives rise to the intuition underlying fair play. One should not benefit from the hard work of one's teammates if one does not put in the requisite work oneself. Doing so takes advantage of the efforts of others, and that is unfair. So there is reason to avoid taking a free ride.[20]

Many question the breadth of this consideration. Nozick, for instance, asks us to consider a neighbourhood association which operates an entertainment scheme.[21] The group assigns each neighbour a day on which she is responsible for providing entertainment. Suppose the entertainment benefits each neighbour. Given this, must the neighbours entertain on their assigned day? Now, there are many ways to respond, but here I consider two.

First, we could distinguish a reason to entertain from a duty to do so. That is, we could accept the neighbours have reason to contribute, given the benefits they enjoy. But the benefit is not so important as to generate a duty.[22] We could assess this importance in both objective and subjective terms. The entertainment may not objectively be too important. Alternatively, the neighbours may subjectively prefer other things over entertainment.

Second, we could distinguish passive receipt of benefits from active participation in the beneficial scheme.[23]
The members of a sports team are actively involved in the joint activity. They are not just bystanders who benefit from seeing the team play; in an important sense, they are the team. Similarly, it may not be enough for the neighbours to enjoy the entertainment. Perhaps they must participate in the neighbourhood association before coming under a duty to contribute.

Once we keep these limits in mind, it becomes easier to see how fair play reasons could arise. Those who actively participate in co-operative enterprises for a substantial mutual benefit ought to bear the reciprocal burdens.[24]

B. Special Reasons

You may worry my account is arbitrary. What explains treating fair play as part of a distinctive set of reasons? Perhaps I have simply selected a kind of moral consideration from a grab bag of reasons. If so, the ensuing account would be objectionable as ad hoc. It may get us closer to our intuitions about which moral duties are legal, but at the cost of explanatory power.

It is good, then, that something is distinctive about fair play, alongside other considerations like consent and democracy. Some reasons apply to us all, by virtue of our being human. And we owe the duties they support to everyone else, by virtue of their being human. These are general reasons. Other reasons, however, apply only to those in particular relationships. And we owe the duties they support only to the other members of that particular relationship. These are special reasons.[25]

Consider the duty not to needlessly cause injury to others. We owe this to everyone else. It does not require us to be in a relationship, other than in the possible sense in which we are all members of a moral community. So the reason which supports this duty, the disvalue of injury, is a general reason. Now consider the duty to do what I agree to do. Plausibly, I only come under the duty once I enter the agreement. The agreement forms a distinct relationship between me and you. Further, I owe the duty to respect that agreement to you, but not to others. So the reason which supports this duty is a special reason. The same applies to the other considerations we addressed. The democracy-based reason applies only to members of a democracy, and we owe the duties it supports only to members of that democracy. And so on.

Put another way, special reasons arise in, and are relevant to, the limited domain of a relationship. In the context of law, however, one particular relationship looms large: how we stand with respect to the state. For municipal law, the law of states (like American law), is surely of central significance to any account of law. So I want to start by addressing how my approach explains this important species of law.

Again, legal reasons are distinctive because they are special reasons. They arise only from, and apply only within, particular relationships. In the context of municipal law, the relevant relationship is our relationship to the state. Within this domain, our legal reasons are the reasons which arise from our relationship with the state, or at least with an aspect of the state. So the reasons which arise from our relationship to the United States explain why some of our moral duties form part of American law. Here, I understand 'relationship' quite broadly. This expansive understanding allows my account to encompass the kinds of considerations I have previously mooted.

First, consent. By agreeing to have certain duties apply to me, I enter into a consensual relationship with the state, and that may entail a relational duty to keep my word.[26]
Second, democracy. I share a relationship of citizenship with my fellow compatriots, and that may entail a relational duty to respect their views. Third, fair play. By participating in beneficial activities, I may come under a relational duty to bear a fair share of their associated burdens.

You may worry my understanding of 'relationship with the state' is too loose. When I agree to follow a state's laws, the relevant consensual relationship includes the state. This much is straightforward. But what about the relationship between democratic citizens? Or between drivers on the road? I think these, too, are relationships we have with the state, or with an aspect of the state.

First, democratic citizens. Elections occur through a state's institutions, like polling booths and electoral boards. And it is state institutions, like the legislature, which carry out a democratic mandate. For instance, a sweeping victory may confer a mandate upon the legislature to enact the victorious party's platform. And even when the victory occurs through a referendum, it is ultimately state institutions which must enact the practical changes.

Second, drivers. To be sure, the primary participants are those who drive their vehicles on the road. But the state puts up signs and road markings. It maintains traffic lights. It operates a licensing scheme for qualified drivers. It hires police officers and traffic controllers. By travelling on the road, we position ourselves in a special way with respect to this state-run activity. That position, the relation between travellers and those making safe travel possible, constitutes a relationship with an activity which forms an aspect of the state.[27]

[26] Hence, short-term tourists may have a consensual duty to comply with the laws of their hosts. To vacation in France, I may need to agree to abide by French laws, even those which strike me as seriously misguided. This gives me a special reason to conform to French law. To be sure, when I travel to France, I do not plausibly agree to comply with all the moral duties which obtain in France. My agreement is directed only at a subset of those moral duties: the duties which form French law. This presupposes an account of French law. So you may worry this leads to a circularity problem. But there is a straightforward solution. The set of duties I accept are (for the most part) just those legal duties which apply, in non-consensual fashion, to French residents. To identify them, we must turn to the special reasons which apply to French citizens, given their relationship to the French state. Doing so helps us see which duties tourists agree to follow.

[27] At this point, you may want an exhaustive account of which activities form an aspect of the state and which do not. That requires a theory of the state, which this article lacks. I do, however, want to insist on two points. First, we should expect to need a theory of the state to arrive at a full picture of what constitutes the law of a given state. To fully account for American law, we need to know: (i) what law is; and (ii) what counts as part of the United States. Here, I have primarily focused on (i). Second, my account is broadly consistent with an intuitive sketch of what does, and what does not, form an aspect of the state. For instance, the relationship among citizens in a democracy forms part of the state. Hence, the special reasons Americans have to respect democratic decisions make certain moral duties part of American law. By contrast, my relationship with a friend does not form part of the state. This is why, although I have special reasons to support my friends, the duties those reasons support are not thereby part of American law.

C. Legal Domains

If I am right, legal duties are partly constituted by special reasons. Those reasons apply within the limited domain of a particular relationship. Their presence makes moral duties into legal duties.
Put this way, this account takes us beyond not just the state, but even bodies relatively analogous to the state. Any kind of relationship for which special reasons are applicable could be a law-having body. So far, I have focused on the state. Here, I am on solid footing: no one doubts the ability of states to have law. But we could go further. Municipal law is one type of law, but what about international law? Now, some doubt whether international law is really 'law'. But we could go further still: what about canon law?

I happen to prefer a radically pluralist position. This explodes the kinds of legal domains we could have. We may have special reasons arising from our relationship with the world community. Or even with our church. Indeed, any relationship which gives rise to special reasons is, in principle, a body for which there is law. Consider friendships. Given their relationship, friends may have a duty to support each other. So, as it stands, friendship is a potential legal domain. The duties partly constituted by the special reasons of friendship form the 'law of this friendship'.

To avoid this implication, we could insist that only the state, or bodies sufficiently analogous to the state, can have law. Doing so requires an articulation of the state-like features which a law-having body must possess. Fleshing out those features is an available avenue for those who find a 'law of this friendship' especially unintuitive.[28] For my part, however, I doubt the importance of this task. Suppose a mother, when addressing her child, refers to one of her family's rules as the 'law of her family'. Not much, I think, turns on whether she is mistaken.[29]

At this point, you might worry this indifference conflicts with a sentiment I expressed at the start. To motivate my account, I criticised views which sought to eliminate the line between legal duties and (simply) moral duties. Identifying which duties do, and which do not, form part of the law is a key feature of legal practice. And, if nothing else, this task presupposes that not every moral duty is a legal duty. A theory of law should seek to explain the social practice of law. Discriminating between legal and (simply) moral duties is just part of that practice. Failing to explain it comes at a serious cost.
But this sentiment is perfectly consistent with my present indifference. To reconcile them, consider the distinction between domain-specific and cross-domain assessments of what 'law' is. By domain-specific assessments, I mean assessments of whether a given duty counts as law within a particular domain, like the state. This is what we refer to when we ask whether theft is illegal in England. By cross-domain assessments, I mean assessments of whether a particular domain, as compared to other domains, has a body which is capable of having law. This is what we refer to when we ask whether a neighbourhood association, as compared to the state, can have law.

To argue for the importance of distinguishing law from non-law, I relied on the nature of legal practice. The distinction forms an important part of the social practice which any theory of law should explain. This supports the significance of domain-specific assessments. Participants within legal practice (lawyers, judges and so on) spend much of their time asking whether a putative duty forms part of the law of their jurisdiction. Lawyers in England argue over, and judges answer, questions like the extent of the duty not to commit theft under English law. This is distinct from cross-domain assessments. From their perspective, whether it is mistaken to describe my family rules as 'the law' (in my family) is beside the point. What these lawyers care about is that my family's rules do not form part of English law.

In short, legal practice is generally concerned with domain-specific assessments, that is, with the legal status of duties within their domain. To be sure, some kinds of cross-domain assessments are relevant to legal practice. Some legal questions in a particular jurisdiction turn on the legal status of a duty in a different jurisdiction. For instance, English judges may need to adjudicate disputes which involve a choice-of-law provision referring to Spanish law as the source of applicable rules. Other cross-domain assessments, however, are largely irrelevant to legal practice. Take the question of whether a given domain is capable of having law at all. The status of Spain as a law-having body is not in doubt. The same cannot be said for the Sicilian Mafia. Whether criminal gangs have their own law might interest some philosophers, but it does not feature in the everyday practice of lawyers and judges, or even of law professors and law students. My argument for the significance of distinguishing law from non-law, then, does not apply to this kind of cross-domain assessment.

Comparative Advantages

Why accept the constitutive reasons account? Here, I show it is preferable to the alternatives, for it directly addresses the problems with causal pedigree and judicial enforceability.

A. Causal Pedigree

To start, I want to address a possible misunderstanding. Nothing about my account is inconsistent with the following claim: that acts of legal institutions always bear a causal relation to legal duties. This point is at the heart of Greenberg's approach. But it is also consistent with my account. Take consent. Tourists may need to agree to abide by a state's laws if they wish to enter the country. Giving permission to enter is an act by a state institution; that act forms part of the causal story for why the consent-based reason arises.
Given this, you may wonder whether my account collapses into causal pedigree. It does not. Suppose the relevant legal reason arises from a legislature's democratic authority. On Greenberg's view, what explains the duty's legal status is its being caused by that legislature's actions. On my view, however, we only need to know two facts: (i) that a democratically accountable institution made a decision; and (ii) that there is a special reason to respect the judgments of such institutions. Unlike Greenberg, I do not rely on how (i) led to (ii). Thus, my account does away with the need to specify a 'legally proper way' to cause legal duties to arise, as the relevant relationship between (i) and (ii) is constitutive, not causative. The duty is legal insofar as it is partly constituted by a legal reason.[30]

The primary problem with Greenberg's account is overinclusiveness: it picks out too many duties as legal. Earlier, I focused on two examples: first, duties to resist the acts of legal institutions; and second, changes to the moral position which are only tenuously related to the acts of legal institutions. It would be costly for an account to count these duties as law. To be sure, Greenberg is aware of this problem. To address it, he says only changes caused in the legally proper way count as law. But this comes at the cost of explanatory power, for, as I argued, he lacks an attractive account of why these duties arise in a legally improper way. The constitutive reasons account fares better on both counts.

(i) Resisting evil

When we describe a state of affairs as evil, we ordinarily refer to reasons of universal applicability. Think of the killing of innocents, slavery and so on. Now, consider resistance to evil. Everyone has reason to prevent such evil from occurring, and we owe this to everyone else. So it is a general, not a special, reason. This is why the duty to resist evil is not, without more, a legal duty.

Suppose the United States sought to create concentration camps. We have a duty to resist this. Now, suppose the United States does not exist. In this possible world, it is a gang of criminals who seek to bring about this state of affairs. I nonetheless have precisely the same reasons to resist their activities. So the duty is insensitive to the existence of the United States. No legal reasons, special to our relationship with the United States, support it.

This, you may worry, is false. If the United States commits wrongs, Americans have a reason to protest, even if others do not. The thought is that a state's actions are of greater concern to its citizens. Those citizens bear heightened responsibility for what their state does; that translates into a reason to stop it committing further wrongs. This could, for instance, take the form of complicity. On this view, Americans are complicit in what the United States does in their name. To avoid being complicit in the wrong, they must do things, like protest, to try to stop the United States from acting wrongly.
There is, however, a distinction between a reason and what that reason favours. The same reason can count in favour of different acts. The value of someone's life is a reason for the doctor to take reasonable care when treating her, and a reason for me not to kill her. Giving incompetent medical care and committing murder are different acts, but the same reason, that her life is valuable, counts against both. Similarly, the same reason, that the United States has wronged others, may favour different acts. For Americans, it may require protests; for others, something else. The reason, however, applies to both. It is not limited to the relationship Americans have with the United States.

To see this, let us return to complicity. The objection assumes the acts are wrong; if they are good, I should want to be 'complicit' in their occurrence. This is because complicity concerns the degree of responsibility one bears for an act. So it does not tell us anything about what favours, or disfavours, our action. Rather, it bears on the strength of our reasons, whatever they might be, given our situation. So complicity does not, alone, reveal a special reason, or, indeed, any reason at all. A further story is needed. And that story depends on general reasons. Our reason not to be complicit in murder is the value of the victim's life. This is of general applicability. It is something we owe to the victim, not to those with whom we are in a relationship of complicity. To be clear, this is entirely consistent with the thought that, with regard to a duty to protest, the relationship between American citizens and the United States matters. It is just that the relationship alters what that reason favours, rather than forming part of the ground of that reason. Reasons are only legal when the relationship matters in the latter sense.[31]

(ii) Visiting relatives

I should visit my relatives, given the valuable relationship I share with my family.
Part of what constitutes that relationship are our duties to support one another. Given the loneliness caused by the lockdown, and the COVID pandemic generally, this duty may require me to visit my relatives. Now, the duty, to be sure, is composed of a special reason. I owe the duty only to my relatives, given my relationship with them. But my relationship is with them, not the state. So the duty does not form part of municipal law.

[31] You could press the objection further. Suppose my friend is about to commit theft. The value of our friendship, you may say, gives me a reason to stop her. It is distinct from the general reason to stop strangers from committing wrongs. The interests of the potential victim ground a general reason to prevent theft. We may also think, however, that the value of my friendship gives me a special reason. It applies only within my friendship. I owe it not only to the victim, but also to my friend, to stop her from stealing. Similarly, I may owe it to the United States, given our relationship, to stop it from committing wrongs. This special reason, however, is parasitic on a general reason. Even on this view, I only owe it to the United States to stop it from wronging others. That an act is wrong is therefore ineliminable from the explanation of why Americans in particular should intervene. And the wrongness of the act is explained by reasons quite apart from their relationship with the United States. Any reason which favours a duty to protest must include a general consideration. So it can, at most, be a composite reason: a reason constituted by both general and special considerations. Such composite reasons are not (wholly) special reasons. And only special reasons can be legal reasons.

B. Judicial Enforceability

The primary problem with Dworkin's judicial enforceability account is underinclusiveness. Some moral duties are unenforceable in court, yet they are intuitively legal. Why insist on their legal status, despite their unenforceability? Because of the divergence between the reasons for the legal duty and the reasons for its unenforceability. A limitation period renders my claim unenforceable, but for reasons quite apart from those which underlie my legal rights. Perhaps the delay in bringing suit was faultless; I may be just as deserving of the right. Yet a court ought not to enforce it, so as to ensure legal certainty for a broad class of potential defendants.

Consider the following two scenarios. In Scenario A, I owe a moral duty to pay £500. It is also a legal duty as a matter of contract law. One reason for the duty is that it is good to keep my word. Another is that I benefit from the practice of contracting, which the state makes possible. Given these benefits, I have a fair-play reason to endure its burdens. Now consider Scenario B. Nothing about the reasons for the duty has changed. The only change concerns my identity. For I am no longer an ordinary person; I am, instead, a visiting sovereign of an independent nation. So there are good instrumental reasons against judicial enforcement of this duty. It would harm foreign relations, cause embarrassment and upset settled expectations between sovereigns. All this goes to why a court should not order me to pay. None of it, though, relates to whether I should pay. Of course I should. So the duty in Scenario A is precisely the same duty as in Scenario B. It would therefore be odd if the duty in Scenario A, but not that in Scenario B, were legal.
My account vindicates this intuition. If the reasons for a duty stay the same, so too does its legal status. Returning to my promissory duty to pay £500: part of why I should pay the money is that I gave my word. This reason alone suffices to support a moral duty. But it is only a legal duty if we can identify a legal reason which underlies it. Here we can: the fair-play reason to endure reciprocal burdens.

But what if that reason went away? Not all promissory duties arise in the context of a co-operative activity which forms an aspect of the state. For instance, I could promise to pay £500 without intending to participate in the co-operative activity of contracting. If I did, the fair-play reason would not arise. Assume we are unable to identify another legal reason to support the duty. Even so, I might have a moral duty to pay the £500. The other reasons to keep my word, not affected by this change of circumstances, could well be decisive. But it would not be a legal duty.

Objections

To restate my view, the law of an entity, such as the state, consists of the duties partly composed of a reason arising from a relationship with that entity. Here, I address potential objections to this position, but not the objection that my account is underinclusive because it excludes legal duties which do not bind in morality. According to this worry, the law sometimes requires things we are morally free to refuse. My account of law cannot explain them, for I presuppose legal duties are always moral duties. Now, this is an important objection, but not one I can address here. It takes aim not really at my particular account, but at anti-positivism more generally. The anti-positivist response, of course, is simply to reject the premise. It is to deny legal duties can be anything other than moral duties. Others have sought to motivate this denial.[32] If more work is needed on this front, it must await another occasion. Given this, you can read this article as defending the following conditional claim: if anti-positivism is correct, we should focus on law's constitutive reasons.

To that end, I respond to various objections. All share a common theme: they worry my account is overinclusive. More precisely, they identify moral duties which, at least intuitively, do not form part of the law.[33] These duties fall into three groups: (i) duties for which there is no judicial remedy; (ii) duties arising from advice; and (iii) duties which closely support state activities.

A. Duties without Remedies

The constitutive reasons account allows for legal duties which are judicially unenforceable. Often this is intuitive. The government can act unlawfully even if the matter is non-justiciable before the courts. Similarly, a tortfeasor commits a legal wrong even after the limitation period runs out.

Sometimes, however, your intuitions might go the other way. Let us return to the contract law example. Suppose I promise to pay you £500 without securing an agreement to receive anything in exchange. Plausibly, my promise suffices to ground a duty to pay. Say I refuse. I thereby wrong you. Yet you might be unable to get a court to enforce that duty. In England, damages are typically available only when the promise is backed by good consideration.
So long as the promise was intended to create legal relations, I have said, a legal reason supports the promissory duty. This is because the duty partly consists of the fair-play reason to endure reciprocal burdens arising from active participation in a beneficial activity. This suffices to make the promissory duty legal; no further requirement of consideration is necessary. I am therefore committed to saying gratuitous promises can impose legal duties.

[32] Dworkin, Justice for Hedgehogs (n 5) ch 19; Mark Greenberg, 'How Facts Make Law' (2004) 10 Legal Theory 157.

[33] Notice something important. Overinclusiveness objections are unavailable to those sympathetic to category eliminativism, for they take the following form: (P1) There is a set of duties which form the content of the law. (P2) If my account is true, a given duty forms part of that set. (P3) That duty does not form part of that set. (C) My account is therefore wrong. Category eliminativism gets off at P1. Its proponents do not accept the existence of a discrete set of legal duties. But the overinclusiveness objection needs P1, for it assumes the law has a certain content, which my account overshoots. I take such objections seriously, because I accept there is a genuine sense in which some duties are not legal. I just think these objections fail on the merits. By contrast, category eliminativism cannot accept these objections as meaningful. That is because it denies that 'the set of legal duties' has a true referent. Hence, there is nothing for an account to be overinclusive of. Say you agree these objections are meaningful, irrespective of whether they succeed or fail. That is yet another reason to reject category eliminativism.

This is entirely consistent with insisting upon consideration before awarding damages. Here are two reasons the law might restrict remedies in this way. First, consideration could be a useful formality. It evidences an intention to create legal relations, ensures the robustness of that intention by performing a cautionary role, and offers a way to express that intention in a characteristic manner.[34] Second, we might say the breach of a duty arising from a gratuitous promise is wrong, but not so wrong as to warrant a coercive remedy. For that, we may need the particular unfairness of resiling from a bargain.[35]

Perhaps you are unsatisfied, as we are still left with the following picture: gratuitous promises ground legal duties, albeit unenforceable ones. Why not just say they fail to ground any legal duties? Here is why: because my view better explains the way the doctrine of consideration is perceived. In White v Jones, for instance, the UK House of Lords developed a doctrine to extend a remedy for some breaches of gratuitous promises.[36] There, Lord Goff accepted that this extends to the plaintiff 'what is, in substance, a contractual cause of action'.[37] The impulse to develop the law in this way is readily explicable if we view consideration as an external restriction on the availability of a remedy, rather than as a core feature of the underlying duty. Nor is White v Jones unique. The law often struggles to decide whether to enforce gratuitous promises. Thus, 'the law would be rendered more intelligible and clear if the need for consideration were abolished'.[38] This is an internal, doctrinal critique of the law. It accuses consideration of frustrating the development of an intelligible principle by which to explain the substantive law of contract. Again, this is consistent with the thought that consideration is a remedial restriction.
More generally, the doctrine of consideration has inspired particular hostility.[39] At first glance, this is puzzling. Not all moral duties form part of the law, and there is no widespread unease about that. But the hostility is altogether more understandable if we suppose gratuitous promises impose legal duties. We might think, for example, that the law should, as a starting point, enforce legal duties. Sometimes there are good reasons not to. Think of justiciability doctrines. Even so, the lack of enforcement might leave us uneasy; at a minimum, we require a powerful justification for non-justiciability. The same dynamic could explain the particular unease with consideration in contract law.

B. Advisory Duties

In the midst of the COVID pandemic, the UK government sought to alter the behaviour of its residents. Often it did so not by proposing legislation or issuing regulations, but by providing advice. These statements did not go through a formal process of approval. And they were expressed in non-imperative terms.

Nonetheless, it is plausible that some residents came under a moral duty to do what the government advised. Suppose the government advised us to stay home. Perhaps we ought to stay home anyway, to protect the health of others. This alone could not, under my account, make the duty legal, for the reason to protect the health of others is general, not special. But other reasons could support this duty, too. For instance, it might be unfair to benefit from the sacrifices of others while disregarding the corresponding burdens. Since lockdowns were an intimate aspect of the state, my account identifies this reciprocity-based reason as a legal reason. So we would have a legal duty to stay home.

In some ways, this captures the experience on the ground. At the time, many, including officials, treated advice in just the same way as regulatory rules. To be clear, this is not to say officials were justified in doing so. I reject the conflation of a duty's legal status with its enforceability. Nonetheless, you might find my conclusion, that advice sometimes led to legal duties, unintuitive for two reasons: first, because the advice did not go through the formal processes which are characteristic of law; and second, because the government did not intend for its advice to be law. Here I respond to both.

(i) Formal process

For many, 'law' and 'advice' were indistinguishable during the COVID pandemic. The UK government chose to institute its COVID strategy through a complicated mix of regulatory and advisory rules. Often, it was difficult to discern where one began and the other ended. Nor, indeed, did the distinction matter much to ordinary residents. Their lives were significantly altered regardless. Further, officials, predictably, had trouble telling the regulatory and advisory rules apart. This led to confusion about which duties were enforceable and which were not. And that posed a serious challenge to the rule of law.
Given this, it may have been better for such advice to go through a formal, characteristically legal process. That the government failed to do this might motivate you to deny that the advice counted as law. My account offers a different explanation. It allows for a powerful criticism of government-by-advice, not because it fails to be law, but because it is law. Precisely because the advice led to legal duties, it should have gone through the formal process characteristic of law. And precisely because the advice was law, the lack of certain procedural safeguards was regrettable, for the rule of law attends to the harms which might otherwise arise from law.[40] Perhaps the issuance of advice was necessary, but it may have led to the very harms the rule of law guards against.

(ii) Intent

Another worry is that the government did not intend to create law. Not all law, however, is formed intentionally. Some legal rules are customary, and those rules arise from social practices which do not always aim to construct law. Similarly, the government may have unintentionally created legal duties via advice. A related worry is that the government intended to offer suggestions, not orders. Now, in the context of COVID lockdowns, this is doubtful. There, the government may well have intended to create mandatory duties, even while framing them as 'advice'. In any event, some of its advice, during the exceptional context of the pandemic, led to moral duties. So we know the government's acts diverged from its intent. The question is simply how far that divergence went.

C. Ancillary Duties

Here, I focus on two contexts: constitutional conventions and elections. I describe their associated duties as 'ancillary' because they relate to the functioning of core state institutions. Constitutional conventions play a key role in regulating how our constitutional actors behave. And elections allow legislatures to enjoy democratic legitimacy.

(i) Constitutional conventions

To explain how constitutional conventions might bind, Jaconelli argues that they are supported by a reason arising from mutual benefit and burden.[41] Some constitutional conventions restrain the ruling party. So far as power periodically changes hands, these restraints will eventually apply to the opposition party, too. It may be beneficial to all parties to accept these restraints. Although burdensome to the party in power, a restraint confers valuable protection when that party is in opposition. These political parties, insofar as they benefit from such restraints, have reason to accept their respective burdens. I have described such reasons as arising from fair play. Under my account, the duties to which constitutional conventions refer, insofar as they are supported by such reasons, are legal duties. This presents a challenge, for we typically think of constitutional conventions as rules which, although they form part of the constitution, are not law.

What motivates this thought? Dicey offers one argument: constitutional conventions cannot be law since they are neither 'enforced or recognised by courts'.[42] Munro offers another: that, in contrast to the systemic nature of law, conventions form a 'discrete unconnected set'.[43] The truth of both propositions is doubtful, even on their own terms.[44] First, the claim about courts. As a descriptive matter, courts often recognise the existence of constitutional conventions,[45] and perhaps they sometimes even enforce them, too.[46]
Second, the claim about systematicity. Some constitutional conventions are intentionally created by actors. Those actors, in turn, are authorised to do so by a separate, power-conferring constitutional convention. If so, these conventions would bear the systemic relation between primary and secondary rules which Hart thought was the mark of law.[47]

For most, I suspect, it is the claim about judicial enforceability which motivates the denial of legal status to constitutional conventions. That is, you might descriptively accept that some courts enforce conventions. You could, however, normatively reject the appropriateness of such enforcement. And that distinguishes conventions from law. But the distinction remains undermotivated, for, as we saw, not all laws are enforced in court.[48] Since some laws are not legally enforceable, the presence of enforceability cannot distinguish conventions from law.

This, you may worry, goes too far. Perhaps not every law is judicially enforceable. But judges plausibly have a defeasible duty to apply any legal duty before them. If they do have such a duty, the starting point, for legal duties, is judicial enforceability. The objection would be that constitutional conventions, by contrast, lack this starting point. Put this way, however, the point misleads. True, for most legal duties, this defeasible duty will likely be decisive. By contrast, most constitutional conventions will likely attract serious concerns about judicial enforceability. The force of this objection therefore relies on the contrast between law, understood as a distinct category, and conventions. But it is the extent of the former category, the set of legal duties, which we are trying to discern. And the difference dissipates once we investigate a subset of those legal duties. Consider duties which, although obviously legal, are intimately connected to the political choices of elected officials. The judicial enforcement of these duties raises serious concerns, just as the enforcement of constitutional conventions does.

(ii) Voting

There is no legal duty to vote in the United States, but such a legal duty exists in Australia. The question is whether my account can make sense of this difference. One possibility is to deny that American citizens are morally obligated to vote. On this view, legislative intervention is needed for a moral duty to arise, as in Australia.

Many, however, think a moral duty exists in both jurisdictions. So we must turn to the constitutive elements of this moral duty. In Australia, one of the reasons underlying the duty is the legislative determination in favour of compulsory voting. Hence, we can identify a legal reason: the reason to respect democratic decisions. This explains why Australia has a legal duty to vote. But can we explain why America does not?

To do so, I must show that no legal reasons bear on the moral duty of Americans to vote. This looks like a significant challenge, for there is an obvious possibility. Political participation could be valuable. That value arises from our relationship with the polity. Given this value, Americans could have a reason to vote. This reason, under my account, looks like a legal reason.
On one view, this value arises no matter our preferred candidate. What matters is our participation in the electoral process. Now, such a value is easy to grasp in small, homogeneous, discursive societies; think of an idealised version of ancient Athens. There, we might think elections facilitate valuable forms of deliberation and civic friendship. But you may doubt this wholly participatory value arises in modern pluralistic societies, for the mere act of voting, taken alone, is rather thin. And even if the value exists, you may doubt it is sufficiently strong to contribute to a duty to vote.

Given this, you may seek to enrich the value by reference to our preferred candidates. This allows for a more promising approach, for electoral outcomes can have important consequences. So we may have moral reason to vote for, and otherwise assist, the better candidate. There is reason to help good candidates win public office. But this, in isolation, looks like a general, not a special, reason. We should support people who will bring about good consequences and oppose others who will cause harm. This is a reason we always have; it is not a special reason. So it cannot be a legal reason.

In response, you could turn to fair play.[49] By voting, we can prevent injustice and achieve other beneficial consequences. We all benefit from this. By refusing to vote, we free ride on those who did vote to prevent injustice. Voting to achieve such ends is a co-operative enterprise. It requires each participant to endure the burden of casting a vote. As a matter of reciprocity, we should do our fair share to contribute to the common benefit of good electoral outcomes.

Nonetheless, here are a few arguments against the existence of fair play reasons to participate in elections. First, we may doubt the electoral process, as a whole, is a co-operative enterprise, for it is irreducibly competitive. Elections are how duelling factions seek political victory over one another. When you and I vote for diametrically opposed political candidates, our shared activity is not co-operative. By contrast, it is easier to see how political parties, who organise to seek the victory of their preferred candidates, are engaged in a co-operative enterprise. But the activities of a particular political party are distinct from the state. Second, we may doubt whether a person who refuses to vote meaningfully participates in the electoral activity. Earlier, I suggested fair play reasons arise only for those who participate in the co-operative enterprise. Such a restriction is one way to address Nozick's objection concerning the neighbourhood association which supplies entertainment. If so, this decisively cuts against an electoral bystander having a reason to vote. To be sure, there are ways, aside from voting, to participate in the electoral process. For example, I could volunteer at the polls or be an activist. But then I am enduring a burden, and therefore the question of free riding would not arise.
Conclusion

Suppose the law consists of a duty to pay half my income in taxes. For positivists, this bears no relation to there being a moral duty to do so. For anti-positivists, by contrast, I must necessarily have a moral duty to pay that amount. Now take the language of necessity away. The question, suitably revised, is whether, under the law, there exists a moral duty to pay half my income. This issue attracts the attention of not just legal philosophers, but political theorists too. A long line of thought, travelling under the familiar label of 'political obligation', evaluates the prospects of various considerations which might support a duty to pay. These considerations include consent, democracy and fair play: precisely the considerations I identify as legal reasons.

Such discussions, however, often assume a deeply positivistic outlook. First they ask what the law is; then they ask whether a duty, apart from the merits of that law, exists. The constitutive reasons account flips this order. It first looks to the considerations which underlie the duties we have. Only then does it ask what the law is.

Even those who disagree with my account of legal reasons can accept this approach. You can add, jettison or modify the set of reasons which count as legal reasons. What matters is that we agree on the significance of the reasons which underlie legal duties. To explain legal duties, we should look to the nature of those reasons. A particular kind of reason, the legal reasons, makes moral duties into legal duties.
Enhancing bone marrow regeneration by SALL4 protein

Hematopoietic stem cells (HSCs) are widely used in transplantation therapy to treat a variety of blood diseases. The success of hematopoietic recovery is of high importance and closely related to the patient's morbidity and mortality after hematopoietic stem cell transplantation (HSCT). We have previously shown that SALL4 is a potent stimulator of the expansion of human hematopoietic stem/progenitor cells in vitro. In these studies, we demonstrated that systemic administration of TAT-SALL4B expedited auto-reconstitution and induced a 30-fold expansion of endogenous HSCs/HPCs in mice exposed to a high dose of irradiation. Most importantly, TAT-SALL4B treatment markedly prevented death in mice receiving lethal irradiation. Our studies also showed that TAT-SALL4B treatment was able to enhance both the short-term and long-term engraftment of human cord blood (CB) cells in NOD/SCID mice, and the mechanism was likely related to the in vivo expansion of donor cells in the recipient. This robust expansion required the association of SALL4B with the DNA methyltransferase complex, an epigenetic regulator critical in maintaining HSC pools and in normal lineage progression. Our results may provide a useful strategy to enhance hematopoietic recovery and reconstitution in cord blood transplantation with a recombinant TAT-SALL4B fusion protein.

Background

Hematopoietic stem cell transplantation (HSCT) is a type of stem cell therapy used to treat cancers such as lymphoma and leukemia, as well as other blood-related diseases. However, there are several limitations to the use of bone marrow hematopoietic cell transplantation. The primary limitation is that the donor pool is restricted by the need for at least partial HLA matching, and it usually takes time to find a suitable donor [1]. Human umbilical cord blood (hUCB) is increasingly being used as an alternative source of hematopoietic stem/progenitor cells (HSC/HPC) for allogeneic HSCT because of its rapid availability and less stringent requirement for HLA matching [2]. This is especially important for minority patients and patients of mixed ethnicity, for whom hUCB is a particularly attractive alternative donor stem cell source. However, the absolute number of hUCB HSC/HPC transplanted is much lower than with bone marrow or mobilized peripheral blood stem cells, owing to the limited volume of hUCB. This leads to significantly delayed engraftment and increased peri-transplant complications [3][4][5][6]. One approach to overcome this problem is to use two unrelated hUCB units for the transplantation [7,8]. While this strategy improved adult engraftment rates, it brought about worse GVHD (graft-versus-host disease) [9]. In addition, when two or more hUCBs are transplanted into one recipient, it is usual for only one of the infused units to persist in the patient [7,8,10].

There are multiple steps critical for HSC/HPC engraftment in vivo: the transplanted cells must first home to a niche, and the seeded HSC/HPC must then expand and proliferate, that is, engraft. Several other strategies being explored to increase engraftment include ex vivo expansion of hUCB HSC/HPC [11][12][13][14], increasing hUCB HSC/HPC homing by CD26 inhibition or SDF-1/CXCR4 regulation, and the addition of third-party mesenchymal stem cells. However, these strategies still involve problems such as the risk of contamination and the loss of long-term engraftable cells during ex vivo manipulation of hUCB [15].
The identification of agents that increase hUCB engraftment by in vivo expansion of transplanted hUCB HSC/HPC, without in vitro manipulation, would be of significant therapeutic value. Recently, we demonstrated that lentiviral expression of SALL4 in human bone marrow HSC/HPC was able to dramatically expand these cells and enhance their capacity for long-term engraftment in NOD/SCID mice [16][17][18]. To evaluate the in vivo effect of SALL4 on hematologic recovery and donor cell engraftment in HSCT, we produced SALL4B protein in a baculovirus expression vector system (BEVS) and injected the protein into animals after irradiation or HSCT. In the present study, we successfully expressed and isolated a TAT-SALL4B fusion protein carrying the protein transduction domain of the HIV transactivating protein (TAT) in BEVS and demonstrated the activity of the recombinant TAT-SALL4B protein in vitro. In addition, TAT-SALL4B accelerated the regeneration of mouse bone marrow after lethal or sub-lethal irradiation through in situ expansion of bone marrow HSC/HPC. In a transplantation model in which human cord blood CD34+ cells were introduced into NOD/SCID mice, TAT-SALL4B protein treatment was able to augment both short-term and long-term engraftment of human cells in the recipient mice. Our results suggest the potential utility of recombinant TAT-SALL4B protein as a stimulator of hematologic recovery after myelosuppression and an enhancer of cord blood cell engraftment in HSCT.

Expression and purification of TAT-SALL4B fusion protein

Previously, we demonstrated that SALL4-transduced human CD34+ cells were capable of rapid expansion in vitro [16]. Protein transduction utilizing cell-penetrating peptides (CPPs) can overcome many of the limitations of lentiviral vectors. Therefore, we sought to develop a CPP-SALL4B fusion protein, TAT-SALL4B, expressed in an Sf9 insect cell system. The SALL4 stem cell gene has two alternatively spliced isoforms. SALL4A is the large spliced variant, and SALL4B is a smaller variant approximately half the length of full-length SALL4A. We focused our studies on SALL4B [16] because it is the shorter form and is expressed at a high level in Sf9 cells. The structure and functional modules of the SALL4B construct for the baculovirus expression system are shown in Figure 1a. The full-length TAT-SALL4B was expressed in baculovirus-infected Sf9 cells and purified using anti-histidine (His) affinity chromatography (Figure 1b-c).

Functional activity of TAT-SALL4B fusion protein in vitro

In our earlier experiments utilizing SALL4B lentiviral transduction [19], we showed that SALL4B could regulate the expression of multiple genes involved in the self-renewal maintenance of human ES cells, including OCT4 and Nanog. In addition, SALL4B can regulate its own promoter [20]. To test whether TAT-SALL4B bears similar functional activities, we introduced OCT4 and SALL4 luciferase promoter reporters into 293T cells and found that TAT-SALL4B protein could regulate OCT4 and SALL4 promoter activities in a pattern similar to that reported previously [19] (Figure 1d). The TAT-SALL4B protein significantly upregulated OCT4 and downregulated SALL4 promoter activities. The luciferase activities also showed a dose-dependent response to TAT-SALL4B protein (Figure 1e).
We also tested the activity of the TAT-SALL4B protein on the growth of mouse bone marrow HSCs/HPCs enriched by a combination of lineage, c-Kit and Sca-1 magnetic cell sorting. LSK (Lin−/c-Kit+/Sca-1+) cells were treated with 20 nM TAT-SALL4B or BSA control. After 6 days of culture, the total cell number had increased by ~40-fold in the TAT-SALL4B group, as compared to only ~8-fold in the control (Figure 1f).

Enhancing bone marrow recovery by TAT-SALL4B

As previously shown by our group, SALL4 expression is seen in hematopoietic CD34+ cells [21], but not in CD34− cells. Markedly elevated levels of SALL4 were detected in the early phase of bone marrow recovery after ablation, and expression levels decreased as bone marrow cellularity increased (Figure 2). This indicates that upregulation of SALL4 may play a role in bone marrow recovery. We then tested the impact of TAT-SALL4B on the growth of residual bone marrow cells in vivo after ablation. A series of experiments was carried out to determine whether the purified TAT-SALL4B fusion protein had the ability to regenerate bone marrow production in vivo. TAT-SALL4B protein, G-CSF, or PBS was injected intraperitoneally into mice for seven consecutive days starting 24 hours after lethal irradiation (Figure 3a). The dose of lethal irradiation (7 Gy, gamma-ray) administered to the mice was able to kill more than 99% of the mouse bone marrow cells within two to three days. An average of 2 × 10⁷ whole bone marrow nucleated cells was obtained by flushing both tibias and femurs from one wild-type mouse. In the PBS group, the number of whole bone marrow cells per animal was 1.32 ± 0.21 × 10⁵ at day 8 after irradiation. Consistent with previous reports, G-CSF increased the number of whole bone marrow cells by ~3-4-fold (4.51 ± 0.47 × 10⁵) [22]. The increase was over 6-fold (7.91 ± 0.83 × 10⁵) in the SALL4B group compared to the PBS control. These data suggest that SALL4B has a greater effect than G-CSF on boosting the proliferation of bone marrow cells after irradiation (Figure 3b). To further confirm our cell count data, we analyzed histological sections from the various treatment groups 8 days after irradiation. In contrast to the PBS group, in which only very few cells, mainly marrow stromal cells, were left in the mouse bone marrow cavity, the cellularity of the bone marrow was dramatically enhanced by SALL4B treatment, similar to that in the G-CSF-treated animals (Figure 3c). Furthermore, we detected the presence of TAT-SALL4B in the bone marrow cells of mice by flow cytometry and immunofluorescent staining (Additional file 1: Figure S1). This demonstrated that the cells repopulating the marrow cavity had taken up the TAT-SALL4B protein. An additional control with TAT-GFP was used to exclude the possibility that the observed mouse bone marrow regeneration resulted from an effect of the TAT domain itself. We expressed and purified a TAT-GFP fusion protein using the same method utilized for TAT-SALL4B and compared the function of TAT-GFP to PBS in lethally irradiated mice. The results showed no difference in the total number of bone marrow cells between the two groups (Additional file 1: Figure S2), suggesting that TAT had no impact on the proliferation of bone marrow cells and that the SALL4B portion of the fusion protein accounted for the regeneration of mouse bone marrow.
Radioprotection by TAT-SALL4B

To determine whether the enhanced growth of residual marrow cells had a functional impact, we performed an animal survival assay after lethal irradiation. TAT-SALL4B treatment beginning 24 hours after 8 Gy lethal irradiation, a dose at which mice usually die within 30 days, significantly increased survival. As depicted in Figure 3d, the cumulative actuarial 30-day survival rate in the SALL4B group was 85.7%, compared to 0% in the PBS control group. These data are consistent with the observation of improved bone marrow cellularity in TAT-SALL4B-treated mice. Notably, this radioprotective effect was achieved by post-irradiation administration of SALL4B, in contrast to G-CSF treatments, where radioprotection is effective only when administered before or within two hours after irradiation injury [23].

TAT-SALL4B boosts HSC/HPC in injured mouse bone marrow

We then investigated in detail the impact of TAT-SALL4B on the expansion of residual HSCs/HPCs after ablation. Compared to the control group (0.94 ± 0.24%), the LSK cell percentage was significantly higher in the G-CSF group (3.29 ± 0.62%). More importantly, the LSK cell percentage in the SALL4B group (5.52 ± 1.02%) was even higher than that in the G-CSF group (Figure 4a,b). The total fold increases (vs. control) in HSC number (whole bone marrow cell number multiplied by the LSK percentage) in mouse bone marrow were ~10-fold and ~30-fold in the G-CSF and SALL4B treated groups, respectively (Figure 4c). In order to analyze the number of progenitor cells in these bone marrow cell populations, CFC assays were conducted. In parallel with flow cytometry (data not shown), both G-CSF- and TAT-SALL4B-treated mice showed significantly higher bone marrow CFC content than mice treated with PBS, and the G-CSF-treated mice had the highest bone marrow CFC content overall (Figure 4d). This is consistent with the observation of a biased effect of G-CSF toward lineage-restricted progenitors [22].

TAT-SALL4B treatment increases short-term engraftment in HSCT

One major disadvantage of using human cord blood is that the absolute number of HSCs/HPCs in hUCB is much lower than that in bone marrow, which significantly delays engraftment [6]. We next tested whether TAT-SALL4B was able to increase the proliferation of transplanted donor cells in the host. First, we used the mouse CD45.1/CD45.2 transplantation system. We enriched mouse BM precursor cells by c-Kit positive selection from CD45.1 mice and transplanted them into lethally irradiated CD45.2 mice. At day 8, the mice in the SALL4B group had higher numbers of total bone marrow cells than those in the control group (2.44 ± 0.36 × 10⁶ vs. 1.58 ± 0.16 × 10⁶; P < 0.05, Figure 5a). When only donor cells (CD45.1+) were analyzed, the results showed that the SALL4B group had more donor cells than the controls (1.28 ± 0.21 × 10⁶ vs. 0.78 ± 0.13 × 10⁶; P < 0.05, Figure 5b), although the percentage of donor cells in the bone marrow did not differ between the SALL4B group and the control (data not shown). In addition, we also tested the effect of the SALL4B protein on the short-term engraftment of human cord blood CD34+ cells introduced into sublethally irradiated NOD/SCID mice. In parallel with the results from the CD45.1/CD45.2 transplantation system, the total number of bone marrow cells was significantly increased at 14 days after transplantation in the SALL4B-treated animals compared to the PBS controls (12.5 ± 1.50 × 10⁶ vs. 5.67 ± 0.94 × 10⁶; P < 0.05, Figure 5c).
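To make the fold-increase arithmetic explicit, the sketch below recomputes the total HSC numbers from the group means quoted above (whole bone marrow counts from the recovery experiment, LSK percentages from this section). This is an illustration of the calculation only, not part of the original analysis, and the helper names are ours.

```python
# Illustrative recomputation of the HSC fold increases described above:
# total HSC number ~ whole bone marrow cell count x LSK fraction, with the
# fold increase taken relative to the PBS control. Group means are from the text.
groups = {
    "PBS":    {"wbm": 1.32e5, "lsk_pct": 0.94},
    "G-CSF":  {"wbm": 4.51e5, "lsk_pct": 3.29},
    "SALL4B": {"wbm": 7.91e5, "lsk_pct": 5.52},
}

def hsc_number(group):
    return group["wbm"] * group["lsk_pct"] / 100.0

control = hsc_number(groups["PBS"])
for name, group in groups.items():
    print(f"{name}: ~{hsc_number(group) / control:.0f}-fold vs. PBS")
# Using group means this gives ~12-fold (G-CSF) and ~35-fold (SALL4B),
# the same order as the ~10- and ~30-fold values quoted in the text,
# which were computed from per-animal data.
```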
Furthermore, the percentage of human cells, as identified by a human CD45 antibody, in the mouse bone marrow of SALL4B protein-treated mice was greater than that of the control, although this difference was not statistically significant (0.15 ± 0.11% vs. 0.085 ± 0.05%, P = 0.07, Additional file 1: Figure S3). However, when the total increase of whole bone marrow cells is taken into account, the increase of human CD45+ cells in the bone marrow is statistically significant in the SALL4B group compared to controls (16.6 ± 10.2 × 10³ vs. 4.65 ± 2.45 × 10³; P < 0.05, Figure 5d). These data demonstrate that TAT-SALL4B treatment can promote proliferation of the host mouse bone marrow not only in a myeloablative condition (lethal irradiation) but also in a non-myeloablative condition (sublethal irradiation). In addition, TAT-SALL4B can enhance the proliferation of donor cells after transplantation. Recent studies have shown that homing is an important mechanism for the increase of donor cell number in transplantation studies. In order to evaluate whether the SALL4B protein has an effect on the homing activity of donor cells, we conducted further experiments using the CD45.1/CD45.2 transplantation system. In these trials, the recipient mice were injected intraperitoneally once with TAT-SALL4B after transplantation and then sacrificed 24 hours later for analysis. Our results showed no difference in the total number of bone marrow cells or donor cells between the SALL4B-treated and control groups (Additional file 1: Figure S4). Therefore, we believe that the increased short-term engraftment in both the mouse-mouse and human-mouse transplantation studies was most likely related to the expansion of donor cells in the host and not to their homing activity.

TAT-SALL4B injection enhances long-term repopulating capacity of human cord blood CD34+ cells in NOD/SCID mice

Facilitation and maintenance of the long-term engraftment of HSCs is of high importance in HSCT, as it correlates with the success of hematological reconstitution and clinical outcome in patients after HSCT. We have observed that the TAT-SALL4B protein was able to increase the short-term engraftment of donor cells in both the mouse-to-mouse and human-to-mouse transplantation models. To address whether post-transplant administration of TAT-SALL4B is also capable of facilitating the long-term engraftment of human HSCs, NOD/SCID mice were injected with human cord blood CD34+ cells via the tail vein (Figure 6a) and received TAT-SALL4B or PBS intraperitoneally. Sixteen weeks after transplantation, there was a significantly higher percentage of human CD45+ cells in the bone marrow of TAT-SALL4B-treated mice compared to the control group (3.90 ± 0.36% vs. 0.43 ± 0.34%, P < 0.05, Figure 6b,c). Furthermore, these cells were able to differentiate in vivo into various blood lineages, including lymphoid, myeloid, and erythroid lineages (Figure 6d,e). Of note, SALL4B protein treatment may be able to reverse the higher lymphoid (versus myeloid) reconstitution usually observed in human cord blood HSC transplantation in NOD/SCID mice [24]. This demonstrated that TAT-SALL4B treatment does not alter the normal differentiation program of human HSCs in vivo and is consistent with our previous results using HSCs/HPCs over-expressing SALL4B by lentiviral transduction.
In order to determine whether the SALL4B protein could also affect tissues other than hematopoietic cells, we examined all of the major organs and found no abnormalities in histology studies (data not shown). We also performed long-term follow-up in a small number of mice receiving TAT-SALL4B post-transplant. As shown in Additional file 1: Table S1, we monitored the animals throughout the study for body weight and hematopoietic parameters. No tumors occurred in the different strains, including C57Bl/6, B6/SJL and NOD/SCID. Some mice have been observed for more than 18 months since TAT-SALL4B administration with no tumor formation.

The N-terminal 12 amino acids are essential for SALL4B to stimulate the expansion of HSCs/HPCs

DNMT-mediated epigenetic modifications are important for the self-renewal of hematopoietic stem cells. We have found that SALL4 actively recruits DNMT epigenetic modifiers (DNMT1, 3A, 3B, 3L and MBD2) to target genes, leading to their inactivation. Downregulation of endogenous SALL4 expression led to accelerated cell differentiation of HSCs/HPCs [17]. Our previous studies also indicated that the SALL4 N-terminal sequence is essential for its interactions with DNMTs [25]; we conducted studies to further illustrate this. We generated a SALL4B lentivirus construct with the N-terminal 12 amino acids (N12) deleted (ΔN12-SALL4B) (Figure 7a). Deletion of the N-terminal 12 amino acids in SALL4B strikingly diminished its physical interactions with DNMTs in mouse LSK cells, resulting in a reduction of methyltransferase activity to a level comparable with that of a negative control (Figure 7b-c). When ΔN12-SALL4B was introduced into HSCs/HPCs, it strikingly diminished the induction of HSC/HPC expansion, unlike SALL4B (Figure 7d).

Discussion

Recently, we have demonstrated that bone marrow HSC/HPC transduced to express SALL4A or SALL4B are able to achieve high levels of ex vivo expansion without loss of their long-term engraftment capability. To evaluate the in vivo effect of SALL4 on the expansion of hematopoietic precursor cells in transplantation, we generated a CPP SALL4B fusion construct, TAT-SALL4B, and expressed it in insect cells. Protein transduction utilizing CPPs can overcome the limitations of lentiviral vectors and has recently come into wide use [26-28]. There are multiple phase 2 clinical trials using the CPP approach as a systemic or topical delivery system; these include NCT00451256 for c-myc (prevention of undesirable cell proliferation in coronary artery bypass grafts), PsorBan for cyclosporine, and NCT007885954 for a PKCδ inhibitor (treatment of acute myocardial infarction). We found that TAT-SALL4B was able to enhance hUCB HSC/HPC engraftment in NOD/SCID mice. In our study, after treatment with the TAT-SALL4B protein, both short-term and long-term engraftment of human cells was significantly enhanced in NOD/SCID mice transplanted with hUCB CD34+ cells. Notably, for short-term engraftment, although the percentage of human cells in mouse bone marrow did not differ much between SALL4B- and PBS-treated animals, the absolute number of human cells was significantly increased after SALL4B treatment. Our studies indicate that the increased engraftment (absolute number of human cells) was likely a result of the direct expansion of donor cells in the bone marrow by SALL4B treatment rather than of enhanced homing of donor cells.
Interestingly, even though as few as 20,000 human CB CD34+ cells were used for the transplantation, at 4 months after transplantation there was still a significant portion of human cells in the bone marrow of SALL4B-treated mice, compared with almost no human cells in the control. Our results may have some therapeutic value in terms of the low stem cell dose available in human UCB transplantation. Numerous clinical studies have consistently demonstrated that the total nucleated cell (TNC) and CD34+ cell doses in cord blood grafts are highly correlated with the rate of neutrophil and platelet engraftment, as well as the incidence of graft failure and early transplant-related complications [8]. Because of this, only ~10% of all cord blood specimens can currently be used. It is possible that the introduction of SALL4B could render cord blood units that are currently unusable due to low TNC or CD34+ cell counts suitable for future therapies. Drugs that promote rapid hematopoietic recovery would address the major cause of morbidity and mortality. G-CSF and derived drugs specifically target the hematopoietic system and prolong the median survival time of lethally irradiated mice [23]. However, no survival advantage is observed when mice receive G-CSF 24 hours post-TBI [23]. An increased time window after TBI would be advantageous in a nuclear emergency setting, where healthcare provider time may be at a premium and patients cannot be treated right away. In our study, we found that SALL4B injection dramatically regenerated the bone marrow in mice after irradiation, suggesting a potential radioprotective effect of SALL4B. Importantly, this effect was achieved by injection 24 hours post-irradiation. Additionally, TAT-SALL4B could potentially be used to increase the therapeutic index of radiation therapy regimens for cancer patients by reducing the hematologic toxicity of ionizing radiation, as well as for enhancing bone marrow stem cell engraftment and for the treatment of aplastic anemia. Our studies also observed a stimulatory effect of SALL4B on bone marrow cells after both lethal and sublethal irradiation in mice. Since bone marrow cell numbers are not obviously affected in wild-type animals receiving SALL4B treatment (Additional file 1: Figure S2), this suggests that the SALL4B protein may only boost the proliferation of bone marrow cells under pathological conditions, when the bone marrow microenvironment allows additional cells to occupy it during recovery. Once the bone marrow recovers to a normal condition, SALL4B does not cause hyper-proliferation, which could be related to tumorigenesis; this is important for safety. In fact, to address concerns about safety, we monitored the animals throughout the study for body weight and hematopoietic parameters. No tumors have occurred so far in the different strains, including C57Bl/6, B6/SJL and NOD/SCID, and the longest time since mice received SALL4B is over 18 months. The mechanism underlying the stimulating effects on HSC/HPC expansion is still largely unknown. SALL4 is highly expressed and plays an important role in embryonic stem cells, primitive germ cells, HSCs/HPCs and acute leukemia [17,19,20,29]. Aside from its gatekeeper role in embryonic development and in the pluripotency of the inner cell mass of the blastocyst [19,20,30,31], SALL4 is essential for primordial germ cell survival [32].
Recent studies have suggested that hematopoiesis may be initiated from migrating germ cells [33,34], and SALL4, as a stem cell marker, could be useful in further investigating this notion. Recently, we have found that SALL4 actively recruits DNMT epigenetic modifiers to target genes, leading to their inactivation. We further demonstrated that the N-terminal 12 amino acids were critical for SALL4B binding to DNMTs and for the consequent expansion of HSCs in vitro. In the future, we will explore shortening the protein based on these results and test its activity in the in vivo expansion of bone marrow stem cells. In addition, bone marrow niche cells such as endothelial and perivascular cells [35] may also take up the TAT-SALL4B protein and contribute to the in vivo expansion of HSCs/HPCs observed in the present study. In conclusion, our data demonstrate that TAT-SALL4B protein from insect cells promoted hematopoietic recovery after lethal and sub-lethal irradiation. Furthermore, TAT-SALL4B treatment was able to enhance both the short-term and long-term engraftment of human UCB cells in NOD/SCID mice, and the mechanism is likely related to the in vivo expansion of donor and recipient cells in the bone marrow. TAT-SALL4B protein could become an attractive candidate to enhance the engraftment of human UCB cells in hematopoietic stem cell transplantation and to facilitate hematopoietic recovery after radiation injury.

Purification of TAT-SALL4B

Sf9 insect cells (ATCC, Manassas, VA, USA) were transfected with a baculovirus expression construct containing either the human SALL4B sequence or a GFP control, each with a C-terminal 6×His fusion tag. To recover recombinant protein, cells were lysed in lysis buffer [50 mM Na2HPO4/NaH2PO4 (pH 7.4), 300 mM NaCl, 20 mM NEM, 0.2% Triton X-100] containing 20 mM imidazole. His-tagged proteins were eluted in the lysis buffer containing 300 mM imidazole and blotted for detection with SALL4 or GFP antibody. Fractions containing the desired protein were pooled and dialyzed in IMDM overnight.

Promoter assay

Promoter luciferase assays were performed with the Dual-Luciferase Reporter Assay System (Promega, Madison, WI, USA) as described previously [20]. Briefly, MCF-7 cells were transfected with the SALL4B or OCT4 promoter reporter plasmid [20] using Lipofectamine 2000 (Invitrogen, Grand Island, NY, USA). Five hours later, cells were changed to fresh medium and TAT-SALL4B protein or BSA was added. The next day, cells were analyzed after being treated with TAT-SALL4B protein or BSA three times. The data are represented as the ratio of firefly to Renilla luciferase activity (Fluc/Rluc). These experiments were performed in duplicate.

Animals

The animals used included three different strains: C57BL/6 (CD45.2), B6/SJL (CD45.1) and NOD/SCID mice (The Jackson Laboratory, Bar Harbor, ME, USA). All animal procedures were approved by the Stony Brook University Institutional Animal Care and Use Committee.

Bone marrow recovery analysis and survival test

Eight-week-old C57BL/6 mice received 7 Gy total body irradiation (TBI) at a dose rate of 0.6 Gy/min. Twenty-four hours later, 6 μg TAT-SALL4B protein, 2 μg G-CSF (as previously described [22]), 6 μg TAT-GFP protein or PBS was injected intraperitoneally daily for seven days. At day 8, bone marrow cells were counted and analyzed by flow cytometry. In addition, bone marrow tissue sections were prepared for Wright-Giemsa staining. For the survival test, mice were given a single dose of 8 Gy TBI and received TAT-SALL4B or PBS in the same pattern as described above.
Mice were monitored daily after irradiation for 30 days.

Cell transplantation

Two transplantation models were utilized in our study: CD45.1/CD45.2 (mouse to mouse) and CD34+/NOD/SCID (human to mouse). Animals received 7 Gy TBI (C57BL/6 mice) or 2.5 Gy TBI (NOD/SCID mice). Twenty-four hours after irradiation, animals were injected with 400,000 mouse c-Kit+ cells or 40,000 human CB CD34+ cells. For short-term engraftment analysis, transplanted mice were given daily TAT-SALL4B protein or PBS injections for seven consecutive days. At day 8 or day 14, total bone marrow cells were counted and the donor cells in recipient mice were analyzed using a mouse CD45.1 or human CD45 antibody. For the long-term engraftment experiment, 20,000 human CB CD34+ cells were transplanted into NOD/SCID mice and four additional protein injections were given in the following week. Human cell content in the bone marrow was checked 16 weeks after transplantation.

Homing assay

CD45.2 mice were irradiated (7 Gy) and transplanted with 400,000 CD45.1 mouse bone marrow c-Kit+ cells 24 hours later. TAT-SALL4B protein or PBS was injected into the CD45.2 mice immediately after transplantation. Animals were sacrificed for bone marrow cell collection 24 hours after transplantation. The percentage of CD45.1 cells among bone marrow cells was analyzed by flow cytometry and the absolute number of CD45.1 cells was calculated.

CFC assay

20,000 bone marrow cells from TAT-SALL4B-, G-CSF- or PBS-treated animals were suspended in MethoCult® (Stemcell Technologies) medium for the CFC assay according to the manufacturer's instructions. A colony with more than 100 cells was counted as a positive colony.

Immunoprecipitation

HEK 293 cells were infected with Flag-tagged SALL4B or ΔN12-SALL4B lentivirus. Proteins were prepared with CelLytic™ MT Cell Lysis Reagent (Sigma-Aldrich, St Louis, MO, USA). Immunoprecipitations were performed using the Dynabeads® Protein G Immunoprecipitation Kit (Invitrogen) according to the manufacturer's instructions. Western blots were conducted with antibodies against Flag (Bethyl Laboratories, Montgomery, TX, USA) and DNMT1 (Novus Biologicals, Littleton, CO, USA).

DNA methyltransferase activity assay

These experiments were carried out using the EpiQuik DNMT Activity/Inhibition Assay Ultra Kit (Epigentek, Farmingdale, NY, USA) following the manufacturer's procedures. First, nuclear proteins were extracted from the tested cells using the EpiQuik™ Nuclear Extraction Kit (Epigentek). For immunoprecipitations preceding the assays, we used antibodies against HA or an IgG control from Bethyl Laboratories.

Statistical analysis

Results are reported as means ± SD. Values with P < 0.05 were considered statistically significant.

Additional file

Additional file 1: Table S1. Follow-up of irradiated mice receiving TAT-SALL4B protein post-transplant.
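The group comparisons in this paper are reported as means ± SD with P < 0.05 as the significance threshold. The paper does not specify which statistical test was used; the sketch below illustrates one common choice, a two-sided Welch's t-test, applied to hypothetical per-animal bone marrow counts.

```python
# Hedged illustration of a two-group comparison reported as mean +/- SD with a
# P < 0.05 threshold. The per-animal values below are hypothetical; the test
# shown (two-sided Welch's t-test) is an assumption, not stated in the paper.
from scipy import stats

sall4b  = [2.1e6, 2.8e6, 2.4e6, 2.5e6]  # hypothetical counts, SALL4B group
control = [1.5e6, 1.7e6, 1.4e6, 1.7e6]  # hypothetical counts, control group

t_stat, p_value = stats.ttest_ind(sall4b, control, equal_var=False)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}",
      "-> significant" if p_value < 0.05 else "-> not significant")
```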
6,545
2013-11-05T00:00:00.000
[ "Biology", "Medicine" ]
Search for top-philic heavy resonances in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector

A search for the associated production of a heavy resonance with a top quark or a top-antitop-quark pair, and decaying into a $t\bar{t}$ pair, is presented. The search uses the data recorded by the ATLAS detector in $pp$ collisions at $\sqrt{s}= 13$ TeV at the Large Hadron Collider during the years 2015-2018, corresponding to an integrated luminosity of 139 fb$^{-1}$. Events containing exactly one electron or muon are selected. The two hadronically decaying top quarks from the resonance decay are reconstructed using jets clustered with a large radius parameter of $R=1$. The invariant mass spectrum of the two top-quark candidates is used to search for a resonance signal in the range of 1.0 TeV to 3.2 TeV. The presence of a signal is examined using an approach with minimal model dependence, followed by a model-dependent interpretation. No significant excess is observed over the background expectation. Upper limits on the production cross section times branching ratio at 95% confidence level are provided for a heavy $Z^\prime$ boson based on a simplified model, for $Z^\prime$ masses between 1.0 TeV and 3.0 TeV. The observed (expected) limits range from 21 (14) fb to 119 (86) fb depending on the choice of model parameters.

Introduction

The discovery of a new particle consistent with the Standard Model (SM) Higgs boson by the ATLAS [1] and CMS [2] collaborations was a major milestone in high-energy physics. However, the underlying nature of electroweak symmetry breaking remains unknown. Naturalness arguments suggest that the large quantum corrections to the Higgs boson mass are cancelled by some new mechanism to prevent an excessive level of fine-tuning. Such a mechanism has been proposed in several theories of Beyond the SM (BSM) physics. The large Yukawa coupling of the top quark to the Higgs boson motivates various top-quark-based resonance searches. In many BSM theories, such as composite Higgs scenarios [3-6], new 'top-philic' vector resonances are predicted. These resonances couple more strongly to the top quark than to the light quarks, such that all couplings except for the one with top quarks can be neglected. Typical t t resonance searches target new resonances produced through q q annihilation, assuming sizeable couplings to light quarks [7-10]. Top-philic resonances, on the other hand, require different production modes, such as the production of the heavy resonance in association with a top quark or a t t pair, resulting in three or four top quarks in the final state. Figure 1 shows tree-level diagrams for the production of a top-philic resonance Z′. This analysis searches for a new top-philic heavy resonance above a mass of 1.0 TeV. The ATLAS and CMS collaborations have previously published measurements of SM four-top-quark (t t t t) production [11-15], with the first observation of this process reported by both experiments recently [16,17].
Fig. 1 Examples of tree-level Feynman diagrams for Z′ production in association with (a) t t, (b) t j (where j refers to any light quark), and (c) t W. The Z′ generation modes are derived from top-quark final states produced via (a) strong, (b) electroweak, and (c) mixed QCD and electroweak interactions.

Three-top-quark production, with a much smaller cross section of O(1 fb) predicted by the SM [18-20], has not been measured. In resonant BSM models, the two top quarks from the resonance decay, referred to as the 'resonance top quarks', are expected to be highly boosted. The other top quarks are referred to as the 'spectator top quarks' and are expected to have lower momenta. While previous BSM searches explored the t t t t final state for several target models [21,22], this analysis uniquely attempts to reconstruct the resonance explicitly, enabling a search for a new physics contribution with minimal model dependence. In addition, a simplified model [23] considering a colour-singlet vector particle Z′ is used to generate simulated samples for further model-dependent interpretations, without relying on how the resonance couples to other particles in a specific model. The interaction Lagrangian reads:

$\mathcal{L} = \bar{t}\gamma^\mu (c_L P_L + c_R P_R)\, t\, Z'_\mu = c_t\, \bar{t}\gamma^\mu (\cos\theta\, P_L + \sin\theta\, P_R)\, t\, Z'_\mu$,

where $\gamma^\mu$ are the Dirac matrices and $P_{L/R} = (1 \mp \gamma_5)/2$ are the chirality projection operators, with $\gamma_5 = i\gamma^0\gamma^1\gamma^2\gamma^3$. The coupling of the vector singlet to the top quarks is defined as $c_t = \sqrt{c_L^2 + c_R^2}$, with components $c_L$ and $c_R$ that couple only to left-handed and right-handed top quarks, respectively. The tangent of the chirality angle is defined as $\tan\theta = c_R/c_L$. For the mass range explored in this search, with a Z′ mass (m_Z′) much larger than the top-quark mass (m_t), the Z′ decay width can be approximated by $\Gamma/m_{Z'} \approx c_t^2/(8\pi)$. To minimise model dependence, only the tree-level production of a heavy top-philic resonance is considered. At the LHC a large contribution comes from t t Z′ (Fig. 1a), which is independent of θ. Further contributions arise from the t j Z′ (Fig. 1b) and t W Z′ (Fig. 1c) production modes, which are negligible for θ = π/2 and largest for θ = 0, where they have a magnitude similar to t t Z′.
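As a small numerical check of the width relation and the coupling decomposition quoted above (a sketch; the function names are ours):

```python
# Gamma/m_Z' ~ c_t^2 / (8*pi), valid for m_Z' >> m_t, and the decomposition
# c_L = c_t*cos(theta), c_R = c_t*sin(theta), so that tan(theta) = c_R/c_L.
import math

def relative_width(c_t: float) -> float:
    return c_t**2 / (8.0 * math.pi)

def chiral_couplings(c_t: float, theta: float):
    return c_t * math.cos(theta), c_t * math.sin(theta)

for c_t in (1.0, 3.0, 4.0):
    print(f"c_t = {c_t}: Gamma/m_Z' ~ {relative_width(c_t):.1%}")
# c_t = 1 gives ~4% and c_t = 4 gives ~64%, matching the relative widths
# quoted later for the simulated signal samples.
```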
This search uses the data recorded by the ATLAS detector in pp collisions at √s = 13 TeV between 2015 and 2018, corresponding to 139 fb⁻¹. In the SM, the top quark is expected to decay into a W boson and a b-quark with a branching ratio of approximately 100%, with subsequent hadronic or leptonic decay of the W boson. This analysis targets final states in which both resonance top quarks decay hadronically and one of the spectator top quarks decays leptonically. The single-lepton channel is preferred over the fully hadronic channel due to the significantly lower background arising from multijet processes and the ability to trigger on events with at least one lepton. The top quarks from the resonance decay are expected to be highly boosted, and therefore their hadronic decays are reconstructed using jets with a large radius parameter and identified by requiring the jets to have a large mass and momentum. This final state with one lepton¹ suffers from a large background that is mostly composed of t t production in association with additional jets (t t+jets), especially when the associated jets contain b-hadrons. A data-driven technique assisted by simulations is used to overcome the challenge of modelling the t t+jets background. The presence of a signal is tested in signal-enriched regions using an approach with minimal model dependence, followed by a model-dependent interpretation.

The ATLAS detector

The ATLAS detector [24] at the LHC is a multipurpose particle detector with a forward-backward symmetric cylindrical geometry and a nearly 4π coverage in solid angle.² It consists of an inner tracking detector (ID) surrounded by a superconducting solenoid, electromagnetic (EM) and hadron calorimeters, and a muon spectrometer (MS).

¹ In the rest of this article, 'lepton' refers exclusively to an electron or a muon. Tau leptons are not explicitly considered, although their decay products can be accepted by the electron, muon and jet selection criteria.

² ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upwards. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). Angular distance is measured in units of ΔR = √((Δη)² + (Δφ)²).
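A minimal sketch of these angular conventions (illustrative helper functions, not ATLAS software):

```python
import math

def pseudorapidity(theta: float) -> float:
    """eta = -ln tan(theta/2), with theta the polar angle from the beam axis."""
    return -math.log(math.tan(theta / 2.0))

def delta_r(eta1: float, phi1: float, eta2: float, phi2: float) -> float:
    """Angular distance dR = sqrt(deta^2 + dphi^2), wrapping dphi into [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

print(pseudorapidity(math.pi / 2.0))  # 0.0: a direction perpendicular to the beam
```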
The ID covers the pseudorapidity range |η| < 2.5. The high-granularity silicon pixel detector covers the vertex region and typically provides four measurements per track, the first hit normally being in the insertable B-layer (IBL) installed before Run 2 [25,26]. It is surrounded by a silicon microstrip detector and a straw-tube transition-radiation tracking detector. The calorimeter system covers the pseudorapidity range |η| < 4.9. It consists of lead/liquid-argon (LAr) sampling calorimeters, which provide EM energy measurements with high granularity. A steel/scintillator-tile hadron calorimeter covers the central pseudorapidity range (|η| < 1.7). The endcap and forward regions are instrumented with LAr calorimeters for EM and hadronic energy measurements up to |η| = 4.9. The MS surrounds the calorimeters and is based on three large air-core toroidal superconducting magnets with eight coils each. The field integral of the toroids ranges between 2.0 and 6.0 T m across most of the detector. The MS includes a system of precision tracking chambers and fast detectors for triggering. The ATLAS trigger and data acquisition system [27] consists of a two-level trigger system to select events. The first-level trigger is implemented in hardware and uses a subset of the detector information to accept events at a maximum rate of nearly 100 kHz. The second level is a software-based trigger that reduces the accepted event rate to 1 kHz, on average, depending on the data-taking conditions [27]. An extensive software suite [28] is used for real and simulated data reconstruction and analysis, for operation, and in the trigger and data acquisition systems of the experiment.

Object reconstruction and event selection

This analysis uses a set of data events collected by the ATLAS detector between 2015 and 2018 at √s = 13 TeV. Only events for which all detector subsystems were operational are considered. The data set corresponds to an integrated luminosity of 139 fb⁻¹ [29,30]. This analysis is based on events where the detector readout is triggered by the presence of at least one electron or one muon, referred to as single-lepton triggers. Events are selected if the leptons satisfy either low transverse-momentum (p_T) thresholds, with identification and isolation requirements, or a looser identification criterion with no isolation requirement and higher p_T thresholds. The lowest p_T requirement used in the single-lepton triggers varies from 20 to 26 GeV, depending on the data-taking period and the lepton flavour [31,32]. Events are required to have at least one primary vertex reconstructed from at least two ID tracks with p_T > 500 MeV. For events with several primary vertices, the one with the largest sum of the squared transverse momenta of the associated tracks is taken [33].
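A minimal sketch of this primary-vertex choice (the Track/Vertex containers are illustrative stand-ins, not ATLAS software classes):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Track:
    pt: float  # transverse momentum in MeV

@dataclass
class Vertex:
    tracks: List[Track]

def hard_scatter_vertex(vertices: List[Vertex]) -> Vertex:
    """Among vertices with >= 2 tracks of pT > 500 MeV, pick the one
    with the largest sum of squared track transverse momenta."""
    candidates = [v for v in vertices
                  if sum(1 for t in v.tracks if t.pt > 500.0) >= 2]
    return max(candidates, key=lambda v: sum(t.pt ** 2 for t in v.tracks))
```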
Electron candidates are reconstructed from an isolated electromagnetic calorimeter energy deposit that is matched to a track in the ID [34]. The pseudorapidity of the calorimeter energy cluster, η_cluster, must satisfy |η_cluster| < 2.47, excluding the transition region between the barrel and the endcap EM calorimeters (|η_cluster| ∉ [1.37, 1.52]). Muon candidates are reconstructed by combining tracks in the ID with tracks in the MS [35] and are required to have |η| < 2.5. Both electron and muon candidates are required to have p_T above 28 GeV. Further selections on the longitudinal and transverse impact parameters are imposed. The transverse impact parameter divided by its estimated uncertainty, |d₀|/σ(d₀), has to be less than five (three) for electron (muon) candidates. The longitudinal impact parameter z₀ is required to satisfy |z₀ sin θ| < 0.5 mm for both electron and muon candidates. Electron candidates must pass a 'Tight' likelihood-based identification working point [34] employing calorimeter and tracking information that provides separation between electrons and jets. They are also required to be isolated according to the 'Tight' selection, a criterion based on the properties of the topological clusters in the calorimeter and of the ID tracks around the reconstructed electron [34]. Muon candidates must satisfy the 'Medium' cut-based identification working point [36]. They are also required to be isolated using the 'TightTrackOnly' selection, a criterion based on the properties of the ID tracks around the reconstructed muon [36].

Jets are reconstructed from topological clusters of calorimeter cells and tracking information using the anti-k_t algorithm [37,38] with a radius parameter of R = 0.4, processed using a particle-flow algorithm [39]. They are referred to as 'small-R jets'. Jets are required to have p_T > 25 GeV and |η| < 2.5 and are calibrated as described in Ref. [40]. The Jet Vertex Tagger (JVT) discriminant [41] is used to identify jets originating from the hard-scatter interaction through the use of tracking and vertexing information. Jets with p_T < 60 GeV and |η| < 2.4 are required to satisfy JVT > 0.50, corresponding to a selection efficiency for hard-scatter jets of about 96% [42]. The DL1r classification algorithm, based on recurrent neural networks [43,44], is used to identify jets containing b-hadrons (b-jets). This algorithm identifies b-jets against backgrounds of light-flavour and charm-quark-initiated jets using information about the impact parameters of tracks associated with the jet and the topological properties of the displaced vertices reconstructed within the jet. This analysis uses a b-jet efficiency working point of 77%, as measured for jets with p_T > 20 GeV and |η| < 2.5 in simulated t t events [45]. The tagging algorithm gives an expected rejection factor (defined as the inverse of the efficiency) of about 192 against light-flavour jets and about 5.6 against jets originating from charm quarks [44,46].

To resolve the potential ambiguities of a single detector response being assigned to two objects by the reconstruction algorithm, a sequential overlap removal procedure is applied. First, any electron found to share a track with another electron of higher p_T is removed. Second, electrons sharing their track with a muon candidate are removed. Third, the jet closest to an electron within ΔR_y = √((Δy)² + (Δφ)²) = 0.2, defined in rapidity y and azimuth φ, is removed.
Fourth, electrons within ΔR_y = 0.4 of a remaining jet are removed. Fifth, jets with fewer than three associated tracks that are within ΔR_y = 0.2 of a muon are removed. Finally, remaining muons are removed if their track is within ΔR_y = min(0.4, 0.04 + 10 GeV/p_T,μ) of a remaining jet.

After the overlap removal, the selected and calibrated small-R jets are used as inputs for jet reclustering [47] using the anti-k_t algorithm with a radius parameter of R = 1. These reclustered jets are referred to as 'large-R jets' and are used as proxies for the hadronically decaying top quarks in this analysis. The calibration corrections and uncertainties for the reclustered large-R jets are inherited from the small-R jets [47]. A trimming procedure [48] is applied to the reclustered large-R jets, which removes all associated small-R jets that have a p_T below 5% of the p_T of the reclustered jet, to suppress gluon radiation and mitigate pile-up effects. The large-R jets are required to have p_T > 300 GeV, |η| < 2, mass m > 100 GeV, and at least two constituent jets.

At preselection level, events are required to have exactly one lepton with p_T > 28 GeV that matches the lepton that fired the trigger, and at least two large-R jets to capture the two resonance top quarks that decay hadronically. Additional jets are expected from the decay of the spectator top quarks in t t Z′ signal events, as well as from the associated production in the t j Z′ and t W Z′ production modes. The number of additional jets is counted using small-R jets with ΔR > 1 to any of the selected large-R jets in an event. Events are further required to have at least two additional small-R jets and at least two b-tagged small-R jets. The b-tagged jets can be either inside or outside of the large-R jets and can therefore also count towards the additional jets.

Event simulation

Monte Carlo (MC) samples of simulated events were produced to model the signal and background processes. The samples are normalised using the best available theory predictions. The generated event samples are processed through the full simulation of the ATLAS detector [49] based on Geant4 [50]. Only the MC samples describing SM t t t t production and systematic variations of the t t simulation are processed through a faster simulation making use of parameterised showers in the calorimeters [51]. Additional simulated pp collisions, generated using Pythia 8.186 [52] with the A3 set of tuned parameters (tune) [53] and the MSTW2008LO parton distribution function (PDF) set [54], were overlaid to model the effects of multiple interactions in the same and nearby bunch crossings (pile-up). The distribution of the number of additional pp interactions in the MC samples is reweighted to match the one observed in data. All simulated samples were processed through the same reconstruction algorithms and analysis chain as the data. For all samples of simulated events, except those generated using Sherpa [55], the EvtGen 1.2.0 program [56] was used to describe the decays of bottom and charm hadrons. The NNPDF3.0NLO [57] PDF set was used in all matrix element (ME) calculations unless stated otherwise. The signal samples are simulated using the MadGraph5_aMC@NLO 2.8.1 [58] generator at leading order (LO) in the five-flavour scheme with the NNPDF3.1LO [57] PDF set. The model implementation is based on the simplified top-philic resonance model of Ref.
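A minimal sketch of the reclustered-jet trimming and large-R jet selection described above, assuming each constituent small-R jet is represented by a simple record of its kinematics and approximating the reclustered-jet p_T by the scalar sum of its constituents (illustrative only, not the ATLAS implementation):

```python
from typing import Dict, List

def trim_reclustered_jet(constituents: List[Dict], f_cut: float = 0.05) -> List[Dict]:
    """Drop small-R constituents carrying less than f_cut of the reclustered jet pT
    (pT in GeV; scalar-sum approximation for the reclustered jet pT)."""
    jet_pt = sum(c["pt"] for c in constituents)
    return [c for c in constituents if c["pt"] > f_cut * jet_pt]

def passes_large_r_selection(jet: Dict) -> bool:
    """pT > 300 GeV, |eta| < 2, mass > 100 GeV, and >= 2 constituent jets."""
    return (jet["pt"] > 300.0 and abs(jet["eta"]) < 2.0
            and jet["m"] > 100.0 and len(jet["constituents"]) >= 2)
```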
[59], where a colour-singlet vector particle is considered to couple exclusively to top and anti-top quarks. Six mass points are generated, spaced according to the experimental resolution. The Z′ masses are: 1.0 TeV, 1.25 TeV, 1.5 TeV, 2.0 TeV, 2.5 TeV, and 3.0 TeV. Interference effects between s- and t-channel processes for t t Z′ production are simulated. The interference with SM four-top-quark production was found to be negligible. Only the resonant Z′ production mode is included in the simulation of the t W Z′ and t j Z′ channels. The signal samples are generated with baseline choices of the chirality parameter θ = π/4 and a coupling between the Z′ and top quarks of c_t = 3. Other choices of these parameters are realised using MadGraph matrix-element reweighting [60]. The width of the resonance is computed automatically using MadGraph5_aMC@NLO and corresponds to a relative width of about 4% for c_t = 1, increasing to 64% for c_t = 4.

The production of t t events is modelled using the HVQ program [61,62] in the Powheg Box 2 [61,63-65] generator at next-to-leading order (NLO) in QCD. The h_damp parameter, which controls the transverse momentum p_T of the first additional emission beyond the Born configuration, was set to 1.5 m_t [66]. The t t+jets MC events are classified according to the flavour of the particle jets. The particle jets are reconstructed from the simulated stable particles using the anti-k_t algorithm with a radius parameter R = 0.4 and are required to have p_T > 15 GeV and |η| < 2.5. Events are labelled as t t+≥1b if at least one particle jet is matched within ΔR < 0.3 to any b-hadron with p_T > 5 GeV. In the remaining events, if at least one particle jet is matched within ΔR < 0.3 to any c-hadron with p_T > 5 GeV, the events are labelled as t t+≥1c. Only hadrons not associated with b- and c-quarks from top-quark and W-boson decays are considered. All other events are labelled as t t+light. Events categorised as t t+≥1b and t t+≥1c are collectively referred to as t t + heavy-flavour (HF) events.

Samples of single-top-quark production backgrounds, corresponding to W t associated production, s-channel and t-channel production, were modelled using the Powheg Box 2 generator at NLO in QCD. Overlaps between the t t and W t final states are removed using the 'diagram removal' scheme [67]. A sample of t t + H events is modelled using the Powheg Box generator at NLO. The production of t t + V (with V = W, Z, including non-resonant Z/γ* contributions and t t + W W) events is modelled using the MadGraph5_aMC@NLO 2.3.3 generator at NLO. Samples of V+jets (V = W, Z) events are generated with the Sherpa 2.2.1 [68] generator using NLO-accurate MEs for up to two partons and LO-accurate MEs for up to four partons. Samples of diboson final states (V V) were also simulated using the Sherpa 2.2.1 generator. The production of SM t t t t events is modelled using the MadGraph5_aMC@NLO 2.6.2 generator at NLO with the NNPDF3.1NLO [57] PDF set. The rare process of single-top-quark associated Z-boson production, t Z, is modelled using the MadGraph5_aMC@NLO generator at LO. All events generated using Powheg Box or MadGraph5_aMC@NLO are interfaced with Pythia 8.230 [69] using the A14 tune [70] and the NNPDF2.3LO [57] PDF set. Events generated using Sherpa employ the set of tuned parameters developed by the Sherpa authors.
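A minimal sketch of this flavour labelling (inputs are illustrative records; hadrons from top-quark and W-boson decays are assumed to have been excluded beforehand, as stated above):

```python
import math
from typing import Dict, List

def delta_r(eta1, phi1, eta2, phi2):
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def classify_tt_event(particle_jets: List[Dict], hadrons: List[Dict]) -> str:
    """Return 'tt+>=1b', 'tt+>=1c' or 'tt+light' following the dR < 0.3,
    hadron pT > 5 GeV matching described in the text."""
    def matched(flavour: str) -> bool:
        return any(delta_r(j["eta"], j["phi"], h["eta"], h["phi"]) < 0.3
                   for j in particle_jets
                   for h in hadrons
                   if h["flavour"] == flavour and h["pt"] > 5.0)
    if matched("b"):
        return "tt+>=1b"
    if matched("c"):
        return "tt+>=1c"
    return "tt+light"
```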
To assess the uncertainty due to the choice of generator, the t t and single-top-quark samples produced with the nominal generator set-ups are compared with alternative samples generated with MadGraph5_aMC@NLO (interfaced with Pythia 8) for the calculation of the hard-scattering processes. These alternative samples were generated using the same PDF in the ME as the nominal samples. Additional t t and single-top-quark samples were produced by replacing Pythia 8 with Herwig 7.04 [71,72] for parton showering and hadronisation, using the H7UE tune [72] and the MMHT2014LO [73] PDF set. These samples are used to evaluate uncertainties due to the choice of parton shower and hadronisation model.

Analysis strategy

This analysis targets events where both resonance top quarks decay hadronically. For a heavy resonance signal with m_Z′ ≥ 1.0 TeV, the two resonance top quarks are expected to be highly boosted. Large-R jets are used as proxies for the two highly boosted top quarks. The invariant mass distribution of the two large-R jets with the highest p_T, labelled m_JJ, is scanned for an excess over the background prediction. The m_JJ distributions for the top-philic Z′ signal with different m_Z′ and c_t values are shown in Fig. 2. A peak near the generated value of m_Z′ is typically observed for masses m_Z′ ≤ 2 TeV and coupling strengths c_t ≤ 1. The peak at the resonance mass is in general wider for higher Z′ masses and higher c_t values due to the increase of the Z′ natural width. All distributions are skewed towards the lower mass end for multiple reasons. One factor is the contamination from large-R jets that do not capture, or only partially capture, the decay products of a hadronically decaying resonance top quark. Moreover, the parton-luminosity effect and QCD radiation from the highly boosted top quarks also contribute to the lower mass side. The difference between the two chirality hypotheses, θ = 0 and θ = π/2, is small, with a slightly better mass resolution in the case of θ = 0. This is due to the maximal contribution from the t j Z′ and t W Z′ production modes, which have a smaller ambiguity in reconstructing the Z′ resonance using the two large-R jets, given the lower expected jet and b-jet multiplicities.

Despite the above effects that degrade the resolution for reconstructing the Z′ mass, the m_JJ distributions for signals with small m_Z′ and c_t are still distinct from those for the background, as shown in Fig. 2. For signal hypotheses with larger m_Z′ and c_t, the distributions become more background-like given the increasing resonance width and the low-mass shoulder. According to MC simulations, about 90% of the background after the preselection consists of t t+jets events. Other background events mainly come from the production of t t Z, t t W, t t H, single top quarks, and W/Z bosons in association with jets. The SM t t t t, diboson and other rare processes contribute less than 1% of all background events. For background events, a smoothly falling m_JJ distribution is expected once m_JJ passes the turn-on threshold due to the minimum requirements on the large-R jet mass and p_T.
The search is performed in the range of the m_JJ distribution between 1.0 TeV and 3.2 TeV. The lower bound was chosen to minimise the impact of the turn-on threshold on the background modelling, whereas the upper bound was motivated by the vanishing amount of expected SM background events at the high end of the m_JJ spectrum. The product of the efficiency and acceptance for the signal in the t t Z′ production mode after the preselection varies from 3.1% to 7.2% for the different signal hypotheses, estimated using the simulated samples described in Sect. 4. In the single-lepton final state, Z′ signal events are expected to have a high multiplicity of jets in addition to the two large-R jets (N_add.-jets), as well as of b-tagged jets (N_b-jets), providing discriminating power to separate these contributions.

[Figure caption: The predictions are given by MC simulations. The contributions from the SM t t t t, diboson, and V H processes are combined in the category 'Other'. Four signal samples are presented, with m_Z′ = 1.5 and 3.0 TeV and θ = 0 and π/2. All distributions are normalised to unit area.]

These two variables are used to categorise events into regions with different signal-to-background ratios. As illustrated in Fig. 4, nine regions are defined, denoted by '(Na, Mb)', where Na and Mb represent N_add.-jets and N_b-jets, respectively, with both N and M ranging from 2 to ≥4. The region with the lowest signal contamination, (2a, 2b), is referred to as the 'source region' and is used to derive the data-driven background estimate detailed in Sect. 6. The highest N_add.-jets and N_b-jets regions are chosen according to the expected signature of t t Z′ events and help to maintain a statistically significant background estimation. All regions with at least three b-tagged jets have relatively higher expected signal contributions compared to the other regions. They are used for signal extraction and are referred to as the 'signal regions'. In the (3a, 2b) and (≥4a, 2b) regions, the small signal contribution provides little gain in sensitivity, and the expected signal contamination is not negligible for the data-driven background estimate. Therefore, they are used neither as signal nor as source regions. However, they are used as validation regions to verify the modelling of the background. Scaling the top-philic simplified signal model to the expected exclusions achieved by this search, the largest signal contamination is below 2% in the source region and below 5% in the validation regions, depending on the model parameter choices. Figure 3 illustrates the background composition in bins of N_add.-jets and N_b-jets expected from the MC predictions. The fractions of non-t t background are similar in most bins. The most dramatic change is in the fraction of t t+≥1b events. From MC simulations, the fraction of t t+≥1b events with respect to the total sum of background events is expected to increase from 8% in regions with two b-tagged jets to 68% in the most signal-like regions with at least four b-tagged jets.

Fig. 4 An illustration of the analysis regions defined using N_add.-jets and N_b-jets. The source, validation and signal regions are highlighted in blue, gray and red, respectively. The names of the regions used in the rest of the document are also shown.
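A minimal sketch of the region assignment just described (the function is ours; the roles follow the definitions above):

```python
def bin_name(count: int) -> str:
    return ">=4" if count >= 4 else str(count)

def region_label(n_add_jets: int, n_b_jets: int) -> str:
    """Map (N_add.-jets, N_b-jets) to the (Na, Mb) region name and its role."""
    if n_add_jets < 2 or n_b_jets < 2:
        return "outside the analysis regions"
    name = f"({bin_name(n_add_jets)}a, {bin_name(n_b_jets)}b)"
    if n_b_jets >= 3:
        return f"{name} [signal region]"
    if n_add_jets == 2:
        return f"{name} [source region]"
    return f"{name} [validation region]"

print(region_label(2, 2), region_label(3, 2), region_label(5, 4))
# (2a, 2b) [source region] (3a, 2b) [validation region] (>=4a, >=4b) [signal region]
```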
Background estimation

The search for the resonance in the m_JJ distribution relies heavily on the modelling of the continuum background. However, the MC prediction is expected to be unreliable and susceptible to large modelling systematic uncertainties due to several factors, discussed in the following. The top-philic Z′ signal in the one-lepton channel features a large jet multiplicity. For the main background, t t+jets, events with a similar final state contain multiple partons beyond Born level. This means that in the nominal t t+jets sample, generated using an NLO ME, most of the jets in the final state come from the parton shower, which has limited precision [74]. These jets form the large-R jets used to construct m_JJ. Additional mismodelling and uncertainties come from the prediction of the boosted large-R jets, which are subject to often badly modelled collinear radiation. Finally, the t t + HF background that populates the signal regions is underestimated by the current MC predictions [75,76].

A data-driven technique is used to provide a reliable background prediction. It is based on the similarity of the background m_JJ shape across all analysis regions observed in MC simulations. The shape of the background is therefore obtained from data in the source region, where there is negligible signal contamination. The background shape is then extrapolated to the signal regions. The extrapolation factors are derived from the MC simulations, accounting for the reduced event rate in the higher N_add.-jets and N_b-jets bins and for the minor differences in the shape of m_JJ between the regions. The final background model is obtained using a profile likelihood fit, via further adjustments according to all systematic variations considered in this analysis.

Fig. 5 The distribution of data in the source region, along with the function fitted to the data. The bands show the three independent variations obtained by diagonalising the covariance matrix of the fit. The third eigenvariation is not visible due to its smallness.

Firstly, the shape of the background is obtained by fitting the data in the source region, (2a, 2b), using the parameterization [77,78]

f(x) = p₁ (1 − x)^{p₂} x^{p₃},     (1)

where x = m_JJ/√s, with √s being the pp collision centre-of-mass energy, and p₁ to p₃ are the fit parameters controlling the shape of the function. The fit result is presented in Fig. 5, showing that the fitted function describes the data well within the uncertainties due to the limited number of data events. Alternative parameterizations considering additional parameters or alternative functional forms were tested, but were found to provide no improvement in the performance of the background estimate. The fitted function from the previous step is scaled by extrapolation factors for each signal region, in order to obtain the background prediction in that region. The extrapolation factors are derived from MC simulation in bins of m_JJ, as the ratio of the expected number of events in the region of interest to that in the source region. To minimise the impact of the limited number of MC events, the function in Eq.
(1) is used, fitting the m_JJ distribution predicted by MC simulation in both the region of interest and the source region. The extrapolation factors are then obtained by taking the ratio of these fitted functions. The shape of the background in the various analysis regions differs by up to ±15% with respect to that in the source region, according to the prediction of the extrapolation factors. The extrapolation is performed with all backgrounds combined. The shapes of the m_JJ distribution from the different background processes are compared using MC simulations. For the t t+jets, single-top-quark and t t Z/W/H processes, constituting about 95% of the total background, the shapes are consistent within the uncertainties due to the limited number of MC events. Furthermore, the composition and the shape of the small backgrounds, i.e. those other than the t t+jets backgrounds, are found to be similar across all analysis regions. As an additional check, alternative extrapolation factors were derived with the t t+jets MC simulation only, to account for the effect of variations in the non-t t backgrounds. The resulting difference was found to be smaller than the uncertainty due to the limited number of MC events, and is therefore neglected.

The extrapolation factors are based on the MC predictions in each of the analysis regions. Systematic uncertainties affecting the predicted distributions (m_JJ, N_add.-jets and N_b-jets) are therefore propagated to the background estimate via the extrapolation factors. These include modelling uncertainties in all background MC samples as well as experimental uncertainties. For each systematic variation, the extrapolation factors are re-derived using the same procedure as for the nominal background prediction. To account for the deficit of t t + HF events in MC simulation, additional normalisation uncertainties are assigned to t t+≥1c and t t+≥1b events. Furthermore, dedicated uncertainties in the data-driven background estimate and the extrapolation are considered. The details of the systematic uncertainties included in this analysis are described in Sect. 7. All systematic uncertainties are incorporated in a profile likelihood fit to data in the signal regions, as discussed in Sect. 8.

Systematic uncertainties

Different sources of systematic uncertainty affect the search presented here, including those related to the luminosity, the identification and reconstruction of the physics objects, the MC simulation of the signal and background processes, and the method used to estimate the background. In the following, a brief description of the sources of systematic uncertainty is provided. Particular emphasis is put on those related to the t t background prediction, which will be shown to have the largest impact on the sensitivity of the search. The systematic variations can affect the normalisation of the signal and background templates estimated in the different regions as well as the shape of the m_JJ distributions. All systematic uncertainties on the background prediction enter the analysis exclusively via the extrapolation factors, except for the uncertainties related to the functional fits to data and MC described in Sect. 7.2 and the signal bias uncertainty described in Sect. 7.5. The luminosity uncertainty applies only to the signal.
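A minimal sketch of this two-step estimate, assuming the three-parameter dijet-style form written in Eq. (1); the binning, inputs and helper names are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

SQRT_S = 13000.0  # GeV

def f(x, p1, p2, p3):
    # Smoothly falling shape of Eq. (1): f(x) = p1 * (1 - x)^p2 * x^p3
    return p1 * (1.0 - x) ** p2 * x ** p3

def fit_shape(mjj_centres, counts):
    """Fit the m_JJ histogram of a region; x = m_JJ / sqrt(s)."""
    x = np.asarray(mjj_centres) / SQRT_S
    popt, pcov = curve_fit(f, x, counts, p0=(1.0, 10.0, -5.0), maxfev=20000)
    return popt, pcov

def extrapolation_factor(mjj_centres, popt_region, popt_source):
    """Bin-by-bin ratio of the fitted MC shapes: region of interest over source."""
    x = np.asarray(mjj_centres) / SQRT_S
    return f(x, *popt_region) / f(x, *popt_source)

# Background prediction in a signal region, bin by bin:
#   f(x, *popt_data_source) * extrapolation_factor(mjj_centres,
#                                                  popt_mc_region, popt_mc_source)
```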
Experimental uncertainties

The uncertainty in the combined 2015-2018 integrated luminosity is 1.7% [29], obtained using the LUCID-2 detector [30] for the primary luminosity measurements. This uncertainty affects the signal prediction in the model-dependent interpretation.

Other experimental uncertainties arise from corrections and calibrations applied to MC simulations. These uncertainties affect the background estimate via the extrapolation factors, as well as the MC prediction of the Z′ signals. An uncertainty is considered for the reweighting factors that correct the pile-up profile in MC simulations to match that in data. Uncertainties on the modelling of leptons arise from their momentum and energy scale calibration and resolution, as well as the trigger, reconstruction, identification, and isolation efficiencies [34,35]. Uncertainties on the modelling of jets mainly come from their energy scale (JES) and resolution (JER), containing effects from jet flavour composition, single-particle response, and pile-up [79,80]. An uncertainty is assigned for the efficiencies of the JVT requirement on jets [42]. Uncertainties are considered for the calibration of the b-tagging efficiencies, including the efficiencies of tagging b-jets as well as the rates of mis-tagging c-jets and light-flavour jets [45,46,81].

Uncertainties on the functional fit and extrapolation

Dedicated uncertainties are assigned for the data-driven background estimate and the MC extrapolation from the source region to the other regions. The uncertainties from each functional fit are obtained from the covariance matrix of the fit. Three statistically independent variations are extracted from an eigen-decomposition of the covariance matrix. This leads to three uncertainties associated with the functional fit to data and three associated with the fit to MC in the source region. In addition, three uncertainties are also associated with the fit of the MC in each of the six signal regions, which leads to 24 components of uncertainty in total. The resulting uncertainties due to the functional fits to data and MC in the source region are both correlated across the signal regions.

Theoretical uncertainties for the tt̄ background

An uncertainty of 50% in the normalisation of the tt̄+≥1b events as well as the tt̄+≥1c events is applied [75,76], and an uncertainty of 10% is considered for the tt̄+light events. The uncertainties due to the choice of generator and parton shower model used to simulate the inclusive tt̄ sample are evaluated by comparing the nominal tt̄ sample with alternative tt̄ samples, detailed in Sect. 4.
The uncertainties associated with the choice of NLO generator are estimated by comparing the predictions of Powheg Box and aMC@NLO. The effect of the choice of parton shower and hadronisation model is estimated by comparing the prediction of Powheg + Pythia 8 to that of Powheg + Herwig 7.04. These two uncertainties are split into four components, each affecting either only the shape of the mJJ distributions or the acceptance and migration, for the 3b regions and the ≥4b regions separately. This treatment is motivated by the different compositions of tt̄+light, tt̄+≥1c and tt̄+≥1b events in the two b-jet multiplicity bins, and by the large effect on the overall acceptance and migration of the events across the regions due to different parton shower and hadronisation models. Uncertainties due to missing higher-order QCD corrections are estimated by separately varying the renormalisation and the factorisation scales by factors of 2.0 and 0.5 in the nominal tt̄ sample and taking the envelope. Additionally, uncertainties in the amounts of initial- and final-state radiation (ISR and FSR) from the parton shower (PS) are assessed by respectively varying the corresponding parameter of the A14 PS tune and by varying the FSR renormalisation scale by factors of 2.0 and 0.625. The uncertainty related to the parton distribution function (PDF) is evaluated by using the PDF4LHC systematic variations [82].

Modelling uncertainties for non-tt̄ backgrounds

Non-tt̄ background processes represent a minor fraction of the total background. Therefore, the relevant uncertainties have a small impact on the result of the analysis. Similarly to the tt̄ background, uncertainties due to missing higher-order QCD corrections, the amounts of initial- and final-state radiation, and the parton distribution function are also considered for most of the non-tt̄ backgrounds.

An uncertainty of 30% in the total cross section of the three single-top-quark production modes is included. This conservative number is motivated by the uncertainties due to the modelling of several associated jets and heavy-flavour jets in the production. Uncertainties associated with the parton shower and hadronisation model and the generator choice are evaluated by comparing the nominal Powheg + Pythia 8 sample for each process with alternative samples produced with Powheg + Herwig 7 and aMC@NLO + Pythia 8. An additional uncertainty due to the interference between tW and tt̄ production at NLO is evaluated by comparing the nominal tW sample produced using the diagram removal scheme with an alternative sample produced with the same generator but using the 'diagram subtraction' scheme [67].

Modelling uncertainties in the tt̄W, tt̄Z, and tt̄H processes are evaluated in a similar way. Uncertainties of 60%, 15%, and 20% are applied to the tt̄W, tt̄Z, and tt̄H cross sections, respectively [83-85].

An uncertainty of 60% is assumed for the V+jets production cross section. It is estimated by adding in quadrature a 24% uncertainty for each additional jet, based on a comparison among different algorithms for merging LO matrix elements and parton showers [86]. An uncertainty of 20% is assumed for the SM tt̄tt̄ production cross section [87]. A conservative cross-section uncertainty of 50% is applied to all other small background processes.
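As a small illustration of the envelope prescription used above for the renormalisation- and factorisation-scale variations, the following sketch takes, per bin, the largest upward and downward deviations across a set of varied predictions. All yields are invented placeholders.

import numpy as np

nominal = np.array([100., 60., 35., 20., 10.])
variations = {
    "muR_x2.0": np.array([96., 58., 34., 19.5, 9.6]),
    "muR_x0.5": np.array([105., 63., 36., 20.8, 10.5]),
    "muF_x2.0": np.array([98., 59., 34.5, 19.8, 9.8]),
    "muF_x0.5": np.array([103., 61., 35.5, 20.4, 10.3]),
}
stack = np.vstack(list(variations.values()))
envelope_up = stack.max(axis=0) - nominal    # largest upward shift per bin
envelope_down = nominal - stack.min(axis=0)  # largest downward shift per bin
print("up:  ", envelope_up)
print("down:", envelope_down)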
Signal bias uncertainty

A dedicated uncertainty is considered in the model-dependent interpretation of the results, accounting for the bias on the extraction of the signal yields caused by the background model. The details of the signal extraction and the model-dependent interpretation are described in Sect. 8. To evaluate the size of the bias, a large number of pseudo-datasets are sampled using the expected background taken directly from MC simulations. The model-dependent signal extraction (as described in Sect. 8.3) is performed for each of these pseudo-datasets, taking into account all systematic uncertainties described previously. The resulting distribution of the signal strengths is fitted with a Gaussian function. The deviation of the fitted central value from zero is taken as the size of the bias. This procedure is repeated for all signal mass points. The three points with the largest bias are used to compute a second-order polynomial function. The eventual size of the bias for each mass point is evaluated from the fitted polynomial function. To obtain dedicated uncertainties for all choices of model parameters, this procedure is repeated for all distinct ct and chirality parameter θ values explored in this search. The uncertainty is determined using the MC signal template scaled by the corresponding value of the bias, and is implemented as an uncertainty on the background expectation. Depending on the analysis region and signal point, this translates to uncertainties of up to 14% in the individual signal regions, with the majority of values being a few percent in size.

Statistical analysis

The search for a resonance signal is conducted using the binned mJJ distributions in all signal regions. An approach with minimal model dependence is adopted, followed by a model-dependent interpretation. A profile likelihood fit is used to obtain the final background model and make further statistical inference regarding the presence of a signal. The search with minimal model dependence is performed using BumpHunter [88], a hypothesis-testing tool that searches for local data excesses compared to the expected background. To obtain the expected background, the profile likelihood fit is performed with a background-only hypothesis. In the model-dependent interpretation, the Z′ signal samples described in Sect. 4 are used to interpret the data. Profile likelihood fits are performed with the signal-plus-background hypothesis, testing the compatibility between data and the models with different mZ′, ct and chirality parameter θ.
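The following toy sketch illustrates the structure of the binned profile likelihood fit described in the next subsection: Poisson terms per bin, a signal-strength parameter mu, and a single Gaussian-constrained nuisance parameter scaling the background normalisation. It is a minimal stand-in for the RooStats-based fit actually used in the analysis; all yields and the assumed 5% background uncertainty are invented placeholders.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

background = np.array([100., 60., 35., 20., 10.])
signal = np.array([0.5, 2.0, 6.0, 2.0, 0.5])
data = np.array([102, 65, 38, 21, 9])
bkg_rel_unc = 0.05  # assumed 5% background normalisation uncertainty

def nll(mu, theta):
    # Expected yield per bin: signal scaled by mu, background scaled
    # by the Gaussian-constrained nuisance parameter theta.
    expected = np.clip(mu * signal + background * (1.0 + bkg_rel_unc * theta),
                       1e-9, None)
    poisson = np.sum(data * np.log(expected) - expected)  # up to const. ln(n!)
    return -(poisson + norm.logpdf(theta))

fit = minimize(lambda p: nll(p[0], p[1]), x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, theta_hat = fit.x

# Conditional (background-only) fit with mu fixed to 0, and the
# discovery test statistic q0 = -2 ln lambda(0) for mu_hat >= 0.
fit_b = minimize(lambda p: nll(0.0, p[0]), x0=[0.0], method="Nelder-Mead")
q0 = max(0.0, 2.0 * (fit_b.fun - fit.fun)) if mu_hat >= 0 else 0.0
print(f"mu_hat = {mu_hat:.2f}, theta_hat = {theta_hat:.2f}, q0 = {q0:.2f}")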
Profile likelihood fit

The statistical analysis is based on a binned profile likelihood function. The bin width is chosen to be 100 GeV, with two large bins of 500 GeV and 700 GeV at the high-mass end of the mJJ spectrum to avoid empty bins. Each bin in the signal regions is represented by a Poisson probability term for the observed data, with the total expected yield given by the background model described in Sect. 6 and the Z′ signal samples. The likelihood function L(μ, θ) is constructed as the product of the Poisson probability terms over all bins. This function depends on the signal-strength parameter μ, a multiplicative factor applied to the assumed pre-fit signal cross section, and on θ, a set of nuisance parameters encoding the effect of systematic uncertainties. The nuisance parameters are implemented in the likelihood function as Gaussian or log-normal constraints. The total number of expected events in a given bin therefore depends on μ and θ. The fit is performed by finding the values of μ and θ that maximise the likelihood function. In the case of a fit with the background-only hypothesis, μ is fixed to 0. The nuisance parameters θ allow variations of the expectations for signal and background according to the corresponding systematic uncertainties, whilst penalising the likelihood function for any deviation according to the specified constraints. The fit also reduces the impact of systematic uncertainties on the search sensitivity by exploiting the highly populated and background-dominated regions. The result of the profile likelihood fit is further used for the signal extraction.

Search with minimal model dependence

In the search using BumpHunter, the data and the expected background in each of the six fitted regions are compared in sliding windows of variable sizes. The smallest window is required to contain two bins with the binning shown in Figs. 7 and 8 in Sect. 9, corresponding to a width that is slightly smaller than the expected resolution extracted from the Z′ MC samples. The Poisson probability is evaluated for all windows. For each region, the window with the smallest Poisson probability is chosen as the most prominent window. The corresponding probability p_min is used to construct the BumpHunter test statistic t, defined as

t = −ln p_min if d > b, and t = 0 otherwise,

where d and b represent the number of observed data events and the expected background events in the window, respectively. A deviation of data from background is only considered as evidence against the background-only hypothesis if the data exceed the expectation. To obtain the local p-value and the significance of the most interesting bump, 10^5 pseudo-experiments are sampled from the expected background, and the t value from data is compared to the distribution of the t values from the pseudo-experiments. The global p-value and significance are then computed for the most prominent window of each region, taking into account the trial factors that incorporate the look-elsewhere effect [89].

Model-dependent interpretation

The model-dependent interpretation is based on the signal samples described in Sect. 4.
The signal strength is extracted by performing the profile likelihood fit with the signal-plus-background hypothesis. The fit and the subsequent statistical analysis are performed for each mass point. The test statistics are defined based on the likelihood ratio λ(μ) = L(μ, θ̂_μ)/L(μ̂, θ̂), where μ̂ and θ̂ are the values of the parameters that maximise the likelihood function, and θ̂_μ are the values of the nuisance parameters that maximise the likelihood function for a given value of μ. To evaluate the compatibility of the observed data with the background-only hypothesis, the test statistic

q0 = −2 ln λ(0) if μ̂ ≥ 0, and q0 = 0 otherwise,

is used. The resulting p-value represents the compatibility p0. A small p0 indicates that the background-only model does not describe the data well, and the significance of the corresponding signal is further examined. In the absence of any significant excess above the background expectation, upper limits on the signal production cross section at 95% confidence level are derived. The test statistic

qμ = −2 ln λ(μ) if μ̂ ≤ μ, and qμ = 0 otherwise,

is used with the CLs method [90,91]. Both p0 and the upper limits are computed using the asymptotic approximation [92]. All statistical analyses are performed using the RooStats framework [93-95].

Results

The results of the statistical analysis are presented in this section. Prior to the analysis using real data, the profile likelihood fit model and the statistical methods were tested against pseudo-datasets constructed using simulated events. Both the background modelling and the signal extraction were validated, using background-only and signal-plus-background pseudo-datasets. In particular, alternative tt̄ background predictions were considered when building the pseudo-datasets. They were used to stress-test the fit, especially the incorporated systematic uncertainties related to tt̄ background modelling. These alternative predictions include an enhanced composition of the tt̄+HF backgrounds, as well as a tt̄ sample generated with Sherpa 2.2.10, which is not used in the definition of any uncertainty. These stress-tests demonstrated the capability of the fit to constrain the relevant systematic uncertainties, and to model the data correctly in case of mismodelling in the extrapolation factors from the MC predictions. The search result with minimal model dependence is derived based on a simultaneous fit to all signal regions with a background-only hypothesis. The background estimate is examined in validation regions. This is done by propagating to the validation regions the post-fit nuisance parameters obtained from the fit to the signal regions. The resulting agreement between data and the estimated background in the validation regions is shown in Fig. 6. The largest discrepancies are evaluated and found to be insignificant after taking into account the look-elsewhere effect. Figure 7 shows the post-fit mJJ distribution with the background-only hypothesis compared to the observed data in the two most sensitive signal regions, along with the results from BumpHunter. The rest of the signal regions are shown in Fig.
8. All fitted distributions show reasonable agreement with data within uncertainties. The goodness of fit was evaluated using a likelihood-ratio test in which the likelihood of the nominal fit is compared to that of a saturated model [102]. The result indicates a good description of the data by the background model. The post-fit nuisance parameter values are all within 1σ of the prior uncertainty. No significant excess was found in the BumpHunter search. The most significant deviation between data and the background expectation was observed in the (2a, ≥4b) region at 1.2 TeV. The window with the largest deviation has a local significance of 1.04σ; the corresponding global p-value is 0.15. The expected distribution in the presence of a top-philic Z′ signal from MC simulation is illustrated in the top panels of Figs. 7 and 8. The top-philic Z′ signal considered has mZ′ = 1.5 TeV with ct = 1 and θ = π/2, and is normalised to an arbitrarily large cross section of 51 fb, obtained by scaling the cross section predicted by the simplified model [59] with these parameters by a factor of 100.

The model-dependent signal extraction also reveals no significant excess in data. For all model parameter choices, the fitted signal strength μ is compatible with zero. The smallest local p0 value is 3.8%, obtained for the 1.25 TeV mass point with ct = 1 and θ = 0. In addition, the fitted values of the nuisance parameters are compared to those obtained in the fit with the background-only hypothesis and found to be consistent. The observed and expected upper limits on the signal production cross section at 95% confidence level are shown for specific signal scenarios in Fig. 9. The observed (expected) limits range from 21 (14) fb to 119 (86) fb, depending on the choice of model parameters. The strongest exclusion is observed at mZ′ = 2.5 TeV with ct = 1 and θ = π/2. At the higher end of the mass range, the cross-section limits are stronger for the smaller couplings ct due to the narrower signal width. While the cross-section limits show no strong dependence on the chirality parameter θ, one can observe a stronger model constraint for θ = 0 because of the additional contributions from tjZ′ and tWZ′ production to the expected signal cross section. For ct = 4, a top-philic Z′ mass of 1.0 TeV is excluded for both chirality parameter choices (θ = 0 and π/2).
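The BumpHunter procedure quoted above (local significance from the most prominent window, global p-value from pseudo-experiments) can be illustrated with the following toy scan. It is a simplified stand-in for the actual tool [88]; the window widths, yields and number of pseudo-experiments are invented placeholders.

import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
background = np.array([200., 120., 70., 40., 22., 12., 6., 3.])
data = np.array([205, 118, 82, 51, 20, 13, 5, 3])

def most_prominent_window(obs, bkg, min_width=2, max_width=4):
    # Scan all windows; keep the smallest Poisson probability of an excess.
    p_min = 1.0
    for w in range(min_width, max_width + 1):
        for i in range(len(obs) - w + 1):
            d, b = obs[i:i + w].sum(), bkg[i:i + w].sum()
            if d > b:  # only excesses count as evidence
                p_min = min(p_min, poisson.sf(d - 1, b))  # P(n >= d | b)
    return p_min

t_data = -np.log(most_prominent_window(data, background))

# Calibrate with pseudo-experiments drawn from the background-only
# expectation; this accounts for the look-elsewhere effect over windows.
t_toys = np.array([-np.log(most_prominent_window(rng.poisson(background),
                                                 background))
                   for _ in range(2000)])
global_p = np.mean(t_toys >= t_data)
print(f"t = {t_data:.2f}, global p-value = {global_p:.3f}")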
Table 1 lists the impacts of various groups of uncertainties relative to the total uncertainty on the fitted signal strength for two examples of the top-philic Z′ signal: (mZ′ = 1.5 TeV, ct = 1, θ = 0) and (mZ′ = 3.0 TeV, ct = 4, θ = 0). The results are dominated by systematic uncertainties. The most important source of systematic uncertainty is the modelling of the tt̄+jets background. The uncertainties due to the migration and acceptance effects of the ME and PS generator choices in the ≥4b regions play a dominant role. They contribute 26% and 31% (36% and 48%) of the total uncertainty, respectively, for the signal with mZ′ = 1.5 TeV and ct = 1 (mZ′ = 3.0 TeV and ct = 4). This reflects the large difference between the predictions of the different generators in the phase space explored by this search. The uncertainties in the normalisation of tt̄+≥1b also contribute 18% of the total uncertainty for both signal hypotheses shown in Table 1. The uncertainties from the jet energy scale and resolution are also an important source, given the large jet multiplicity in the signal regions. This is followed by the uncertainties from the data-driven background estimate.

[Fig. 9: In each plot two coupling-strength scenarios are shown, for ct = 1 (in gray) and ct = 4 (in black). The light (dark) blue curves illustrate the signal cross sections based on the model described in Ref. [59] for ct = 1 (ct = 4). The green bands surrounding the expected limits for ct = 1 and ct = 4 correspond to the 68% confidence intervals. All limit lines are obtained by interpolating linearly between the different mass hypotheses.]

[Table 1: The contribution from different systematic uncertainties, grouped into categories, relative to the total uncertainty on the fitted signal strength for two signal scenarios with θ = 0. For each category, the fit is repeated with the corresponding group of nuisance parameters fixed to their best-fit values. The contribution from each category is then evaluated by subtracting in quadrature the uncertainty on the signal strength obtained in this fit from that of the full fit with all uncertainties. The percentage is calculated relative to the total uncertainty from the full fit. The contribution from the statistical uncertainty is also shown. The total systematic uncertainty differs from the sum in quadrature of the different groups due to the correlations among the nuisance parameters in the fit.]

Conclusion

A search for a top-philic heavy resonance produced in association with a top quark or a top-quark pair and decaying to tt̄ is presented. The search is performed using 139 fb−1 of pp collision data at √s = 13 TeV collected by the ATLAS detector at the LHC. Large-R jets are used as proxies of the top quarks from the heavy resonance decay. The invariant mass spectrum of the two large-R jets with the largest pT is used to test for the presence of a resonance signal in the range of 1.0 TeV to 3.2 TeV. Events in the single-lepton final state are selected and categorised according to the number of additional jets and b-tagged jets. The search is conducted in regions with at least three b-tagged jets, with and without assuming a specific Z′ signal model. No significant excess was observed above the expected background. The upper limits on the Z′ production cross section at 95% confidence level are computed for signals with six values of mZ′ between 1.0 TeV and 3.0 TeV based on a simplified model. The observed (expected) limits range from 21 (14)
fb to 119 (86) fb, depending on the choice of model parameters.

Acknowledgements […] Trust, United Kingdom. The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN, the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (The Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA), the Tier-2 facilities worldwide and large non-WLCG resource providers. Major contributors of computing resources are listed in Ref. [103].

Data Availability Statement This manuscript has no associated data or the data will not be deposited. [Authors' comment: All ATLAS scientific output is published in journals, and preliminary results are made available in Conference Notes. All are openly available, without restriction on use by external parties beyond copyright law and the standard conditions agreed by CERN. Data associated with journal publications are also made available: tables and data from plots (e.g. cross section values, likelihood profiles, selection efficiencies, cross section limits, …) are stored in appropriate repositories such as HEPDATA (http://hepdata.cedar.ac.uk/). ATLAS also strives to make additional material related to the paper available that allows a reinterpretation of the data in the context of new theoretical models. For example, an extended encapsulation of the analysis is often provided for measurements in the framework of RIVET (http://rivet.hepforge.org/). "This information is taken from the ATLAS Data Access Policy, which is a public document that can be downloaded from http://opendata.

[Fig. 2: The mJJ distributions from the top-philic Z′ signals compared to the total background after the event preselection. The signal distributions are shown for two values of the chirality angle, (a) θ = 0 and (b) θ = π/2. In each case, four signal samples with different mass resolutions are presented for the possible combinations of mZ′ = 1.5 TeV, …]

[Fig. 3: The distributions of N_add.-jets and N_b-jets for the signals and the background.]
DEVELOPMENT OF A GENERALIZED MODEL FOR THE PROTECTION OF A CRITICAL INFRASTRUCTURE OBJECT FROM THE DESTRUCTIVE IMPACT OF AIR ATTACK MEANS

Purpose: development of a generalized model for the protection of critical infrastructure objects from the destructive action of an air attack. Theoretical framework: based on the analysis of the use of cruise missiles with radar correlation-extreme guidance algorithms to damage critical infrastructure objects in the conditions of the Russian-Ukrainian war, a generalized model of the protection of critical infrastructure objects from the destructive action of an air attack has been developed. Methods: determined by the set of scientific and research tasks to be solved, and carried out using methods of system analysis, in studies of the distribution function of the electronic subsystem, and numerical modeling methods, in studies of the main electrophysical properties of critical infrastructure objects. Results and conclusions: a generalized model of the protection of a critical infrastructure facility against the destructive action of air attack means has been developed, which allows for the assessment of risks and their management. The developed model can quantify uncertainties, simulate potential scenarios and assess the impact of various factors on the level of risk. This allows decision makers to make informed choices and develop strategies to mitigate risks.

INTRODUCTION

The first mass-produced cruise missile, the German V-1, was created during the Second World War. One of the main difficulties of its development was the guidance system, which was as simple as possible: the autopilot monitored the course and altitude and measured the flight range. As soon as the V-1 had flown the specified distance, the autopilot directed the missile into a dive. Later, the Germans developed the first inertial guidance system, based on analog instruments with a gyroscope and an accelerometer, for the V-2 ballistic missile (Christopher, 2013, p. 26). It was therefore quite logical to invent a mechanism that would correct the accumulated navigation error. For cruise missiles that had to fly hundreds of kilometers and stay in the air for hours, this mechanism became TERCOM. Its principle is that the missile scans the surface below it and compares it with a stored reference. One of the simplest implementations uses terrain data: elevation differences recorded by a radio altimeter. In this case, the missile route can be divided into control points, maps of which are stored in the missile's memory. These should be areas with contrasting topography, for example, rivers with steep banks, a network of ravines, or even individual large buildings. On arrival at each control point, the missile compares the measured profile with the reference, finds its position, and corrects the readings of the inertial system, resetting the accumulated error. Currently, the TERCOM system uses not only terrain data but also visual images, because the route may not have a characteristic relief. Such a system is much more complex, because it requires work at the level of pattern recognition, but this technology, called DSMAC, was successfully mastered in the United States in the 1980s and integrated into the Tomahawk Block II cruise missile. The advent of satellite navigation fundamentally changed the situation, because it became possible to constantly receive one's coordinates, altitude and speed. It was for this purpose that the US began to deploy the GPS system in the 1970s, and in the 1980s the USSR began to deploy its GLONASS.
Regarding the use of cruise missiles by the Russian Federation, the situation is as follows. The air-launched Kh-101 and Kh-555 have all four components of navigation. Kalibr missiles most likely do not have DSMAC (Dementiiuk et al., 2023, p. 29-37). In the realities of the Russian Federation, another important factor is the availability of detailed, up-to-date and accurate radar and optical maps that are loaded into the missile's memory. The enemy is now increasingly using Kh-59M missiles; the most common variants most likely use only inertial and satellite navigation, with a radar or television guidance system included for terminal guidance (Kozubenko O. & Shulman O., 2022). As for the Kh-22, it has only inertial navigation on the cruise section and radar guidance on the terminal section, both with extremely low accuracy on the Soviet technology base of the 1960s and 1970s. That is, it is launched in the direction of the target, which, moreover, should be as radio-contrasting as possible. Accordingly, the Kh-22 is capable of causing much greater destruction than most modern Russian missiles, but it has a significant drawback: low accuracy (Kozubenko O. & Shulman O., 2022). The highest accuracy was achieved in the mode of active operation of the homing head over the entire flight path (Turinskyi et al., 2019, p. 542-548). But in this mode, the missile becomes visible to air defense systems at a long distance. Most likely, Russia uses a combined guidance mode: the missile flies autonomously for most of the flight, and only at a certain distance from the target does the homing head turn on. In this mode, the accuracy drops significantly, to several hundred meters, but the chances of interception decrease. On December 1, 2022, at a briefing of representatives of the Security and Defense Forces of Ukraine, fragments of the warhead of the Kh-55SM missile, which Russia uses during shelling of Ukraine, were demonstrated (Skoblikov O. & Knyazyev, 2012, p. 1-8). This is a modification, with increased range, of the Soviet Kh-55 cruise missile, which Russia has used to attack Ukraine since March 2022 from Tu-95 and Tu-160 strategic bombers. The Kh-55 and Kh-555 missiles fly at subsonic speeds, following the terrain at extremely low altitude. They are intended for use against stationary, strategically important objects. With the beginning of hostilities, information appeared about the first use of Kh-59 missiles. It is an old Soviet missile from the 1980s, but fairly accurate: its reported circular error probable is less than 10 m, although the USSR and Russia tend to exaggerate the real accuracy of their weapons. In order to terrorize and intimidate Ukrainians, Russia strikes almost every day with cruise missiles, in particular Kalibr, which are launched both from Iskander operational-tactical missile systems and from ships. Launches are carried out beyond the range of Ukrainian weapons (Kozubenko O. & Shulman O., 2022). There is nothing revolutionary in the Kalibr cruise missile; it is an updated version of the Soviet-developed 3M10 missile, which in turn was a near-copy of the American Tomahawk cruise missile. The Soviet Kh-55 and its more modern modification, the Kh-555, became an alternative to long-range Kalibr missiles. But these missiles can no longer be called highly accurate: their circular error probable is 20-100 m. Russia uses P-800 Onyx cruise missiles to strike targets in southern Ukraine. This missile was developed in the late 1970s as a medium-range anti-ship missile.
A missile with a reduced flight range (300 km versus 600 km) is exported under the name Yakhont. In the Russian-Ukrainian war, the Kh-101 is also used: the latest Russian cruise missile, launched from Tu-160 and Tu-95MS missile carriers. It is difficult to detect, intercept and shoot down by means of air defense (Datsenko, 2022). The peculiarity of this cruise missile is that it is able to change its target even in flight.

A large number of works have been devoted to the development of methods and means of passive protection of objects, carried out by such well-known scientists as V. Gorodnov (Horodnov et al., 2004) and others. The existing methods and means are not able to ensure the necessary effectiveness of the protection of critical infrastructure objects against the destructive effect of cruise missiles with radar guidance, due to the peculiarities of these objects and the insufficient number of air defense means available for the protection of distributed critical infrastructure objects (Sytenko, 1965, p. 1-183). Therefore, a contradiction arises, which is due, on the one hand, to the destructive effect of cruise missiles with radar guidance and, on the other hand, to the lack of technologies, methods and means that would ensure the necessary level of protection of critical infrastructure objects without harming their functioning, and that could be implemented at critical infrastructure facilities without significant financial losses or the involvement of air defense resources (Iasechko M., Atamanenko I. et al., 2019, p. 614-617). The research idea is aimed at increasing the level of protection of critical infrastructure objects in the event of repeated attacks by weapons with radar correlation-extreme guidance algorithms. The purpose of the article is to create a model for the protection of a critical infrastructure object under the conditions of the destructive action of air attack means. The object of the research is the process of protecting technical buildings and turbine (engine) halls of a critical infrastructure object, based on changing the contrast of the object, using false targets, physical reflection, and changing the effective scattering area of the object. The subject of the research is the methods of protecting a critical infrastructure object from the destructive impact of air attack means.

THEORETICAL FRAMEWORK

When comparing various mathematical models that provide the calculation of the desired parameters of the protection of critical infrastructure objects, the problem arises of quantitatively measuring the absolute, or at least the relative, effectiveness of the models. Such a task leads to the need to choose an appropriate indicator of the effectiveness of mathematical models, one which quantitatively reflects the degree of achievement of the goal of modelling (Gorodnov, 1987, p. 273-284). This indicator is naturally chosen based on the purpose of applying the mathematical model. Usually, the goal of modeling the protection of a critical infrastructure object is to optimize the actions taken to protect the object and to increase its readiness and the effectiveness of its cover, that is, to increase the effectiveness of the individual elements of protection. Then, from the point of view of the protection of the critical infrastructure object, the model that is used should provide an increase in the effectiveness of the cover due to the optimization of the protection parameters.
If we are talking about optimal parameters (that is, the best in the given sense), then any deviation from the optimal values of the protection parameters will lead to a decrease in effectiveness, that is, to losses in the effectiveness of the cover. Therefore, the better the model, the smaller the a posteriori loss of effectiveness of the protection of the critical infrastructure object it provides. The ideal model should thus provide minimal efficiency losses Пеі caused by errors in the input data of the model. Taking into account the above, to compare the quality (efficiency) of two models, the evaluated one and the existing (available) one, it is advisable to introduce a dimensionless (relative) indicator of the effectiveness of the evaluated model of the form

W = (Пен − Пео) / (Пен − Пеd),   (1)

where Пен, Пеd and Пео are the expected losses of effectiveness of the protection of the critical infrastructure object with the direct implementation of the protection parameters formed using the existing model, the ideal model, and the one being evaluated, respectively. When calculating the values of such an indicator, the units of measurement of the effectiveness of the protection of the critical infrastructure object are immaterial, and the errors in the estimates of the efficiency losses Пе for the analyzed models tend to compensate each other. It can be shown that the values of the efficiency indicator lie in the range from negative infinity to unity (because the loss of efficiency of the protection of the critical infrastructure object when using the ideal model is, by definition, the smallest possible among all models). In this sense, the indicator is satisfactory. However, its direct measurement is hardly possible. Therefore, it is necessary to find a way to calculate it from the results of measurement or calculation of secondary (indirect) parameters of the model, which directly affect the quality of the solution of the protection optimization problem. Leaving aside the methods directly used in modeling, the main quality parameters of the model that can be directly or indirectly measured are the reliability of the calculations, the operational efficiency (timeliness) of the modeling, and the completeness and importance of the input data used to obtain the result (that is, taken into account in the model). For the following considerations, it is necessary to make a number of basic assumptions. First, let us assume that the various efficiency losses Пе*і, which are determined by the inaccuracy of determining each of the Q (і = 1, ..., Q) protection parameters, are independent and additive from the point of view of the overall efficiency losses that determine the quality of the models (where * stands for the index "н", "d" or "о" of the corresponding model), i.e.

Пе* = Σ_{і=1}^{Q} Пе*і.   (2)

Secondly, we introduce the notation Пбі for the losses of effectiveness of the cover of the critical infrastructure object when the simulation results were, for some reason, not used in the protection of the object (without using the model), and Пм*і for the losses of effectiveness of the cover obtained when the results of the simulation and the relevant recommendations were used. We denote by P* the probability of obtaining simulation results in time when using the given model, i.e. within the available time t < tн.
Then the loss of effectiveness Пе*і for each of the protection parameters can be estimated as the mathematical expectation of the loss of effectiveness of covering the critical infrastructure object:

Пе*і = P*·Пм*і + (1 − P*)·Пбі.   (3)

Third, suppose that each of the models (evaluated, available, and ideal) provides the determination of all Q parameters that are sought, but that the efficiency of determining each parameter is, in general, different. Then, for each i-th parameter (і = 1, ..., Q) and for each of these models, we may write the reduction in the loss of cover effectiveness achieved by using the model as

S*і = Пбі − Пм*і.   (4)

Taking into account the fact that the reduction in the effectiveness of the cover due to the failure to use the ideal model is always greater than that due to the failure to use any other model, that is, the obvious inequalities Sdі ≥ Sоі and Sdі ≥ Sні hold, let us move on to the relative values of the reduction in the loss of cover efficiency:

R*і = S*і / Sdі,   (5)

where the value of R*і is determined on the interval

0 ≤ R*і ≤ 1.   (6)

If we define the relative weight of the efficiency gains provided by each і-th (і = 1, ..., Q) protection parameter in the form

aі = Sdі / Σ_{k=1}^{Q} Sdk,   (7)

then, after the numerator and denominator of expression (1) are divided by the sum of all values of the reduction in the effectiveness of the cover Sdі over і = 1, ..., Q, the desired model efficiency indicator takes the form

W = (Σ_і aі·Pоі·Rоі − Σ_і aі·Pні·Rні) / (1 − Σ_і aі·Pні·Rні).   (8)

It is obvious that the expression aі·Pоі·Rоі simultaneously characterizes the reliability and timeliness provided by the evaluated model when calculating the i-th protection parameter, as well as the importance of this parameter, which ultimately determines the contribution of the evaluated model, through this i-th protection parameter, to reducing the losses of cover effectiveness in comparison with the decision-making situation in which the evaluated model is not used. Then the value

Y* = Σ_{і=1}^{Q} aі·P*і·R*і   (9)

approximately characterizes the contribution of the analyzed (*) model to the reduction of efficiency losses over all Q protection parameters and thus has the sense of the degree of expected completeness of modeling for the analyzed (*) model. Taking into account the above considerations, the indicator for the comparative assessment of the effectiveness of models can be written in the general form

P*(t) = 1 − exp(−t/Tср),   (10)

where Tср has the interpretation of the average time required for simulation (calculation) and obtaining the result. In the case when Tср takes a constant value T, the expression takes the simpler form

P(t) = 1 for t ≥ T, and P(t) = 0 for t < T.   (11)

Thus, the probability P(t) essentially determines the efficiency of obtaining a result under known restrictions on the available time t and the required time T for modeling or calculations. Due to the fact that an increase in the degree of adequacy of the model, and its approach to the ideal model, is accompanied by a decrease in the absolute value of the methodological error δмет (or of its dispersion Dмет), it is appropriate to take a ratio based on δмет (or on Dмет) as a quantitative indicator of the degree of inadequacy of the real model. In the general case, as the degree of adequacy of the model to the real process increases, the value of this indicator approaches zero; on the contrary, it increases with increasing inadequacy of the model. In a number of research works, including those carried out under the leadership of V. Horodnov (Gorodnov, 1987, p.
289-296), it is shown that the value of the relative error βj (j = 1, 2, 3, 4), depending on the method of accounting for the factors, usually lies within the following limits: β1 = 0 when the factor is taken into account directly, by setting its current value corresponding to the value in the real process; β2 = 0.4-0.49 with simple generalization (replacement of a set of various but physically homogeneous factors by one factor); β3 = 0.6 with functional and conceptual generalization of disparate factors with the aim of representing them in the model by one representative value; β4 = 1.0 with indirect or implicit consideration of the factors. Knowing the relative weights aі of the significant factors and the methods of their generalization in the model allows, after sufficiently complex mathematical transformations, the value of the indicator of inadequacy of the model to the real process to be determined directly:

Q̃ = Σ_j βj · Σ_{і∈qj} aі,   (13)

where Q̃ is the value of the model inadequacy indicator; aі are the weights of the importance of taking into account the i-th factor in the model, in relative units; qj is the set of factors taken into account in the model by the j-th method of generalization; and βj is the relative average value of the error introduced into the calculations due to inaccurate (generalized) consideration of the factors. Correspondingly, the value of the model reliability indicator R takes the form

R*k = 1 − Σ_j βj · Σ_{і∈g*jk} aі,   (14)

where R*k is the reliability of the determination of the k-th parameter; aі is the importance of taking into account the i-th factor in the model; g*jk is the set of factors that are taken into account by the j-th method of generalization; and βj is the relative average value of the error introduced into the calculations due to inaccurate (generalized) consideration of the factors. Thus, a concise method for the practical calculation of the effectiveness of mathematical models reduces to the following. First, each of the Q sought protection parameters is determined and characterized by its importance ak (priority when making a decision), by the efficiency P*(t) of calculating its value using the appropriate model (usually all Q parameters have the same value of the efficiency indicator if calculated using the same model), and by the reliability R*k. The symbol (*) takes the value of the model number. Then, taking into account the need to have estimates of the values of all parameters before the decision is made, the value Y*(t) is calculated, which roughly characterizes the contribution of the considered model to the reduction of efficiency losses for all sought parameters, and thus has the sense of the degree of its expected completeness. The generalized indicator of the effectiveness of the model then has the form

W = (Y2(t) − Y1(t)) / (1 − Y1(t)),   (15)

where Y1(t) and Y2(t) are the expected completeness of consideration of significant factors when using the first (for example, the existing) model and the second (for example, the developed) model, respectively.

METHODOLOGY

The article uses the method of system analysis and the method of mathematical modeling. The method of system analysis is used to study, evaluate and understand the complex system of protection of a critical infrastructure facility. It involves breaking down the system into its components, studying their relationships, and analyzing how they function together to protect the facility from air attack. This method aims to identify problems, deficiencies or opportunities for system improvement, and to propose solutions or improvements.
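The effectiveness-indicator calculation developed in the theoretical framework above can be illustrated with the short sketch below, which assumes the reconstructed forms of Eqs. (9), (14) and (15): the reliability of a parameter is one minus the weighted generalization errors βj of its factors, the completeness of a model is Y = Σ aі·Pі·Rі, and two models are compared through W = (Y2 − Y1)/(1 − Y1). All weights, method assignments and timeliness values are invented placeholders, not measured data.

def reliability(weights, methods, beta=(0.0, 0.45, 0.6, 1.0)):
    # R_k = 1 - sum_j beta_j * (summed weights of factors treated by method j)
    return max(0.0, 1.0 - sum(beta[m] * w for w, m in zip(weights, methods)))

def completeness(importance, timeliness, reliabilities):
    # Y = sum_i a_i * P_i(t) * R_i over the Q protection parameters
    return sum(a * p * r for a, p, r in zip(importance, timeliness, reliabilities))

importance = [0.5, 0.3, 0.2]   # a_i, summing to 1
timeliness = [0.9, 0.9, 0.9]   # P_i(t): probability of a timely result

# Factor weights and generalization methods (0 = direct, 1 = simple,
# 2 = functional/conceptual, 3 = implicit) for each protection parameter.
existing = [reliability([0.3, 0.2], [1, 3]),
            reliability([0.25, 0.25], [2, 2]),
            reliability([0.4, 0.1], [1, 1])]
developed = [reliability([0.3, 0.2], [0, 1]),
             reliability([0.25, 0.25], [0, 2]),
             reliability([0.4, 0.1], [0, 0])]

y1 = completeness(importance, timeliness, existing)
y2 = completeness(importance, timeliness, developed)
w = (y2 - y1) / (1.0 - y1)  # generalized indicator, Eq. (15)
print(f"Y1 = {y1:.3f}, Y2 = {y2:.3f}, W = {w:.3f}")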
The method of mathematical modeling is used for the analysis and research of the protection of a critical infrastructure facility against enemy air attack. It is a technique used to model, analyze and predict the behavior and results of the protection system of a critical infrastructure object using mathematical principles and formulas. The resulting mathematical model captures and describes the interaction of the system components and their influence on each other.

RESULTS AND DISCUSSION

It is important to create a mathematical model for assessing the probability of providing protection of critical infrastructure objects from air enemy strikes. Mathematical models make it possible to systematically analyze the risks associated with the protection of critical infrastructure objects from strikes by an aerial enemy. Taking into account factors such as geographic location, potential threats and infrastructure vulnerabilities, the model helps estimate the likelihood and potential impact of such events. Physical security measures (including technologies) are used to counter air attack means at critical infrastructure facilities (Iasechko, 2017, p. 18-21). This is layered protection, in which security relies on multiple levels of different measures. The basic principle is that the security of the infrastructure is not significantly impaired by the loss of any individual layer. To detect any unauthorized access and mitigate the threat before it can reach core facilities, a multi-layered approach can include the following:

• delineation of the perimeters of the critical infrastructure facility area and protection by physical barriers;
• patrolling and sufficient supervision;
• access control, with additional security features used to increase its performance or efficiency;
• use of technologies such as methods and/or techniques of verification.

Physical security measures must be supported by properly trained personnel, robust and reliable comprehensive emergency planning, and concise, well-written security plans and orders. Thus, the mathematical model for estimating the probability of providing cover for critical infrastructure objects from air enemy strikes is important for risk analysis, cost estimation, and resource allocation. It enables understanding and managing the risks associated with such events, supporting the resilience and recovery of critical infrastructure in challenging circumstances.

CONCLUSIONS

The research carried out in this article makes it possible to develop a mathematical model for ensuring the protection of critical infrastructure objects from the destructive impact of various types of air attack means (Iasechko M., Kolmykov M. et al., 2020, p. 1380-1384). A mathematical model can be used to predict future events or outcomes. By analyzing input data and using mathematical techniques, the model can project trends, estimate probabilities, and provide valuable information for decision-making. The mathematical model of the protection of a critical infrastructure object from the destructive influence of air attack means allows the modeling and analysis of events without the need to conduct expensive or lengthy physical experiments (Iasechko M., Larin V. et al., 2019, p. 3566-3571). Using mathematical equations and computational tools, a wide range of possibilities can be explored, saving resources and speeding up the decision-making process (Nikoliuk et al., 2023; Zelenin, 2023).
In general, a mathematical model of the protection of a critical infrastructure object from the destructive effects of air attack provides a powerful basis for understanding, predicting and optimizing air defense, contributing to progress in various fields and enabling evidence-based decision-making.
Prebiotic Syntheses of Organophosphorus Compounds from a Reduced Source of Phosphorus in Non-Aqueous Solvents

Reduced-oxidation-state phosphorus (reduced P, hereafter) compounds were likely available on the early Earth via meteorites or through various geologic processes. Due to their reactivity and high solubility, these compounds could have played a significant role in the origin of various organophosphorus compounds of biochemical significance. In the present work, we study the reactions between reduced P compounds and their oxidation products with three nucleosides (uridine, adenosine, and cytidine), with organic alcohols (glycerol and ethanolamine), and with the quaternary ammonium organic compound choline chloride. These reactions were studied in the non-aqueous solvent formamide and in a semi-aqueous solvent comprised of urea: ammonium formate: water (UAFW, hereafter) at temperatures of 55-68 °C. The inorganic P compounds generated through Fenton chemistry readily dissolve in the non-aqueous and semi-aqueous solvents and react with organics to form organophosphites and organophosphates, including those identified as phosphate diesters. This dual approach, (1) the use of non-aqueous and semi-aqueous solvents and (2) the use of a reactive inorganic P source to promote phosphorylation and phosphonylation reactions of organics, readily promoted anhydrous chemistry and condensation reactions, without requiring any additive, catalyst, or other promoting agent under mild heating conditions. We also present a comparative study of the release of P from various prebiotically relevant phosphate minerals and phosphite salts (e.g., vivianite, apatite, and the phosphites of iron and calcium) into formamide and UAFW. These results have direct implications for the origin of biological P compounds from non-aqueous solvents of prebiotic provenance.

Introduction

Phosphorus (P) plays a significant role in all living forms as an essential component involved in metabolism and biochemical reactions [1,2]. Ionized phosphate esters are ubiquitous in biochemistry for two reasons: (1) metabolites should be charged to prevent the loss of these compounds through the lipid-based cell membrane, and (2) the charge must be negative so as to repel nucleophiles, making the compounds able to resist breakdown by hydrolysis [1,2]. Phosphate uniquely satisfies these requirements [3-6]. Phosphorus is hence considered to have played a key role in the origin of life on the early Earth, as suggested previously [7-10]. On the early Earth, P is assumed to have been present mainly in the form of phosphates (including orthophosphate minerals) such as apatite (Ca5(PO4)3(F,Cl,OH)), whitlockite (Ca9(Mg,Fe)(PO4)6PO3OH), and brushite (CaHPO4·2H2O) [4,9,11].
Calcium phosphate minerals are the dominant form of phosphates [12]. These phosphate minerals, considered to be prebiotically relevant [4,12], are poorly soluble in water and as such react poorly with organics. The liberation of P (as phosphate) from rocks takes place through the dissolution of various phosphate minerals such as apatite. Mineral dissolution is pH-dependent [12], and at typical pH (~7), P is minimally available. This low solubility and reactivity of the phosphate minerals [13-15] is considered to be an issue in the realm of prebiotic chemistry, known as "the phosphate problem" [16]. This problem could plausibly have directly impacted prebiotic phosphorylation on the early Earth, as the formation of the C-O-P linkage requires condensation reactions that are thermodynamically disfavored [17] when water is considered the major solvent on the early Earth. The prebiotic formation of P compounds of biological relevance has therefore been challenging [18].

One prebiotically plausible route to address the low reactivity of P towards various organics is the use of non-aqueous solvents in lieu of water. If water is removed by evaporation from a warm pond containing prebiotic reagents, phosphorylation can readily occur [12]. Non-aqueous solvents similarly promote condensation, leading to prebiotic phosphorylation. Formamide (HCONH2) has been suggested to be one of the earliest, prebiotically relevant anhydrous solvents [15]. This organic compound is both a reactant and a solvent under prebiotic conditions [19-22]. A route to the prebiotic formation of formamide from simple precursor molecules such as HCN, NH3, and CO has been suggested [23,24]. Moreover, formamide has also been detected in the interstellar medium [25]. Other examples of prebiotically relevant anhydrous solvents include deep eutectic solvents such as mixtures of urea and choline chloride [26-29]. Possibly, to date, one of the most prebiotically plausible solvents is a mixture of urea, ammonium formate, and water [30]. The work by Burcar and colleagues showed efficient phosphorylation of nucleosides in this solution mixture, even when apatite was used as the phosphorylation agent [30]. Heating this semi-aqueous solvent mixture (urea, ammonium formate, and water) at 70 °C is known to partially transform the ammonium formate to formamide, thus providing the anhydrous conditions required for phosphorylation [30].

Another route to the facile formation of organophosphorus compounds is the use of reduced-oxidation-state P compounds (reduced P, hereafter) [16,31]. These inorganic reduced P compounds can be about 10^3-10^6 times more soluble in water than orthophosphate in the presence of divalent cations [3]. The prebiotic plausibility of reduced P compounds on the early Earth is supported by the detection of phosphonic acids in the Murchison meteorite [32], the detection of phosphite in hydrothermal environments [33], the interstellar synthesis of phosphorus oxoacids [34], the natural reduction of phosphate into phosphite [35], and the prebiotic syntheses of several phosphonic acids [36]. An additional source of these reduced P compounds is extraterrestrial impacts that delivered the meteoritic mineral schreibersite, (Fe,Ni)3P [37], which releases various inorganic P species upon aqueous corrosion and is considered a significant source of various organophosphorus compounds of prebiotic origin [38].
These reduced P compounds also undergo condensation reactions in the presence of urea under mild heating conditions (i.e., heating to dryness), forming energetic condensed reduced P compounds, including pyrophosphite and the mixed-valence compound isohypophosphate [39]. These high-energy condensed P compounds react with organics to form organophosphorus compounds [39].

The reduced P compounds oxidize into phosphate, PO4^3-: (1) in the presence of ultraviolet light and H2S/HS-, via a thiophosphate intermediate [40]; (2) by auto-oxidation under mild heating and in the presence of condensation agents [39]; and (3) by oxidation with H2O2 catalyzed by Fe2+ [41], the Fenton reaction. The Fenton reaction produces reactive •OH and •OOH radicals that oxidize reduced P compounds by cleaving the H-P bond to generate a phosphite radical. Phosphite radicals are disproportionated to phosphate (PO4^3-) and condensed phosphates such as pyrophosphate ((HP2O7)^3-), triphosphate ((H3P3O10)^2-), and trimetaphosphate (P3O9^3-) [41]. The Fenton reaction requires H2O2, which would have been a strong oxidant in the anoxic prebiotic environments [42]. Peroxide could have formed via photolysis of atmospheric water [43,44] or ice [42]. Dry, cold, and low-oxygen conditions would have promoted the formation of H2O2 through photolysis reactions of H2O in Archean atmospheres [42,45,46]. Water ice that would have been part of glaciers during "Snowball Earth" events [42] could also have undergone photolysis to form H2O2. Another route to forming H2O2 involves the abrasion of quartz surfaces, which would form reactive free radicals that could oxidize water to H2O2 and O2 [47].

In our previous studies, we demonstrated that inorganic P compounds generated through the Fenton chemistry of hypophosphite actively react with nucleosides in water in the presence of urea and NH4+ to generate phosphite and phosphate esters, including dimers (nucleoside-phosphate-nucleoside) and cyclic organic phosphates [48]. In the present study, we report plausible Fenton reactions of hypophosphite in non-aqueous solvents such as formamide and in a semi-aqueous solvent composed of urea, ammonium formate, and water (UAFW, hereafter). We show that reduced P compounds and their oxidation products generated via Fenton reactions can react with organics in non-aqueous and semi-aqueous solvents to form organophosphorus compounds of biological significance. We also investigate the release of inorganic P from various prebiotically relevant phosphite minerals, i.e., the phosphites of calcium and iron, into non-aqueous solvents. Finally, we compare the release of P from these iron and calcium phosphites with that from their phosphate counterparts, such as vivianite and apatite, under the same conditions, and also compare the molarities of the respective solutions.

Materials and Methods

Deionized water (DI, hereafter) was obtained in-house by using a Barnstead (Dubuque, IA, USA) NANOpure® Diamond Analytical combined reverse-osmosis-deionization system [39]. The semi-aqueous solvent UAFW was prepared as in prior studies, using a 1:2:4 molar ratio of urea: ammonium formate: water [30]. This mixture was transferred to a glass vial of 20 mL capacity, which was sealed, followed by heating at 65 °C until dissolved. The resulting UAFW solvent was transparent and all the contents were completely dissolved. The solvent was prepared and stored at 4 °C.
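As a small worked example of the 1:2:4 molar-ratio preparation just described, the following sketch computes the reagent masses for an arbitrarily chosen 50 mmol batch of urea; the molar masses are standard values.

M_UREA = 60.06              # g/mol, CO(NH2)2
M_AMMONIUM_FORMATE = 63.06  # g/mol, HCOONH4
M_WATER = 18.02             # g/mol, H2O

n_urea = 0.050  # mol, arbitrary batch size
for name, molar_mass, ratio in [("urea", M_UREA, 1),
                                ("ammonium formate", M_AMMONIUM_FORMATE, 2),
                                ("water", M_WATER, 4)]:
    print(f"{name}: {n_urea * ratio * molar_mass:.2f} g")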
The initial pH of the solvent was found to be ~7.5 using Hydrion pH paper. After heating, the pH of the eutectic was measured at 5.5-6.0, and it remained consistent over the course of the experiments. The other non-aqueous solvent, formamide, was used as purchased.

Iron phosphite (FeHPO₃) was synthesized as previously described [48]. Equimolar solutions of FeCl₂·4H₂O and H₃PO₃ (0.1 M each) were mixed slowly, and the mixture was stirred with a magnetic stirrer. On mixing, brownish precipitates formed and were immediately filtered, dried, and stored for future use. Calcium phosphite (CaHPO₃) was prepared by mixing equimolar solutions of CaCl₂·4H₂O and Na₂HPO₃·5H₂O (0.1 M each). A white precipitate was separated by filtration, dried, and stored for future use. The other two minerals, vivianite and apatite, were crushed into fine powders and stored in vials for future use.

Oxidation of Hypophosphite by the Fenton Reaction

The pH of the starting DI (deionized) water was around 6. Sodium hypophosphite (NaH₂PO₂) was used as the source of reduced P for the Fenton reaction, to generate various oxidized forms of inorganic P along with condensed phosphates. ³¹P-NMR analysis of the starting material showed no other P peaks as impurities (Figure 1a). Fenton reactions were performed by the previously reported method [41,48]: equimolar (0.2 M) solutions of hypophosphite (H₂PO₂⁻) and FeCl₂·4H₂O were mixed in equal volumes (giving 0.1 M of each reagent) and dissolved to form a homogeneous solution. The total volume of this solution was 20 mL (10 mL of each of the solutions described above). To this solution mixture, 15 mL of 0.50 M H₂O₂ was added dropwise. In our study, the concentration of H₂O₂ was varied from 0.1 to 0.5 M to study the extent of formation of the oxidized P compounds generated by the Fenton reaction [41]. This mixture was sealed and allowed to stir at room temperature with a magnetic stirrer for 24 h. After this time, the pH of the solution was found to be around 4.0-4.5. The mixture was subsequently quenched and titrated with 20% ammonium hydroxide (NH₄OH), sealed immediately to prevent the escape of NH₃ from the NH₄OH solution, and stirred on a magnetic stirrer at room temperature. The final volume of the solution after adding the NH₄OH was about 45-50 mL, with pH = 11-12.5. Thick orange-brown precipitates were observed, indicating the separation of insoluble Fe³⁺ precipitating as Fe(III)(O,OH)ₓ compounds. This step was necessary to separate Fe²⁺ from Fe³⁺ for analysis of the samples by ³¹P-NMR. The resulting solution mixture was filtered with Whatman no. 1 filter paper and stored for further use as previously described [48]. This filtrate was labeled inorganic P Fenton solution (IPF solution, hereafter).
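The radical steps implied by this procedure can be summarized schematically as follows. This is a sketch only, based on standard Fenton chemistry and on the H-P cleavage/disproportionation mechanism described above and in [41]; it is not a complete kinetic scheme, and the P-radical step is written generically:

Fe²⁺ + H₂O₂ → Fe³⁺ + OH⁻ + •OH
•OH + H₂O₂ → H₂O + •OOH
•OH + H₂PO₂⁻ → H₂O + •HPO₂⁻   (H-P bond cleavage to a P-centered radical)
P-centered radicals → HPO₃²⁻, PO₄³⁻, and condensed phosphates   (further oxidation/disproportionation)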
Syntheses of Biological P Esters by P Products from Fenton Solution

The biomolecule substrates included nucleosides (adenosine, uridine, and cytidine), organic alcohols (glycerol and ethanolamine), and the organic quaternary ammonium compound choline chloride. These compounds and their phosphorylated derivatives are significant in biochemistry and take part in various metabolic pathways, including the formation of genetic material, cell-membrane structure, and respiration.

The prebiotic phosphorylation and phosphonylation reactions of organic compounds with Fenton solution were carried out by adding 4 mL of formamide or UAFW to a clean, unsealed glass vial. To this vial, 0.40-0.65 g of the organic compound was added (Table 1), followed by 7 mL of IPF solution. The pH of the solution was around 10-11. The unsealed reaction vial was then heated at 55-68 °C for 20 h to 5 days on a hot plate with a magnetic stirrer. The reaction mixture was kept unsealed to promote evaporation of the water introduced with the IPF solution, mimicking a hot, drying, and concentrated pool on the early Earth [22,30,48-50]. Two reaction sets for each organic reagent (the nucleosides, the organic alcohols, and the ammonium compound) were heated unsealed at 55-68 °C for 20 h to 5 days, the only difference being the solvent: formamide for Set-1 and UAFW for Set-2, under otherwise similar conditions.

After the completion of the reaction, the sample volume was reduced to almost half by evaporation of water from the reaction mixture. The sample, however, remained in solution because of the presence of the non-aqueous or semi-aqueous solvent.

After the required heating time, the reaction mixture was removed from heating and allowed to cool slowly to room temperature. It was then mixed with 5 mL DI water and stirred with a magnetic stirrer until a clear solution formed, which was subsequently filtered through filter paper. The filtrate was transferred to a clean watch glass and air-dried at room temperature. The sample was allowed to concentrate overnight under ambient conditions until about ~2 mL remained, to which 2 mL of D₂O (75%) and DI water (25%) was added. For MS analyses, only DI water was used, as previously described [48]. The total volume of the solution was 5 mL. About 350-430 µL of the sample solution was transferred to a clean NMR tube and analyzed via ³¹P-NMR.
A few reactions were also carried out with the 'unquenched Fenton solution'. In these experiments, the organic was heated directly with the Fenton solution without quenching with a base (NH₄OH or NaOH), i.e., without raising the pH from 4.5 to 11. Once the Fenton solution was generated (Section 2.1), it was heated with the organic substrate (7 mL, pH = 4.5) to dryness at 55-68 °C for 20 h to 5 days. Both solvents were attempted. After the completion of the reaction, the dried mixture was treated with 0.1 M NaOH solution to completely precipitate Fe³⁺ so that it could be studied by ³¹P-NMR. The mixture then followed the same solution-preparation protocol and was analyzed via ³¹P-NMR.

Table 1 summarizes the prebiotic syntheses of organic P esters of biological significance and the various conditions attempted in the study. Each sample was heated unsealed at 55-68 °C for 1 to 5 days on a hot plate under the fume hood. The pH of each solution was around 10-11. No additives or catalysts were used. UAFW stands for urea:ammonium formate:water, IPF solution means inorganic P Fenton solution, and Form represents formamide. The abbreviations for the organic compounds are as follows: AD (adenosine), UR (uridine), CY (cytidine), GL (glycerol), CH (choline chloride), and EA (ethanolamine). Reaction samples were heated unsealed for a given amount of time, mimicking a 'warm-pool model' theme. The numbers appended to the sample names represent the days of heating.

Some blank reaction sets were also carried out, in which the IPF solution along with the solvent (formamide or UAFW) was heated unsealed at the same temperature used for the phosphorylation and phosphonylation reactions of organics (55-68 °C). No organic substrate was added to these samples. After completion, a similar quenching protocol was followed, and the samples were characterized by ³¹P-NMR.

Studies on the Release of Inorganic P from Various Prebiotically Relevant P Minerals

The phosphate minerals selected for these studies were vivianite (Fe₃(PO₄)₂·8H₂O) and apatite (Ca₅(PO₄)₃(F,Cl,OH)), together with their phosphite counterparts, iron(II) phosphite (FeHPO₃) and calcium phosphite (CaHPO₃). Each material was ground into a fine powder. In each case, 0.2 g of the material was placed in a clean glass vial, and 4 mL of the solvent was added. The vial was capped (sealed) and stirred with a magnetic stirrer on a hot plate at 65-68 °C (Table 2). Unlike the studies in Section 2.2, these samples were capped to avoid evaporation of water from the UAFW; the purpose of sealing this set of reactions was to study the release of P from the materials into the solvent, not to promote condensation/evaporation reactions. The reaction vials were stirred and heated for 3 days and then analyzed via ³¹P-NMR to study the comparative release of P into the solvents.

Analyses, Identification and Characterization of Inorganic and Organic P Compounds

The samples were analyzed via ³¹P-NMR and mass spectrometry (MS). For ³¹P-NMR, the samples were analyzed on a 600 MHz Bruker Neo NMR spectrometer operating at 242.9 MHz, in both H-coupled and H-decoupled modes. The spectral width was 200 ppm, and the operating temperature was 22 °C.
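The quantification described next relies on an empirical relationship (Equation (1)) between the measured signal-to-noise ratio and the number of scans. As a sketch only: the generic form below assumes the standard √Scans signal-averaging behavior of NMR, with k standing in for the empirical calibration constant of [37,51] (the fitted value is not reproduced here):

[M] ≈ (S/N) / (k · √Scans)    (1)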
The P products of the reactions were quantified by the peak integration method, as previously reported [39,48-51]. The relaxation time (D1) between NMR scans throughout this study was set to 2 s. This was compared with several experiments run at D1 = 30 s. Since the integral values of the D1 = 2 s and D1 = 30 s experiments were comparable, D1 = 2 s was considered quantitative, and the remaining experiments were run at D1 = 2 s. Sample preparation for the NMR analysis is discussed in detail in Section 2.2.

³¹P-NMR studies of reaction samples containing an insoluble mineral, such as vivianite or apatite, were performed as follows: after 3 days, the sealed reaction sets (Section 2.3) were removed from heating and allowed to cool. To each sample solution, 1 mL of DI water was added, making the final volume 5 mL. Each sample was then centrifuged to remove the insoluble mineral. The clear supernatant was transferred to an Eppendorf tube and D₂O was added (50:50 v/v). Each sample was then analyzed with 450 to 1000 scans.

The molarity of P in solution was estimated from Equation (1), where S/N is the signal-to-noise ratio and Scans is the number of NMR scans taken [37,51]. This relationship was empirically determined and is accurate to about 10% over the range of 10⁻⁴ to 10⁻² M, based on the various sample spectra obtained [37,51]. MS analyses were performed in negative ion mode on a 6130 Single Quadrupole Mass Spectrometer (Agilent, Santa Clara, CA, USA) attached to an Agilent 1200 HPLC by direct injection, with deionized water as the solvent, as reported previously [48-50].

Some of the reaction samples were also quantified using phosphonoacetic acid (PAA, hereafter) as an internal standard (SI; see also ref. [48]).

Results

The Fenton reaction of hypophosphite generated various oxidized (inorganic) P products, including condensed P compounds such as pyrophosphate. Figure 1 shows the H-coupled ³¹P-NMR spectrum of a Fenton solution after completion of the reaction and quenching with base (the IPF solution). Peak (a) is a wide triplet for hypophosphite, identified by comparing coupling constant (J) values; phosphite splits into a wide doublet (peak b), confirmed by its coupling constant of around 550-570 Hz [48,51]; orthophosphate appears as a singlet (peak c); and pyrophosphate appears as a singlet around −5 to −6 ppm. This IPF solution readily reacted with organics: heating with organic substrates in the non-aqueous solvents at 55-68 °C resulted in the formation of various organophosphorus species. Condensed P compounds such as pyrophosphate were observed when the quenched Fenton solution (IPF solution) was mixed with formamide or UAFW and heated at 68 °C for 3-4 days without organics (Table 1, first two entries). Various organophosphorus compounds were observed: for the nucleosides (uridine, adenosine, and cytidine), the alcohols (glycerol and ethanolamine), and choline chloride, both phosphate and phosphite derivatives were observed (Supplementary Materials).
Organophosphorus products were identified by peak splitting, peak location (ppm), and, where available, spiking with authentic standards. Organophosphorus compounds were also confirmed by MS (direct injection) as in our previous studies [29,48-50]. The MS analyses of reaction samples containing glycerol and IPF showed the following major peaks: [C₃H₉O₅P−H]⁻ at m/z 155.02, corresponding to glycerol phosphite, and [C₃H₁₀O₇P₂−H]⁻ at m/z 218.99, corresponding to glycerol diphosphite (two phosphite groups attached at different locations on the glycerol molecule, rather than a pyrophosphite (P-O-P) linkage) [39].

In the reaction samples of uridine with IPF solution, peaks including [C₉N₂O₆H₁₁...]⁻ were observed; these reactions were also compared with our previous studies on these compounds [39,48]. The reaction samples containing cytidine (with IPF) were studied similarly, and the corresponding peaks were observed.

In the reaction samples containing IPF and choline chloride in the non-aqueous solvents, the major peaks identified were [C₅H₁₄NO₄P−H]⁻ at m/z 183, corresponding to phosphocholine, and [C₅H₁₅NO₃P−H]⁻ at m/z 167, corresponding to choline phosphite. Finally, in the reaction samples containing IPF and ethanolamine in the non-aqueous solvents, the key peaks identified were [C₂H₈NO₄P−H]⁻ at m/z 140, corresponding to phosphoethanolamine, and [C₂H₈NO₃P−H]⁻ at m/z 124, corresponding to ethanolamine phosphite.

Various organophosphites were identified and characterized by analyzing their chemical shift values and the C-O-P (carbon-oxygen-phosphorus) and P-H (phosphorus-hydrogen) interactions of the various organophosphorus species [39,48]. For the adenosine nucleoside, the best reaction sample was obtained when the adenosine and IPF solution mixture was heated at 65-68 °C for 3 days with UAFW as the solvent, producing about 89% phosphorylated and phosphonylated derivatives of adenosine. The compound adenosine-2′,3′-cyclic monophosphate (Figure 2, peak j) appeared as a multiplet around 20 ppm. The other cyclic derivative, adenosine-3′,5′-cyclic monophosphate, was not present in the sample; it is usually located close to −2 ppm and appears as a doublet [48]. It should be noted that there appeared to be two sets of peaks around 20 ppm; apart from compound j (adenosine-2′,3′-cyclic monophosphate), there was possibly a doubly phosphorylated compound (2′,3′-cyclic, 5′-monophosphate or phosphite NMP). This observation was consistent with the other nucleosides, uridine and cytidine, as well as with our previous observations [48].
For adenosine-5′-monophosphite (Figure 2, peaks labeled e), H-coupled ³¹P-NMR showed splitting into a doublet of triplets, with one component between 7.5 and 8.0 ppm and the other between 5.0 and 5.5 ppm. This also indicated the presence of a CH₂-O-P bond, implying that the phosphite was attached at the 5′-carbon. The phosphonylated derivatives adenosine 2′- or 3′-monophosphite (peak g) appeared as doublets, showing the presence of a phosphite group via a CH-O-P bond. In the particular case of the 2′- or 3′-monophosphite, H-coupled ³¹P-NMR showed splitting into two doublets. Based on our previous observations, the phosphonylated derivatives of organic compounds are usually located downfield of 5.0 ppm, while the organophosphates (phosphate esters) are usually present between 2 and 5.5 ppm. The H-coupled splitting for adenosine 5′-monophosphate (peak f) appeared as a small triplet around 3.4 to 4.0 ppm, indicating a CH₂-O-P type compound; doublets (peak h) can represent 2′- or 3′-AMP, representing a CH-O-P type linkage; and peak i represents the dimer (adenosine-phosphate-adenosine) species, as reported previously [48], generally located around −1 to −2 ppm. H-coupled ³¹P-NMR of the unreacted hypophosphite showed splitting into a large triplet. Inorganic phosphite showed splitting into a doublet in the H-coupled ³¹P-NMR spectrum, confirmed by its coupling constant (~560 Hz). Inorganic condensed P compounds such as pyrophosphate appeared as singlet peaks in the H-coupled ³¹P-NMR spectrum, usually around −6 to −8 ppm.

It is important to mention that peaks can easily shift right or left with pH changes [52]. For this reason, each peak was identified by carefully examining the splitting pattern in the H-coupled ³¹P-NMR spectrum, the chemical shift value, and the coupling constant (J). The samples were also spiked with standards whenever possible. Furthermore, determining the molecular weights of the targeted compounds via MS confirmed some of these identifications. For adenosine, the compounds adenosine-5′-monophosphate and adenosine-2′,3′-cyclic monophosphate were confirmed by spiking with standards.
Uridine also readily reacted with the IPF solution. The relative abundance and product distribution (%) of organophosphorus compounds (based on the total dissolved P in the solution) was around 89% when the reaction mixture was heated in UAFW for 2 days (Table 1, reaction sample UR-UAFW-2). As described for adenosine, the peaks were identified from the splitting patterns in the H-coupled ³¹P-NMR spectrum (Figure 3). Various organic (uridine) diphosphites were also reported in our previous studies [39]. Overall, the uridine nucleoside required a lower temperature window, i.e., 55-58 °C, for better reactivity. Percent fractions of the organophosphorus compounds for the three nucleosides are given in Table 3, based on ³¹P-NMR integrations.

The relative abundances (%) of the inorganic P products were calculated on the basis of the total P dissolved and by the peak integration method, as in our previous studies [29,48-50]. The relative abundances can also be considered yields (%) based on the total dissolved P in the given solution. This is justified because the relaxation time between scans (for ³¹P-NMR analysis) in some samples was increased from 2 s to 30 s, which ensures the NMR is quantitative; peak integral values and relative abundances of the various P species remained almost the same for each sample, whether the relaxation time was 2 s or 30 s. Various conditions and the names and descriptions of the samples are explained in Table 1. ND means not detected, T C-O-P means the sum of both organic phosphite and organic phosphate species in a sample, and 'd' means the total sum of all inorganic condensed P compounds detected.

In the case of cytidine, the best reaction results (83% yield) were obtained when this nucleoside was heated with IPF solution (unsealed) at 65-68 °C in the presence of UAFW as solvent. As with uridine and adenosine, the various phosphonylated and phosphorylated derivatives were identified from the peak-splitting patterns (singlet, doublet, triplet, or multiplet) in the H-coupled ³¹P-NMR spectrum and the peak locations, with spiking with standard compounds done wherever standards were available (Figure 4) [39,48-50].
In the case of both choline chloride and glycerol (Table 4, Figures 5 and 6), the best reactions were observed at 65-67 °C in the presence of UAFW as solvent. Choline chloride is a quaternary ammonium compound, and it has only one site available for phosphorylation or phosphonylation. Here, two triplets around 3.5 ppm and 7.5 ppm represented choline phosphite, while a large triplet around 2 to 3 ppm suggested the presence of phosphocholine (Figure 5). For glycerol, both solvents gave similar results; nevertheless, UAFW was still the better solvent, with a higher fraction of organic P at 50%. Usually, glycerol-1-phosphate appears as a triplet between 3 and 5.5 ppm, indicating the presence of a CH₂-O-P bond in the H-coupled ³¹P-NMR spectrum, while glycerol-2-phosphate appears as a doublet, generally located just before the glycerol-1-phosphate position [22,29,50]. However, neither of these phosphorylated species was detected in these two samples. The phosphite derivatives of glycerol showed splitting into two triplets (glycerol-1-phosphite) and two doublets (glycerol-2-phosphite). Glycerol diphosphite and diphosphate (not pyrophosphate or pyrophosphite, but phosphite/phosphate tied to different carbons on the glycerol molecule) were also identified from the peak-splitting patterns in the H-coupled ³¹P-NMR and compared with our previous results [22,29,39,50].

The relative abundances (%) of the inorganic P products were calculated on the basis of the total P dissolved and by the peak integration method, as in our previous studies [29,48-50]. The relative abundances can also be considered yields (%) based on the total dissolved P in the given solution (see the caption for Table 3). Various conditions and the names and descriptions of the samples are explained in Table 1. ND means not detected, T C-O-P means the sum of both organic phosphite and organic phosphate species in a sample, and 'd' means the total sum of all inorganic condensed P compounds detected. Species that are not possible for a particular compound are marked NA.
In the case of ethanolamine, the best results were seen when it was heated with IPF at 55-57 °C in UAFW (Figure 7) (Sample EA-2, Table 1). Here, the preferred site for phosphorylation or phosphonylation was the -OH group rather than the -NH₂ group (Figure 7). For the organics other than the nucleosides (glycerol, ethanolamine, and choline chloride), the best reaction yields (in UAFW), based on the internal standard (PAA), were also calculated. For glycerol, the yields were as follows: glycerol-1-phosphate (13%), glycerol-1-phosphite (16%), glycerol-2-phosphite (1%), glycerol diphosphite species (2.5%), and glycerol-phosphate-glycerol (1%), for a total of glycerol phosphates and phosphites of around 33.5%. For choline chloride, the yields were choline phosphite (18.54%) and phosphocholine (17.5%), a total for choline chloride of around 36%. Finally, for ethanolamine, the yields were ethanolamine phosphite (35%) and phosphoethanolamine (16%), for a total of ethanolamine phosphites and phosphates of around 51%.
Some reaction sets were also quantified using an internal standard, 0.1 M PAA, as described previously [48]. Each of these reactions was studied in UAFW. These yields are with respect to the total phosphorus added to the solution, which was the limiting reagent relative to the nucleoside substrate. For adenosine, the yields were as follows: 2′-AMP and 3′-AMP (combined, 1.5%), 5′-AMP (0.5%), 2′,3′-cyclic AMP (12%), adenosine-phosphate-adenosine (A-P-A) (2%), 2′- and 3′-adenosine monophosphite (7.5%), and 5′-adenosine monophosphite (11.8%), for a total yield of adenosine phosphites and phosphates of around 35%. For the uridine reaction in UAFW, the best yields were as follows: 2′-UMP and 3′-UMP (combined, 1%), 5′-UMP (0.5%), 2′,3′-cyclic UMP (11%), uridine-phosphate-uridine (U-P-U) (1.5%), 2′- and 3′-uridine monophosphite (8%), and 5′-uridine monophosphite (14%), for a total yield of uridine phosphites and phosphates of around 36%. Similarly, for cytidine, the best reaction yields were as follows: 2′-CMP and 3′-CMP (combined, 0.5%), 5′-CMP (2%), 2′,3′-cyclic CMP (7%), cytidine-phosphate-cytidine (C-P-C) (1.5%), 2′- and 3′-cytidine monophosphite (combined, 7.5%), and 5′-cytidine monophosphite (10%), for a total yield of cytidine phosphites and phosphates of around 28.5% (SI; see also ref. [48]).

In a separate set of experiments, we also studied prebiotically relevant phosphate and phosphite materials and the possible release of soluble phosphorus from these mineral sources at 65-68 °C for three days under sealed conditions (Table 5). These studies were carried out in the non-aqueous and semi-aqueous solvents (formamide and UAFW) used in the phosphorylation and phosphonylation studies. The extent of release of soluble phosphorus was determined on the basis of the total molarity of [P] in the solution, as previously [37,51]. In the best case, the total molarity of P in the solution was around 0.1 M. We also observed the generation of pyrophosphite in this case. Overall, both formamide and UAFW showed an affinity for P solubilization. However, no P signals were observed in the ³¹P-NMR for either of the phosphate minerals, vivianite or apatite, at least not under the same set of conditions: temperature, solvent volume, and, most importantly, number of ³¹P-NMR scans. The number of scans for the natural samples, particularly the minerals, was 5000 to 10,000 per sample. Both phosphite materials actively released P into the solvent, indicating the ease of P release. The relative abundances (%) of the inorganic P products were calculated on the basis of the total P dissolved and by the peak integration method, as mentioned above [29,39,48-51]. The details of the samples are given in Table 2.
Some abbreviations are as follows: BDL (below detection limit) and [M]T (total molarity of phosphite in the solution); also VIV (vivianite), CA (calcium phosphite), FE (iron phosphite), APA (apatite), Form (formamide), and UAFW (urea, ammonium formate, and water). The total molarity of each solution was based on the relative abundances of the P species in that solution, with a ~10% error factor.

Discussion

Heating organics with the oxidation products of hypophosphite, generated via the Fenton reaction, in non-aqueous and semi-aqueous solvents formed phosphate and phosphite esters of prebiotic relevance. The reactions proceeded under mild heating at 55-68 °C and did not require a catalyst, condensation agent, or other additive. Our two-way approach of using (1) non-aqueous and semi-aqueous solvents and (2) a reduced P source proved quite effective in forming various organophosphorus compounds with ease. In contrast to our previous studies, urea was not required [48]; however, the better efficiency of the UAFW solvent over formamide may also be attributed to urea being part of the solvent composition. The overall yields in the current studies ranged from 14 to 89% (based on the amount of dissolved P) (Figures 8 and 9). We obtained a variety of phosphorylated and phosphonylated derivatives of the nucleosides (uridine, adenosine, and cytidine) and of the other organics, glycerol, ethanolamine, and choline chloride. This overall increase in the reactivity of the inorganic P molecules generated via Fenton chemistry towards the organics suggests that reduced P compounds and their oxidation products are more reactive than their phosphate counterparts and could have played a role in the origin of biological P compounds on the early Earth.

Furthermore, heating reactions of IPF with nucleosides in the anhydrous solvents also formed phosphodiesters of uridine, adenosine, and cytidine. At present, the exact structure of the dimer species is not clear; however, based on our previous studies, we suggest that these diester species were likely formed via the opening of the 2′,3′-cyclic monophosphate. This is supported by an observation in our previous study, in which heating an IPF solution containing 2′-deoxyadenosine and urea at 55-60 °C to dryness did not form any diester molecules, implying that diester formation is linked to the ring opening of the nucleoside-2′,3′-cyclic monophosphate [48]. We have also reported the generation of diester compounds of uridine by heating uridine with pyrophosphate in the presence of Mg²⁺ and urea [49]. In both previous studies, diesters appeared only when the cyclic monoester was formed [48,49].
The present work suggests prebiotic syntheses of a variety of molecules, including nucleotides of uridine, adenosine, and cytidine and their respective phosphate diesters, that play a significant role in biochemistry. These diester molecules serve as a molecular 'tape' that connects the individual nucleotides in DNA and RNA through the sugar-phosphate backbone [49,53,54]. The other essential phosphate esters, such as glycerol phosphate, phosphoethanolamine, and phosphocholine, are likewise found in the biochemistry of living organisms, especially in cell membranes. Modern life needs these phosphorylated biomolecules for storing genetic information, cell structure, respiration, and many other functions.

Phosphorylation on the early Earth would have played a key role in the chemical milieu, forming phosphorylated biomolecules essential to life through the oxidation of reduced P compounds such as hypophosphite. Hypophosphite and related species such as H-phosphinic acid (H₃PO₂) can also be sourced from the meteoritic mineral schreibersite [37,55], and hence can be regarded as a prebiotic source of P on the early Earth. Besides a reduced P source, the proposed reactions would also need an environment with both soluble iron and reduced P compounds, which is prebiotically plausible [51].

Moreover, the semi-aqueous solvent (UAFW) used in this study can be considered prebiotic. Urea has been synthesized prebiotically in the classic Urey-Miller gas-discharge experiments [56] and by exposing ammonium cyanide to sunlight [57], and it is also identified as a hydrolysis product of cyanamide [58], while ammonium formate is a hydrolysis product of HCN [59]. Therefore, both of these compounds can be considered essentially prebiotic [60,61]. Research has shown that, on heating, the UAFW eutectic solvent mixture is partially converted into formamide, forming a four-component solvent mixture. This UAFW solvent system promotes dehydration, supporting C-O-P bond formation through condensation [30]. Similarly, formamide is also considered a prebiotically significant compound and has been employed in demonstrating various prebiotic chemical reactions for decades [14,15,19-25].
This discussion supports the plausibility of 'a warm drying alkaline pond' on the early Earth with dissolved Fe²⁺, reduced P compounds (either supplied by meteorites [37] or formed through Fenton chemistry [41]), and other components forming mixtures with water, such as ammonium formate, urea, and even formamide.

Another important ingredient to support Fenton reactions on the prebiotic Earth would be H₂O₂, which could possibly have been supplied during 'Snowball Earth' events. Such events would potentially result in a relatively weak hydrological cycle that, when coupled with certain photochemical reactions of water or ice, would have sustained the formation of H₂O₂ [42]. Furthermore, Fenton reactions have also been suggested on the Martian surface [62], an idea further supported by the discovery of H₂O₂ on the Martian surface [63,64].

Our experiments studying the release of P from various P minerals, including vivianite, apatite, and the phosphites of calcium and iron, showed that under similar conditions (sealed heating reactions, 65-68 °C, 3 days, formamide or UAFW), only the phosphite materials released P. Interestingly, we also observed the formation of condensed P compounds (pyrophosphite) while studying the release of P from a solution of CaHPO₃. Overall, CaHPO₃ worked best at releasing P into formamide under the reported experimental conditions (Figure 10). We detected no P signals in the ³¹P-NMR from either of the phosphate minerals.
Considering that the ancient oceans were anoxic and Fe(II)-rich [65-68], it would seem highly plausible for the phosphate and phosphite mineral phases of iron to have precipitated out in the early oceans in the form of vivianite (for phosphate) and FeHPO₃ (for phosphite). Apatite [Ca₅(PO₄)₃(F,Cl,OH)] is considered the dominant form of phosphate on the early Earth [4,69], but if the early oceans were slightly less alkaline than today's oceans, likely due to a higher partial pressure of CO₂ [3,70], then the precipitation of acid calcium salts of phosphate [71], and even of phosphite, could have been possible in the early oceans.

Our results show the significance and increased reactivity of reduced (inorganic) P compounds towards organic compounds, and how these reduced P compounds, even tied up as minerals, could readily have released phosphite into the early oceans to facilitate the origin of biological P compounds on the early Earth.

Conclusions

Phosphate and phosphite derivatives of various nucleosides (adenosine, uridine, and cytidine), alcohols (glycerol and ethanolamine), and an organic ammonium compound (choline chloride) were prepared using a reduced P source obtained from a Fenton reaction of hypophosphite [48]. The phosphorylation and phosphonylation reactions were carried out at 55-68 °C for 20 h to 5 days, unsealed, in a non-aqueous solvent (formamide) and a semi-aqueous solvent (UAFW). In our studies, UAFW proved a better reaction medium than formamide. Also, urea was not found necessary in the current studies, as the non-aqueous solvents themselves seemed to support the overall condensation process.

The release of soluble P (as phosphite and phosphate) was also studied at 65-68 °C under sealed conditions with consistent stirring. Under the reported conditions, the phosphate minerals vivianite (Fe₃(PO₄)₂·8H₂O) and apatite (Ca₅(PO₄)₃(F,Cl,OH)) did not release any P into solution (no signal was detected in the ³¹P-NMR), while their phosphite counterparts, iron(II) phosphite (FeHPO₃) and calcium phosphite (CaHPO₃), not only released phosphite into solution but also formed a high-energy condensed phosphite, i.e., pyrophosphite (at 65-68 °C under sealed conditions).

Figure 1. (a) H-coupled ³¹P-NMR spectrum of the starting material (sodium hypophosphite), showing a wide triplet. (b) H-coupled ³¹P-NMR spectrum of the inorganic P compounds after the Fenton reaction; the starting reduced P source is hypophosphite (a). The y-axis represents signal strength (%) and the x-axis represents δ (ppm).
Figure 7. Reaction of ethanolamine with IPF solution in formamide (Reaction EA-UAFW-4).

Figure 8. Comparative fractions (%) of various nucleoside phosphates and phosphites as a function of total P integrations.

Figure 9. Comparative fractions (%) of various organic phosphates and phosphites as a function of total P integrations. These abundances were based on the amount of dissolved P in the solution and the peak integration method.

Figure 10. Phosphorus release from various minerals, including vivianite, iron phosphite, apatite, and calcium phosphite, in formamide and UAFW solvents at 65-68 °C (3 days). The molarity of P in each solution was calculated using Equation (1); each value represents molarity [M]. In the present study, calcium phosphite in formamide seemed to have released the most P into the solution. Abbreviations are as follows: VIV (vivianite), CA (calcium phosphite), FE (iron phosphite), APA (apatite), Form (formamide), and UAFW (urea, ammonium formate, and water).

Table 1. Reaction conditions of the various reaction samples.

Table 3. The relative abundances (%) (with fractions relative to 100% of total NMR integration) of various inorganic P products produced in the reactions comprising nucleosides.

Table 4. The relative abundances (%) of various inorganic P products for organic compounds other than nucleosides, i.e., glycerol, choline chloride, and ethanolamine.

Table 5. Amounts of P released from various prebiotically relevant P minerals.
Hospital Readmission Prediction using Machine Learning Techniques

One of the most critical problems in healthcare is predicting the likelihood of hospital readmission for chronic diseases such as diabetes, in order to allocate the necessary resources, such as beds, rooms, specialists, and medical staff, for an acceptable quality of service. Unfortunately, relatively few research studies in the literature have attempted to tackle this problem; the majority are concerned with predicting the likelihood of the diseases themselves. Numerous machine learning techniques are suitable for prediction; nevertheless, there is also a shortage of adequate comparative studies that identify the most suitable techniques for the prediction process. Towards this goal, this paper presents a comparative study among five techniques common in the literature for predicting the likelihood of hospital readmission of diabetic patients: logistic regression (LR) analysis, multi-layer perceptron (MLP), Naïve Bayesian (NB) classifier, decision tree, and support vector machine (SVM). The comparative study is based on realistic data gathered from a number of hospitals in the United States. It revealed that the SVM showed the best performance, while the NB classifier and LR analysis were the worst.

Keywords—Decision tree; hospital readmission; logistic regression; machine learning; multi-layer perceptron; Naïve Bayesian classifier; support vector machines

I. INTRODUCTION

Nowadays, numerous chronic diseases, such as diabetes, are widespread in the world, and the number of patients is increasing continuously. The estimated number of diabetic adults in 2014 was 422 million, versus 108 million in 1980 [1]. Such patients visit hospitals frequently, requiring continuous preparation to ensure the availability of required resources, including hospital beds, rooms, and enough medical staff for an acceptable quality of service. Accordingly, predicting the likelihood of readmission of a given patient is of ultimate importance. In fact, readmission within one month (30 days) of discharge is "a high-priority healthcare quality measure", and the goal is to address this problem [2].

Machine learning, one of the most important branches of artificial intelligence, provides methods and techniques for learning from experience [3]. Researchers often use it for complex statistical analysis tasks [4]. It is a wide multidisciplinary domain based on numerous disciplines including, but not limited to, data processing, statistics, algebra, knowledge analytics, information theory, control theory, biology, cognitive science, philosophy, and the complexity of computations. This field plays an important role in discovering valuable knowledge from databases, which may contain records of supply maintenance, medical records, financial transactions, loan applications, etc. [5].

As indicated in Fig. 1, machine learning techniques can be broadly classified into three main categories [3]. Supervised learning techniques involve learning from training data, guided by the data scientist. There are two basic types of learning tasks: classification and regression. Classification models attempt to predict discrete classes, such as blood groups, while regression models predict numerical values [3].
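To make the classification/regression distinction concrete, the following minimal Python sketch fits one model of each kind on invented toy data; scikit-learn is an assumption here, used purely for illustration.

# Sketch: the same features, two different learning tasks (scikit-learn assumed).
from sklearn.linear_model import LinearRegression, LogisticRegression

X = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]]   # toy feature matrix
y_class = [0, 0, 1, 1]           # discrete labels  -> classification
y_value = [1.5, 1.8, 3.9, 4.2]   # numeric targets  -> regression

clf = LogisticRegression().fit(X, y_class)
reg = LinearRegression().fit(X, y_value)

print(clf.predict([[2.5, 2.5]]))   # predicts a class label (0 or 1)
print(reg.predict([[2.5, 2.5]]))   # predicts a continuous value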
In unsupervised learning, on the other hand, the system attempts to find hidden data patterns, associations among features or variables, or data trends [3], [4]. The main objective of unsupervised learning is to identify hidden structures or data distributions without supervision or prior categorization of the training data [6]. Finally, in reinforcement learning the system attempts to learn through interactions (trial and error) with a dynamic environment. In this learning mode, a computer program is given access to a dynamic environment in order to achieve a specific objective. It is worth noting that in this case the system has no prior knowledge of the environment's behavior, and the only way to figure it out is through trial and error [3], [7], [8].

According to Kaelbling et al., the term healthcare informatics refers to the combination of machine learning and healthcare with the purpose of identifying patterns of interest [9]. In addition, it has the potential to establish a good relationship between patients and doctors and to minimize the increasing cost of healthcare [10].

The goal of this paper is to apply machine learning techniques, and specifically prediction techniques, to predict the likelihood of readmission of patients to hospitals. This problem hasn't been adequately addressed in the literature; in fact, most research efforts are oriented towards the prediction of diseases. Machine learning includes numerous analytic techniques for prediction, and the literature lacks adequate comparative studies that assist in selecting a suitable technique for this purpose. Our research is based on a large data set collected by numerous United States hospitals [11], [12]. In short, this paper has two main contributions:

 Analyzing five of the most common machine learning techniques for prediction and providing a comparative study among them.

 Addressing the problem of patient readmission to hospitals, since it has been rarely addressed by researchers.

The rest of the paper is organized as follows: first, we present background about the machine learning techniques considered in this research, followed by related work to highlight the contributions of the paper. We then present our methodology and discuss the results of the experiments. Finally, we sum up this work with a conclusion and a discussion of possible future work.

II. BACKGROUND

This section discusses the five basic machine learning techniques employed in this research study.

A. Logistic Regression Analysis

Regression is a statistical notion that can be used to identify the weight of the relationship between one variable, called the dependent variable, and a group of other variables, denoted the independent variables. Logistic regression (LR) is a non-linear regression model used to estimate the likelihood that an event will occur as a function of others [13].

B. Artificial Neural Network

An Artificial Neural Network (ANN) is a computational model that attempts to emulate the parallel processing nature of the human brain. An ANN is a network of strongly interconnected processing elements (neurons) operating in parallel [14], inspired by biological nervous systems [15].
ANNs are broadly used in many studies because they are capable of modeling non-linear systems, where relationships among variables are either unknown or quite complicated [14]. An example of an ANN is the Multi-Layer Perceptron (MLP), which is typically formed of three layers of neurons (an input layer, an output layer, and a hidden layer) and whose neurons use non-linear functions for data processing [16].

C. Naïve Bayesian Classifier

The Naïve Bayesian (NB) classifier relies on applying Bayes' theorem to estimate the most probable membership of a given event in one of a set of possible classes. It is described as naïve since it assumes independence among the variables used in the classification process [15], [17], [18].

D. Support Vector Machine

Support vector machines (SVMs) are supervised learning models that can be applied for classification analysis and regression analysis. They were proposed by Vapnik in 1995 and can perform both linear and non-linear classification tasks [5], [12], [17], [19].

E. Decision Tree

Decision trees are one of the most famous techniques in machine learning. A decision tree performs classification by using attribute values to make decisions. In general, a decision tree is a group of nodes, leaves, a root, and branches [20]. Many algorithms have been proposed in the literature for implementing decision trees. One important algorithm is CART (Classification and Regression Tree), which can deal with both continuous and categorical variables [8], [21].

III. RELATED WORK

Many researchers have attempted to use machine learning techniques in healthcare problems other than hospital readmission likelihood prediction. For example, Arun and Sittidech used K-Nearest Neighbor (KNN), NB, and decision trees with boosting, bagging, and ensemble learning in diabetes classification. Their experiments confirmed that the highest accuracy was obtained by applying bagging with decision trees [22]. Perveen et al. attempted to improve the performance of such algorithms using AdaBoost; their experimental outcomes showed that AdaBoost performed better than bagging [23]. Orabi et al. [24] suggested integrating regression with randomization for predicting diabetes cases according to age, with an accuracy of 84%. Other researchers built a predictive model using three machine learning techniques (random forests (RFs), LR, and SVMs) for predicting diabetes in Indian females, in addition to the factors causing diabetes; their comparative study concluded that RFs had the best performance [25].

Relatively few research studies have addressed the problem of hospital readmission likelihood prediction. For example, Strack et al. used statistical models for this purpose [12]. Other researchers focused on comparing different machine learning techniques for this problem. For example, Kerexeta [26] proposed two approaches: in the first, supervised and unsupervised classification techniques were combined, while in the second, NB and decision trees were combined. The former approach showed better performance in terms of readmission prediction.

To sum up, relatively few research efforts in healthcare are concerned with the problem of predicting hospital readmission likelihood, and there is a shortage of adequate comparative studies of the machine learning techniques used for prediction. Hence, this paper attempts to tackle those two problems by comparing five common machine learning techniques on the problem of hospital readmission likelihood prediction, based on real data.
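As a preview of the five model families just surveyed, the following sketch instantiates each of them with the hyperparameter settings reported later in Section IV. Scikit-learn is an assumption for illustration (the paper states only that Python was used), and the LR max_iter value is a placeholder, since the paper tunes LR by grid search.

# Sketch: the five techniques compared in this paper (scikit-learn assumed).
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

models = {
    "LR":   LogisticRegression(max_iter=1000),  # placeholder; paper tunes via grid search
    "MLP":  MLPClassifier(hidden_layer_sizes=(5,), solver="sgd", max_iter=300),
    "NB":   GaussianNB(),
    "SVM":  SVC(kernel="rbf", C=10, gamma=0.3),
    "Tree": DecisionTreeClassifier(criterion="gini", min_samples_split=30, max_depth=15),
}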
IV. METHODOLOGY

Before starting the comparative study, it is important to understand the data, perform preprocessing if necessary, and select features appropriate for the experiments, as depicted in Fig. 2. These tasks are explained below. It is worth noting that all the experiments were conducted using Python.

A. Data Preparation

1) Understanding data: In this study, we exploited a sample of a diabetes patients' dataset, extracted from many hospitals in the United States [11], [12]. This dataset includes 3090 instances in the age range of 30-50, with 18 attributes. Table I depicts the variables of the dataset together with their descriptions. The scientific meanings of those variables are beyond the scope of this paper.

2) Data pre-processing: This is a very important stage, which includes data transformation and cleaning. In data transformation, some variables were transformed from categorical to binary (0/1), such as Change, DM, G, and A, while some other variables were transformed from integer to string, such as AS and DI. In data cleaning, some values of the categorical data were missing and had to be accounted for; for this purpose, we employed imputation (substitution) via the mode of the categorical data.

3) Feature Selection: In this step, we perform feature selection for dimensionality reduction; in other words, we select the most relevant features. Towards this goal, we assessed the impact of the variables on our target, which helped us eliminate variables of low importance. Features with high influence on accuracy are the most important [27]. We used the Gradient Boosting technique [28] for the categorical features. Table II demonstrates the average weights of the variables. We then utilized a threshold of 0.014 to obtain our feature set [29], [30]. Accordingly, the features A, AS, and DM were rejected, since their weights were lower than 0.014; all the other features depicted in Fig. 9 were selected. A minimal sketch of this ranking step follows.
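Scikit-learn, the file name, and the column names below are assumptions for illustration; the paper reports only the use of gradient boosting, the 0.014 threshold, and the rejected features A, AS, and DM.

# Sketch: rank features with gradient boosting; keep those with weight >= 0.014.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

df = pd.read_csv("diabetes_readmissions.csv")          # hypothetical file name
X = df.drop(columns=["readmitted"])                    # hypothetical target column name
y = df["readmitted"]

gb = GradientBoostingClassifier().fit(X, y)
weights = pd.Series(gb.feature_importances_, index=X.columns)

selected = weights[weights >= 0.014].index.tolist()    # e.g., drops A, AS, and DM here
print(weights.sort_values(ascending=False))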
In this technique, there are several parameters such as C, kernel, and gamma, where C represents the error-term penalty parameter, kernel determines the kernel type used in the algorithm (in our case "rbf"), and gamma is the coefficient of the kernel, such that a high value of gamma attempts to completely fit the training data. Grid search was employed to determine the optimal parameters and accuracy. As Table III illustrates, the optimal value of C was 10, while the optimal value of gamma was 0.3.

3) Decision tree: This model was generated using the "gini" function to evaluate the split quality of the tree. In our study, min_samples_split = 30 is the minimal number of samples needed for splitting an internal node, and max_depth is the maximal tree depth. Grid search was conducted, and the best accuracy was obtained for max_depth = 15, as depicted in Table IV.

4) Naïve Bayesian classifier: An NB model was created using Gaussian Naive Bayes, which assumes that the attributes follow a normal (Gaussian) distribution.

5) Multi-layer perceptron: We built an MLP network using 18 inputs. The number of neurons in the hidden layer was 5. The network was trained using stochastic gradient descent, the maximum number of iterations was 300, and the two outputs were (readmitted < 30 and readmitted > 30). Table V shows the resulting MLP weight matrix after training.

V. RESULTS AND DISCUSSION
This work utilized various performance measures to compare the studied techniques [31]. Specifically, we relied on accuracy, recall, precision, and F1 scores for this purpose. Those measures are defined in terms of the true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) as indicated in equations (1) through (4); a compact computational sketch of these measures is given after the conclusion. TPs are cases in which we predicted yes (they will be readmitted within a month), and they were really readmitted. TNs are cases in which we predicted no, and they were not readmitted. On the other hand, FPs are cases in which we predicted yes, but they were not actually readmitted (Type I error). Finally, FNs are cases in which we predicted no, but they were actually readmitted (Type II error).

Accuracy = (TP + TN) / (TP + TN + FP + FN) (1)
Recall = TP / (TP + FN) (2)
Precision = TP / (TP + FP) (3)
F1 = 2 × (Precision × Recall) / (Precision + Recall) (4)

Accuracy indicates how often the classifier is correct. Recall is a sensitivity measure (the ratio of TPs to the sum of TPs and FNs): it indicates the rate of cases in which the model predicted the patient would be readmitted within a month, relative to the number of cases in which the patient was actually readmitted. Precision measures the rate of cases in which the model correctly predicts that the patient will be readmitted within a month, compared to the total number of cases in which the model predicts readmission. Table VI depicts the values of the performance measures. As previously noted, we used 10-fold cross validation for the models.

VI. CONCLUSION AND FUTURE WORK
This paper presented a comparative study among five machine learning techniques, namely LR, MLP, the NB classifier, decision trees, and SVMs, for predicting the likelihood of hospital readmission of diabetes patients. The study relied on real data collected from hospitals in the United States. Based on the study, the SVM provided the best performance. Nevertheless, the study will be extended to compare additional techniques, and larger datasets will be considered as well.
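As referenced in Section V, the following is a compact computational sketch of the four evaluation measures; the confusion counts are made up for demonstration and are not results from the study.

# Evaluation measures from equations (1)-(4); TP/TN/FP/FN values are invented.
TP, TN, FP, FN = 40, 120, 15, 25

accuracy = (TP + TN) / (TP + TN + FP + FN)          # eq. (1): overall correctness
recall = TP / (TP + FN)                             # eq. (2): sensitivity
precision = TP / (TP + FP)                          # eq. (3): correctness of "yes" predictions
f1 = 2 * precision * recall / (precision + recall)  # eq. (4): harmonic mean of (2) and (3)

print(f"accuracy={accuracy:.3f} recall={recall:.3f} "
      f"precision={precision:.3f} f1={f1:.3f}")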
3,616.2
2019-01-01T00:00:00.000
[ "Computer Science" ]
Strong pinning in very fast grown reactive co-evaporated GdBa2Cu3O7 coated conductors

We report on compositional tuning to create excellent field performance of Jc in "self-doped" GdBa2Cu3O7−y (GdBCO) coated conductors grown by ultrafast reactive co-evaporation. In order to give excess liquid and Gd2O3, the overall compositions were all Ba-poor and Cu-rich compared to GdBCO. The precise composition was found to be critical to the current carrying performance. The most copper-rich composition had an optimum self-field Jc of 3.2 MA cm−2. A more Gd-rich composition had the best in-field performance because of the formation of low coherence, splayed Gd2O3 nanoparticles, giving Jc (77 K, 1 T) of over 1 MA cm−2 and Jc (77 K, 5 T) of over 0.1 MA cm−2. © 2014 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.

For many years, the evaporation method has been studied widely for the manufacture of high quality 2G coated conductors on large scales with a high production rate. 1,2 Reactive co-evaporation by cyclic deposition and reaction (RCE-CDR) is a very promising method for coated conductor production. Low oxygen partial pressure (pO2) is used for the deposition, followed by reaction in high pO2, in a cyclic manner. 3,4 However, this system is complex, incurs high labour or equipment depreciation costs, and is difficult to scale up. On the other hand, reactive co-evaporation by deposition and reaction (RCE-DR) is simpler and has significantly higher deposition rates, by around an order of magnitude (the conversion from an amorphous glassy phase to a superconducting phase occurs at high temperature and high oxygen pressure very quickly, within around 30 s). [5][6][7] The process involves moving through a liquid phase zone before conversion of these phases to the superconducting phase. It is possible to make ∼1 km length conductors in only 2-5 h 8,9 with critical currents in excess of 450 A cm−1 at 77 K.

For some applications of coated conductors it is necessary to optimize Jc at low fields (e.g., for fault current limiters), whereas for others (e.g., motors or magnet inserts) it is necessary to optimize Jc for higher fields. In the RCE-DR process, the liquid fraction and the excess rare-earth pinning fraction depend critically on the precursor composition, and it is important to determine how the precursor composition influences the Jc(H) behaviour. We address this question in this study.

Figure 1(a) shows a schematic image of the batch-type RCE-DR system, and the crucial parameters of the process are shown in Table I.

FIG. 1. (a) Schematic of the Reactive Co-Evaporation-Deposition and Reaction (RCE-DR) method employed by SUNAM for making long length GdBCO conductors.
(b) The growth path followed during the reactive co-evaporation deposition and reaction (RCE-DR) method employed by SUNAM, superimposed on the GdBCO phase stability line. The position of the line is not well documented and hence is only schematic here. Note that this plot is applicable to films of GdBCO and hence the decomposition phases are different to the bulk; e.g., Gd2BaCuO5 is not apparent when GdBCO decomposes at either high T or low pO2 or both. Instead, Gd2O3 is formed owing to epitaxial stabilization of this phase.

The precursor films were deposited in the deposition chamber on a ∼50 nm LaMnO3 (LMO) buffered IBAD-MgO Hastelloy metal tape 10 at a deposition rate of 0.1 μm/min with multi-pass deposition and a translation rate of 120 m/h, at a very low oxygen partial pressure of around 10−5 Torr. 9,11 Each metallic component evaporation rate was controlled by a quartz crystal monitor and a feedback program. Each material was put in a crucible and deposited with a differentially pumped Pierce-type e-gun (30 kW). 10 m length conductors were fabricated in around 5 min, and pieces were cut from this length for the various measurements.

Figure 1(b) shows a schematic phase stability diagram and the path followed for the conversion process through the GdBa2Cu3O7−y (GdBCO) phase stability diagram. 12 The diagram is schematic because the conversion process is not undertaken at equilibrium. We also note that the GdBCO line position is not identical to the YBa2Cu3O7−y (YBCO) line because different rare earths shift the stability lines quite considerably. 13 After deposition of the precursor, the tape is moved from the very low pO2 zone (Region 1, ∼10−2 mTorr O2), where an amorphous phase (with possibly some nanocrystalline regions) is present, to the low pO2 zone (Region 2, ∼30 mTorr), where nanocrystalline Gd2O3 forms. While bulk equilibrium thermodynamics predicts Gd2BaCuO5 to be stable in Region 2, Gd2O3 is stable instead owing to reduced kinetics and/or epitaxial driving forces. 14 Finally, in the higher pO2 zone (Region 3, ∼100 mTorr), GdBCO forms very rapidly (<60 s).

Up to now, a typical Gd:Ba:Cu ratio of ∼1:1:2.5 has been studied. 9,15 However, here, to give a higher liquid fraction during processing, we explore three new Cu-rich compositions (A, B, C), as shown in Table II and in Figure 2. The position of the liquid region in Figure 2 is schematic. The region is not well documented for low pO2's. We have estimated the position of the liquid region based on the work of Wong-Ng and Cook. 16 A summary of the compositions of the three samples studied is as follows:
• A is the most Gd-rich and (Ba+Cu)-poor. It is therefore furthest away from the liquid region of Fig. 2. It is the most Gd2O3-rich and liquid-poor composition.
• B is the intermediate composition, where both moderate Gd2O3 and liquid are present.
• C is the most Gd-poor and hence most (Ba+Cu)-rich. It is therefore the most Gd2O3-poor and liquid-rich composition.
All compositions lie on quasi-equilibrium "L+Gd2O3" tie lines (to avoid confusion, these lines are not shown in Fig. 2). Hence, for all three compositions, only 1/2Gd2O3 and L should be present at stage 2 of the process (before conversion to GdBCO), as shown in Figs. 1(a) and 1(b). During the conversion to GdBCO, i.e., moving to Region 3 of Fig. 1(b), the GdBCO crystallizes and the liquid moves to the surface of the film. 17
After the conversion is complete, "GdBCO (Gd123) + 1/2Gd2O3 + solidified liquid" should be present in proportions roughly determined from the phase diagram of Fig. 2. The "solidified liquid" at the film surface is in the form of BaCuO2 + Cu2O. 12

The microstructural and superconducting properties of a number of ∼1 cm sections of tape cut from each length were studied. There was a high reproducibility of all measured properties along the tape length. X-ray diffraction (XRD) (θ-2θ scans and φ scans) was undertaken to learn about the crystallinity and misorientation of the grains. Low resolution and high resolution cross-sectional transmission electron microscopy (TEM) were undertaken to image the microstructural features in the tapes. The transport critical current density was measured at 77 K using a conventional four-point probe method with a 1 μV cm−1 criterion on 50 μm wide bridges. Using a photolithographic technique, resist pads were transferred onto the electrical contact area of the sample, and a solution of H2O2, NH3, and ethanol was used to remove the unprotected silver capping layer from the sample and expose the GdBCO layer. Similarly, resist track patterns were transferred onto the GdBCO surface, and the unprotected GdBCO layer was removed using an ion milling process and wet etching. The transition temperature (Tc), the critical current density field dependence (Jc(B)), and the angular dependence (Jc(θ)) in the maximum Lorentz force configuration, obtained by rotating the applied magnetic field in a plane perpendicular to the current direction, were all measured.

X-ray diffraction of the tapes showed all the expected (00l) peaks for the different materials present in the metallic tape, buffer, and film (see Figure S1(a) in the supplementary material 24). (00l) Gd2O3 was present owing to its formation in Region 2 of the growth diagram (Figure 1(b)), and (111) Cu2O, (600) BaCuO2, and (211) BaCu2O2 were present because they form when the remnant liquid phase solidifies. Figures S1(b) and S1(c) in the supplementary material 24 show the rocking curves and φ scans of the different samples, and the Δφ and Δω values extracted from these plots are included in Table II. Summarizing the relationship between the microstructure and superconducting properties of the different samples, we find:
• Sample A has the highest in-plane grain misorientation (Δφ = 4.92°) and the lowest Jc of 1.54 MA cm−2. This is expected since grain orientation and self-field Jc are closely connected. 18 The large in-plane misorientation likely results from the lower amount of liquid present in the sample, which means less kinetic assistance to align the GdBCO grains with respect to one another during the rapid conversion process.

To understand the pinning in the different samples, we now turn to the in-field Jc performance. Figure 3 shows representative data from 1 cm sections of each sample composition. As noted already, within each composition batch there was very little difference between the sections measured. The first observation from Fig. 3 is that sample B shows a very similar form of in-field performance to sample A, both with only a gradual drop-off of Jc at higher fields. However, the overall levels of Jc(H) are different, with B being shifted up compared to A. The result indicates that the difference in liquid fraction (more in B than A, see Table II) is more critical to the self-field Jc than the difference in Gd2O3 fraction (less in B than A) is to the c-axis pinning.
The second observation is that there is a worse field dependence of Jc for sample C compared to samples A and B. This is consistent with there being significantly less Gd (and hence less Gd2O3 pinning phase) and more liquid in C. Less Gd2O3 and more liquid will mean a greater possibility of ripening of the lower Gd2O3 fraction during the conversion process. We discuss this point further below, when we analyse the TEM data. The key point that comes out of Fig. 3(a) is that precise tuning of the Gd to liquid fraction ratio is very important to the conductor performance. On the other hand, for the RCE-DR process, this result is perhaps not surprising considering the very rapid speed of phase formation in the conversion process, and it indicates that precise control of the precursor composition is very important.

Looking now in more detail into the pinning of the three samples using pinning force plots (inset to Fig. 3(a)), we find that A has a low, very broad peak with maximum pinning force Fp,max at ∼0.8 T; B has a much larger and broader peak with Fp,max at ∼1.8 T; and C has a small peak with Fp,max at ∼0.6 T. The difference in the curves clearly shows the very different pinning landscapes resulting from the compositional variation, with the most effective pinning coming from sample B, of intermediate Gd:liquid ratio.

We then focused on understanding the reasons for the different performances of the samples, comparing the sample with the best in-field performance (sample B) to the one with the best self-field performance (sample C). Field angular measurements of Jc (Figs. 3(b) and 3(c)) and TEM (Fig. 4) analyses were undertaken. Looking first at the angular data (Fig. 3(b)), we find that at any given field B has higher Jc values over all angles than C (sample C was measured only up to 3 T, since above this field Jc dropped off rapidly). For sample B, between the θ = 90° (ab) peaks, a very broad, flat region of Jc is observed, indicative of a smeared c-axis peak. This flat region is much less obvious for sample C. To extract the c-axis component of Jc from the angular data, the curves were fitted according to Blatter's anisotropic scaling approach. 19 According to this model, the Jc of samples with uncorrelated point-like disorder depends only on a single variable, Ĥ = H(cos²θ + γ⁻² sin²θ)^(1/2), where γ ∼ 5-7 is the electronic mass anisotropy. Thus, the measured Jc(θ, H) curves collapse together in the (θ, H) regions where only random pinning is present when plotted as a function of Ĥ. In this case γ is a fitting parameter constrained to the values 5-7. The collapsed curve Jr(Ĥ) represents the random pinning contribution to Jc(θ, H) and can be mapped back to the experimental data. The difference, ΔJ, between Jc and Jr is the contribution of correlated pinning to Jc, which is given by the combined effect of random and correlated pinning (a small numerical sketch of this scaling collapse is given below). Figure 3(c) shows ΔJ versus field for both samples. The values are much larger for B than for C, and the data for B also extend to higher fields (6 T compared to ∼3 T). Hence, for sample B strong c-axis correlated pinning is observed at all fields. The width of the c-axis peak for B (not shown) was large, approximately 150°, indicative of splayed, low coherence nanoparticles along the c-axis. This is similar to what is observed in RE2O3-excess Metal Organic Deposition (MOD) conductors (Ref. 20). The difference, though, is that the conversion process in the RCE-DR route is much faster than in the MOD process.
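The following is an illustrative numerical sketch of the anisotropic scaling collapse referred to above. It is not the authors' analysis code: the Jr curve and field values are toy inputs, and only the form of the effective field follows the model.

import numpy as np

# Blatter anisotropic scaling: data are re-plotted against the effective field
# H_eff = H * sqrt(cos^2(theta) + sin^2(theta) / gamma^2), with theta measured
# from the c axis and gamma (mass anisotropy) constrained to ~5-7.
def effective_field(H, theta_deg, gamma=6.0):
    theta = np.radians(theta_deg)
    return H * np.sqrt(np.cos(theta) ** 2 + (np.sin(theta) / gamma) ** 2)

H = np.linspace(0.1, 6.0, 60)            # applied field (T)
for theta in (0.0, 30.0, 60.0):          # field angles relative to the c axis (deg)
    h_eff = effective_field(H, theta)
    jr = 1.0 / (1.0 + h_eff)             # toy random-pinning curve Jr(H_eff), MA/cm^2
    # Measured Jc(theta, H) points that collapse onto a single Jr(H_eff) curve are
    # attributed to random pinning; the excess dJ = Jc - Jr is the correlated term.
    print(f"theta={theta:>4}: Jr spans {jr.min():.2f}-{jr.max():.2f} MA/cm^2")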
Also, strong c-axis pinning is not common for growth processes which involve a large excess liquid fraction. Growth in the presence of excess liquid normally leads to low pinning (Refs. 21 and 22) because of high crystalline perfection. On the other hand, in the RCE-DR process the growth is very fast, and so defects are not healed out by the liquid. There are similarities, both in the nature of the process and in the properties obtained, to the hybrid liquid phase epitaxy process for making coated conductors (Ref. 23).

An understanding of the different pinning exhibited by B and C is gained from cross-sectional TEM images (Fig. 4). For sample B (Fig. 4(a)), we observe low coherence regions of rather equiaxed Gd2O3 nanoparticles with alignment broadly along the c-axis (examples are indicated by two sets of arrows) and with spacings between ∼50 nm and 100 nm, corresponding to a matching field of ∼1-2 T. Wide possible tilt angles were observed, which mimic a broad splay of defect correlations. The TEM data are consistent with the various Jc data of Figs. 3(a)-3(c). Nanoparticles of Gd2O3 are also present in sample C (Figure 4(b)), but they show no clear tilt alignment along c and are therefore considered to act mainly as random defects. This explains both the lower c-axis peak in the angular data (Figures 3(b) and 3(c)) and the poorer field dependence (Figure 3(a)) observed for sample C. The nanoparticles are also slightly larger and appear more fused together than for sample B, consistent with the presence of a larger liquid phase fraction in C (Table II). Although it was not possible to accurately assess from TEM or XRD the relative fractions of Gd2O3 between samples B and C, the higher out-of-plane grain tilting (Δω) of B compared to C (Δω = 2.93° and 2.25°, respectively, Table II) is consistent both with more Gd2O3 nanoparticles, which disturb the growth of the GdBCO grains, and with less liquid to assist the alignment of the GdBCO grains during growth. It should be noted, though, that for both samples B and C, despite the very fast growth process, from TEM the Gd2O3 nanoparticles were observed to grow heteroepitaxially within the GdBCO matrix without any major disturbance of the surrounding GdBCO lattice. A high-resolution cross-sectional TEM image for sample C is shown in Figure S2(a) of the supplementary material. 24

Finally, it is important to note that the interface of GdBCO with IBAD MgO is very clean (see Figure S2(b) of the supplementary material 24), indicating no destruction of the buffer by the presence of the large amount of liquid. This is very important for production of long length conductors, where no weak spots caused by corrosion of the buffer or metal by the liquid can be tolerated anywhere along the length. In normal circumstances of growth in the presence of a large amount of liquid, the buffer is attacked by the liquid, 17 but the special feature of the rapidity of the RCE-DR conversion process means this is not the case.

The superconducting and microstructural properties of GdBa2Cu3O7 coated conductors fabricated by an ultrafast RCE-DR process were studied. Three new precursor compositions were explored, which were Ba-poor and Cu-rich compared to the 1:2:3 ratio, and which were more Cu-rich than in previous studies. The conductor performance was found to be strongly dependent on the precise composition. The best conductor in terms of in-field performance had an intermediate Gd:Ba:Cu ratio (1.0:1.25:3.42).
The conductor contained heteroepitaxial Gd2O3 nanoparticles which had a broad splay of defect correlations with wide possible tilt angles. This gave rise to strong and broad correlated pinning around the c-axis of the GdBCO, with Jc (77 K, 1 T) of over 1 MA cm−2 and Jc (77 K, 5 T) of over 0.1 MA cm−2. Creation of this broadly splayed, low coherence arrangement of nanoparticles in very rapidly grown conductors is a unique feature of the RCE-DR process and is very promising for achieving high-performance conductors at low cost.
4,438
2014-08-22T00:00:00.000
[ "Physics" ]
SOX2 Expression and Transcriptional Activity Identifies a Subpopulation of Cancer Stem Cells in Sarcoma with Prognostic Implications. Stemness in sarcomas is coordinated by the expression of pluripotency factors, like SOX2, in cancer stem cells (CSCs). The role of SOX2 in tumor initiation and progression has been well characterized in osteosarcoma. However, the pro-tumorigenic features of SOX2 have been scarcely investigated in other sarcoma subtypes. Here, we show that SOX2 depletion dramatically reduced the ability of undifferentiated pleomorphic sarcoma (UPS) cells to form tumorspheres and to initiate tumor growth. Conversely, SOX2 overexpression resulted in increased in vivo tumorigenicity. Moreover, using a reporter system (SORE6), which allows monitoring of viable cells expressing SOX2 and/or OCT4, we found that SORE6+ cells were significantly more tumorigenic than the SORE6- subpopulation. In agreement with these findings, SOX2 expression in sarcoma patients was associated with tumor grade, differentiation, invasive potential and lower patient survival. Finally, we studied the effect of a panel of anti-tumor drugs on the SORE6+ cells of the UPS model and of patient-derived chondrosarcoma lines. We found that the mithramycin analogue EC-8042 was the most efficient in reducing SORE6+ cells in vitro and in vivo. Overall, this study demonstrates that SOX2 is a pro-tumorigenic factor with prognostic potential in sarcoma. Moreover, SORE6 transcriptional activity is a bona fide CSC marker in sarcoma and constitutes an excellent biomarker for evaluating the efficacy of anti-tumor treatments on CSC subpopulations.

Introduction
Similar to normal tissues, the cancer stem cell model proposes that tumors are hierarchically organized and that at the apex of this structure there are cells presenting stem cell-like properties (cancer stem cells, CSCs), able to self-renew and to differentiate and give rise to the rest of the subpopulations present in the tumor [1]. Bona fide CSCs are those subpopulations within the tumor with the capacity to re-initiate tumor growth. In addition, they have an enhanced ability to migrate and invade tissues and show increased resistance to chemotherapeutic drugs. A common characteristic of CSC subpopulations is the overexpression of transcription factors responsible for the maintenance of the stem cell phenotype in embryonic and adult stem cells, like SOX2 (Sex-determining region Y-box protein 2) or OCT4 (POU5F1, POU Class 5 Homeobox 1) [2]. Subpopulations expressing these pluripotency factors have been correlated with tumor progression, drug resistance and the presence of hierarchically organized CSCs in several types of tumors [3][4][5][6][7][8]. In sarcomas, SOX2 has been found overexpressed in CSCs from different subtypes [9][10][11][12][13][14][15][16][17][18] and was described to play specific pro-tumorigenic roles in osteosarcoma [19][20][21]. In addition, OCT4 expression was also associated with CSC subpopulations in Ewing sarcoma and osteosarcoma [22,23]. To unequivocally confirm a given factor as a marker for CSCs, the subpopulation of tumor cells expressing it and/or presenting its associated activity must be isolated and shown to have a higher tumorigenicity in vivo than other subpopulations. Since pluripotency factors are intracellular molecules, the isolation of viable cells expressing these factors cannot be directly achieved using antibody-based flow cytometry.
As an alternative method, the use of reporter systems in which the expression of a fluorescent protein is driven by the SOX2 and/or OCT4 promoter, or by SOX2/OCT4 response elements, has proven the tumor-propagating potential of cells expressing pluripotency factors in several tumor models [4,8,22,[24][25][26][27][28][29][30][31]. Notably, this strategy allows the real-time tracking of CSCs and the study of their response to anti-tumor treatments or to changes in the tumor microenvironment. In a previous work, we developed cell-of-origin models of sarcoma based on the tumorigenic transformation of human mesenchymal stem/stromal cells (MSCs) using relevant oncogenic events [32][33][34]. We found that self-renewed tumorspheres formed by these cells showed increased expression of several CSC-related genes, including SOX2. Furthermore, by comparing the tumorigenic properties of these models with those of their xenograft-derived cell lines, we found that SOX2 expression was progressively enhanced in CSC-enriched tumorspheres during sarcoma progression toward more aggressive phenotypes, hence highlighting its potential applicability as a CSC marker in sarcomas [35]. To further and significantly extend these data, herein we introduced a reporter system to monitor the transcriptional activity due to SOX2 and/or OCT4 (SORE6) [29] into a model of undifferentiated pleomorphic sarcoma (UPS) and into chondrosarcoma patient-derived cell lines, thus analyzing for the first time the ability of isolated SOX2/OCT4-positive cells to act as tumor-promoting CSCs in sarcoma. The results of this approach, together with those obtained by SOX2 knockdown and overexpression, indicate that SOX2 expression/activity is a bona fide CSC marker in sarcoma. In addition, this reporter system constitutes an excellent approach for testing the effectiveness and mode of action of anti-tumor drugs on CSC subpopulations.

SOX2 Expression in Sarcoma Tissue Specimens is Associated with Poor Prognosis and Survival
We aimed to investigate whether the expression of pluripotency factors such as SOX2 and OCT4 in sarcoma patients is clinically relevant. SOX2 and OCT4 expression was analyzed by immunohistochemistry in a collection of tissue microarrays, including samples from 10 types of sarcomas. Nuclear SOX2 expression was detected in 25 (28.4%) sarcoma samples (Figure 1A,B). On the other hand, nuclear expression of OCT4 was only detected in 10 cases (11%), and all of them displayed weak staining (Figure S1A,B). We did not find any significant association between OCT4 expression and clinical parameters. However, a strong correlation between SOX2 and OCT4 expression was observed: all OCT4-positive cases were also positive for SOX2 expression (Figure S1B). In summary, we found that SOX2, but not OCT4, correlated with advanced tumor stages, aggressive phenotypes and poor prognosis in sarcoma patients. According to these data, SOX2, rather than OCT4, might primarily play an active role in the initiation and progression of sarcomas.

SOX2 Is Required to Maintain the Tumorigenic Potential in Sarcoma Cells
To study the possible pro-tumorigenic role of SOX2 in sarcoma, we performed knockdown experiments in T-5H-O cells, a previously described cell-of-origin model of UPS [32][33][34]. First, we transduced T-5H-O cells with lentiviral particles carrying a doxycycline-inducible SOX2 shRNA and selected three clones (T-5H-O-Tet-shSOX2#1, #3 and #8) that showed efficient depletion of SOX2 expression upon doxycycline treatment (Figure 2A,B).
In agreement with the reciprocal regulation of these pluripotency factors [2], SOX2-depleted cells also displayed reduced expression of OCT4 (Figure S2). Consistent with the role of SOX2 in stemness, its depletion in all the clones significantly decreased tumorsphere formation (Figure 2C,D). More importantly, doxycycline treatment of mice inoculated with doxycycline-pretreated T-5H-O-Tet-shSOX2#8 cells, but not with parental T-5H-O cells, was sufficient to prevent in vivo tumor growth (Figure 2E). In line with these results, we found a significant reduction in both the ability to form colonies in soft agar, a surrogate in vitro transformation assay, and the capacity to grow as tumorspheres upon depletion of SOX2 expression in T-5H-O cells using another, non-conditional, shRNA (Figure S3A-E) or a siRNA (Figure S3F-J).

To further confirm the SOX2-driven tumorigenic properties of sarcoma cells, we stably overexpressed SOX2 in T-5H-O cells using lentiviral particles for the expression of SOX2 cDNA (Figure 3A). SOX2 overexpression did not show any impact on the ability to form colonies in soft agar (Figure 3B,C) or on the capacity to grow as tumorspheres (Figure 3D,E). Nevertheless, cells overexpressing SOX2 were more tumorigenic and grew tumors in immunodeficient mice significantly faster than control cells (Figure 3F,G). Therefore, basal levels of SOX2 seem to be sufficient to efficiently promote clonal growth in vitro; however, certain microenvironmental conditions present in the in vivo experiments might promote a long-term tumorigenic potential in those cells with higher expression of SOX2. Taken together, the data from the depletion and overexpression experiments suggest that SOX2 expression plays an active role in the initiation and progression of sarcomas, thereby emerging as a biologically and clinically relevant feature.

SOX2 Activity Marks a Subpopulation of CSCs in Sarcoma
To assess whether cells expressing pluripotency factors like SOX2 behave as a CSC subpopulation with increased tumor-promoting ability, we made use of a lentiviral-based reporter system in which a composite SOX2/OCT4 response element (SORE6) coupled to a minimal cytomegalovirus (CMV) promoter controls the expression of a GFP fluorescent reporter gene. The inclusion of proteasome-targeting degron sequences in the reporter genes resulted in greater selectivity and temporal resolution [29]. This system allowed us to detect, monitor and isolate viable cells expressing transcriptionally active SOX2 and/or OCT4 by flow cytometry (Figure 4A) or live cell time-lapse microscopy (Figure 4B). Thus, we used these lentiviral constructs to transduce T-5H-O cells, a patient-derived chondrosarcoma primary cell line, CDS-17, and a cell line derived from a xenograft generated by CDS-17 cells, T-CDS-17, in order to generate lines with stable expression of the SORE6 construct (T-5H-O-SORE6-GFP, CDS-17-SORE6-GFP and T-CDS-17-SORE6-GFP) or its corresponding control without the SORE6 response element (T-5H-O-minCMV-GFP, CDS-17-minCMV-GFP and T-CDS-17-minCMV-GFP), which were used as gating controls in the flow cytometry analyses. First, we found that the T-5H-O-SORE6-GFP, CDS-17-SORE6-GFP and T-CDS-17-SORE6-GFP cells displayed percentages of SORE6+ cells ranging between 20 and 40% (Figure 4A,B). Therefore, we used SORE6 activity to isolate SORE6+ and SORE6- subpopulations by flow cytometry in the three cell lines (Figure S4A-C).
As expected, SORE6+ T-5H-O cells showed a significantly higher expression of SOX2 than the SORE6- subpopulation (Figure S4D,E). We also found that SORE6+ cells showed a much higher ability to form tumorspheres than SORE6- cells, both in T-5H-O UPS cells (Figure 4C,D) and in CDS-17 and T-CDS-17 chondrosarcoma cells (Figure S5). To study whether the SORE6+ subpopulation was enriched in tumor-promoting cells, we inoculated 1 × 10^4 cells of both the T-5H-O SORE6- and SORE6+ subpopulations into immunodeficient mice and measured tumor formation over time. We observed tumor growth in the SORE6+ series as early as day 6 post-inoculation. On the other hand, SORE6- cells did not generate measurable tumor growth until day 15 after the inoculation and showed statistically significant differences in tumor volume with the SORE6+ series (Figure 4E). At the end-point, tumor weights confirmed that SORE6+ cells generated significantly larger tumors than those obtained from SORE6- cells (Figure 4F). To confirm and quantify the enrichment of the SORE6+ subpopulation in CSCs, we performed LDA comparing the ability of SORE6- and SORE6+ cells to initiate tumor growth in vivo. We found that SORE6+ cells produced tumors in all cases after the inoculation of 5000 or 1000 cells and in 2 out of 5 cases (2/5) after the inoculation of 100 cells. On the other hand, SORE6- cells generated 5/5, 2/5 and 1/5 tumors after the injection of 5000, 1000 or 100 cells, respectively (Figure 4G). Therefore, the tumor-initiation frequency (TIF) calculated using the ELDA software was 7-fold higher in SORE6+ cells (1 tumor-initiating cell out of 185) compared to SORE6- cells (1 out of 1273) (Figure 4H). These experiments suggest that high SOX2/OCT4 transcriptional activity, measured by SORE6 activity, could be used as a surrogate marker for CSCs in sarcomas.

SORE6 Response Element is a Valuable Tool to Monitor CSC Response to Anti-tumor Treatments
Given its role as a CSC marker, we aimed to test whether SORE6 activity could be useful to evaluate the effectiveness of anti-tumor drugs in targeting CSC subpopulations. Therefore, we studied the impact on the SORE6+ subpopulation of drugs used in the treatment of sarcomas, like doxorubicin, trabectedin and paclitaxel, as well as the mithramycin analog EC-8042, which has proven highly efficacious in targeting CSCs in sarcoma [36]. The IC50 values of these drugs in T-5H-O-SORE6-GFP cells were 307, 0.66, 7, and 288 nM for doxorubicin, trabectedin, paclitaxel and EC-8042, respectively (Figure S6). According to these values, we evaluated SORE6 activity in T-5H-O cells in dose-response experiments using concentrations of each drug that induced low, medium (≈IC60), and high toxicity after 48 h of treatment. In these experiments, EC-8042 was the most efficient drug in reducing the SORE6+ subpopulation, being able to induce a 75% decrease of SORE6+ cells at a concentration in the order of its IC60. On the other hand, the rest of the drugs only induced a clear regression of the SORE6+ subpopulation when the highest concentrations were used (Figure 5). In addition, time course analysis after treatment with concentrations in the order of the IC60 values also confirmed the higher potential of EC-8042 to eradicate SORE6+ cells in comparison with doxorubicin, trabectedin and paclitaxel (Figure 6A-C). This strong ability of EC-8042 to target SORE6+ cells was also evident in dose-response and time-course analyses performed in CDS-17-SORE6-GFP and T-CDS-17-SORE6-GFP cells (Figure S7).
To better characterize the mechanism associated with the differential ability of these drugs to decrease SORE6+ cells, we simultaneously analyzed SORE6 activity and caspase-3 activation by flow cytometry in T-5H-O-SORE6-GFP cells treated with trabectedin or EC-8042. In these analyses, we found that trabectedin was an efficient inducer of apoptosis in both SORE6+ and SORE6- cells. In addition, we found that EC-8042 treatment sharply reduced the percentage of SORE6+ cells even before the apoptotic effect became evident (Figure 6D,E). These results suggest that both drugs were able to target CSC subpopulations through different mechanisms. On one hand, trabectedin eliminated SORE6+ cells through the induction of apoptosis; on the other hand, EC-8042 would be able to switch off SORE6-related transcriptional activity, thus possibly affecting CSC-associated properties, prior to the induction of apoptosis.

Next, we treated mice bearing T-5H-O-SORE6-GFP tumors with the different drugs, using previously established treatment regimens [36][37][38], to evaluate their effect on SORE6+ cells in vivo. With the exception of paclitaxel, all drugs were able to significantly reduce tumor growth, with EC-8042 being the most efficient treatment (Figure 7A,B). At the experimental end-point, SORE6 activity was analyzed by flow cytometry in dissociated tumor cells. We found that EC-8042 was the only drug able to reduce the percentage and the fluorescence intensity of SORE6+ cells (Figure 7C-E). On the other hand, trabectedin or doxorubicin treatments produced a slight increase in the percentage of SORE6+ cells and in fluorescence intensity, resulting in significant differences with the levels detected in EC-8042-treated tumors (Figure 7C-E). Altogether, these results prove the usefulness of analyzing SORE6 activity to evaluate the ability of anti-tumor drugs to target CSCs in sarcoma, both in vitro and in vivo. Representative flow cytometry dot plots (C) and summary graphs representing the percentage of SORE6+ cells (mean ± standard deviation) (D) and the mean SORE6-GFP fluorescence intensity (mean ± standard deviation) (E) are shown. Asterisks indicate statistically significant differences between series (*: p < 0.05, **: p < 0.005, ***: p < 0.0005; one-way ANOVA).

Discussion
Similar to hematological malignancies and other solid tumors, intra-tumor heterogeneity in sarcomas may be explained, at least in part, by the emergence of subpopulations of CSCs which guide tumor growth and dissemination. The stemness state in sarcomas is orchestrated by the expression of pluripotency factors such as OCT3/4, NANOG, KLF4, and SOX2 [12,13,16,18]. Among them, SOX2 has been shown to be a common CSC-related factor in different types of sarcoma [18,39]. The pro-tumorigenic role of SOX2 has been particularly well described in osteosarcoma models. Knockdown of this factor in osteosarcoma cell lines, or in the osteoblastic lineage of an osteosarcoma mouse model, resulted in the loss of proliferative potential in vitro and a drastic reduction of tumor formation in vivo [19][20][21]. Besides osteosarcoma, clues for a pro-stemness and/or pro-tumorigenic role of SOX2 have also been reported in Ewing sarcoma [40,41] and embryonal rhabdomyosarcoma [42]. In addition, the level of SOX2 expression in a panel of primary sarcoma cell lines has been positively correlated with the ability to grow tumors in immunodeficient mice [17].
In line with these previous works, our knockdown experiments further expand the findings regarding the key role of SOX2 in sustaining tumorigenicity to a model of UPS. In addition, SOX2 overexpression, explored here for the first time in sarcoma, also supports the prominent role of SOX2 in sarcomagenesis. We found that SOX2 is expressed in 28% of an array of 88 sarcoma patients, with UPS, synovial sarcomas and Ewing sarcomas being the subtypes presenting the highest percentages of positive cases, in concordance with the results of previous reports [43,44]. In our series, SOX2 expression significantly correlated with tumor grade, poor differentiation, invasive potential and poor patient survival. Similar results have recently been reported for Ewing sarcoma [43], thus reinforcing the key role of SOX2 in sarcoma development and disease progression. The clinical significance of OCT4 expression has been barely addressed in sarcomas. In our series of patient samples, weak expression of OCT4 was only detected in a small subset of sarcoma samples (11%), with synovial sarcoma being the subtype with the highest percentage of positive samples (56%). Even though OCT4 expression was not significantly correlated with any clinicopathologic parameter and showed no impact on patient survival in our cohort of sarcoma patients, we cannot rule out that the analysis of larger series of patients could unravel a clinically relevant role for OCT4 in specific sarcoma subtypes.

Given the relevant role of SOX2 in tumorigenicity, here we used the SORE6 system [29] to study whether those subpopulations showing SOX2/OCT4 transcriptional activity behave as bona fide CSCs with higher tumor-initiating potential than other subpopulations. SOX2-based reporter systems were previously used to demonstrate the CSC phenotype of SOX2-expressing subpopulations in glioma, breast, prostate, bladder or head and neck cancers [4,8,24,[26][27][28][29][30][31], although this strategy remained unexplored in sarcomas. In addition, a plasmid containing the human OCT4 promoter driving the expression of GFP was used to show that OCT4-expressing osteosarcoma cells were much more tumorigenic than OCT4-negative cells [22]. In line with these works, we found that SORE6+ UPS cells displayed a greater potential than SORE6- cells to form tumorspheres in vitro and to develop tumors in vivo, thus confirming their CSC phenotype. In addition, we also detected 20% of SORE6+ cells in a low-passage patient-derived chondrosarcoma cell line (CDS-17). Interestingly, this percentage increased to 40% upon growth of CDS-17 cells in immunodeficient mice (the xenograft-derived T-CDS-17 line). Considering that T-CDS-17 displayed increased aggressiveness (higher invasion and tumor formation ability) compared to CDS-17 cells [45], the increase in SORE6+ cells could reflect an increase of the CSC burden during tumor progression and adaptation to new microenvironments. Similar findings regarding the gain of aggressiveness upon in vivo tumor growth, associated with an increase of CSC markers such as ALDH activity or OCT4 expression, have also been described in different types of sarcoma [22,35,45,46]. These findings support the idea that serial transplantation could represent an efficient way of enriching/selecting CSC subpopulations [18].
In previous studies, we have reported that drugs already approved for sarcoma treatment, such as trabectedin, and experimental compounds, such as the mithramycin analog EC-8042, were able to target CSC subpopulations (tumorsphere cultures and/or Aldefluor-positive cells) in sarcomas with a higher efficacy than doxorubicin [36,38]. Here we used the SORE6 system to analyze, in both in vitro and in vivo models, the effectiveness of these drugs and of other chemotherapeutics used to treat sarcomas, such as doxorubicin and paclitaxel, in targeting CSCs. EC-8042 was the most efficient drug in targeting SORE6+ cells in vitro. Of note, the reduction of SORE6+ subpopulations after in vivo treatment with EC-8042 was not as efficient as that observed in vitro. We may speculate that this difference could be due to the pharmacokinetic behavior of the drugs in cell and animal models and/or the influence of factors from the tumor microenvironment. Nevertheless, EC-8042 was the only treatment able to reduce this subpopulation in vivo. After EC-8042 treatment, SORE6+ cells disappeared before apoptosis became evident, thus suggesting that EC-8042 was able to repress the expression of SOX2, as we previously observed in a related myxoid liposarcoma model [36]. In accordance with this, it was reported that mithramycin was able to reduce the in vivo proliferation of glioblastoma cells through the downregulation of SOX2 expression and its target genes [47]. Likewise, mithramycin was able to abrogate tumor growth in medulloblastoma by targeting SOX2-expressing CSCs [48]. Given that EC-8042 is 10-fold less toxic than mithramycin [49], it could represent a suitable therapeutic option to eliminate CSCs in sarcomas. Although trabectedin was not as selective as EC-8042 in eliminating SORE6+ cells, this drug proved to be an efficient apoptosis inducer in both SORE6- and SORE6+ subpopulations, in line with previous findings demonstrating its ability to eliminate tumorsphere-forming and Aldefluor-positive cells in the same sarcoma model [38]. Therefore, our work shows that different drugs may target CSCs in sarcoma through different mechanisms, and also that the SORE6 system is a valuable tool to dynamically evaluate the ability of anti-tumor drugs to target CSCs in sarcoma, as seen in other tumor types [29,30].

Cell Culture, Drugs and Ethics Statement
The UPS cell line T-5H-O and the chondrosarcoma cell lines CDS-17 and T-CDS-17 were previously characterized (Supplementary Information) [32][33][34][35][45]. Tumorsphere formation and soft agar colony formation assays were performed as previously described [35,36]. Cell suspensions were counted in a haemocytometer using trypan blue staining to discard non-viable cells, both for in vitro and in vivo experiments. The percentage of viable cells in all conditions was always higher than 95%. Trabectedin (PharmaMar, Madrid, Spain), paclitaxel (Selleckchem, Houston, TX, USA), doxorubicin (Sigma, St Louis, MO, USA) and EC-8042 (EntreChem, Oviedo, Spain) were prepared as described in the Supplementary Information. All experimental protocols were performed in accordance with institutional review board guidelines and were approved by the Institutional Ethics Committee of the Principado de Asturias (ref. 45/16). All samples and data of human origin were provided by the Principado de Asturias BioBank (PT17/0015/0023) after obtaining signed informed consent.
Flow Cytometry and Cell Sorting
The level of SORE6-driven GFP fluorescence in untreated cultures or after the different drug treatments was analyzed, and/or SORE6+ and SORE6- subpopulations were sorted, by flow cytometry using a BD FACSAria II cell sorter (BD Biosciences, Erembodegem, Belgium). Cells transduced with the minCMVp-GFP lentivirus were used as matched SORE6-negative controls for gating purposes. In these analyses, dead cells were excluded by propidium iodide (0.5 µg/mL) staining (Figure S8). To analyze the induction of apoptosis in the SORE6+ and SORE6- subpopulations, unfixed cells were assayed for active caspase-3 immediately after treatment using the PE Active Caspase-3 Apoptosis Kit (BD Biosciences) according to the manufacturer's instructions, and the levels of GFP (SORE6) and PE (caspase-3) fluorescence were detected simultaneously by flow cytometry. SOX2 and OCT4 expression was detected by flow cytometry in 70% ethanol-fixed cells using an anti-SOX2 antibody (Thermo Fisher, Waltham, MA; PA1-094; 1:1000 dilution) or an anti-OCT4 antibody (Abcam, Cambridge, UK; ab19857; 1:1000 dilution).

Western Blotting
Whole cell protein extraction and Western blot analysis were performed as previously described [36]. The antibodies used are described in the Supplementary Information. Uncropped images of the Western blots are shown in Figure S9.

RT-qPCR Assays
The expression of SOX2 was assessed by qPCR as described in the Supplementary Information.

In Vivo Tumor Growth
Female NOD/SCID mice of 6-7 weeks of age (Janvier Labs, St Berthevin, France) were inoculated subcutaneously (s.c.) as described [36]. In experiments aimed at evaluating the effect of anti-tumor drugs, mice with tumor xenografts of approximately 300 mm³ were randomly assigned to receive the following intravenous treatments: vehicle (saline, every 7 days up to 3 doses), EC-8042 (18 mg/kg; every 3-4 days up to 5 doses), trabectedin (0.15 mg/kg; every 7 days up to 3 doses), doxorubicin (4 mg/kg; every 7 days up to 3 doses) or paclitaxel (20 mg/kg; every 7 days up to 3 doses). Treatment schedules were optimized according to the therapeutic window of the different drugs [36][37][38]. To analyze the effect of the conditional knockdown of SOX2, mice inoculated with cells expressing or not the Tet-pLKO-puro-SOX2 lentiviral vector received a daily intraperitoneal dose of doxycycline (50 mg/kg). Tumor size was measured with a caliper 2-3 times a week, and tumor volume was determined using the equation V = (π/6) × D × d², where D is the maximum diameter and d is the minimum diameter. The relative tumor volume (RTV) for every xenograft was calculated as RTV = Vt − V0, where Vt is the tumor volume at the day of measurement and V0 the tumor volume at the beginning of the treatment. The tumor volumes, or RTVs in the drug-treatment experiments, of all mice in each group were averaged to obtain the mean tumor volume for the corresponding group. Animals were sacrificed by CO2 asphyxiation and the tumors were weighed. To determine the effect of the drugs on SORE6 activity, tumors were dissociated into single cell suspensions using the MACS Tissue Dissociation Kit and the gentleMACS Dissociator system (Miltenyi Biotec, Bergisch Gladbach, Germany), and the SORE6-positive subpopulations were quantified by flow cytometry. In limiting dilution assays (LDA), animals were sacrificed 4 weeks after cell inoculation. In these experiments, the relative tumor-initiating frequency (TIF) was calculated using the ELDA software.
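The volume bookkeeping above lends itself to a short computational illustration. The following sketch implements the stated formulas only; the caliper readings are invented for demonstration.

import math

# Ellipsoid approximation V = (pi/6) * D * d^2, with D = max and d = min diameter (mm).
def tumor_volume(D, d):
    return (math.pi / 6.0) * D * d ** 2

# RTV = Vt - V0, as defined in the text.
def relative_tumor_volume(v_t, v_0):
    return v_t - v_0

v0 = tumor_volume(D=9.0, d=7.0)    # at the beginning of treatment
vt = tumor_volume(D=12.0, d=8.0)   # at the day of measurement
print(f"RTV = {relative_tumor_volume(vt, v0):.1f} mm^3")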
All experimental protocols were carried out in accordance with the institutional guidelines of the University of Oviedo and were approved by the Animal Research Ethical Committee of the University of Oviedo prior to the study (Ref. PROAE11/2014).

Patients and Immunohistochemical Analysis
Paraffin-embedded tissues from 90 patients with sarcoma who underwent resection of their tumors at the Hospital Universitario Central de Asturias (HUCA) were used in this study. Tumor grade was evaluated in H&E-stained preparations using the French Federation of Comprehensive Cancer Centers grading system (Supplementary Information). The tissue microarray was constructed as previously described [51]. Immunohistochemical analysis of SOX2 and OCT4 expression was performed as detailed in the Supplementary Information. The immunostaining was scored, blinded to clinical data, by two independent observers as negative or positive nuclear staining (>1% positive nuclei).

Statistical Analysis
For the in vitro experiments and the in vivo tumor growth experiments, statistical analysis was performed using GraphPad Prism software (GraphPad Software, Inc., La Jolla, CA, USA). All data are represented as the mean (±SD or SEM, as indicated) of at least three independent experiments unless otherwise stated. Student's t test was performed to determine the statistical significance between groups. Multiple comparisons of the data were performed using one-way ANOVA. For the immunohistochemical analysis, the experimental results distributed among the different clinical groups of tumors were tested for significance employing the χ² test (with Yates' correction, when appropriate). Survival curves were calculated using the Kaplan-Meier product limit estimate. Differences between survival times were analyzed by the log-rank method, and the hazard ratio was calculated by univariate Cox regression analysis. All statistical analyses were carried out with the software package SPSS 24 (SPSS, IBM Corp.). All tests were two-sided and p < 0.05 values were considered statistically significant.

Conclusions
Overall, our results indicate that SOX2 is a critical stemness factor able to increase the tumorigenic properties of sarcoma cells. Notably, SOX2 expression correlates with advanced-disease-related parameters in patients, therefore suggesting its possible usefulness as a prognostic marker in sarcoma. Moreover, the transcriptional activity of SOX2 and/or OCT4, measured using the SORE6 reporter, is a bona fide CSC marker in sarcoma and constitutes an excellent approach for testing the effectiveness of anti-tumor treatments in targeting CSCs. We thank the Biobanks Network for its collaboration. Finally, we would like to thank the Translational Immunology group of ISPA and especially Jose Ramon Vidal for their support with flow cytometry analysis.

Conflicts of Interest: The authors declare no conflict of interest.
6,239.8
2020-01-15T00:00:00.000
[ "Biology", "Medicine" ]
Gas Saturated Sandstone Reservoir Modeling Using Bayesian Stochastic This study has been done to map the distribution of gas saturated sandstone reservoir by using stochastic seismic inversion i Bonaparte basin. Bayesian stochastic inversion seismic method is an inversion method that utilizes the principle of geostatistics so th will get a better subsurface picture with high resolution. The stages in conducting this stochastic inversion techniqu analysis, (ii) well to seismic tie, (iii) picking horizon, (iv) picking fault, (v) fault modeling, (vi) pillar gridding, ( vi (viii) scale up well logs, (ix) trend modeling, (x) variogram anal statistical wavelets are used because they can produce good correlation values. Then, the stochastic seismic inversion result reservoir in the study area is a reservoir with tight sandstone lithology which has a low porosity value and a value of High acoustic impedance ranging from 30,000 to 40,000 ft /s*g/cc. Introduction The Bonaparte Basin is mostly located off the coast of the Arafura Sea and has an area of about 270,000 km2. This basin is known as one of the basins that produces hydrocarbons in Indonesia, especially hydrocarbons in the form of gases and condensates. The Bonaparte Basin is generally dominated by extensional fractures and very few fractures or compressional structures are found. (O'brien et al, 1993 According to Barber et al (2003), lithological characters based on biostratigraphic data indicate that the depositional pattern in the Bonaparte Basin Plover Formation is domina by the deposition of braided fluvial types in the south of the study area to the coastal environment which is influenced by waves (wave dominated shoreline) and in the wave dominated shoreline the northern part is formed in a shallow marine environment. The direction of deposition in the braided fluvial environment is relatively northwest-southeast. In the oil and gas exploration stage, the seismic method is one of the top choice geophysical methods that can provide better subsurface information by utilizing the seismic wave propagation properties. There is a technique commonly used in this seismic method, namely seismic inversion technique. Seismic inversion is a method that can describe and estimate the physical properties of subsurface in the form of acoustic impedance values by utilizing seismic data as input and well data as control. Well data here has detailed resolution on thin layer thickness. Meanwhile, seismic data is strongly influenced by bandwidth which for thin layer thicknesses under tunning thickness cannot be resolved properly so as to cause high ambiguity problems in conducting inversions. Therefore, to overcome these problems, an inversion technique with a geostatistical approach can be used which will result in high resolution inversion results. Journal of Geoscience, Engineering, Environment, and Technology This study has been done to map the distribution of gas saturated sandstone reservoir by using stochastic seismic inversion i basin. Bayesian stochastic inversion seismic method is an inversion method that utilizes the principle of geostatistics so th will get a better subsurface picture with high resolution. The stages in conducting this stochastic inversion techniqu analysis, (ii) well to seismic tie, (iii) picking horizon, (iv) picking fault, (v) fault modeling, (vi) pillar gridding, ( vi (viii) scale up well logs, (ix) trend modeling, (x) variogram analysis, (xi) stochastic seismic inversion (SSI). 
In the process of well to seismic tie, statistical wavelets are used because they can produce good correlation values. Then, the stochastic seismic inversion result s a reservoir with tight sandstone lithology which has a low porosity value and a value of High acoustic impedance seismic, geostatistic, stochastic inversion, Bonaparte basin The Bonaparte Basin is mostly located off the coast of the and has an area of about 270,000 km2. This basin is known as one of the basins that produces hydrocarbons in Indonesia, especially hydrocarbons in the form of gases and condensates. The Bonaparte Basin is generally dominated by ry few fractures or compressional O'brien et al, 1993). According to Barber et al (2003), lithological characters based on biostratigraphic data indicate that the depositional pattern in the Bonaparte Basin Plover Formation is dominated by the deposition of braided fluvial types in the south of the study area to the coastal environment which is influenced by waves (wave dominated shoreline) and in the wave dominated shoreline the northern part is formed in a shallow marine . The direction of deposition in the braided fluvial southeast. In the oil and gas exploration stage, the seismic method is one of the top choice geophysical methods that can provide izing the seismic wave propagation properties. There is a technique commonly used in this seismic method, namely seismic inversion technique. Seismic inversion is a method that can describe and estimate the physical properties of subsurface in the form of acoustic impedance values by utilizing seismic data as input and well data as control. Well data here has detailed resolution on thin layer thickness. Meanwhile, seismic data is strongly influenced by bandwidth which for thin layer thicknesses g thickness cannot be resolved properly so as to cause high ambiguity problems in conducting inversions. Therefore, to overcome these problems, an inversion technique with a geostatistical approach can be used which results. Bayesian stochastic inversion seismic method is an inversion method that uses a geostatistical algorithm to obtain property models that have detailed resolution such as well data. In this study a mapping of sandstone reservoirs saturated with plover formation gas using stochastic inversion seismic methods in the "X" field in the Bonaparte Methodology In this research 3D post-stack time migration seismic data which is equipped with inline in the east totaling 300 lines (1100 -1400) and xline in the north direction totaling 800 lines (1000 between the lines which is 18.75 m. Well data used in this study amounted to 4 wells named AR AR-4. There are also data markers, namely t and base reservoir and checkshot data on each well. Data processing conducted in this study consisted of qualitative and quantitative data processing for obtaining hydrocarbon prospect zones, log data sensitivity analysis, wavelet extraction, well seismic tie, picking horizon, picking fault, fault modeling, pillar gridding, time structure map and depth structure map, isopatch map creation, AI scale-up log and trend modeling so that later stochastic inversion results will be obtained. Stochastic seismic inversion is an inversion technique whose basic principle uses a random simulation algorithm and produces more than one acoustic impedance model that fills observational seismic data. More than one solution can overcome the problem of non-uniqueness and uncertainty in deterministic inversion, especially in the case of thin films. 
Another advantage is that this method does not depend on the bandwidth of the seismic data used, but on the block size used when simulating the impedance model, so the results of this stochastic inversion are less smooth than the deterministic inversion results.

The basic principle of this stochastic seismic inversion is the Bayesian principle. This principle uses the concept of probability, which is interpreted as "a measure of a state of knowledge". There is a prior probability model, which is then combined with a likelihood probability function so as to obtain a posterior probability model as output. This output model is the realization of the impedance model. The formulation of the Bayesian principle is as follows (H. Anders, 1998):

σ(m) = k L(m) ρ(m)

where σ(m) is the posterior probability model, L(m) is the likelihood function, ρ(m) is the prior probability model, and k is a normalization constant.

Bayes' theorem is the result of a combination of probability theory and conditional probability. Probability theory states the likelihood of an event occurring with real numbers from 0 to 1. Meanwhile, conditional probability is the probability of an event A occurring when it is known that event B has already occurred (Walpole, 2000).
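To make the Bayesian update above concrete, here is a minimal sketch that applies posterior = constant x likelihood x prior on a discretized acoustic impedance axis. It is an illustration rather than the paper's implementation: the Gaussian prior and likelihood parameters and the helper name `posterior_grid` are assumptions.

```python
import numpy as np

def posterior_grid(ai_axis, prior, likelihood):
    """Bayesian update on a discretized impedance axis:
    posterior = k * likelihood * prior, with k fixed by normalization."""
    dx = ai_axis[1] - ai_axis[0]
    post = likelihood * prior
    return post / (post.sum() * dx)

ai = np.linspace(10_000, 50_000, 2001)   # ft/s*g/cc, study-area range
dx = ai[1] - ai[0]

# Hypothetical Gaussian prior (e.g., from scaled-up logs and the trend model).
prior = np.exp(-0.5 * ((ai - 28_000) / 8_000) ** 2)
prior /= prior.sum() * dx

# Hypothetical Gaussian likelihood from the match to the observed seismic.
like = np.exp(-0.5 * ((ai - 34_000) / 5_000) ** 2)

post = posterior_grid(ai, prior, like)
print("posterior mean impedance: %.0f ft/s*g/cc" % ((ai * post).sum() * dx))
```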
Qualitative Analysis

In this study, four well data sets were used, namely the AR-1 well, AR-2 well, AR-3 well, and AR-4 well. Qualitative analysis is the first step in determining the hydrocarbon prospect zone. This qualitative analysis is performed by the quick look method, that is, by looking at the responses of the gamma ray log, resistivity log, neutron porosity log, and density log. The quick look method aims to interpret permeable and impermeable zones, lithological types, and hydrocarbon fluid content.

In the AR-1 well, shown in Figure 1, it can be seen that the depth of the reservoir layer containing hydrocarbons is 3955 ft to 4275 ft, marked with a dark blue box. In this layer, the gamma ray log curve shows a low response ranging from 10 gAPI to 60 gAPI, so the layer can be said to be a permeable layer with sandstone lithology. The resistivity log curve response (the LLD log) shows a very high response of more than 80 ohm-m, so the layer can be said to have a gas hydrocarbon fluid content. In addition, the separation of the density log and neutron porosity log also indicates that the layer is a hydrocarbon prospect zone.

Furthermore, the AR-2 well did not show any reservoir layers containing hydrocarbons. This is because in the AR-2 well there is no separation between the density log and the neutron porosity log, and the well is also dominated by a high gamma ray log response. In addition, the resistivity log curve shows no contrast of high resistivity values. The qualitative analysis of the AR-2 well can be seen in Figure 2.

In the AR-3 well, shown in Figure 3, it can be seen that the depth of the reservoir layer containing hydrocarbons is 4185 ft to 4492 ft, marked with a dark blue box. In this layer, the gamma ray log curve shows a low response ranging from 10 gAPI to 60 gAPI, so the layer can be said to be a permeable layer with sandstone lithology. The resistivity log curve response (the LLD log) shows a high response of more than 80 ohm-m, so the layer can be said to have a gas hydrocarbon fluid content.
In addition, the separation of the density log and neutron porosity log also indicates that the layer is a hydrocarbon prospect zone.

Then, in the AR-4 well, shown in Figure 4, it can be seen that the depth of the reservoir layer containing hydrocarbons is 4220 ft to 4541 ft, marked with a dark blue box. In this layer, the gamma ray log curve shows a low response ranging from 10 gAPI to 60 gAPI, so the layer can be said to be a permeable layer with sandstone lithology. The resistivity log curve response (the LLD log) shows a high response of more than 80 ohm-m, so the layer can be said to have a gas hydrocarbon fluid content. In addition, the separation of the density log and neutron porosity log also indicates that the layer is a hydrocarbon prospect zone.

Sensitivity Analysis

In the sensitivity analysis in this research, a cross-plot between the P-impedance log and the porosity log (PHIT) is used, with a color scale in the form of the gamma ray log. This is done to see the sensitivity of these parameters in separating shale and sand lithology.
In this research, the sensitivity analysis is divided into two zones: the zone marked in yellow is identified as sandstone lithology, while the zone marked in blue is identified as shale lithology. Based on the results of the cross plots of each well, it can be seen that the separation between sandstone and shale is considered sensitive, because the boundary between the sand and shale contained in the reservoir zone can be separated. Sandstone lithology is indicated by an acoustic impedance value of 27,500 ft/s*g/cc to 50,000 ft/s*g/cc; this is also indicated by a low gamma ray log value and a low porosity value, caused by the reservoir in the study area consisting of tight sand lithology. Meanwhile, shale lithology is indicated by acoustic impedance values of 17,000 ft/s*g/cc to 25,000 ft/s*g/cc; this is also indicated by high gamma ray log values and relatively high porosity values as well.
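A simple way to apply the impedance cutoffs identified in this cross-plot analysis is to classify cells directly by their acoustic impedance value. The sketch below does exactly that; the function name `classify_lithology` and the handling of values outside both windows are assumptions, while the cutoff values are the ones reported above.

```python
import numpy as np

def classify_lithology(ai):
    """Label samples by acoustic impedance (ft/s*g/cc) using the cross-plot
    cutoffs: shale 17,000-25,000, tight sandstone 27,500-50,000."""
    lith = np.full(ai.shape, "undefined", dtype=object)
    lith[(ai >= 17_000) & (ai <= 25_000)] = "shale"
    lith[(ai >= 27_500) & (ai <= 50_000)] = "sandstone"
    return lith

ai_samples = np.array([18_500, 24_000, 26_000, 31_000, 42_000])
print(classify_lithology(ai_samples))
# -> ['shale' 'shale' 'undefined' 'sandstone' 'sandstone']
```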
Time Structure Map

Time structure map making is done to see trends in the research area, which will be used in making the trend models in the stochastic inversion process. The time structure map is obtained from the results of picking the top reservoir and base reservoir, so that it produces a map of the depth of the reservoir zone in the time domain.

Based on the results of the time structure maps of the top reservoir and base reservoir, it is known that the area marked in red has a shallow depth, i.e., it is the highest elevation area, while the blue area has a fairly deep depth. The top reservoir time structure map is known to have a dominant time range of -1100 ms to -2100 ms. Meanwhile, the base reservoir time structure map is known to have a range of -1200 ms to -2100 ms. From the two time structure maps, it can be identified that the four wells are located on the anticline peak, and that the hydrocarbons in the study area migrated from the southeast to the northwest. This is indicated by the contours that represent the lowlands in the southeast and the contours that represent the heights in the northwest.

Upscaling

Based on the results of the acoustic impedance log scale-up that has been done for the AR-1, AR-2, AR-3, and AR-4 wells, it is known that there are similarities between the acoustic impedance log data and the upscaled acoustic impedance log data. This is evident from the histogram analysis of the scaled-up acoustic impedance logs, which show similarities to each other, so it can be said that the upscaling process is correct. The acoustic impedance log scale-up process is useful to assist in spreading the acoustic impedance values during the stochastic inversion process.

Trend Modeling

Trend modeling is done by using input in the form of the scaled-up acoustic impedance log data and the time structure map. This trend model is made to describe the spatial distribution based on the values of the scaled-up acoustic impedance log, with the trend direction based on the time structure map. The results of the trend modeling are shown in Figure 9. Based on these results, it is known that the acoustic impedance values in the study area have a range of 10,000 ft/s*g/cc to 50,000 ft/s*g/cc, where the high acoustic impedance values are marked with red to green in the area around the AR-3 and AR-4 wells, while the low acoustic impedance values are marked in purple to blue around the AR-1 and AR-2 wells.

Stochastic Inversion

In this study, stochastic inversion was carried out using the Bayesian method approach with a total of 20 realizations, while the variogram used was the cubic model. This stochastic seismic inversion is done by using input in the form of the scaled-up acoustic impedance log data and the trend models. The results of this stochastic inversion are shown in Figure 10.
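Since the stochastic inversion produces a set of impedance realizations, their spread is what later feeds the uncertainty analysis. The sketch below shows one common way to summarize such a set; the 20-realization count and the 30,000-40,000 ft/s*g/cc gas-sand window come from the text, while the grid shape and the synthetic realizations are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for 20 stochastic realizations on a 50 x 60 cell grid,
# each realization being a full acoustic impedance model (ft/s*g/cc).
realizations = rng.normal(32_000, 4_000, size=(20, 50, 60))

mean_ai = realizations.mean(axis=0)   # best estimate per cell
std_ai = realizations.std(axis=0)     # per-cell uncertainty

# Per-cell probability that impedance falls in the gas-sand window.
in_window = (realizations >= 30_000) & (realizations <= 40_000)
p_gas_sand = in_window.mean(axis=0)

print("mean AI at cell (0, 0): %.0f +/- %.0f" % (mean_ai[0, 0], std_ai[0, 0]))
print("P(gas-sand window) at cell (0, 0): %.2f" % p_gas_sand[0, 0])
```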
Based on the acoustic impedance distribution map from the stochastic seismic inversion, it is known that the area marked in red is an area that has a high acoustic impedance value, with a range of 30,000 ft/s*g/cc to 40,000 ft/s*g/cc, and is identified as a reservoir with tight sand lithology saturated with hydrocarbons in the form of gas. This is also supported by the results of the well data interpretation carried out in the previous process, in which the AR-1, AR-3, and AR-4 wells were identified as having a gaseous hydrocarbon content.

Depth Structure Map

The depth structure map is obtained from the conversion of the time structure map into the depth domain. In doing the time-to-depth conversion, the stacking velocity method is used, which is a method that can convert from the time domain to the depth domain using a velocity model. Making this depth structure map is very important because there are differences between the domains that can cause ambiguity when interpreted: actual subsurface conditions are in the depth domain, while seismic data are in the time domain (TWT).
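The time-to-depth conversion just described can be caricatured with a constant average velocity, as in the sketch below; this is not the actual stacking-velocity workflow, and the velocity value and the function name `twt_to_depth` are assumptions.

```python
import numpy as np

def twt_to_depth(twt_ms, v_ft_per_s):
    """Convert two-way traveltime (ms) to depth (ft): depth = v * t / 2."""
    t_s = np.asarray(twt_ms, dtype=float) / 1000.0
    return v_ft_per_s * t_s / 2.0

# Horizon times from the top-reservoir time structure map (ms, unsigned).
twt = np.array([1100.0, 1500.0, 2100.0])
print(twt_to_depth(twt, v_ft_per_s=7000.0))  # assumed average velocity
```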
Based on the depth structure maps of the top reservoir and base reservoir, it is known that the prospect area of the four wells is identified to be in the height area in the southeast, which has a depth of 4000 ft to 4500 ft. In the prospect area there are two height areas in the form of anticlines, which are in the southeast and northwest. This structure was formed because there is a fault that separates the two height areas.

In the research area, there are three main faults, namely faults trending southwest-northeast, faults trending east-west, and faults trending southeast-northwest; these three faults are identified as normal faults. These faults can act as structural traps into which hydrocarbons migrate and are trapped, so that hydrocarbons can accumulate in them.

Isopach Map

Isopach map making is done to describe the thickness of the reservoir layer, where this map is made by subtracting the depth of the top reservoir from that of the base reservoir. Based on the isopach map that has been made, it is known that the AR-1, AR-3, and AR-4 wells have a thick reservoir layer of around 300 ft to 450 ft, while the AR-2 well has a thinner reservoir layer of around 210 ft. From the isopach map in Figure 14, it can be said that there is a thinning of the reservoir from the southeast to the northwest.

Conclusion

This study showed that Bayesian stochastic inversion can produce better results than deterministic inversion. Because of its smoothness and averaging, deterministic inversion was unsuitable for constraining reservoir models when the model was used for volumetric calculations, estimation of connectivity, individualization of sand bodies, or fluid flow simulation. Based on the results, it is known that the hydrocarbon prospect zone has a high acoustic impedance value ranging from 30,000 ft/s*g/cc to 40,000 ft/s*g/cc at a depth of 4,000 ft to 4,500 ft. This is due to the reservoir of the study area being dominated by tight sandstone lithology, which contains gas and has a low porosity value. Finally, the Bayesian stochastic inversion efficiently improved the reservoir characterization process by not only producing accurate estimations of the heterogeneities but also generating outcomes that support the uncertainty analysis.
7,228.2
2020-03-30T00:00:00.000
[ "Geology" ]
Space Charge Effects in Bunches for Different rf Wave Forms

As part of the required beam stability and feedback studies for the SIS upgrade we investigate the interplay of nonlinear rf fields and space charge using simplified analytic models as well as large scale particle simulation scans. Starting from the matched elliptic ('Hofmann-Pedersen') distribution, analytic expressions for the synchrotron tune and for the rigid dipole mode are obtained. The threshold intensities for the space charge induced loss of Landau damping in single and double rf wave forms are derived. The thresholds are compared with machine observations and with simulations of the bunch response to a weak rf phase modulation. The simulation results are related to previous work on beam transfer functions in single and double rf waves.

INTRODUCTION

Longitudinal space charge effects play an important role in storage rings or synchrotrons for high current ion beams. The induced effects range from synchrotron tune shifts (see e.g. [1]) to coherent mode splitting [2], both of which can be observed with high accuracy from the Schottky noise spectrum, as demonstrated in the GSI heavy ion cooler storage ring ESR [3].

Below transition space charge reduces the effective rf voltage seen by the beam particles. This usually requires an increase of the applied rf voltage in order to compensate for the reduction of the bucket area. Space charge affects the frequencies and the damping of coherent bunch modes. This in turn changes the bunch instability thresholds and the corresponding impedance budget.

For bunches that are very short relative to the rf wave length, but still long compared to the beam pipe diameter, it is straightforward to calculate the space charge induced incoherent synchrotron frequency shift (see e.g. [1]). The space charge induced coherent mode splitting in short bunches was analyzed by Neuffer in Ref. [2].

At low or medium beam energies bunches usually cannot be regarded as being short relative to the rf wave length. Particles with large amplitudes will be affected by the nonlinear components of the rf field and by space charge. This is especially the case if a second harmonic rf system is employed in order to flatten the bunch profile and to increase the transverse space charge limit (see e.g. [4]). In case of such a double rf system the particle motion is fully nonlinear. A double rf system is foreseen in order to increase the bucket area and also the transverse space charge limit in the SIS [5]. Because of the low injection energy (11.4 MeV/u) and the demand for highest longitudinal beam quality, the intense Uranium bunches will be strongly affected by longitudinal space charge.

An excellent review of nonlinear single particle motion in single and double rf wave forms can be found in Ref. [6]. A self-consistent treatment of matched bunches affected by space charge in arbitrary rf wave forms was presented by Hofmann and Pedersen [7] and applied to long bunches in a single rf wave. They obtained an analytic expression for the threshold intensity for the 'loss of Landau damping' in case of the rigid dipole mode in a single rf wave. For sufficiently high intensities space charge suppresses the decoherence of dipole oscillations, leading to persistent oscillations of intense bunches [8, 9]. In the present work we apply the theory by Hofmann and Pedersen to double rf systems.

Of particular interest for the study of beam stability thresholds is the response of long bunches to rf phase or amplitude modulations.
In Ref. [10] the response of a bunch to small rf phase or amplitude modulations was studied in the framework of the beam transfer function formalism. It was found that long bunches in a double rf wave are intrinsically unstable, because of the vanishing derivative of the synchrotron frequency inside the bunch (but outside the bunch center). In a system with non-monotonic behavior of the synchrotron frequency Landau damping can be lost for certain particle amplitudes. Whether the infinite response functions obtained in Ref. [10] from linearized Vlasov theory lead to observable effects in high resolution, non-perturbative simulation studies is one of the subjects of the present work.

LONGITUDINAL EQUATION OF MOTION

Let φ be the phase coordinate of an off-momentum particle. Then the longitudinal equation of motion is

φ̈ = -(qn/(m*RL)) V(φ)

with the effective mass m* = -γ₀m/η, the relativistic parameter γ₀, the slip factor η, the ring circumference L and radius R, the charge q, the harmonic number n and the voltage profile V(φ). The voltage profile for a single (α = 0) and for a double (α > 0) rf system, operating at the second harmonic of the main rf, is given through (see e.g. [6])

V(φ) = V₀ [sin(φ + φ_s) - α sin(2φ + φ_s2)]

with the synchronous phases φ_s of the main rf and φ_s2 of the second rf, respectively. In order to obtain a flattened rf potential well with a double rf system the first and the second derivative of the voltage profile should vanish at φ = φ_s [4]. For stationary bunches one obtains α = 0.5. The equation of motion in the (φ, v) coordinates can be derived from the 'Hamiltonian'

H(φ, v) = v²/2 + Y(φ)

with the potential

Y(φ) = (qn/(m*RL)) ∫ V(φ') dφ'   (integrated from φ_s to φ)

and the small amplitude synchrotron frequency for α = 0 and φ_s = 0

ω_s0² = qnV₀/(m*RL).

The voltage profile can be divided into the external (rf) voltage part and the space charge part, V(φ) = V_rf(φ) + V_sc(φ). The space charge voltage is proportional to the derivative of the line density λ(φ) and is given in terms of the space charge electric field E_s, the space charge reactance X_s and the g-factor (see e.g. Ref. [1]). For the space charge potential one accordingly obtains an expression proportional to λ₀ - λ(φ), with the line density λ₀ at φ = φ_s.
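To see how the second harmonic flattens the rf potential well, the sketch below evaluates the stationary wave form V(φ)/V₀ = sin φ - α sin 2φ and the corresponding potential (in units of ω_s0²); the grid and the printed sample points are arbitrary illustrative choices.

```python
import numpy as np

def rf_voltage(phi, alpha=0.0):
    """Stationary rf wave form, V/V0: single rf (alpha=0), double rf (alpha>0)."""
    return np.sin(phi) - alpha * np.sin(2.0 * phi)

def rf_potential(phi, alpha=0.0):
    """Potential well Y/omega_s0^2, obtained by integrating V/V0 from 0 to phi."""
    return (1.0 - np.cos(phi)) - 0.5 * alpha * (1.0 - np.cos(2.0 * phi))

phi = np.linspace(-np.pi, np.pi, 7)
print(rf_potential(phi, alpha=0.5))   # flattened well near phi = 0 (Y ~ phi^4/8)
print(rf_potential(phi, alpha=0.0))   # ordinary single-rf well (Y ~ phi^2/2)
```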
ELLIPTIC BUNCH DISTRIBUTION

If the Hamiltonian is a constant of motion, any stationary ('matched') distribution function can be written as a function of H. The analytic analysis in the presence of space charge can be greatly simplified if a local elliptic ('Hofmann-Pedersen') distribution function [7] is assumed,

f(H) = c₁ √(H_m - H)   for H ≤ H_m (and zero otherwise),

with the normalization constant c₁, the value H_m of the Hamiltonian for the bunch boundary particle, the maximum phase velocity v_m = φ̇_m at the bunch center φ = φ_s and the potential Y(φ_m2) at one end of the bunch. For the distribution function in (φ, v) space we get

f(φ, v) = c₁ √((v_b²(φ) - v²)/2)

with the phase velocity function for the boundary particle v_b(φ) = √(2[H_m - Y(φ)]). The line density follows from Eq. 12 by integration over v as λ(φ) ∝ v_b²(φ), normalized to the number of particles N in the bunch. In the case of an elliptic distribution function the space charge potential induced by a bunch is directly proportional to the external potential. The total potential can therefore be written in the same form as the external potential, with an effective amplitude V₀ - V_s0, where V_s0 is the space charge voltage amplitude. Below transition (m* > 0) the limiting bunch intensity is given through V_s0 = V₀. At this intensity the external focusing field is exactly canceled by the space charge field. Below this limiting bunch intensity, and for bunch boundaries φ_m1, φ_m2 not exceeding the bucket boundaries, the matched voltage amplitude can be obtained, with the space charge parameter Σ = V_s0/(V₀ - V_s0).

SYNCHROTRON FREQUENCY

The synchrotron period as a function of the left and right particle oscillation amplitudes φ̂₁, φ̂₂ can be derived from

T_s = 2 ∫ dφ/v̂(φ)   (integrated from φ̂₁ to φ̂₂)

with the velocity amplitude function v̂(φ) = √(2[Y(φ̂₂) - Y(φ)]). For the elliptic distribution the effect of space charge can be cast into a simple, multiplicative factor 1/√(1 + Σ). In the case of a single, stationary rf wave and small amplitudes (φ̂ ≪ π) the following result for the synchrotron oscillation frequency can be obtained (see e.g. [6], p. 235, for Σ = 0):

ω_s(φ̂) ≈ ω_s0 (1 - φ̂²/16)/√(1 + Σ).

For a double rf wave we obtain an analogous expression involving the elliptic integral of the first kind K(x); for small amplitudes it reduces to ω_s(φ̂) ≈ π φ̂ ω_s0/(2√2 K(1/√2) √(1 + Σ)). In a stationary double rf wave the maximum synchrotron frequency is located at φ̂_crit ≈ 117°, with the numerical value given by ω_s^max ≈ 0.78 ω_s0/√(1 + Σ). In Ref. [10] it was shown that if the bunch length φ_m exceeds φ̂_crit, Landau damping will be lost for frequencies close to ω_s^max because of the vanishing derivative of the synchrotron frequency. Therefore this amplitude and the corresponding synchrotron frequency are called 'critical'.

RIGID DIPOLE OSCILLATIONS

Let φ_c be the position of the bunch center. If a matched bunch is rigidly displaced by the amount Δφ_c = φ_c - φ_s from the synchronous phase, the net force acting on the bunch center can be obtained from the rf force averaged over the bunch profile (see also Ref. [7]). Expanding the integrand for small displacements (Δφ_c ≪ π) yields the equation of motion of a harmonic oscillator with the dipole oscillation frequency Ω_c. For a stationary single rf wave we obtain

Ω_c² = ω_s0² (φ_m - ½ sin 2φ_m)/(2 sin φ_m - 2 φ_m cos φ_m),

and for a double rf wave an analogous closed-form expression in φ_m, sin φ_m, sin 2φ_m and cos 2φ_m. In the limit of short bunches one obtains Ω_c → ω_s0 for a single rf wave, while for a double rf wave Ω_c is proportional to the bunch length φ_m.

For a single rf wave the dipole mode as well as the incoherent synchrotron frequency can be identified from the simulation 'Schottky' noise from a Particle-In-Cell (PIC) code [11]. Fig. 1 shows the simulation noise from a matched elliptic distribution of macro-particles as a function of the relative frequency Δω = ω - hω₀ divided by ω_s0, at harmonic h = 10, for Σ = 1.0 and φ_m = 90°. In case of a double rf wave the noise spectrum remains incoherent also in the presence of space charge.

LANDAU DAMPING OF DIPOLE OSCILLATIONS

Landau damping is lost when the coherent frequency (here the dipole mode) is outside the band of incoherent synchrotron frequencies. The corresponding threshold intensity can be calculated from Ω_c = ω_s^min or Ω_c = ω_s^max, with ω_s^min (ω_s^max) being the minimum (maximum) synchrotron frequency inside the bunch. The first case is e.g. relevant for a single rf wave above transition (or in the case of a broadband inductive impedance below transition), where the single particle synchrotron frequencies are shifted upwards with increasing space charge. Here the threshold intensity is determined by the lowest synchrotron frequency. Below transition (or for a capacitive impedance above transition) the second case applies.
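The band of incoherent synchrotron frequencies discussed above can be checked numerically: the sketch below computes ω_s(φ̂) = 2π/T_s by quadrature of the synchrotron period in the stationary potential (with ω_s0 = 1 and Σ = 0), using the substitution φ = φ̂ sin θ to regularize the turning point. The grid sizes are assumptions.

```python
import numpy as np

def potential(phi, alpha):
    """Stationary rf potential well in units of omega_s0^2."""
    return (1.0 - np.cos(phi)) - 0.5 * alpha * (1.0 - np.cos(2.0 * phi))

def omega_s(phi_hat, alpha, n=4000):
    """Synchrotron frequency (omega_s0 = 1) at amplitude phi_hat, from
    T = 4 * int_0^{pi/2} phi_hat cos(t) / sqrt(2 (Y(phi_hat) - Y(phi))) dt
    with phi = phi_hat * sin(t) (midpoint rule)."""
    t = (np.arange(n) + 0.5) * (np.pi / 2) / n
    phi = phi_hat * np.sin(t)
    f = phi_hat * np.cos(t) / np.sqrt(
        2.0 * (potential(phi_hat, alpha) - potential(phi, alpha)))
    T = 4.0 * f.sum() * (np.pi / 2) / n
    return 2.0 * np.pi / T

amps = np.radians(np.arange(5, 176))
ws = np.array([omega_s(a, alpha=0.5) for a in amps])
k = ws.argmax()
print("double rf: max omega_s/omega_s0 = %.2f at %.0f deg"
      % (ws[k], np.degrees(amps[k])))
# Expected from the text: about 0.78 at an amplitude near 117 degrees.
# With space charge the whole band is scaled by 1/sqrt(1 + Sigma).
```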
For short bunches the threshold parameters can be obtained analytically. In a single rf wave one obtains a threshold space charge parameter that grows quadratically with the bunch length, Σ_th ∝ φ_m², with the relevant band edge given by ω_s^min = ω_s(φ_m). In a double rf wave ω_s^min = 0 holds. The band of incoherent synchrotron frequencies in a double rf wave extends from 0 to ω_s^max. For positive m* and for a short bunch one obtains a constant threshold space charge parameter. It is interesting to point out that for negative m* the criterion for Landau damping, Eq. 31, is always fulfilled in case of a double rf wave. In Fig. 2, Σ_th is shown for stationary single and double rf waves. It can be seen that in a single rf wave the threshold space charge parameter increases with increasing bunch length as ∼ φ_m². In a double rf wave Landau damping is lost below transition at much lower space charge parameters than in a single rf wave. Slightly above the critical bunch length φ_crit ≈ 117° Landau damping is lost for all Σ. For bunch lengths exceeding this value Σ_th increases again. Persistent dipole oscillations of ion bunches in single rf waves with Σ > Σ_th can be observed in the GSI heavy ion synchrotron SIS [9]. This is due to the present lack of an rf phase control system in the SIS. If the energy of the injected beam from the UNILAC linear accelerator differs from the rf cavity frequency, the coasting beam is captured with a momentum offset.

BUNCH RESPONSE TO AN RF PHASE MODULATION

The intensity threshold for the loss of Landau damping was obtained in a non-self-consistent fashion. A more rigorous analytical approach would start from the linearized Vlasov theory. For Σ = 0 this approach was pursued in Ref. [10], where the beam transfer functions (BTF) in single and double rf waves were calculated for rf phase modulations (in a double rf wave only the phase of the first rf wave is modulated). The BTF amplitudes for single and double rf waves show pronounced maxima at Ω = Ω_c, with Ω_c being the rigid dipole oscillation frequency. In a double rf wave, and for bunch lengths equal to or longer than φ_crit, the BTF amplitude diverges for Ω = ω_s^max. The infinite response for modulation frequencies Ω = ω_s^max = ω_crit is due to the vanishing derivative of the synchrotron frequency inside the bunch. For those frequencies Landau damping is lost, and within a linearized theory there is no other damping mechanism.

In the present work we study the bunch response to a weak rf phase modulation within a PIC code, starting from a matched elliptic distribution. The maximum dipole amplitudes are excited for modulation frequencies tuned close to the rigid dipole mode. For bunch intensities exceeding the threshold space charge parameter for the loss of Landau damping, a strong increase in the maximum dipole amplitudes can be observed in both single and double rf waves. For long bunches exceeding the critical bunch length φ_crit ≈ 117° in a double rf wave, we do not observe a pronounced dipole response for Ω = ω_crit, as would be expected from the BTF. Instead, characteristic bunch shoulders around φ = ±φ_crit (see Fig. 3) are formed. It is interesting to note that similar shoulders on long bunches in a double rf wave were measured in the CERN SPS [12]. How these bunch shoulders affect and eventually restore Landau damping will be a topic of future work.
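The PIC study of the bunch response can be caricatured with simple test-particle tracking: modulate the phase of the main rf weakly and record the largest centroid excursion as a function of the modulation tune. The sketch below (dimensionless units, no space charge, symplectic Euler steps, an unmatched uniform start distribution) only illustrates the mechanics of such a scan; all numerical parameters are assumptions.

```python
import numpy as np

def peak_centroid(omega_mod, alpha=0.5, a_mod=0.01, n_steps=20000, dt=0.05):
    """Drive phi'' = -[sin(phi + a sin(w t)) - alpha sin(2 phi)] (omega_s0 = 1)
    with a weak phase modulation of the main rf and return the largest
    centroid excursion, using symplectic (semi-implicit) Euler steps."""
    rng = np.random.default_rng(1)
    phi = rng.uniform(-1.0, 1.0, 2000)   # crude stand-in for a matched bunch
    v = np.zeros_like(phi)
    peak = 0.0
    for k in range(n_steps):
        drive = a_mod * np.sin(omega_mod * k * dt)
        v -= (np.sin(phi + drive) - alpha * np.sin(2.0 * phi)) * dt
        phi += v * dt
        peak = max(peak, abs(phi.mean()))
    return peak

# Scan a few modulation tunes (units of omega_s0) inside the double-rf band.
for w in (0.3, 0.5, 0.7):
    print("Omega/omega_s0 = %.1f -> peak centroid %.3f rad" % (w, peak_centroid(w)))
```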
CONCLUSIONS

In the framework of the elliptic distribution function we obtained analytic expressions for the synchrotron frequency, the rigid dipole oscillation frequency and the threshold space charge parameter for the loss of Landau damping in single and double rf wave forms. It was shown that below transition energy the threshold in a double rf wave is much lower than in a single rf wave. Above transition Landau damping is always effective in a double rf wave. We showed that the synchrotron frequency and the rigid dipole mode frequency can both be well identified from the 'Schottky' simulation noise from long bunches in a single rf wave. The bunch response to a weak rf phase modulation obtained from PIC simulations shows that the maximum dipole amplitudes are excited for modulation frequencies tuned close to the rigid dipole mode. For bunch intensities exceeding the threshold space charge parameter for the loss of Landau damping, a strong increase in the maximum dipole amplitudes can be observed in both single and double rf waves. We find that the infinite beam transfer functions obtained in Ref. [10] for long bunches in double rf waves do not lead to pronounced dipole amplitudes. Instead we observe bunch shoulders, possibly restoring Landau damping. From this one can conclude that linearized Vlasov theory might not be adequate for predicting the stability of bunches affected by nonlinear rf fields. Besides simulation studies, an elaborate quasilinear Vlasov theory could lead to more insight into damping and stabilization mechanisms in long bunches. On the other side, our work shows that the threshold space charge parameter Σ_th for the dipole mode accurately predicts the observed strong increase of the bunch response in single and double rf waves. Preliminary simulation studies including resistive impedance sources show that Σ_th(φ_m) can approximate very well the instability thresholds in long bunches affected by space charge. Future work will also address the effect of space charge on higher order modes in long bunches. Especially the quadrupolar and sextupolar modes will be studied. The experience at the PSB, e.g., shows that especially sextupolar modes can be excited in double rf waves [13].

FIGURE 1. Simulation noise power spectrum at harmonic h = 10 from a matched bunch in a single rf wave. The space charge parameter is Σ = 1 and the bunch length φ_m = 90°.

FIGURE 2. Σ_th as a function of the bunch length φ_m for stationary single and double rf waves above ('inductive') and below ('capacitive') transition.
3,693.2
2005-03-10T00:00:00.000
[ "Physics" ]
Invalid Data Rejection of Audible Noise on AC Transmission Lines Based on Moving Window Kernel Principal Component Analysis

The statistical characteristics of the nighttime noise data of 1000 kV AC transmission lines were investigated. The noise data of the Huainan-Shanghai 1000 kV AC transmission line collected at night (0:00 to 6:00) from September 25, 2015, to February 16, 2016, were statistically analyzed using the nonparametric K-S test, and the outliers were detected using moving window kernel principal component analysis (MWKPCA). The results show that after the ineffective data are removed by MWKPCA, the 5, 50, and 95% percentile values of the data are basically unchanged. To a certain extent, the method proposed in this paper can remove the invalid audible noise (AN) data of 1000 kV AC transmission lines without affecting the subsequent study of AN. We use various machine learning algorithms to predict the A-weighted sound level (Awsl) before and after the invalid data rejection, and the results show that the invalid data rejection has contributed to the improvement of the transmission line AN Awsl prediction accuracy.

INTRODUCTION

Audible noise (AN) of transmission lines, as one of the design criteria of transmission lines, affects conductor selection, corridor width, insulator string length, and conductor arrangement. However, in the process of collecting transmission line AN, there is a large amount of ambient noise, and the data collection is easily disturbed by the ambient noises. If the transmission line AN is smaller than the ambient noises, then the ambient noises will probably become invalid data in the data set, and the invalid data will have an impact on the transmission line evaluation.

Previous research on transmission line AN includes the empirical formulas for transmission line AN of various countries (Juette and Zaffanella, 1970; Trinh and Maruvada, 1977; Perry et al., 1979; Chartier and Stearns, 2007; Tang et al., 2010; Chen et al., 2012), analysis of the time domain and frequency domain characteristics of transmission line AN (Cheng et al., 2019), and the influence of transmission line design parameters, meteorological factors, and environmental factors on transmission line AN (Li et al., 2016; Guo et al., 2019; Zao et al., 2021; Xie et al., 2016; Du et al., 2016; Xie et al., 2017; Yang et al., 2016; Li et al., 2018; Pengfei et al., 2019).

To address the influence of ambient noises on data acquisition, Yuanqing Liu et al. studied the frequency spectrum of the corona AN and ambient noises of the positive and negative conductors of DC transmission lines at different voltages through corona cage tests, and studied the conversion relationship between the A-weighted sound level (Awsl) and the 8 kHz component of DC transmission line AN, so as to avoid the interference of ambient noises (Liu et al., 2014a). Yingyi Liu et al. studied the relationship between the corona current and the AN of transmission lines and summarized an empirical formula for calculating the A-weighted sound pressure level from the corona current, so as to indirectly obtain the effective AN data while evading the ambient noise interference (Liu et al., 2019). Li et al. showed that, to accurately study the time-domain characteristics of the AN generated by a single corona discharge, the ambient noise can be removed by correlation analysis and impulse characteristics (Li et al., 2015). Liu Yuanqing et al. used a finite impulse response filter to reject the invalid data of AN on DC transmission lines.
The above-mentioned studies on the effective data of transmission line AN are divided into two types: indirect acquisition of effective data and rejection of invalid data. The studies on the rejection of invalid data use methods for single-dimensional data, which directly process the original sound signal or the Awsl and repair the sound pressure data disturbed by ambient noise, ignoring the connection between the individual octave band components of the sound signal (Liu et al., 2014b). Therefore, this paper introduces a data-driven approach based on the determination of multidimensional data, in which the data disturbed by environmental noise are directly eliminated. Data-driven methods have many applications in power system stability, energy optimization and dispatch, voltage and current monitoring, transportation, etc. (Zhang and Luo, 2018; Zhu et al., 2019; Li et al., 2020; Yang et al., 2020; Shen and Raksincharoensak, 2021).

In this paper, data consisting of the 10 octave band components of AN from the 16 Hz octave band to the 8 kHz octave band plus the Awsl are screened with moving window kernel principal component analysis (MWKPCA), by establishing the SPE statistic in the residual subspace and the T² statistic in the principal component subspace of the kernel principal component analysis. Data that exceed the threshold of the SPE statistic or the T² statistic are excluded, so that the invalid AN data in the data set are removed.

AN DISTRIBUTION CHARACTERISTICS

Noise data for a total of 69 days of the Huainan-Shanghai AC transmission line were collected at night (0:00 to 6:00) from September 25, 2015, to February 16, 2016. The conductor bundle is 8×LGJ-630/45, the subconductor diameter is 33.6 mm, the subconductor spacing is 400 mm, and the operating voltage is 1050 kV. The surface gradients of phase A, phase B, and phase C are 14.44, 14.82, and 14.73 kV/cm, respectively.

The distribution characteristics of each octave band of AN and of the Awsl were analyzed one by one using the K-S test (Kolmogorov-Smirnov test). The following hypotheses are made for the sample data: the null hypothesis H0, that the population from which the sample comes conforms to the normal distribution, and the alternative hypothesis H1, that it does not. The test statistic is defined as

D = max |f(x) - g(x)|   (1)

where f(x) is the cumulative probability of the sample value in the normal distribution and g(x) is the actual cumulative probability. Since the actual f(x) and g(x) are discrete values, Equation 1 is modified to

D_M = max_{1≤i≤n} |f(x_i) - g(x_i)|   (2)

where n is the sample size. When the data size is large and the original hypothesis holds, D_M approximately conforms to the Kolmogorov distribution, whose distribution function can be expressed as

Φ(z) = 1 - 2 Σ_{j=1}^{∞} (-1)^{j-1} exp(-2 j² z²)   (3)

Taking the significance level α as 0.05, the test statistic Z and the corresponding probability p value are calculated. If p is less than the significance level, the null hypothesis H0 is rejected, and the distribution of the population from which the sample comes is considered significantly different from the normal distribution. If p is greater than the significance level α, the null hypothesis H0 is not rejected, and the distribution of the population from which the sample comes is not significantly different from the normal distribution.
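As an illustration of this day-by-day normality check, the snippet below runs a one-sample K-S test against a fitted normal distribution with SciPy, at the α = 0.05 level used above; the synthetic sample and variable names are assumptions, and estimating the normal parameters from the same sample is a simplification.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Toy stand-in for one night's samples (dB) of one octave-band component.
samples = rng.normal(48.0, 2.5, 110)

# One-sample K-S test against a normal distribution; note that estimating
# the mean and standard deviation from the same sample is a simplification.
d_stat, p_value = stats.kstest(samples, "norm",
                               args=(samples.mean(), samples.std(ddof=1)))

alpha = 0.05
verdict = "reject H0: not normal" if p_value < alpha else "do not reject H0"
print("D = %.3f, p = %.3f -> %s" % (d_stat, p_value, verdict))
```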
Normal distribution analysis in days, for the full 69 days of data: the 16 Hz octave band of AN has the highest number of days conforming to the normal distribution, with 46 days; the lowest octave band of AN has only 23 days conforming to the normal distribution; on average, 33 days conform to the normal distribution. For a test of the 44 days in which the data size exceeded the average size of 110 groups: the 16 Hz octave band of AN has the highest number of days conforming to the normal distribution, with 29 days, and the lowest octave band of AN has only 9 days; on average, 17.8 days conform to the normal distribution.

AN INVALID DATA DETERMINATION

Correlation Analysis of Each Octave Band Component

When the electric field strength on the surface of AC transmission lines exceeds the critical strength, a large number of ionization effects create an ionization zone around the conductor; under the action of the electric field, the positive ions in the positive half-cycle and the negative ions in the negative half-cycle move radially outward, respectively. Under the alternating electric field, the charged ions around the conductor make a round-trip movement, producing a "humming" sound; this noise is a "pure tone" whose frequency is a multiple of the 50 Hz power frequency. At the same time, the rapid movement of these ions produces corona current pulses around the conductor, while the collision of a large number of ions moving away from the conductor with air molecules produces sound pressure pulses. The AN generated jointly by the sound pressure pulses and corona current pulses is broadband noise and belongs to the medium and high-frequency AN (Fa Yuan et al., 2016; Zelong et al., 2012; Cheng, 2020). Both the "pure tone" and the broadband noise are sound waves propagating outward periodically due to the pressure exerted on the air layer by ion motion under the alternating electric field (Di et al., 2012).

There are many sound sources that produce various ambient noises during the acquisition of transmission line AN. The frequency spectra of different types of sound sources are not the same (Lu et al., 2010; Liu et al., 2018), and the finally collected sound signal is the result of the joint action of noise components belonging to different octave bands. Therefore, it is necessary to consider the noise component data belonging to the different octave band center frequencies as a whole and to determine the invalid data on the data set composed of them.

Equations 4 and 5 were used to calculate the Pearson correlation coefficient and the grey relational coefficient between the octave band components, respectively:

r = Σ_{i=1}^{N} (x_i - μ)(y_i - ν) / √(Σ_{i=1}^{N} (x_i - μ)² · Σ_{i=1}^{N} (y_i - ν)²)   (4)

where x_i and y_i are the sample observations of variable X and variable Y, respectively; μ and ν are the mean values of variables X and Y, respectively; and N is the total number of samples.

ξ_i(k) = (min_i min_k Δ_i(k) + ρ max_i max_k Δ_i(k)) / (Δ_i(k) + ρ max_i max_k Δ_i(k))   (5)

where Δ_i(k) is the absolute value of the difference between the variable y(k) and the corresponding element of the variable x_i(k), and ρ is the resolution factor, usually taken as 0.5.
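A hedged sketch of both correlation measures is given below. The Pearson part follows Eq. 4 directly; for the grey relational coefficient of Eq. 5, the min/max are taken over k only (a single comparison series) and the series are min-max normalized first, which is a common preprocessing choice rather than something stated in the text.

```python
import numpy as np

def pearson(x, y):
    """Eq. 4: linear correlation between two octave-band series."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum())

def grey_relational_grade(x, y, rho=0.5):
    """Eq. 5 averaged over k; min/max over k only (single comparison series)."""
    # Min-max normalize both series first (common practice, assumed here).
    xn = (x - x.min()) / (x.max() - x.min())
    yn = (y - y.min()) / (y.max() - y.min())
    delta = np.abs(yn - xn)
    xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return xi.mean()

rng = np.random.default_rng(0)
a = rng.normal(40.0, 3.0, 200)                    # e.g., one octave-band level
b = 0.1 * a + np.sin(a) + rng.normal(0, 1, 200)   # nonlinearly related band

print("Pearson: %.2f" % pearson(a, b))
print("grey relational grade: %.2f" % grey_relational_grade(a, b))
```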
A total of 55 pairs of correlation coefficients were obtained after calculating the Pearson correlation coefficients between the AN components by Equation 4, of which 33 pairs had correlation coefficients less than 0.5 and 28 pairs had correlation coefficients less than 0.4. The 55 pairs of grey relational coefficients obtained after calculating the nonlinear relationship between the AN components by Equation 5 are all greater than 0.7. It can be found that there is a strong nonlinear relationship between the octave band components, so it is necessary to consider the octave band components as a whole composed of multidimensional data.

It has been shown that the data do not satisfy the normal distribution in most cases; moreover, the time span of the transmission line AN collection is long, and the meteorological factors change a lot during the data collection process, so MWKPCA is used to determine the invalid data day by day to reduce the influence of changes in meteorological factors on the determination results.

Algorithm Principle of MWKPCA

KPCA can be viewed as a principal component analysis in a high-dimensional feature space (Zhang and Luo, 2018; Zhu et al., 2021). Compared with traditional PCA, it needs to project the dataset X = [x_1, x_2, ..., x_N] into the high-dimensional feature space Γ through a nonlinear mapping φ to obtain a new dataset φ(X), where X is a matrix of N rows and M columns, φ(x) is a matrix of D rows and M columns, and D > N. The covariance matrix in the higher-dimensional space is then C_Γ.

The kernel matrix K ∈ R^{N×N} is usually obtained in the high-dimensional feature space by using a kernel function instead of the mapping function, followed by centering of the kernel matrix:

K̄ = K - 1_N K - K 1_N + 1_N K 1_N

where K is the kernel matrix and 1_N is an N × N matrix in which each element is 1/N.

The eigenvectors (P_1, P_2, ..., P_A) and the corresponding eigenvalues (λ_1, λ_2, ..., λ_A) are obtained by the singular value decomposition of the covariance matrix R of the centered kernel matrix, where A (A < N) is the number of principal components determined by the cumulative variance contribution; P is the principal component loading matrix and P_e is the residual loading matrix.

With the KPCA model built, the T² statistic is used to monitor the information of the projection of K̄ into the principal component subspace:

T² = t Λ⁻¹ tᵀ

where t = [t_1, ..., t_m] is the score vector of a sample, Λ = diag(λ_1, λ_2, ..., λ_m) is the principal variance matrix, n is the number of samples, and m is the number of principal components. Let the confidence coefficient be α; then the control threshold of the T² statistic is

T²_UCL = [m(n - 1)/(n - m)] F_α(m, n - m)

where F_α(m, n - m) is the critical value of the F distribution with m and n - m degrees of freedom.

The SPE statistic in the residual subspace is used to detect data anomalies:

SPE = Σ_{j=1}^{N} t_j² - Σ_{j=1}^{m} t_j²   (12)

The control threshold SPE_UCL is given by

SPE_UCL = θ_1 [C_α √(2 θ_2 h_0²)/θ_1 + 1 + θ_2 h_0 (h_0 - 1)/θ_1²]^{1/h_0}   (13)

where α is the confidence level, C_α is the critical value of the normal distribution at the detection level α, h_0 = 1 - 2θ_1θ_3/(3θ_2²), and θ_i = Σ_{j=A+1}^{N} λ_j^i, i = 1, 2, 3.

MWKPCA introduces a moving window function on the basis of KPCA. For cases such as this paper, where the time span is up to 6 months, the invalid data are determined in days, and the training data together with the thresholds SPE_UCL and T²_UCL are continuously updated, so as to reduce the negative impact of changes in meteorological factors on the results of the invalid data determination. The flow of the MWKPCA calculation is shown in Figure 1.
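A minimal sketch of the T²/SPE monitoring idea, using scikit-learn's KernelPCA for the kernel decomposition, is shown below. It simplifies the scheme above in several ways that should be read as assumptions: the control limits are empirical 99% percentiles of the training statistics instead of the F-distribution and normal-approximation formulas, no moving window update is performed, and a toy kernel width is used (the paper reports gamma = 16 for its own data).

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
train = rng.normal(size=(484, 11))   # 10 octave-band levels + Awsl (toy data)
test = np.vstack([rng.normal(size=(100, 11)),
                  rng.normal(2.5, 1.0, size=(10, 11))])   # last 10 rows shifted

# Keep extra components so SPE can be formed from the non-retained scores.
kpca = KernelPCA(n_components=50, kernel="rbf", gamma=0.1).fit(train)
m = 9                                # number of retained principal components

scores_tr = kpca.transform(train)
lam = scores_tr[:, :m].var(axis=0)   # principal variances (diagonal of Lambda)

def t2_spe(scores):
    t2 = ((scores[:, :m] ** 2) / lam).sum(axis=1)
    spe = (scores[:, m:] ** 2).sum(axis=1)
    return t2, spe

t2_tr, spe_tr = t2_spe(scores_tr)
# Empirical 99% control limits in place of the F / normal approximations.
t2_ucl, spe_ucl = np.percentile(t2_tr, 99), np.percentile(spe_tr, 99)

t2_te, spe_te = t2_spe(kpca.transform(test))
rejected = (t2_te > t2_ucl) | (spe_te > spe_ucl)
print("flagged %d of %d test groups as invalid" % (rejected.sum(), len(test)))
```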
Multidimensional Invalid Data Determination

The 484 groups of data for each octave band component that are closest to the average value of that component are selected as the initial training data, and the training data are updated in the process of determining invalid data day by day: the data judged as normal on a given day are added to the training data, and the corresponding number of data are eliminated from the earlier training data, so as to detect abnormal data among the 7,658 groups of test data day by day. The computed significance level of the initial training model is α = 0.85, and the kernel width of the radial basis function is gamma = 16, corresponding to the control threshold SPE_UCL for the SPE statistic and the control threshold T²_UCL for the T² statistic; the corresponding number of principal components is 9. The final outlier determination results are shown in Figure 2: the total number of groups that exceeded the threshold of the SPE statistic was 1,013, the total number of groups that exceeded the threshold of the T² statistic was 703, and the final number of rejected data groups was 1,475.

PREDICTION OF AWSL EFFECTIVE DATA

Percentile Comparison

Table 1 shows the percentiles of each octave band component of AN in the two stages, i.e., the original data and the data after MWKPCA (Ln in the table indicates the value ranked at the top n% position when the data are arranged in descending order). It can be found that for most of the octave band components the L5, L50, and L95 values do not change much after the removal of invalid data, so the elimination of invalid data using the method of this paper basically does not affect the study of the AN data (Liu et al., 2014a).

Prediction Result Comparison

Before prediction, each feature is normalized as

S = (s - S_min)/(S_max - S_min)   (14)

where S is the normalized result of the feature, s is the original data of the feature, and S_max and S_min are the maximum and minimum values of the feature; the values are converted to between 0 and 1 to avoid the effect of differences in magnitude between different features on the prediction accuracy.

In order to prevent the influence of chance on the prediction results due to the random combination of data when dividing the train sets and test sets, this paper divides the data set into 10 parts by 10-fold cross validation, taking one of them as the train set and the remaining nine as the test sets, and quantifies the error of the model prediction results by the root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and symmetric mean absolute percentage error (SMAPE) (as shown in Eqs. 15-18; the smaller the error, the better the prediction), where y_i and ŷ_i represent the true and predicted values and n represents the number of predicted versus true values.

In order to better reflect the improvement of the prediction accuracy by the outlier rejection algorithm, this paper uses LightGBM and XGBoost based on the Boosting model, SVR based on hyperplanes, KNN based on distance, and the elastic network and linear regression to predict the Awsl. The mean values of the final Awsl prediction results are shown in Table 2: predictions were made using the data sets before and after the invalid data rejection of this paper, respectively. The mean error of the prediction results after invalid data rejection using MWKPCA is lower than that of the original data, and the invalid data rejection has contributed to the improvement of the prediction accuracy.
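The evaluation loop can be sketched as follows. The error functions follow the standard definitions of RMSE, MAE, MAPE, and SMAPE (Eqs. 15-18); fitting on one fold and testing on the remaining nine mirrors the split described above, while the linear regression model and the synthetic data are placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

def errors(y, p):
    """Eqs. 15-18: RMSE, MAE, MAPE (%), SMAPE (%)."""
    rmse = np.sqrt(np.mean((y - p) ** 2))
    mae = np.mean(np.abs(y - p))
    mape = 100.0 * np.mean(np.abs((y - p) / y))
    smape = 100.0 * np.mean(2.0 * np.abs(y - p) / (np.abs(y) + np.abs(p)))
    return rmse, mae, mape, smape

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(1000, 10))    # normalized octave-band features
y = X @ rng.normal(size=10) + 50.0 + rng.normal(0.0, 0.5, 1000)   # toy Awsl (dB)

results = []
for rest_idx, fold_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    # Following the split described in the text: one fold trains,
    # the remaining nine folds are used as the test set.
    model = LinearRegression().fit(X[fold_idx], y[fold_idx])
    results.append(errors(y[rest_idx], model.predict(X[rest_idx])))

print("mean [RMSE, MAE, MAPE%, SMAPE%]:", np.round(np.mean(results, axis=0), 3))
```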
Using the above six algorithms to predict the effective Awsl data after eliminating invalid data by IF, DBSCAN, LOF, KPCA, and MWKPCA, respectively, the comparison of the mean error values of the prediction results is shown in Table 2; the mean error values after eliminating invalid data using MWKPCA are significantly lower than those of the other four methods.

CONCLUSION

A method is proposed to reject the invalid AN data of transmission lines using MWKPCA. After using this method to reject the invalid transmission line AN data, there is no impact on the subsequent study of AN.
Comparison of Autonomous Orbit Determination for Satellite Pairs in Lunar Halo and Distant Retrograde Orbits

A comparison of autonomous orbit determination (AOD) for satellite pairs in lunar halo and distant retrograde orbits is carried out. A factor called dynamic and geometric dilution of precision (DAGDOP) is proposed to simultaneously incorporate influences from the dynamics and geometry of satellite pairs. Based on the DAGDOP, the effect of different observation arcs on the AOD accuracy is investigated. Next, the AOD accuracy of three different types of satellite pairs—halo+halo, DRO+DRO, and halo+DRO—is systematically analyzed. The hybrid halo+DRO type shows the best overall accuracy. Finally, the AOD performance of the hybrid type is verified in a realistic model. Our studies find that the average AOD accuracy of the halo orbit is about 170 meters, and that of the DRO is about 190 meters. The relative time synchronization error of the two satellites is less than 30 nanoseconds. INTRODUCTION Traditionally, deep space navigation for lunar satellites (e.g., Apollo [Wollenhaupt, 1970], Lunar Prospector [Beckman & Concha, 1998], Lunar Reconnaissance Orbiter [LRO; Mazarico et al., 2018], and China's Chang'E [Duan et al., 2019]) strongly depends on radiometric tracking via ground stations, such as the Deep Space Network (DSN). For an unmanned lunar probe, the traditional tracking technique suffices. But for future crewed lunar exploration missions that are more demanding, such as NASA's Artemis program (Smith et al., 2020), it may no longer be accurate enough due to the lack of continuous tracking. Tracking signals can be eclipsed by the Moon, or by the Earth if the number of ground stations is insufficient. Autonomous navigation that can be achieved on board is essential in this case. To strengthen the capability of autonomy, many autonomous navigation methods have been studied from the very beginning of the space era (Battin, 1962; Bowers Jr., 1966; Chory et al., 1984). For example, optical sensors and celestial navigation were the most common methods to obtain self-contained navigation results by tracking landmarks (Hur-Diaz et al., 2008) or celestial bodies (Christian & Lightsey, 2009). What's more, GNSS constellations show a great capacity for the autonomous navigation of low Earth orbits (LEO), since real-time positions can be obtained immediately with a GNSS receiver. For orbits with altitudes higher than those of GNSS, GNSS constellations also show potential for navigation using signals from the other side of the Earth. Some experiments have been carried out in high Earth orbits (HEO; Balbach et al., 1996), geostationary Earth orbits (GEO; Kronman, 2000), and highly elliptic orbits (Winternitz et al., 2017). In the Chinese CE-5T1 mission, experiments were also conducted and the feasibility of GNSS was verified in lunar transfer orbits (Liu et al., 2017). A weak-signal GPS receiver was developed at NASA's Goddard Space Flight Center (GSFC; Bamford et al., 2008). Capuano et al. (2017) proposed the use of an adaptive orbital filter to aid GNSS signal acquisition and increase navigation accuracy. However, as far as the authors know, no such experiments have been conducted in lunar orbits. Apart from the use of GNSS, a novel autonomous navigation technique known as Linked Autonomous Interplanetary Satellite Orbit Navigation (LiAISON) was proposed by Hill et al. (2005). The absolute states of the satellites can be solved on board simultaneously with the use of satellite-to-satellite tracking (SST), which would significantly reduce the burden on ground stations.
The utilization of SST data was originally conceived to help with the orbit determination of satellites around the Earth (Vonbun et al., 1978). However, autonomous orbit determination (AOD) using SST data alone cannot work in the framework of the two-body problem. Absolute states of the satellites cannot be obtained due to a rank deficiency problem. It was proven by Liu and Liu (2001) that this problem could be solved in two ways. One way is to add a ground station and combine the SST data with the ground station tracking data. The other is to provide a priori information about the orbit plane orientation, i.e., the (i, Ω) of the orbits. Namely, the (i, Ω) have to be fixed to determine the other elements. Furthermore, Hill and Born (2007) found that the cause of this rank deficiency problem was that Ω₁ and Ω₂ only appear in the form of ΔΩ in the two-body problem. Tang et al. (2021) demonstrated this idea with the use of the centralized AOD strategy by artificially fixing the Ω of at least one satellite. Further study on the observability of different satellite configurations was conducted by Qin et al. (2019). Nonetheless, cross-links have been used in GPS Block IIR satellites. However, due to the problem discussed above, GPS Block IIR satellites have to use stored navigation messages computed by ground stations to achieve autonomous navigation (Abusali et al., 1998). Significant enhancements to position and timing performance can be provided with the use of anchor stations (Rajan et al., 2003). According to the LiAISON strategy, in the three-body problem, if the satellites are deployed on some special orbits, such as halo orbits around collinear libration points (CLPs), the absolute states of the satellites are observable. This is because of the strong perturbation of the third body on the satellites. An illustration of the strength of asymmetry in the Earth-Moon system was given (Hill & Born, 2007) by defining a parameter α_j. A total of n sources of acceleration act on the spacecraft, including the particle gravity of the Earth and the Moon. The definition of the third body depends on the satellite's position. When the gravitational force due to the Moon is stronger, the Earth is taken as the third body. If the gravitational force due to the Earth is stronger, the Moon is taken as the third body. The 2D and 3D maps of α, along with halo orbits and distant retrograde orbits (DROs) in the Earth-Moon system, are shown in Figure 1. Frames (a) and (b) show the 2D contour maps, while frames (c) and (d) show the 3D contour maps of α. Numbers on the level curves of frames (a) and (b), and numbers on the nearly spherical shells in frames (c) and (d), indicate the values of α. It can be found that the halo orbits and DROs are mostly distributed in the area where α is between 30% and 40%. Halo orbits and DROs are thus in regions where the third-body perturbation is very strong, which is favorable for the AOD. Considering the asymmetry of the dynamics, halo orbits around CLPs and DROs are good candidates and are the most used to build a conceptual navigation constellation. Plenty of studies on halo orbits have been conducted (Du et al., 2013; Gao et al., 2014; Hamera et al., 2008; Hill et al., 2006; Zhang & Xu, 2016). For lunar surface navigation, Hesar et al.
(2015b) studied the feasibility of the LiAISON strategy to navigate rovers on the lunar farside surface with one satellite on a halo orbit around the Earth-Moon L2 point. According to their study, a precision of tens of meters can be achieved over the majority of the lunar farside surface. The LiAISON strategy can also be extended to an Earth-Moon constellation that contains a satellite in a GEO and another in a halo orbit around the Earth-Moon L1 or L2 point (Fujimoto et al., 2012; Leonard et al., 2012; Liu & Hou, 2014; Parker et al., 2012). Navigation of a crewed vehicle between the Earth and the Moon using both ground tracking and satellite-to-satellite tracking was analyzed by Leonard et al. (2013). What's more, the LiAISON method was also employed to support interplanetary exploration (McGranaghan et al., 2013). Apart from halo orbits, DROs can also serve as orbit candidates to achieve AOD, and they show better stability, which is of vital importance for navigation constellations. Liu et al. (2014) first proposed the use of DROs to replace halo orbits in the LiAISON strategy. They investigated the AOD accuracy of a lunar satellite+DRO configuration and compared the accuracy with that of a lunar satellite+halo orbit configuration. Wang et al. (2019) further exploited the idea and studied the navigation performance between DROs and different types of cislunar orbits considering dynamic and clock model errors. Apart from navigation of satellites, Hesar et al. (2015a) presented a method to estimate the gravity field of a small body based on the LiAISON strategy. Furthermore, the LiAISON strategy has progressed into actual engineering testing. Advanced Space has partnered with NASA to develop the Cislunar Autonomous Positioning System (CAPS) based on this strategy to provide navigation solutions for missions throughout cislunar space (Cheetham, 2017). Previous studies focus on a scenario in which the AOD is based on the SST data between a navigation satellite moving in a halo orbit (or a DRO) and a user satellite. This may limit the use because the state of the navigation satellite is still unknown and is determined together with the state of the user satellite. This means that the AOD results have to be transmitted back to the navigation satellite, which is usually not practical. A more practical scenario is one in which a navigation constellation moves in the proximity of the Moon, and the states of the constellation have been determined in advance. The whole AOD process is self-contained. That is, the two navigation satellites track each other and use SST data to autonomously determine their orbits, with no support from ground stations or GNSS. Then, the navigation message is broadcast from the navigation satellites to the user satellites.
Considering the cost of building such an autonomous lunar navigation constellation, a minimum configuration of two satellites is a good starting point. Also, considering the service volume of such a constellation, the two satellites should move on orbits with sufficiently high altitudes. Based on these two considerations, we chose the well-studied DROs and halo orbits as candidate orbits for the navigation satellite pairs. In such a case, we think it is very important to figure out the AOD performance of different configurations based on these orbits, which can serve as an impact factor when designing such a two-satellite navigation constellation. We have to admit that better AOD results can be expected if more satellites are deployed in the constellation, but as a preliminary study, we restrict our analysis to two satellites in the current work. In the following, we call this two-satellite navigation constellation a satellite pair. Even for the simplest satellite pairs, a complete AOD analysis is not trivial. The analysis can be deconstructed into two sub-problems. The first sub-problem is the AOD analysis of the satellite pairs themselves. This is investigated in this work. The second sub-problem is the AOD analysis of the user satellites that use the navigation message. In the second sub-problem, the accuracy of the two satellites' positions and velocities obtained in the first sub-problem influences the AOD accuracy of the user satellites. Moreover, combinations of different user satellites and different satellite pair configurations also influence the AOD accuracy of the user satellites. The second sub-problem will be the focus of our forthcoming paper. We remark that other factors that need to be considered when designing lunar navigation constellations include coverage ability (Gao & Hou, 2020) and maintenance (Guzzetti et al., 2017), which are beyond the scope of the current study. Figure 2 demonstrates the idea of the AOD process for the satellite pairs discussed in this paper. Three types of satellite pairs are considered (i.e., halo+halo, DRO+DRO, and halo+DRO). The halo+halo type contains two kinds of configuration (i.e., L2 halo+L2 halo and L1 halo+L2 halo). Thus, in total, four cases of different configurations are studied. Case (a) consists of one L2 south halo orbit and one L2 north halo orbit; Case (b) consists of one L1 south halo orbit and one L2 north halo orbit; Case (c) consists of two spatial DROs; and Case (d) consists of one halo orbit and one DRO. Of course, a satellite pair based on a halo orbit (or a DRO) and a low-altitude lunar orbit (LLO) is also feasible, but an LLO usually has limited navigation service capability. Also, the AOD of this type of satellite pair is exactly the same as the case of a navigation satellite and a user lunar satellite, so it is not repeated in this work.
In frames (a), (b), and (d), the blue orbit represents a nearly rectilinear halo orbit (NRHO) originating from the Earth-Moon L2 point. The NRHO is a subset of the halo orbit family (Howell & Breakwell, 1984). Researchers are interested in the NRHO because of its relatively weaker instability when compared to usual halo orbits that are far away from the Moon. The red orbits in frames (c) and (d) around the Moon are the DROs, which are practically stable. There is one satellite on each orbit serving as a navigation satellite. With the SST technique, the two navigation satellites track each other, represented by a yellow dashed line. Through the LiAISON strategy previously mentioned, the absolute states of the two satellites are obtained. That is, the whole constellation is anchored. Based on this, navigation service can be provided to lunar probes by measuring the distances between them. The navigation message from the two navigation satellites to a lunar probe is illustrated in Figure 2, represented by white dashed lines. We propose a novel factor, coined with the term dynamic and geometric dilution of precision (DAGDOP). It captures both the dynamic characteristics and the observation geometry of the constellation, and it reduces to the well-known factor of geometric dilution of precision (GDOP) if no dynamics are involved. Our studies show that this factor agrees quite well with AOD accuracy. As a result, it can be used as an indicator of AOD accuracy without actually performing the AOD process, which is more time-efficient. This factor is especially useful for satellite constellation design when there are massive numbers of candidate orbits. We recommend its use for similar future studies. Based on the proposed DAGDOP factor, a systematic analysis was conducted in the simple circular restricted three-body problem (CRTBP). This systematic analysis was aimed at analyzing the effects of selecting different orbital arcs or orbits on AOD performance. To remove the influence of the measurement process, an instantaneous measurement model was adopted. The analysis was divided into two sub-problems. The first sub-problem regards the effect of different observation arcs on AOD performance. Taking the halo+DRO configuration as an example, it was found that AOD accuracy strongly depends on the observation arcs when the data length cannot cover one complete revolution of the orbit with the longer period. With longer arcs of data, this phenomenon is no longer obvious, and AOD accuracy can reach the accuracy level of the SST measurements when the data length covers more than one complete revolution of the orbit with the longer period. For example, in the halo+DRO constellation studied in this paper, an arc length of 15 days is sufficient to achieve the accuracy of the observation data.
The second sub-problem is about the effect of different orbits on AOD performance. The three types of constellations mentioned above are investigated. Among them, the hybrid halo+DRO configuration has the smallest DAGDOP. This result confirms the advantage of hybrid halo+DRO constellations in the aspect of orbit geometry and dynamics, and it reveals the potential of combinations of halo orbits and DROs for future navigation constellations in cislunar space. Furthermore, smaller DAGDOP values can be obtained when a large-amplitude halo orbit and a small-amplitude DRO are combined. On the other hand, for the L2 halo+L2 halo and DRO+DRO configurations, it was found that a rank-deficiency problem occurs when the initial phases of the two satellites are too close to each other. This situation should be avoided in actual mission designs. It is also recommended to avoid simultaneously using orbits that are symmetrical to each other with respect to the x-y plane. Finally, taking the best-performing halo+DRO constellation in the simplified CRTBP model as an example, in the second half of this study a simulation with the real force model was carried out to check the actual AOD accuracy of the satellite pairs. The simulation results showed that the AOD process of the hybrid halo+DRO constellation works well in reality. The average AOD accuracy of the halo orbit was about 170 meters, and the average AOD accuracy of the DRO was about 190 meters. The clock synchronization accuracy was no worse than 30 nanoseconds. The contributions of the current paper are threefold. (1) We propose a novel factor, the DAGDOP, to describe the AOD performance between satellite pairs. It is an efficient factor for comparing the AOD performance of different orbit pairs, and is recommended for future studies. This factor can be extended to multi-satellite constellations with SST measurements. It can also be applied to other types of observation data. (2) A comprehensive study of the AOD performance of halo orbits and DROs is conducted in the CRTBP. Halo orbits and DROs with different amplitudes and different configurations are compared. Such a systematic survey, to the authors' knowledge, is seldom seen in the literature. (3) A hybrid configuration composed of a halo orbit and a DRO is proposed for the first time, and is verified in a simulated real model that includes a high-fidelity force model and clock model. Through Monte-Carlo simulations, the AOD accuracy of the satellite pair is verified. These findings can be regarded as a practical reference for lunar navigation constellation design in the future. This paper is organized as follows. In Section 2, the basics of the two dynamic models used in this paper are introduced. One is the CRTBP, and the other is the real force model, which takes into account a more realistic dynamical environment. In Section 3, the measurement system is described, including the time delay, clock model, etc. In Section 4, the principles of the AOD process are introduced. In Section 5, the DAGDOP factor is proposed and a systematic analysis of the AOD problem is carried out in the CRTBP based on the DAGDOP. In Section 6, the AOD performance of the hybrid halo+DRO configuration in the real force model is verified. Section 7 summarizes the study as a whole.
DYNAMIC MODEL In this paper, two force models are used. First, the simple CRTBP model is used for the systematic analysis of different observation arcs and different types of satellite pairs. Then, taking the best-performing halo+DRO satellite pair as an example, the AOD performance is verified in the real force model. In the following, as basics to our study, the two force models are briefly introduced. CRTBP The geometry of the model in the synodic frame centered at the Earth-Moon barycenter is shown in Figure 3. μ = m₂/(m₁ + m₂) is the mass parameter, with m₁ and m₂ as the masses of the Earth and the Moon, respectively. The following mass, length, and time units are used throughout our study: [M] = m₁ + m₂, [L] = P₁P₂, [T] = sqrt([L]³/(G[M])), where P₁P₂ means the distance between P₁ and P₂. The equations of motion are as follows: ẍ − 2ẏ = ∂Ω/∂x, ÿ + 2ẋ = ∂Ω/∂y, z̈ = ∂Ω/∂z, where: Ω = (x² + y²)/2 + (1 − μ)/r₁ + μ/r₂, r₁ = sqrt((x + μ)² + y² + z²), r₂ = sqrt((x − 1 + μ)² + y² + z²). Realistic Model For the real Earth-Moon system, besides the particle gravity of the Earth and the Moon, there are many perturbations that need to be considered when studying the orbit determination problem. For libration point orbits, the most important perturbations are the real motion of the Moon with respect to the Earth and the solar gravity perturbation. The relatively minor perturbations include non-spherical terms of the Earth and the Moon, third-body perturbations from the planets (besides the Sun), and solar radiation pressure. Some even weaker perturbations also exist, such as relativistic corrections, tidal effects of the Earth and the Moon, etc. It is impossible to incorporate all the perturbations into our force model, and there are uncertainties in the parameters of these perturbation forces. In our work, the force model adopted to approximate the real Earth-Moon system includes: • F1: particle gravity of the Earth, Moon, and Sun; • F2: non-spherical gravity of the Earth and the Moon, including the tidal effects of the Moon; and • F3: solar radiation pressure.
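Returning to the CRTBP subsection above, the following is a minimal Python sketch of the nondimensional equations of motion; the initial state is an arbitrary placeholder near L2, not one of the paper's periodic orbits.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 0.01215  # Earth-Moon mass parameter m2/(m1+m2), approximate value

def crtbp_rhs(t, s, mu=MU):
    """CRTBP equations of motion in the barycentric synodic frame."""
    x, y, z, vx, vy, vz = s
    r1 = np.sqrt((x + mu)**2 + y**2 + z**2)      # distance to the Earth (P1)
    r2 = np.sqrt((x - 1 + mu)**2 + y**2 + z**2)  # distance to the Moon (P2)
    ax = 2*vy + x - (1 - mu)*(x + mu)/r1**3 - mu*(x - 1 + mu)/r2**3
    ay = -2*vx + y - (1 - mu)*y/r1**3 - mu*y/r2**3
    az = -(1 - mu)*z/r1**3 - mu*z/r2**3
    return [vx, vy, vz, ax, ay, az]

# Placeholder initial state near L2 (nondimensional units), not a periodic orbit
s0 = [1.12, 0.0, 0.05, 0.0, 0.18, 0.0]
sol = solve_ivp(crtbp_rhs, (0.0, 6.0), s0, rtol=1e-12, atol=1e-12, dense_output=True)
```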
We point out that although the DROs and halo orbits are dynamical structures in the synodic frame of the Earth-Moon system, the AOD process is carried out in the Moon-centered International Celestial Reference System (ICRS). That is, the equations of motion are integrated in this frame, and the SST data are calculated using the difference between the positions of the satellite pairs. In the Moon-centered ICRS, the equations of motion follow: r̈ = −μ_M r/|r|³ − μ_E (Δ_E/|Δ_E|³ + r_E/|r_E|³) − μ_S (Δ_S/|Δ_S|³ + r_S/|r_S|³) + [TM]∇U_M + [TE]∇U_E + a_SRP (4), where μ_M, μ_E, and μ_S represent the gravitational constants of the Moon, the Earth, and the Sun, respectively. r, r_E, and r_S are the position vectors of the satellite, the Earth, and the Sun in the Moon-centered ICRS. Δ_E (Δ_S) is the position vector that points from the Earth (Sun) to the satellite. ∇U_E and ∇U_M represent the non-spherical gravity of the Earth and the Moon. [TE] and [TM] are two rotation matrices that convert vectors from the Earth's body-fixed frame and the Moon's body-fixed frame to the ICRS, respectively. Non-spherical gravity fields are modeled in the form of: U = (μ/R) Σ_{l=2} Σ_{m=0}^{l} (a/R)^l P̄_{l,m}(sin φ) [C̄_{l,m} cos(mλ) + S̄_{l,m} sin(mλ)] (5), where μ is the gravitational parameter of the center body, (R, λ, φ) are the spherical coordinates of the satellite in the corresponding body-fixed frame, and a is the reference radius of the center body. P̄_{l,m} is the associated Legendre function of degree l and order m. C̄_{l,m} and S̄_{l,m} are normalized Stokes coefficients. To make the simulations more realistic, tidal effects of the lunar geopotential are also considered. Changes caused by lunar tides are usually expressed as variations ΔC̄_{l,m} and ΔS̄_{l,m} (Konopliv et al., 2013). They are expressed as: ΔC̄_{l,m} − iΔS̄_{l,m} = (k_{l,m}/(2l + 1)) Σ_j (μ_j/μ_M)(a/r_j)^{l+1} P̄_{l,m}(sin φ_j) e^{−imλ_j} (6), where k_{l,m} are the lunar Love numbers, j represents the disturbing body (the Earth or Sun), r_j is the distance between the disturbing body and the Moon, and φ_j and λ_j are the latitude and longitude of the disturbing body in the Moon's body-fixed frame. The gravity field is truncated at low degree and order (see Table 1) in the AOD process. The solar radiation pressure is modeled as: a_SRP = P_SR C_R (S/m) (1 AU/|Δ_S|)² (Δ_S/|Δ_S|), where P_SR represents the solar radiation pressure at 1 astronomical unit (AU). C_R represents the reflectivity coefficient, which usually takes a value between zero and two. When we simulate SST data, we assume that the radiation term consists of a constant part plus a time-varying periodic part. The constant part is C_R0 = 1.5. The period of the time-varying part is assumed to be one month, which is reasonable since the satellites are moving in the vicinity of the Moon. Uncertainty is included in the amplitude of the time-varying part, which is assumed to follow a Gaussian distribution with 1σ = 10% and a mean of zero. In the AOD process, the time-varying periodic part is neglected and the constant part is corrected along with the state vectors. S is the area facing the Sun. m is the mass of the satellite. Δ_S remains the same as in Equation (4). The gravity model GGM05C (Ries et al., 2016) for the Earth and GRGM1200A for the Moon are used (Goossens et al., 2016; Lemoine et al., 2014). Values of the parameters of the force model are summarized in Table 1.
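As a small illustration of the Moon-centered equations of motion, the sketch below implements only the point-mass and third-body terms of Equation (4); the non-spherical gravity and solar radiation pressure are omitted, and the gravitational parameters are approximate textbook values rather than the paper's.

```python
import numpy as np

MU_MOON = 4.9028e3    # km^3/s^2, approximate gravitational parameters
MU_EARTH = 3.98600e5
MU_SUN = 1.32712e11

def third_body_accel(r_sat, r_body, mu_body):
    """Direct-plus-indirect third-body acceleration in a Moon-centered frame.

    r_sat: satellite position (km); r_body: perturbing body position (km);
    both expressed in the Moon-centered ICRS.
    """
    d = r_sat - r_body  # vector from the body to the satellite (Delta in the text)
    return -mu_body * (d / np.linalg.norm(d)**3 + r_body / np.linalg.norm(r_body)**3)

def accel(r_sat, r_earth, r_sun):
    """Point-mass Moon gravity plus Earth and Sun third-body terms."""
    a = -MU_MOON * r_sat / np.linalg.norm(r_sat)**3
    a += third_body_accel(r_sat, r_earth, MU_EARTH)
    a += third_body_accel(r_sat, r_sun, MU_SUN)
    return a
```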
Due to the many perturbations in the real Earth-Moon system, the CLPs lose their meaning as dynamical equilibrium points in the synodic frame (Gómez et al., 2001; Hou & Liu, 2010). Nevertheless, dynamics around these points are proven to be similar to those of the CRTBP (Hou & Liu, 2011; Jorba et al., 2020). Numerical approaches can be taken to construct the DROs and the halo orbits (including the NRHOs) in the ephemeris model (Bezrouk & Parker, 2017; Dei Tos & Topputo, 2017; Hou & Liu, 2011; Lian et al., 2013; Qian et al., 2018; Whitley et al., 2018). Again, details are omitted here. Readers can refer to the references above for more details on how to compute these orbits in a more realistic Earth-Moon system. MEASUREMENT MODEL In the simple CRTBP model, it is assumed that the ranging measurements are obtained instantly. The measurement noise is modeled as Gaussian noise with a standard deviation of 1 m. We have to emphasize that the main purpose of the systematic analysis in the simple CRTBP model is to evaluate the general AOD characteristics of different constellations from the viewpoint of pure geometry and dynamics, so only this simple instantaneous measurement model is considered. As for the simulation in the real force model, it is necessary to establish a more realistic measurement model, taking the time delay, clock errors, and other aspects into consideration. In order to improve the accuracy of the orbit determination of the satellite pair, a dual one-way ranging method is adopted. Suppose that a ranging signal is transmitted by Satellite 1 at time t − τ and received by Satellite 2 at time t. The general form of the ranging measurement can be written as: ρ = |R₂(t) − R₁(t − τ)| + c(δt₂ − δt₁) + ε, where R₁(t − τ) and R₂(t) represent the position vector of Satellite 1 at time t − τ and the position vector of Satellite 2 at time t, respectively. τ is the traveling time of the signal, which is generated in an iterative way described below. δt₁ and δt₂ represent the clock errors of Satellite 1 and Satellite 2, including the deterministic error and the stochastic error. The last term, ε, is the thermal noise, which is modeled as Gaussian noise with its 1σ varying with the distance between the two satellites. Time Delay One of the most important effects that should be modeled is the time delay. Suppose Satellite 1 transmits a ranging signal at time t_T and Satellite 2 receives that signal at time t_R. τ = t_R − t_T is the time delay of this ranging process. A simple iteration process is used to obtain τ, as shown in Figure 4. t_R is assumed to equal t_T as an initial guess. Then, R₂(t_R) can be solved using one of two methods: orbit propagation or ephemeris interpolation. The second method is more appropriate for the onboard AOD process; in this work, the first method is used. Then, the expected range ρ is calculated, τ = ρ/c is obtained, and a new t_R follows. The iteration is finished when the difference between τ_i and τ_{i−1} is less than a given tolerance, which is set to 10⁻⁸ seconds in this work.
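The time-delay iteration just described can be sketched as follows; pos2 is a hypothetical helper returning Satellite 2's position at a given epoch (by orbit propagation or ephemeris interpolation), and the 10⁻⁸-second tolerance matches the text.

```python
import numpy as np

C = 299792.458  # speed of light, km/s

def light_time(t_T, r1_at_tT, pos2, tol=1e-8, max_iter=20):
    """Iterate the signal travel time tau until successive values agree to tol.

    t_T: transmit epoch of Satellite 1 (s); r1_at_tT: its position then (km);
    pos2: callable returning Satellite 2's position (km) at a given epoch.
    """
    tau = 0.0
    for _ in range(max_iter):
        t_R = t_T + tau                              # current guess of receive epoch
        rho = np.linalg.norm(pos2(t_R) - r1_at_tT)   # expected range
        tau_new = rho / C
        if abs(tau_new - tau) < tol:                 # tolerance of 1e-8 s
            return tau_new
        tau = tau_new
    return tau
```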
Clock Model One key part of a navigation satellite is the accurate atomic clock. The first satellite navigation system was TRANSIT, developed by the Johns Hopkins Applied Physics Laboratory. Quartz crystal oscillators were used to generate an accurate time reference. In 1974, rubidium atomic clocks were carried onboard for the first time (Bhaskar et al., 1996).

FIGURE 4 The iteration process used to generate the accurate time delay τ

Deterministic and stochastic errors are involved in the atomic clocks onboard, denoted as δt_d and δt_s, respectively. Usually, δt_d can be modeled as the following second-order polynomial: δt_d = a₀ + a₁(t − t₀) + a₂(t − t₀)², where a₀, a₁, and a₂ represent the clock bias, clock drift, and clock aging parameter, respectively. Only the relative differences of the clock bias, drift, and aging coefficients between two clocks can be determined, which will be explained in detail in Section 4. The relative differences of the clock bias, drift, and aging coefficients between the two clocks onboard are: δa₀^{1−2} = a₀^{(1)} − a₀^{(2)}, δa₁^{1−2} = a₁^{(1)} − a₁^{(2)}, δa₂^{1−2} = a₂^{(1)} − a₂^{(2)}, where the subscripts represent the clock bias, drift, and aging parameters, respectively, and the superscripts indicate the satellite number. The values adopted for these relative differences are summarized in Table 2. The Allan variance is usually used to describe the frequency stability of atomic clocks. Atomic clocks with the same stability as the rubidium clocks on the GPS Block IIF satellites (Vannicola et al., 2010) are adopted in our work. Stochastic errors can then be generated from the Allan variance. The conversion method is the same as in Wang et al. (2019) and will not be repeated here. A time history sample of the stochastic error is shown in Figure 5. Thermal Noise The thermal noise of the ranging system is related to the received signal power, which depends on the distance between the two satellites. The thermal noise is modeled as Gaussian noise in this paper, with its 1σ varying with the distance. The σ can be approximated by: σ = λ_c sqrt( (B_n D)/(2 C/N₀) [1 + 2/((2 − D) T (C/N₀))] ), where λ_c is the wavelength of each code chip (293.05 m); B_n, typically 0.5 Hz, is the code loop noise bandwidth; D is the early-to-late correlator spacing (chips), with a typical value of 1 chip; and T is the predetection integration time, with T = 1 s used in this work. C/N₀ is the carrier-to-noise ratio (Hz), obtained from the received power and the noise floor: C/N₀ (dB-Hz) = P_R (dBW) − N₀ (dBW), where N₀ (dBW) is the thermal noise power component in a 1-Hz bandwidth, with a typical value of −200.9 dBW (Kaplan & Hegarty, 2005). A free-space propagation loss model given by Kaplan and Hegarty (2005) is expressed as: P_R (dBW) = P_T (dBW) + G_T (dB) + G_R (dB) + 20 log₁₀(λ/(4πd)), where P_R (dBW) and P_T (dBW) are the receiving and transmitting power, G_T (dB) and G_R (dB) are the gains of the transmitting and receiving antennas, and λ = c/f is the wavelength of the signal. According to the International Telecommunication Union (ITU) Radio Regulations, the 300-MHz to 2-GHz range should be reserved for radio astronomy observations. Therefore, we use a frequency of 2,500 MHz in this work. d represents the distance between the transmitting antenna and the receiving antenna. Two antennas with the same parameters are carried on each satellite. The antenna parameters used in this work are summarized in Table 3. The standard deviation of the thermal noise is shown in Figure 6; it varies linearly with the distance.
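A rough sketch of the link budget and the resulting ranging noise, assuming the Kaplan-and-Hegarty-style DLL noise form reconstructed above; the transmit power and antenna gains in the example call are placeholders, not the values of Table 3.

```python
import numpy as np

F_HZ = 2.5e9                 # carrier frequency used in the text
C = 299792458.0
LAMBDA = C / F_HZ            # carrier wavelength, m
LAMBDA_C = 293.05            # code chip wavelength, m (from the text)
B_N, D_CHIP, T_INT = 0.5, 1.0, 1.0  # loop bandwidth (Hz), correlator spacing (chips), integration time (s)
N0_DBW = -200.9              # thermal noise power in a 1-Hz bandwidth (dBW)

def cn0_db(p_t_dbw, g_t_db, g_r_db, d_m):
    """Received C/N0 (dB-Hz) from the free-space link budget."""
    p_r = p_t_dbw + g_t_db + g_r_db + 20 * np.log10(LAMBDA / (4 * np.pi * d_m))
    return p_r - N0_DBW

def ranging_sigma(cn0_dbhz):
    """1-sigma DLL thermal noise (m) for a non-coherent early-late discriminator."""
    cn0 = 10 ** (cn0_dbhz / 10)  # convert to linear Hz
    return LAMBDA_C * np.sqrt(B_N * D_CHIP / (2 * cn0)
                              * (1 + 2 / ((2 - D_CHIP) * T_INT * cn0)))

# Example: placeholder 10-dBW transmit power, 12-dB gains, 80,000-km link
print(ranging_sigma(cn0_db(10.0, 12.0, 12.0, 8.0e7)))
```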
Dual One-Way Ranging A dual one-way ranging (DOWR) method is applied in this work. The two satellites move on their respective orbits, transmitting ranging signals to each other. A time slot of 1 second is allotted to each satellite for transmitting signals. As seen in Figure 7, the first slot, in blue, is allotted to Satellite 1, and the other, colored in red, is allotted to Satellite 2. During the transmission time slot of one satellite, the other satellite is in receiving mode. Each of the two satellites carries one atomic clock and transmits a ranging signal in its transmitting time slot. Then, the signal is received by the other satellite. As seen in Figure 7, Satellite 1 transmits a signal at t₁^T, and that signal is received by Satellite 2 at t₂^R. The subscript indicates the satellite number and the superscript indicates the signal mode, T for transmit and R for receive. The actual time of flight (TOF) is τ₁ = t₂^R − t₁^T. However, due to the inevitable clock errors of the atomic clocks carried on Satellite 1 and Satellite 2, the observed TOF differs slightly from the actual TOF by δt₁ and δt₂. Thus, the observed TOF is: τ̃₁ = τ₁ + δt₂ − δt₁. In the time slot of Satellite 2, the ranging signal is transmitted from Satellite 2 and received by Satellite 1. Finally, the dual one-way measurements are obtained: ρ₁ = c τ̃₁, ρ₂ = c τ̃₂. One common method to deal with the DOWR measurements is to decouple the state vectors from the clock parameters. To achieve this, a common time t₀ = t₂^T in Figure 7 is used for the decoupling process. After decoupling, the state vectors and clock parameters are solved separately. This is described in Section 4. ORBIT DETERMINATION The AOD process in the CRTBP (see Section 5) is simple. Only the absolute states of the two satellites are solved. This is a classic orbit determination problem and can be found in any textbook (Tapley et al., 2004). Thus, the AOD process in the simple CRTBP is not described here. The AOD algorithm for the simulations in Section 6, which focuses on the more realistic model, is described in detail. In Section 6, apart from a more realistic force model, the dual one-way measurements are generated based on the onboard atomic clocks. The absolute states and the clock parameters of the two satellites are estimated together with the reflectivity coefficient C_R. Preprocessing the DOWR Data Before the AOD process, preprocessing the DOWR data is necessary. Usually, the DOWR data are processed in advance in order to decouple the range measurements and clock parameters. The DOWR measurements directly obtained onboard are measurements at different times. Through orbit propagation and clock parameter prediction, the states of Satellite 1 and Satellite 2 at the common time t₀ are obtained, and the corrections to ρ₁ and ρ₂ give the DOWR measurements at time t₀, denoted ρ₁⁰ and ρ₂⁰. Consequently, two virtual DOWR measurements are derived from combinations of ρ₁⁰ and ρ₂⁰: ρ⁺ = (ρ₁⁰ + ρ₂⁰)/2, ρ⁻ = (ρ₁⁰ − ρ₂⁰)/2. The states and the clock errors are now decoupled: clock errors are eliminated in ρ⁺, and states are eliminated in ρ⁻. In the following, the two virtual measurements are processed to achieve the AOD. The AOD Process In the CRTBP, only the states of the two satellites are to be estimated. Thus, it is basically identical to the classical orbit determination problem. As mentioned above, this is simple and will not be repeated here. In the simulations in the realistic model, the AOD process is more complicated and is described in detail. In our study, we use a batch filter based on the least squares principle (Tapley et al., 2004).
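A toy sketch of the DOWR decoupling: with the two one-way ranges taken at a common time, the sum recovers the geometry and the difference recovers the relative clock error. The numbers are illustrative only.

```python
import numpy as np

C = 299792458.0  # m/s

def dowr_combine(rho1, rho2):
    """Virtual DOWR measurements: rho_plus keeps geometry, rho_minus keeps clocks."""
    return 0.5 * (rho1 + rho2), 0.5 * (rho1 - rho2)

# Toy example: true range 60,000 km, relative clock error 10 ns
true_range = 6.0e7
dclock = 10e-9                    # delta t2 - delta t1, seconds
rho1 = true_range + C * dclock    # Satellite 1 -> Satellite 2
rho2 = true_range - C * dclock    # Satellite 2 -> Satellite 1
rho_plus, rho_minus = dowr_combine(rho1, rho2)
print(rho_plus - true_range)      # ~0: clock errors cancel in rho_plus
print(rho_minus / C)              # ~1e-8 s: recovers the relative clock error
```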
Since the states and the clock parameters are decoupled, the states of the two satellites and the clock parameters are solved separately. The reflectivity parameter C_R is estimated together with the states of the two satellites. We denote the integrated vector to be estimated as: X = [X^(1); X^(2); C_R], where the superscript denotes Satellite 1 or 2, and X^(i) = [r^(i); ṙ^(i)] is the state vector of spacecraft i, which is a 6 × 1 vector. Absolute clock parameters cannot be solved; only the relative values of the two clocks can be determined. That is to say, the two clocks are synchronized. An absolute clock bias may exist, but it does not affect the SST measurement. The clock vector that contains the three relative clock error parameters is denoted as: a = [δa₀^{1−2}, δa₁^{1−2}, δa₂^{1−2}]ᵀ. Denote the equations of motion in the real Earth-Moon system as Ẋ = F(X, t), and the initial condition of the integrated state vector as X₀. In our case, the measurement data are the decoupled DOWR data; at the epoch t_i, we have the measurements ρ⁺(X_i) and ρ⁻(a). The AOD process is to determine the values of X₀ and a through a series of SST measurements carried out at different epochs t_i (i = 1, ..., n). Supposing we have initial estimates X₀* and a* for X₀ and a, we denote the corrections as x = X₀ − X₀* and δa = a − a*. Starting from X₀* and integrating the above equations of motion, we get an estimated trajectory X*. We denote the residuals between the actual DOWR measurements ρ⁺(X_i), ρ⁻(a) and the calculated values ρ⁺(X_i*), ρ⁻(a*) of the estimated trajectory at the epoch t_i as: y_i⁺ = ρ⁺(X_i) − ρ⁺(X_i*), y_i⁻ = ρ⁻(a) − ρ⁻(a*). We denote the stacked residual vectors as y⁺ = [y₁⁺, ..., y_n⁺]ᵀ and y⁻ = [y₁⁻, ..., y_n⁻]ᵀ. According to batch estimation theory, supposing all measurements have the same weight, the following equations can be used to calculate x and δa from y⁺ and y⁻ to gradually refine the estimates X₀* and a*: x = (HᵀH)⁻¹Hᵀ y⁺, δa = (H_aᵀH_a)⁻¹H_aᵀ y⁻ (28), in which the matrix H is formed by stacking the per-epoch rows H̃_i. The observation matrix H̃_i for y⁺ is defined as: H̃_i = (∂ρ⁺/∂X(t_i)) Φ(t_i, t₀) (30), where Φ(t_i, t₀) is the state transition matrix. The observation matrix H̃_i for y⁻ follows from the second-order relative clock polynomial: H̃_i = c [1, (t_i − t₀), (t_i − t₀)²] (31).
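The batch least-squares refinement can be sketched as below; predict is a hypothetical helper that returns the modeled range and the corresponding row of H (in practice obtained by integrating the variational equations along the trajectory), and np.linalg.lstsq solves the normal equations of Equation (28) in a numerically stabler way.

```python
import numpy as np

def batch_least_squares(x0, times, obs, predict, max_iter=10, tol=1e-10):
    """Gauss-Newton batch estimator: refine the initial state from range residuals.

    predict(x, t) must return (rho_model, H_row), where H_row is the partial of
    the modeled range with respect to the initial state (range partial times STM).
    """
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        H, y = [], []
        for t, rho_obs in zip(times, obs):
            rho_mod, H_row = predict(x, t)
            y.append(rho_obs - rho_mod)   # residual at this epoch
            H.append(H_row)
        H, y = np.asarray(H), np.asarray(y)
        dx, *_ = np.linalg.lstsq(H, y, rcond=None)  # solves (H^T H) dx = H^T y
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```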
Different from the GDOP, which only considers geometry, dynamic aspects are also included in the DAGDOP. This is understandable, because it is actually an orbit determination problem. Assume that the integrated state vector of the two satellites is denoted as: X = [X^(1); X^(2)], where the definition of X^(i) is the same as in Equation (20). The state deviation vector is denoted as x. Then, x is related to the range deviation vector Δρ by: Δρ = H x, where: H = (∂ρ/∂X(t)) Φ(t, t₀). The first term, ∂ρ/∂X, indicates the observation geometry. ρ is an m × 1 vector, where m is the total number of observations. For the i-th observation, the corresponding row is H_i = (∂ρ_i/∂X(t_i)) Φ(t_i, t₀), where Φ(t_i, t₀) is the state transition matrix (STM), which is a 12 × 12 matrix. The STM reflects the characteristics of the dynamics. The dimension of the matrix H is m × 12. Δρ and x are assumed to be Gaussian and zero mean. The covariance of x is obtained: cov(x) = E[xxᵀ]. Substitution from Equation (33) yields: cov(x) = (HᵀH)⁻¹Hᵀ E[ΔρΔρᵀ] H (HᵀH)⁻¹. Assume each measurement is independent of the others, with standard deviation σ, so that E[ΔρΔρᵀ] = σ²I and cov(x) = (HᵀH)⁻¹σ². Assuming: D = (HᵀH)⁻¹ (37), which is a 12 × 12 matrix, the DAGDOP of position and velocity can be defined by the diagonal components of D: DAGDOP_p^(1) = sqrt(D₁,₁ + D₂,₂ + D₃,₃), DAGDOP_v^(1) = sqrt(D₄,₄ + D₅,₅ + D₆,₆), DAGDOP_p^(2) = sqrt(D₇,₇ + D₈,₈ + D₉,₉), DAGDOP_v^(2) = sqrt(D₁₀,₁₀ + D₁₁,₁₁ + D₁₂,₁₂). The subscripts i, j of D_{i,j} indicate the row and column numbers, respectively. The superscript of the DAGDOP represents the satellite number. The subscript of the DAGDOP (p or v) represents position and velocity, respectively. Position accuracy is of the most concern to us. Thus, we only show computation results of DAGDOP_p (hereinafter referred to as DAGDOP, for simplicity) in the following. The GDOP factor commonly used in GNSS only considers the relative geometry between the navigation satellite and the receiver. By contrast, the DAGDOP factor considers both the relative geometry and the STM of the satellites. This factor indicates the amplification of the standard deviation of measurement errors onto the orbit determination results due to both the observation geometry and the dynamics. Some further remarks about the DAGDOP factor are made here. First, in the case that the state of one of the two satellites (say, Satellite 2) is known a priori, we only need to estimate the state of the other satellite (Satellite 1). The state vector X to be estimated is then a six-dimensional vector, and the per-observation matrix in Equation (35) is simply changed to a 1 × 6 vector. We can also define the matrix D in the same form, but in this case it is a 6 × 6 matrix, and we can only define this factor for the satellite to be estimated. Second, in the case that we have N (≥ 3) satellites, if all of their states are to be estimated using the SST data, we can define the same observation matrix as in Equation (35), but in this case the state vector X is a 6N-dimensional vector and the matrix D is a 6N × 6N matrix. We can define this factor for each of the satellites. If only one of the satellites' states is to be estimated and the states of all other satellites are known a priori, then the state vector X is still a six-dimensional vector, and a matrix of the same form as Equation (41) can be defined, only with more satellites involved. As a result, the DAGDOP factor can be generalized to the case of multiple satellites. Third, for other types of observations besides SST, we can define the same factor. The only difference is that the matrix ∂ρ/∂X should be replaced with the corresponding observation matrix.
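Given a stacked observation matrix H whose rows already include the state transition matrix, the DAGDOP values follow directly from the diagonal of D = (HᵀH)⁻¹; the sketch below assumes the state ordering [r₁, v₁, r₂, v₂] used in this section.

```python
import numpy as np

def dagdop(H):
    """Position/velocity DAGDOP for two satellites from the stacked
    observation matrix H (m x 12), whose rows already include the STM."""
    D = np.linalg.inv(H.T @ H)
    diag = np.diag(D)
    dagdop_p1 = np.sqrt(diag[0:3].sum())    # position, Satellite 1
    dagdop_v1 = np.sqrt(diag[3:6].sum())    # velocity, Satellite 1
    dagdop_p2 = np.sqrt(diag[6:9].sum())    # position, Satellite 2
    dagdop_v2 = np.sqrt(diag[9:12].sum())   # velocity, Satellite 2
    return dagdop_p1, dagdop_v1, dagdop_p2, dagdop_v2
```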
One final remark is that our following studies indicate that the DAGDOP factor agrees well with AOD accuracy. This means it is a useful indicator of AOD accuracy. Yet to compute this factor, we only need to integrate the orbits once; in other words, we don't need to go through the iterations of the AOD process. Due to this benefit, we advocate its use in assessing the orbit determination accuracy of different constellations. Sub-Problem 1—Different Observation Arcs In this section, the effects of different observation arcs on AOD performance are studied based on the DAGDOP. When the observation arc is shorter than one orbital period, different initial phases of the orbits lead to different observation arcs, which affects AOD performance. In this first sub-problem, we take the halo+DRO configuration as an example to show the results. In order to intuitively display the different arc segments due to different initial phases, an example DRO and an example halo orbit are shown in Figure 8. The period of the halo orbit is 14 days. The period of the DRO is 6 days. The blue solid curves represent four different arcs of the halo orbit. The red solid curves represent four different arcs of the DRO. The blue stars and the red stars represent the starting points of each arc. The corresponding phase value is marked, with the orbit period as the unit. The phase is defined as τ_i = (t − t₀)/p_i, where t − t₀ represents the time from the initial point (τ_i = 0) and p_i is the period of the corresponding orbit (we use 1 to indicate the DRO and 2 to indicate the halo orbit). It can be seen from Figure 8 that combinations of different arcs of the two orbits generate different relative geometries. The DAGDOP factor for different combinations is accordingly expected to be quite different. On the other hand, if the two arcs are long enough to cover the whole of the two orbits, the relative geometry of the two orbits is covered by the arcs. The DAGDOP factor in this case is expected to change little when different combinations of the arcs are chosen. To demonstrate this, different arc lengths (5 days, 10 days, 15 days, and 20 days) were chosen in our research. For each length, we chose different values of τ₁ ∈ [0, 1] and τ₂ ∈ [0, 1] to represent different combinations of the two arcs. A step size of 0.01 was chosen when we surveyed different combinations of τ₁ and τ₂. The DAGDOP factor of the halo orbit for different arcs is shown in Figure 9. From left to right and from top to bottom, the arc length is 5, 10, 15, and 20 days. For the halo orbit, whose orbit period is about 14 days in this example, some information can be obtained: • For an arc length of 5 days, the DAGDOP factor varies in a range from 10^2.5 to 10^4. There is a difference of 1.5 orders of magnitude between the best and the worst result. • For an arc length of 10 days, the situation is similar to the case of 5 days. The DAGDOP factor varies in a range from 10^0.6 to 10^1.6. The overall accuracy is improved, but there is still a difference of 1 order of magnitude between the best and the worst result.
• For an arc length of 15 days, the overall DAGDOP factor is about 10^0 (light blue region). This means the level of AOD accuracy equals that of the SST measurement. There is a difference of less than 1 order of magnitude between the best and the worst result. • For an arc length of 20 days, when compared with the result of 15 days, the overall DAGDOP factor further improves, but the improvement is limited. For the DRO, whose orbit period is about 6 days in this example, phenomena similar to Figure 9 can be observed. The difference is that the overall DAGDOP factor of the DRO is smaller than that of the halo orbit for the same time length. This is understandable, because the DRO's orbit period is shorter and the same time length covers a longer arc of the DRO. From the results displayed in Figure 9 and Figure 10 and other example tests not displayed, we reach the following conclusions: • For short arc lengths, the accuracy is greatly influenced by the combinations of arcs used for the AOD. Moreover, for a 14-day halo + 6-day DRO configuration, when the arc length is less than 3 days, the AOD usually fails. This means that for the AOD algorithm to work, at least 3 days of SST data are necessary. • For the AOD accuracy to reach the level of the SST measurement accuracy (i.e., the DAGDOP equals one), the time length should be able to cover one complete revolution of the orbit with the longer period. Even longer data lengths can further improve the AOD result, but generally the improvement is limited. To demonstrate our argument that the DAGDOP factor can be used as an indicator of AOD accuracy, a detailed calculation of the AOD has been carried out. The position root-mean-square (RMS) error is used to evaluate the AOD accuracy. Suppose the reference orbit is generated, and the true position vector at epoch t_i is (x_i, y_i, z_i). The position vector at t_i obtained from the AOD process is (x̂_i, ŷ_i, ẑ_i). The x, y, and z components of the RMS are: RMS_x = sqrt((1/n) Σ_{i=1}^{n} (x̂_i − x_i)²), and similarly for RMS_y and RMS_z.

FIGURE 9 The DAGDOP factor for different combinations of initial phases of the halo orbit and for the different arc lengths (5 days, 10 days, 15 days, and 20 days)

Figure 11 shows the AOD accuracy of the halo orbit for the same combinations of arcs as in Figure 9, and Figure 12 shows the AOD accuracy of the DRO for the same combinations of arcs as in Figure 10. The noise on the SST measurement is modeled as Gaussian noise with a 1σ of 1 m. It is obvious that the patterns of the DAGDOP in Figure 9 are consistent with the AOD results in Figure 11, and the patterns of the DAGDOP in Figure 10 are consistent with the AOD results in Figure 12. The edges of regions with different colors in the AOD contour maps are not as smooth as those in Figure 9 and Figure 10. This is understandable, since the noise in the observation data and other errors are included in the AOD process. By and large, the overall magnitude of the DAGDOP factor and the AOD accuracy are consistent with each other. Consequently, with the use of the DAGDOP factor, the performance of different satellite pairs can be obtained in an efficient and reliable way. In the following analysis, we focus on showing the DAGDOP factor. Sub-Problem 2—Different Configurations After studying the effects of different arcs for one specific configuration, different configurations of constellations are studied in this section. Different configurations of constellations can be formed using different kinds of orbits at different amplitudes.
Configurations In this sub-problem, three types of constellations are investigated: the halo+halo type, the DRO+DRO type, and the halo+DRO type. For the halo+halo type, two cases (L2 halo+L2 halo and L1 halo+L2 halo) are studied. As a result, four cases in total are studied in this section, as summarized in Table 4. For each configuration, a total of 35 pairs of orbits of different amplitudes are chosen, as seen in Figure 13. The orbits in Cases 1, 2, 3, and 4 are shown in Figure 13(a)-(d), respectively, in the Moon-centered synodic frame. The symbol * in each figure indicates the starting point of each orbit. The orbits are numbered according to their location in the orbit family: the closer the orbit is to the Moon, the larger the number. To be clear, the Number 1 and Number 35 orbits have also been marked in Figure 13, with arrows pointing to the corresponding orbits. For Case 1, as seen in Figure 13(a), the north halo orbits and the south halo orbits are symmetrical to each other with respect to the x-y plane. With the increase of the orbit number, the period of the orbits decreases from 14.8 days to 8 days. The maximum out-of-plane amplitude increases from 14,674 km to 76,957 km and then decreases to 75,001 km. For Case 2, as seen in Figure 13(b), with the increase of the orbit number, the period of the L1 halo orbits decreases from 12 days to 8.9 days, and the same north L2 halo orbits are used as in Case 1. As for Case 3, it should be noted that planar DROs are restricted to the Moon's orbital plane. Thus, it is impossible to determine the out-of-plane component of the DRO if we only use planar DROs. This problem can be solved by adding a certain out-of-plane amplitude to generate spatial DROs. Quasi-periodic orbits are usually generated in this way (Gao & Hou, 2020; Liu et al., 2014). As can be seen in Figure 13(c), two families of spatial DROs are generated with an out-of-plane amplitude of 5,000 km. To be specific, a vertical displacement of 5,000 km is added to the planar DROs, generating the blue orbits. A vertical displacement of -5,000 km is added to the same DROs, generating the red orbits. The two families of spatial DROs are symmetrical to one another with respect to the x-y plane. With the decrease of the in-plane amplitude, the orbit number of the spatial DROs gets larger. For the hybrid configuration in Case 4, 35 L2 halo orbits and 35 DROs are shown in Figure 13(d). With the increase of the orbit number, the period of the L2 halo orbits decreases from 14.8 days to 8 days, and the period of the DROs decreases from 11.8 days to 5 days. DAGDOP Results Considering the fact that the DAGDOP factor is influenced by the initial phase angles of the arcs (see the previous section), the following method was used to avoid the influence of different phases. For each combination, we surveyed the initial phase angles of the two satellites, τ₁ and τ₂, in the range of [0, 1]. To save computation cost, the step size was chosen to be 0.1. For each pair of τ₁ and τ₂, we computed the average DAGDOP factor for the halo orbit and the DRO, and then we picked the maximum as the indicator of this constellation.
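The phase survey just described reduces to evaluating the DAGDOP on a grid of (τ₁, τ₂); in this sketch, dagdop_for is a hypothetical helper that builds the arcs and observation matrix for the chosen phases, and the step size is an argument (0.1 in this section, 0.01 in Sub-Problem 1).

```python
import numpy as np

def survey_phases(dagdop_for, step=0.1):
    """Survey the DAGDOP over a grid of initial phases tau1, tau2 in [0, 1).

    dagdop_for(tau1, tau2) is a hypothetical helper that builds the observation
    matrix for the chosen arcs and returns the (position) DAGDOP value.
    """
    taus = np.arange(0.0, 1.0, step)
    grid = np.empty((taus.size, taus.size))
    for i, t1 in enumerate(taus):
        for j, t2 in enumerate(taus):
            grid[i, j] = dagdop_for(t1, t2)
    # grid can be rendered as a contour map (cf. Figure 9), or summarized,
    # e.g., by the average over the grid with the maximum as the indicator
    return taus, grid
```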
We have to mention that for some constellations, the DAGDOP factor for some combinations of τ₁ and τ₂ was extremely large, which means the AOD process did not converge in those cases. We used N_n = 1, ..., 35 and N_s = 1, ..., 35 to identify the L2 north and L2 south halo orbits. For example, when N_n = N_s and τ₁ = τ₂, this phenomenon occurs. In these situations, we simply ignore them. The reason is as follows. When N_n = N_s and τ₁ = τ₂, the two orbits are mirror images of each other with respect to the x-y plane, and the states of the two satellites always satisfy: x₁ = x₂, y₁ = y₂, z₁ = −z₂, ẋ₁ = ẋ₂, ẏ₁ = ẏ₂, ż₁ = −ż₂. Thus, the only nonzero components of the observation matrix ∂ρ/∂X come from the z components, and they are opposite in sign for the two satellites. Meanwhile, due to the mirror symmetry, many elements of the state transition matrix are the same as each other or are just the opposite of one another. Therefore, the first element equals the seventh element in Equation (47). Likewise, a similar relationship exists for the other elements; in summary, the i-th element equals (up to sign) the (i + 6)-th element, so the observation matrix loses rank. When τ₁ ≠ τ₂, this rank-deficiency problem can be avoided, but the DAGDOP factor is large as long as τ₁ is too close to τ₂. On the other hand, for the case of a large north L2 halo and a small south L2 halo, or for the case of a small north L2 halo and a large south L2 halo, the DAGDOP result is the best. The results in Figure 14(a) suggest that if we build a navigation constellation like the one shown in Figure 13(a), from the viewpoint of AOD accuracy, the best choice is to choose halo orbits with a large difference in orbit amplitude. For the case of an L1 north halo+L1 south halo orbit, the same conclusion holds. As for Case 2, the diagonal structure which is obvious in Figure 14(a) is no longer strictly diagonal in Figure 14(b), because of the asymmetry between the L1 and L2 halo orbits. The cases in which the AOD process failed no longer appear in Case 2, because the rank-deficiency problem shown in Equation (44) is impossible for Case 2. Figure 14(c) shows the DAGDOP factor of Case 3. The best DAGDOP value for Case 3 occurs for DROs with small in-plane amplitudes. This is because spatial DROs with smaller in-plane amplitudes have shorter periods. Thus, spatial DROs with smaller in-plane amplitudes are recommended. It should be noted that the DAGDOP results of combinations in the diagonal area are also extremely large, for the same reason as in Case 1. These extremely large results are omitted in Figure 14(c). As for Case 4, an obvious phenomenon in Figure 14(d) is that the DAGDOP factor is smallest for a combination of a large-amplitude halo orbit and a small-amplitude DRO. This suggests that we could use this combination if we were to deploy the satellite pair in such a configuration. The instability of halo orbits reduces with increasing amplitude (Gao & Hou, 2020), which is also a property favoring this choice. As a concluding remark to this section, we mention that the same patterns as those shown in Figure 14 are found in the contour maps of AOD accuracy. This, again, demonstrates the feasibility of the DAGDOP factor when assessing AOD accuracy. SIMULATION IN REAL-FORCE MODEL Based on the above analysis in the CRTBP model, it is found that the hybrid halo+DRO configuration shows better overall accuracy when compared with the halo+halo and DRO+DRO configurations, especially when short observation arcs are used. In this section, a Monte Carlo analysis of the hybrid halo+DRO constellation in a realistic Earth-Moon system is conducted.
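A Monte Carlo loop of this kind can be sketched as follows; run_aod is a hypothetical wrapper around the batch filter of Section 4, and sigma_x0 stands for the 1σ values of the kind summarized in Table 6.

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_aod(x0_true, sigma_x0, run_aod, n_runs=100):
    """Repeat the AOD with Gaussian-perturbed initial guesses and collect errors.

    x0_true: reference initial state; sigma_x0: 1-sigma uncertainty vector;
    run_aod(x0_guess): returns the estimated initial state from the batch filter.
    """
    pos_errors = []
    for _ in range(n_runs):
        x0_guess = x0_true + rng.normal(0.0, sigma_x0)  # perturbed initial value
        x0_est = run_aod(x0_guess)
        pos_errors.append(np.linalg.norm(x0_est[:3] - x0_true[:3]))
    return np.mean(pos_errors), np.std(pos_errors)
```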
Simulation Setup Two satellites move on a halo orbit and a DRO, respectively. The initial epoch for all simulations was January 1, 2020, at 00:00:00.000 UTC. The initial states of the two satellites are shown in Table 5, given in the Moon-centered ICRS. The two orbits, integrated for 20 days, are shown in Figure 15. The halo orbit is in blue and the DRO is in red. To validate the orbit determination results, a Monte Carlo simulation including 100 tests was conducted. The uncertainty of the initial values was modeled as Gaussian noise. The 1σ uncertainty of the initials and of the other parameters to be estimated in the AOD process is summarized in Table 6. In our work, the force model used to generate the SST measurement data is different from the force model used in the AOD. A relatively more accurate dynamic model, with a 100 × 100 gravity field, was used to generate the simulated SST data. The statistical clones were produced from the full covariance matrix and are provided along with the gravity field coefficients (Goossens et al., 2016; Lemoine et al., 2014). Lunar tides were also modeled, with ΔC̄₂,₂ and ΔS̄₂,₂ computed by Equation (6). When generating the reference orbit, C_R was a time-varying parameter. But in the AOD process, C_R was considered a constant parameter, which was estimated together with the state vectors. The force model parameters used in the AOD process can be seen in Table 1. Simulation Results 100 runs of the AOD process were conducted. The results of the 100 tests are shown in Figures 16, 17, and 18. Satellite 1 moved on the halo orbit while Satellite 2 moved on the DRO. The AOD errors of Satellite 1 were more densely distributed, varying in the range of 150 ∼ 220 m. The average position error of Satellite 1 was about 170 meters. The AOD errors of Satellite 2 were distributed between 150 ∼ 250 m. The average position error of Satellite 2 was about 190 meters. The velocity error of Satellite 2 varied between 1.5 mm/s and 2.5 mm/s, while that of Satellite 1 varied in the range of 1.5 ∼ 2 mm/s. Therefore, the halo orbit had a slightly better AOD accuracy than the DRO in our simulated hybrid constellation. What's more, the synchronization error of the two clocks (δa₀^{1−2}) can be corrected to less than 30 ns, as seen in Figure 17. It should be noted that only relative clock errors could be determined; the absolute clock errors could not be determined. In the estimation process, we assumed C_R to be a constant. Figure 18 shows the estimated C_R value. In a word, the simulations in this section prove that the AOD of the satellite pair composed of a halo orbit and a DRO is feasible in the real Earth-Moon system. For the case studied, the AOD accuracy of the satellite on the halo orbit was slightly better than that of the satellite on the DRO. The clocks could also be synchronized; in fact, the accuracy of the clock synchronization was better than 30 ns. CONCLUSION This study focused on the AOD analysis of a satellite pair, which moved in either a halo orbit or a DRO. The satellite pair determined their orbits using only SST measurement data, without support from ground stations or current GNSS. The two navigation satellites tracked each other, and the observation data was the DOWR data between the two satellites. With the LiAISON strategy, the states of the two satellites were determined simultaneously. In total, three types of configurations for the satellite pair were considered: halo+halo, halo+DRO, and DRO+DRO.
We proposed a factor named the DAGDOP. This factor reflects the influence on AOD accuracy of both the observation geometry and the dynamics. Our studies indicated that the DAGDOP factor agreed quite well with the final AOD accuracy. As a result, it is a good indicator for assessing AOD accuracy. It can be easily generalized to a multi-satellite case or to other types of observations, so we recommend its use in future similar studies, since it does not require carrying out the iteration process of orbit determination. This saves a lot of computation time, especially for extensive surveys such as those carried out in the current study. In the simplified CRTBP model, the AOD performance of combinations of different orbital arcs was studied using the DAGDOP factor. Our studies showed that the DAGDOP factor can approach one (i.e., the AOD results can achieve the level of the observation accuracy) if the data length is long enough to cover one complete period of the orbit with the longer period. Even longer data lengths can further decrease the DAGDOP factor, but the improvement is generally limited. In the simplified CRTBP model, the AOD performance of different configurations was also studied using the DAGDOP factor. Our studies showed that for the halo+halo type constellation, two halo orbits of the same amplitude around the same collinear libration point should be avoided. Out of the three studied constellation types, the hybrid halo+DRO configuration performed best. The weaker instability of a nearly rectilinear halo orbit and the stability of a DRO are preferred when building such a navigation constellation.

FIGURE 18 Solutions of the solar radiation reflectivity coefficient C_R

At last, the case of the hybrid halo+DRO constellation was generalized to a more realistic model of the Earth-Moon system. In our model, the particle gravity of the Earth, the Moon, and the Sun, along with the Earth's and Moon's non-spherical gravity (lunar tides included) and solar radiation pressure, were considered. The model used to generate the SST measurements was different from the model used for the AOD, in order to simulate dynamic errors and uncertainties. The DOWR system was used to generate the SST measurements, with consideration of the time delay, an accurate clock model, signal propagation loss, and so on. Studies in such a realistic model show that the AOD process is feasible. In this case, the average position accuracy of the halo orbit was about 170 meters. The AOD accuracy of the halo orbit was slightly better than that of the DRO, which was about 190 meters on average. The two clocks were synchronized with a time accuracy of 30 nanoseconds. The background of the current study is the construction of an autonomous lunar navigation constellation. The basic idea is that the navigation constellation first determines its orbits autonomously with SST data, without support from ground stations or the GNSS around the Earth, and then broadcasts the navigation message to its users. Considering the cost and service volume, the simplest two-satellite constellation using a halo orbit and a DRO was considered in the current study. The proposed DAGDOP factor greatly helped us in assessing the AOD accuracy of different configurations of the navigation satellite pair.
Besides the AOD performance of the constellation itself, we have to admit that there are many other factors that need to be considered when building such a constellation, such as coverage ability, the ability to serve different users, and the possibility of including multiple satellites in the constellation. The configuration that is best by the criterion of AOD accuracy alone might not be the best choice for the navigation constellation when these other factors are involved.
The current study therefore serves only as a preliminary attempt to construct such a navigation constellation. The proposed DAGDOP factor may be helpful for future studies in which more navigation satellites are added to the constellation and user satellites are involved. In a forthcoming paper, we will study the service ability of navigation constellations for different types of users around the Moon using this factor. That study will include a visibility analysis and a discussion of the influence of the navigation satellites' errors on the position errors of different types of users around the Moon.
ACKNOWLEDGMENTS
The authors thank the two anonymous reviewers and the editor for their valuable input, which greatly helped improve the paper. This work is supported by the National Natural Science Foundation of China (NSFC 11773017, 11703013, 11673072).
FIGURE AND TABLE CAPTIONS
Figure 1: 2D and 3D maps of α due to the third-body perturbation in the Earth-Moon system: (a) a 2D map of α; (b) a 2D map of α scaled to the vicinity of the Moon; (c) a 3D map of α; and (d) a 3D map of α scaled to the vicinity of the Moon. Blue curves are halo orbits around the two CLPs and the DROs.
Figure 2: The four cases of satellite pair configurations studied in this work, shown in a Moon-centered synodic frame: (a) one L2 south halo orbit + one L2 north halo orbit; (b) one L1 south halo orbit + one L2 north halo orbit; (c) two spatial DROs; and (d) one L2 halo orbit + one DRO. Yellow dashed lines represent SST between the two navigation satellites. White dashed lines represent the navigation message broadcast from the two navigation satellites to a lunar probe.
Figure 3: Geometry of the CRTBP model in the Earth-Moon barycentric synodic frame. P1 is the Earth center and P2 is the Moon center.
Figure 5: A time history sample of the stochastic error in the atomic clock.
Figure 6: Variation of the 1σ of the thermal noise with respect to distance.
Figure 7: Time slots of the two satellites.
Figure 8: Combinations of the two arcs of the halo orbit and the DRO.
Figure 10: The DAGDOP factor for different combinations of initial phases of the DRO and for different arc lengths (5 days, 10 days, 15 days, and 20 days).
Figure 13: Orbits for the four cases shown in the Moon-centered synodic frame: (a) Case 1; (b) Case 2; (c) Case 3; and (d) Case 4. The orbit numbers are marked with arrows pointing to the corresponding orbit.
Figure 15: Two satellites moving on a halo orbit and a DRO, integrated over 20 days; the left frame shows the orbits in the Moon-centered ICRS; the right frame shows the orbits in the Moon-centered synodic frame.
Figure 16: Position (left) and velocity (right) errors of the two satellites; Satellite 1 moved in a halo orbit while Satellite 2 moved in a DRO.
Figure 17: Clock parameter synchronization error between Satellite 1 and Satellite 2.
Figure 18: Solutions of the solar radiation reflectivity coefficient C_R.
Table 1: Force model parameters used to generate the reference orbits and used in the AOD process.
Table 2: Relative difference of the clock parameters.
Table 3: Parameters of the antennas carried onboard.
Table 4: Four cases of constellations.
Table 5: Initial states of the two satellites moving on a halo orbit and a distant retrograde orbit.
Table 6: Uncertainty of the initials and parameters used in the AOD process.
14,793.8
2022-01-01T00:00:00.000
[ "Physics" ]
Low-Cost Microbolometer Type Infrared Detectors The complementary metal oxide semiconductor (CMOS) microbolometer technology provides a low-cost approach for long-wave infrared (LWIR) imaging applications. The fabrication of CMOS-compatible microbolometer infrared focal plane arrays (IRFPAs) is based on the combination of the standard CMOS process and a simple post-CMOS micro-electro-mechanical system (MEMS) process. With technological development, the performance of commercialized CMOS-compatible microbolometers shows only a small gap with that of the mainstream ones. This paper reviews the basics and recent advances of CMOS-compatible microbolometer IRFPAs in the aspects of the pixel structure, the read-out integrated circuit (ROIC), the focal plane array, and the vacuum packaging.
Introduction
Infrared (IR) detectors are devices that measure incident IR radiation by turning it into another, easy-to-measure physical phenomenon. IR detectors may be classified into photon detectors and thermal detectors according to their operating principle [1]. The photon IR detector absorbs the radiation via interaction with electrons in the semiconductor material, and the resulting variation in the electronic energy distribution yields an observable electrical output signal. This kind of detector shows excellent signal-to-noise performance and very fast response, but its utilization is limited by the requirement of cryogenic cooling [2-5]. Its competitor, the thermal IR detector, which absorbs the incident IR power, undergoes a temperature rise, and measures the consequent change in some physical property, presents smaller volume, lower cost, and no need for cryogenic cooling; it therefore has wide application in automobiles, security, and electric appliances [6-8].
The development of thermal IR detectors can be traced back to Langley's bolometer in 1880, which used two platinum foils to form the arms of a Wheatstone bridge [9]. However, thermal IR detectors failed to attract sufficient attention until the last decade of the 20th century, because they were considered to be much slower and less sensitive than photon IR detectors [6]. In 1992, Texas Instruments and Honeywell both published uncooled IRFPAs (infrared focal plane arrays), based on pyroelectric and microbolometer type thermal detectors, respectively, with impressive performance [10,11], successfully encouraging a sustained effort to further reduce the pixel size, improve the device performance, and reduce the production cost. Today, one of the most attractive thermal IR detectors for imaging purposes is the microbolometer IRFPA. Compared to other thermal IR detectors such as the thermopile detector [47-50], the pyroelectric detector [51-54], and the superconducting transition edge sensor (TES) bolometer detector [55-58], it is promising for commercial imaging applications because of its respectable performance, small pixel size, and ease of fabrication [59]. Owing to continuous efforts and technological advances, the pixel size of microbolometer detectors fabricated via the low-cost manufacturing technology based on the silicon LSI (large scale integration) circuit process has been reduced to 17 µm and below [18-20].
Not only does the high-integration process lower the production cost of the detectors, but it also provides a mature approach with small feature size and high uniformity that benefits the pixel size and the device performance. In particular, the complementary metal oxide semiconductor (CMOS) microbolometer technology has been developed for long-wavelength IR (LWIR, 8-14 µm) FPAs via CMOS-foundry-compatible approaches. During the fabrication process, the layer structures of the absorber and the thermal sensor are formed with the CMOS process, and then post-CMOS micro-electro-mechanical system (MEMS) processes are used to form suspended microbridge structures for thermal isolation. This technology aims to eliminate the requirement of special processes and to simplify the post-CMOS MEMS process in order to achieve ultra-low-cost microbolometer IRFPAs. However, the most common thermistor materials, like vanadium oxide (VOx) [60-62] and silicon derivatives (a-Si, a-SiGe, a-GexSi1−xOy, etc.) [63-65], which have appropriate electrical properties, are not compatible with the CMOS process. For the CMOS-compatible microbolometer IR detector, one choice is the p-n junction diode, which has acceptable properties and compatibility with the CMOS process; therefore, silicon-on-insulator (SOI) diode IRFPAs have attracted continuous attention since first reported by Ishikawa et al. in 1999 [13] and have been widely adopted in low-cost commercial IR detectors. Besides, CMOS-compatible metal or semiconductor materials (e.g., aluminum [41-43], titanium [12,29], polycrystalline silicon [44], etc.) have been investigated as another choice as well. Although the SOI diode IRFPA and the CMOS-compatible material microbolometer IRFPA have relatively low temperature coefficients, this can be compensated by high integration and high uniformity. Till now, much effort has been devoted to improving these two types of microbolometer detectors: Ueno et al. proposed a multi-level structure that has an independent metal reflector between the absorber and the thermistor for interference IR absorption in an SOI diode IRFPA [15]; Takamuro et al. invented the 2-in-1 SOI diode pixel technology to significantly increase the diode series number in a pixel, increasing the responsivity [18]; Ning et al. implemented a double-sacrificial-layer aluminum microbolometer fabrication process to enhance both the thermal isolation of the suspended microbridge structure and the IR absorption of the optical resonant cavity [42].
In this paper, we focus on CMOS-compatible microbolometer IR detectors, that is, low-cost microbolometer type IR detectors for imaging purposes fabricated via the CMOS process (or the conventional silicon LSI circuit process). During the fabrication, no special delicate approach (e.g., the deposition of vanadium oxides) is needed, and only a simple MEMS process is applied after the CMOS process. The basics and fabrication processes of such low-cost microbolometer IR detectors are introduced, and the development trends and technological advances are also discussed.
Basics of Microbolometer
When IR radiation falls on the surface of the bolometer, it is absorbed and results in a temperature increase ∆T. When the heat balance is reached, the temperature rise follows the standard first-order thermal detector result, ∆T = εP_0/√(G² + ω²C²), where C is the thermal capacitance of the absorber, which is connected to the environment via the thermal conductance G.
ε is the emissivity (absorptance) of the incident IR radiation with amplitude P_0 and angular frequency ω. τ = C/G is the thermal time constant, which commonly ranges from several to several tens of milliseconds for a thermal IR detector. For both resistance type and diode type microbolometers, the temperature increase is transformed into an electric signal and then measured. A lower thermal conductance results in a larger temperature increase and higher sensitivity, but a worse time constant. Therefore, a small thermal capacitance is always necessary in order to relax the restriction of the trade-off between the sensitivity and the thermal time constant.
The output signal of the microbolometer is accompanied by noise that originates from various uncorrelated sources, resulting in undesired random fluctuations. There are several major noise sources that should be considered in a microbolometer IR detector: Johnson noise, temperature fluctuation noise, and 1/f noise [66]. Besides, the shot noise should also be taken into consideration for the diode type microbolometer detector [27]. The total noise can be calculated in terms of its mean square as the sum of the mean squares of these noises: V_n² = V_Johnson² + V_TF² + V_1/f² (plus V_shot² for the diode type).
These noises determine the noise equivalent temperature difference (NETD). NETD is defined as the change in temperature at which the output signal equals the noise, i.e., the minimum temperature difference that can be measured. The performance of a microbolometer IR detector with optics may be evaluated in terms of the NETD, which is given by [67] NETD = 4F²·V_n/(A·R_v·(dP/dT_t)_λ1−λ2). Here F = f/D is the F-number of the optical system, where f and D are the focal length and the aperture of the optics, respectively; A is the area of the absorber; R_v is the responsivity, defined as the change of the output voltage per unit incident IR power; and (dP/dT_t)_λ1−λ2 is the change in power per unit area radiated by a blackbody at temperature T_t, measured within the IR spectral band from λ_1 to λ_2. The value of (dP/dT_t)_λ1−λ2 for a 295 K blackbody within the 8-14 µm band is 2.62 × 10⁻⁴ W/cm²K [68]. The NETD of a low-cost microbolometer IRFPA under its operation condition typically ranges from 50 to 500 mK.
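As a numeric illustration of these relations, the short Python sketch below evaluates the thermal response and the NETD expression with assumed, representative parameter values; apart from the 2.62 × 10⁻⁴ W/cm²K blackbody figure and the typical F-number of 1, none of the numbers come from the paper. With these placeholders the result lands inside the quoted 50-500 mK range.

import math

# Thermal response: dT = eps * P0 / (G * sqrt(1 + (w*tau)^2)), tau = C/G
eps = 0.8                 # emissivity (assumed)
G   = 1.0e-7              # thermal conductance, W/K (assumed)
C   = 1.0e-9              # thermal capacitance, J/K (assumed)
tau = C / G               # thermal time constant: 10 ms, in the stated range
w   = 2 * math.pi * 30.0  # 30 Hz frame rate -> angular frequency (assumed)
P0  = 1.0e-8              # absorbed IR power amplitude, W (assumed)
dT  = eps * P0 / (G * math.sqrt(1 + (w * tau) ** 2))
print(f"tau = {tau*1e3:.0f} ms, dT = {dT*1e3:.1f} mK at 30 Hz")

# NETD = 4 F^2 V_n / (A * R_v * (dP/dT))
F    = 1.0                # F-number (typical, per the text)
V_n  = 1.0e-6             # total rms noise voltage, V (assumed)
A    = (17e-4) ** 2       # 17 um pixel -> absorber area in cm^2
R_v  = 1.0e5              # responsivity, V/W (assumed)
dPdT = 2.62e-4            # W/cm^2/K for a 295 K blackbody, 8-14 um (from the text)
netd = 4 * F**2 * V_n / (A * R_v * dPdT)
print(f"NETD = {netd*1e3:.0f} mK")  # ~53 mK, inside the stated 50-500 mK range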
Development Trends
Before the thermal detector was demonstrated to be practical for imaging purposes, the IR detector field was dominated by photon detectors, which were restricted to military applications because of the expensive materials and the requirement of cryogenic coolers. The appearance of commercialized thermal IR detectors encouraged expectations for non-military applications. The CMOS-compatible microbolometer IRFPA, from its very beginning, has aimed to further lower the cost and the chip size while maintaining acceptable performance. Pixel size, as the indicator of the integration level, is the key factor limiting the chip size. The pixel size reduction of the CMOS-compatible IR detectors is shown in Figure 1. Thanks to considerable efforts, the pixel size of the SOI diode uncooled IRFPAs was reduced to 15 µm in 2011 [18]. Meanwhile, the spatial resolution, or the array size, is related to the pixel size. With the progress of the CMOS microbolometer technology, the array size has also increased, from 128 × 128 reported in 1996 [12] to 2000 × 1000 reported in 2012 [19]. As shown in Figure 2, unlike the pixel size and spatial resolution, the NETD generally shows a trend of remaining at the same level rather than continuously improving. Since the CMOS microbolometer IR detectors mainly target the non-military market, an NETD of several tens of mK is already sufficient for those applications.
Smaller pixels collect less IR power, resulting in a smaller temperature rise and lower sensitivity. In a conventional microbolometer pixel structure, the absorber, the thermal sensor, and the supporting legs are in the same suspended layer. When the pixels are scaled down, a higher fill factor and higher emissivity are necessary to maintain the same sensitivity, since less IR radiation is absorbed; this makes the trade-off between the thermal conductance and the fill factor a hard task. To address this issue, multi-level structures with a hidden support leg [69-71] or an umbrella absorber [15,72,73] have been proposed. With these new structures, a high fill factor and a low thermal conductance can be achieved simultaneously in small pixels. However, the pace of pixel size reduction seems to have slowed in recent years, indicating the need for novel technical innovation. In addition, the restriction of the diffraction limit also impedes the progress of pixel size reduction, which is discussed later.
The Resistance Type Microbolometer Pixel
Figure 3 shows the pixel structure of the resistance type microbolometer. The microbolometer pixel contains three parts: the infrared absorber, the thermal sensor, and the microbridge structure. The infrared absorber usually consists of a dielectric layer or a multi-layer structure of dielectric and metal layers [74]. The thermal sensor is implemented using a CMOS-compatible thermistor layer sandwiched in the absorber, which is designed to be serpentine to maximize the resistance. The microbridge structure consists of two support legs that sustain the suspended area, creating a thermally isolated cavity between the absorber and the substrate in order to greatly reduce the thermal conductance. In an Al microbolometer, the IR absorber is implemented using the SiO2/Si3N4 layer, with the Al thermistor from the metal interconnect layer Metal 3 sandwiched inside the SiO2 layer. The SiO2 and Si3N4 also provide protection for the thermistor and the read-out circuit during the post-CMOS etching process.
As shown in Figure 4, the process flow of the Al microbolometer shown in Figure 3 is as follows:
a. The p+/n−well (2,3), gate oxide (4), and polysilicon (5) are fabricated on the substrate (1) via lithography, deposition, ion implantation, and annealing in order to form the transistor.
b. Deposit SiO2 (6) as the isolation layer, then etch and deposit W (7) to form the contacts. Afterwards, the metal interconnect layer Metal 1 (and the subsequent metal interconnect layers in the active region as well) is formed by depositing Al (8) as the connection of the read-out circuit.
c. Deposit SiO2 (6) and then form the W (7) vias. The Al (8) in metal interconnect layer Metal 2 is deposited as the sacrificial layer in the sensor region.
d. Deposit SiO2 (6), form the W (7) vias, and then deposit Al (8) for the interconnect layer Metal 3 to form the thermistor in the sensor region.
e. Deposit SiO2/Si3N4 (6,9) to protect the device. Then dry etch the SiO2/Si3N4 over the pad area and expose the sacrificial layer.
f. Use photoresist (10) to protect the pad area during the post-CMOS etching. Use the phosphoric acid solution to etch the sacrificial layer to form the cavity and expose the microbridge structure.
Steps a to e are part of the standard CMOS process, while step f is a post-CMOS MEMS process.
The whole process can be completed in a CMOS foundry, achieving high-uniformity devices at ultra-low production cost. However, an intrinsic limitation of the CMOS-compatible microbolometer is the thermistor material. When infrared radiation illuminates the surface of the absorber, the thermistor in the absorber is heated, causing a change in its resistance related to its temperature coefficient of resistance (TCR) α, defined as α = (1/R_0)·(dR_b/dT), where R_0 is the resistance of the bolometer at room temperature and dR_b is the resistance change caused by the temperature change dT. Under a certain bias current, the change of the thermistor resistance can be obtained by measuring the output voltage. Therefore, the value of the TCR significantly influences the device sensitivity. Generally, semiconductor-based microbolometers have negative TCR values, while metal ones have positive TCR values. Table 1 lists several common CMOS-compatible thermistor materials. Compared to high-TCR thermistor materials like VOx, which has a TCR of about 2-3%/K, the CMOS-compatible materials are at an obvious disadvantage in TCR. This results in a low sensitivity that needs to be compensated by a high-specification read-out circuit.
The Diode Type Microbolometer Pixel
The pixel structure of the diode type microbolometer is similar to that of the resistance type microbolometer; it also consists of three parts: the infrared absorber, the thermal sensor, and the microbridge structure. Here the thermal sensor consists of p-n junction diodes, which are connected in series to enlarge the output signal. The diodes are usually fabricated on an SOI film for several reasons: (a) diodes fabricated on a deposited Si film exhibit large 1/f noise [76,77]; (b) diodes fabricated on a Si substrate need a special electrochemical etch-stop technique to protect the n−well during the post-CMOS etching process [27,33,78]; (c) the SOI film is expected to have fewer defects and localized states, which could reduce the 1/f noise. The pixel structure of an SOI diode detector is shown in Figure 5. The BOX (buried oxide) layer and the dielectric film over the diodes protect the diodes during the post-CMOS etching process. The temperature change in a diode under a certain bias current results in a voltage shift. The temperature coefficient in a diode type microbolometer is determined by the forward voltage V_f. With diodes in series connection, the sensitivity is given by dV/dT = n·(dV_f/dT) [14], where n is the number of diodes in the series. The typical value of the sensitivity for a single diode at 300 K is ~2 mV/K under a bias voltage of 0.6 V [59], which is equivalent to a temperature coefficient of only ~0.33%/K. However, as the number of diodes in the series increases, the temperature coefficient can become comparable to the TCR of VOx. For instance, when n = 8, the diodes in series connection have a temperature coefficient of ~3%/K. Meanwhile, benefiting from the high uniformity of the CMOS process and the low defect density of the SOI film, the diode type microbolometer usually exhibits much better noise performance.
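To make the arithmetic above explicit, the following sketch uses only the figures quoted in the text (single-diode sensitivity ~2 mV/K at a 0.6 V bias) to reproduce the ~0.33%/K single-diode value and to show how n = 8 diodes in series approach the VOx range. Expressing n·dV_f/dT relative to the single-diode bias is an illustrative interpretation, not the paper's formula.

dVf_dT = 2e-3        # V/K, single diode at 300 K (from the text)
V_bias = 0.6         # V, bias voltage (from the text)

for n in (1, 8):
    sens = n * dVf_dT                 # series output sensitivity, V/K
    coeff = sens / V_bias * 100       # effective temperature coefficient, %/K
    print(f"n = {n}: {sens*1e3:.0f} mV/K -> ~{coeff:.2f} %/K")
# n = 1 gives ~0.33 %/K; n = 8 gives ~2.7 %/K, comparable to VOx (~2-3 %/K)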
Improvement in Absorber for Small Pixel Structure
Small pixels benefit the detectors from a production point of view. For instance, scaling down from a 25 µm pixel to 17 µm decreases the processing cost by 40% and the power consumption by 33%, while the detection range is increased significantly [61]. However, since the IR absorption is proportional to the absorber area, novel structures that achieve a high fill factor or high emissivity are needed in order to compensate for the disadvantage of the small pixel size.
The umbrella absorber is a widely adopted design to maximize the absorber area that captures the incident IR energy. As shown in Figure 6a, it consists of an IR absorber layer that is individually suspended over the bolometer and support legs, supported by one or more posts. The umbrella absorber consists of a dielectric layer or a multi-layer structure of metal and dielectric layers, the same as the conventional absorber layers. Some umbrella absorbers have etch holes designed to enhance the sacrificial removal. These etch holes also benefit the responsivity due to the decrease in the thermal capacitance of the umbrella absorber [79]. The umbrella absorber can achieve a fill factor above 90% and a ~23% improvement in responsivity [72]. This structure provides a fill factor close to the ideal value at the expense of more process steps, usually adding 2-5 masking layers and the corresponding deposition and etching steps [45].
Another prospective approach to improve the absorption is an absorber with a metasurface. The magnetic resonance in the metasurface can control the thermal emission of phonons; therefore, the IR absorption spectrum of the metasurface can be manipulated by changing the structure parameters [80].
This can be applied to the surface of the absorber in order to enhance the IR absorption in the microbolometer pixel, as shown in Figure 6b. This novel approach has attracted the attention of several groups, and the preliminary results reveal its potential for frequency selection and absorption enhancement [81-83].
Read-Out Integrated Circuit (ROIC)
The IR energy absorbed by the microbolometer pixel is transformed into a weak photocurrent, which is not suitable for direct processing due to noise interference. The photocurrent needs to be amplified and finally turned into a digital signal by the read-out integrated circuit (ROIC). Benefiting from the CMOS technology, the ROIC has the advantages of high signal handling capacity, high circuit density, low power dissipation, high uniformity, and low noise [3]. As shown in Figure 7, the ROIC usually contains several blocks: (1) the read-out circuit (ROC) to amplify the photocurrent and turn it into a voltage signal; (2) the row decoder and the column multiplexer to select an individual pixel; (3) the power supply and clock signal generator to provide the bias and the clock signal; (4) some IRFPAs have an on-chip analog-to-digital converter (ADC) integrated in the ROIC, while others implement an external ADC. Among all these blocks, the ROC and the ADC are the core blocks that determine the performance of the ROIC.
Read-Out Circuit (ROC)
In the ROC, the photocurrent generated from the pixel is amplified and accumulated by a capacitor during an integration time to form a stronger voltage signal, which is then read out into a sample-and-hold (S/H) circuit for the subsequent digital conversion in the ADC. The design of the ROC significantly affects the power dissipation and the quality of the analog output signal before conversion. The most commonly used ROC configurations in microbolometer IRFPAs are direct injection (DI) [61,84,85], gate modulation input (GMI) [13,34,35], and the capacitive transimpedance amplifier (CTIA) [20,31,65,86,87]. The design concepts involve both performance and structural complexity; each designer may prefer a different design depending on the technical requirements and the process schedule.
The structure of the DI configuration is shown in Figure 8a. The photocurrent is injected into C1 to be integrated after being amplified via M1, and is then read out to the S/H circuit through M4. The function of M2 is to reset the voltage on C1. The DI benefits from a simple structure and low power dissipation, but suffers from an unstable bias voltage, poor linearity, and poor noise suppression. Figure 8b shows the structure of the GMI configuration. The photocurrent flows into a current mirror to generate the mirror current toward C1, which is then integrated. The GMI itself has a varying current gain depending on the background, leading to higher sensitivity, background suppression, and a high dynamic range. Meanwhile, the circuit noise is suppressed by the current mirror structure. The disadvantage of GMI is that the linearity is still affected by the unstable bias voltage, while the current gain and injection efficiency are susceptible to the threshold voltage and process condition of the metal-oxide-semiconductor field-effect transistor (MOSFET), resulting in a negative influence on the circuit performance.
As the most popular configuration in microbolometer IRFPAs, the CTIA configuration is shown in Figure 8c; it is an integrator with the capacitor C1 in the negative feedback loop of the operational amplifier. M1 is the reset switch and M2 controls the output. The CTIA has a low input impedance and thus high injection efficiency, a stable bias and thus excellent linearity, a controllable current gain, high sensitivity, and good jam-proofing. However, it has relatively high power dissipation, occupies a large area, and introduces more noise due to the offset voltage. Compared to the DI configuration, the CTIA has a higher current gain, which provides higher sensitivity to detect weaker currents, and it also has a lower input impedance, leading to higher injection efficiency. Compared to the GMI configuration, the CTIA provides a more stable bias voltage for the detector, resulting in better linearity of the output signal. Typical CTIA parameters for microbolometer IRFPAs are shown in Table 2.
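Since the CTIA behaves, to first order, as an ideal integrator, its output is V_out = I·t_int/C1. The sketch below evaluates this with assumed values (the actual parameters of Table 2 are not reproduced here), illustrating how the feedback capacitance and the integration time set the conversion gain.

I_photo = 50e-9      # signal current change, A (assumed)
t_int   = 64e-6      # integration time, s (assumed)
C1      = 2e-12      # feedback capacitance, F (assumed)

V_out = I_photo * t_int / C1
print(f"V_out = {V_out:.2f} V")   # 1.60 V for these placeholder values

A smaller C1 or a longer t_int raises the gain, at the cost of earlier saturation of the integrator; this is the basic trade-off a designer tunes against the expected photocurrent range.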
Analog-to-Digital Convertor (ADC)
Generally, a high-speed ADC with a high dynamic range is required for use in CMOS microbolometer IRFPAs. Although on-chip ADCs using the pixel-level Sigma-Delta (Σ-∆) ADC [92-94], the monolithic pipeline ADC [95,96], and the column-parallel successive approximation register (SAR) ADC [97] are reported to be available for achieving high-sensitivity ROICs for microbolometer IRFPAs, there are no reports of on-chip ADCs in readily available CMOS microbolometer IRFPAs. CMOS microbolometer IRFPAs usually use external ADCs, due to the inadequate signal processing area in the monolithic FPA. The microbolometer IRFPA imposes requirements on the ADC such as low power dissipation, high speed, low delay, low offset voltage, low noise, and a high slew rate. Table 3 shows typical parameters of an on-chip monolithic pipeline ADC for the microbolometer IRFPA.
Table 3. Parameters of a 14-bit on-chip pipeline analog-to-digital convertor (ADC) designed for a microbolometer IRFPA [95].
Focal Plane Array (FPA)
Microbolometer pixels are usually fabricated on the substrate in a repeating arrangement to form a microbolometer array for imaging purposes. Each microbolometer absorbs the incident IR radiation and transforms it into an electric output, which is read out and calibrated by the ROIC to produce a pixel in a two-dimensional image. A microbolometer FPA is the combination of the microbolometer array and the ROIC. Generally, IRFPAs can be classified as hybrid or monolithic [98]. In the hybrid FPA, the detector pixels and the ROIC are fabricated on different substrates, which are combined using flip-chip bonding via metal bumps. Since it has advantages such as the independent optimization of the detector material and the multiplexer, a near-100% fill factor, and sufficient signal processing area, it is widely used in cooled IRFPAs and high-end uncooled IRFPAs [6]. The monolithic FPA integrates the ROIC and the detector pixels on the same substrate, and part of the column or row selecting circuit is integrated in the pixels. Since the silicon-based monolithic FPA technology is compatible with the CMOS process, providing a mature approach with high uniformity and low cost, it is widely used in microbolometer IRFPAs.
The reduction of pixel size poses challenging tasks for the mechanical stability of the pixel structure, the ROIC, the signal-to-noise ratio, etc. Not only the thermal sensor material but also the overall process limits the final performance of the IRFPAs. Table 4 lists the performance of several commercial IRFPAs. The performance of SOI diode IRFPAs and CMOS-compatible resistance microbolometer IRFPAs is still inferior to that of the VOx or Si-derivative microbolometers, but the gap between the two is small. This means the low sensitivity resulting from the low TCR of the thermal sensor material can be partly compensated by the small feature size and high uniformity provided by the CMOS or Si LSI process.
Vacuum Packaging Technology
Thermal conduction via the atmosphere accounts for a large fraction of the total thermal conduction, especially when the pixel size is small. Since the temperature change, and thus the responsivity, is inversely proportional to the thermal conductance, vacuum packaging of the microbolometer pixels is necessary to eliminate thermal conduction through air. Unfortunately, the cost of the vacuum packaging is one of the major cost drivers for the microbolometer IRFPA. The typical vacuum level required here is below 1 Pa, which poses a challenge to the packaging technology [100]. Although such a requirement can be achieved via one-by-one pumping through a fine-bore tube, this approach becomes a bottleneck in lowering the cost of uncooled IRFPAs. Figure 9 shows the concept of the wafer-level packaging (WLP) technology for IRFPAs, which is a popular option for cost reduction [59,101,102]. In this technology, an IR-transparent cap wafer is bonded to the IRFPA wafer under vacuum, and hermetic sealing is then achieved using solders. Several steps are needed prior to the bonding to complete the cap wafer.
The cavities for the pixels are formed via etching; then both sides of the cap wafer are antireflection-coated, and afterwards the vacuum getters are deposited inside the cavities. The WLP technology is a practical technology capable of reaching an average seal yield above 95% with correct parameters [103].
Although the wafer-level packaging technology provides a significant cost reduction, it still takes a considerable proportion of the total cost of the uncooled IRFPA, especially for the low-end market. A pixel-level packaging (PLP) technology has been developed to address this issue [104-106]. The PLP process consists of manufacturing IR-transparent microcaps that cover each pixel in the step directly following the wafer-level bolometer fabrication, i.e., no extra bonding process is needed. Figure 10 shows the schematics of a packaged pixel.
To form this structure, first, a sacrificial layer with trenches around each pixel is formed above the microbolometer via deposition and etching. Then, an IR-transparent material is deposited to form the microcap structure. After that, etch holes are formed through the IR-transparent microcap and the sacrificial layer is removed. Finally, a sealing and anti-reflecting layer is deposited under high vacuum. A pixel using PLP keeps a stable vacuum level below 10⁻³ mbar and shows nominal performance after one year of ageing, demonstrating PLP to be a prospective novel vacuum packaging technology for microbolometer IRFPAs.
Limitation and Future Trends
The minimum resolvable size x set by the diffraction limit can be expressed in terms of the F-number and the wavelength λ according to the Rayleigh criterion:
x ≈ fθ = 1.22λF (6)
Here θ is the diffraction angle, and f is the focal length of the optical lens. In an LWIR detector, λ ranges from 8 to 14 µm, while the F-number for CMOS microbolometer IRFPAs is usually close to 1 to keep the device compact, indicating a minimum resolvable size of 10-17 µm. When the pixel size is between 0.5λF and 1.22λF, the resolution still benefits from the oversampling but saturates quickly as the pixel size gets smaller [79]. However, unlike the photon detectors, which prefer a pixel size close to or even smaller than the diffraction limit to achieve the maximum performance, the reported CMOS microbolometer IRFPAs are still in the "detector limit" regime, i.e., still far from the potential limiting performance.
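A quick check of Equation (6): with F = 1 and λ spanning the 8-14 µm LWIR band, x = 1.22λF reproduces the 10-17 µm range quoted above.

F = 1.0
for lam_um in (8.0, 14.0):
    x_um = 1.22 * lam_um * F
    print(f"lambda = {lam_um:4.1f} um -> x = {x_um:.1f} um")
# lambda =  8.0 um -> x =  9.8 um
# lambda = 14.0 um -> x = 17.1 um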
The main factor that limits the pixel size reduction in CMOS microbolometers is the responsivity. As mentioned above, a smaller pixel means less IR absorption, resulting in lower responsivity. Meanwhile, the scale-down of circuits results in a lower applied bias voltage, which also means lower responsivity. The responsivity can be enhanced by adjusting the fill factor, the emissivity ε, the thermal conductance G, and the temperature coefficient TCR or dV_f/dT. The fill factor and the emissivity in the state-of-the-art technology are already high, although ε can still be increased to a certain extent via the metasurface technology. The thermal conductance can be decreased with thinner or longer support legs. The TCR is intrinsic to the material, but increasing the resistance of the thermistor can raise the responsivity. On the other hand, the temperature coefficient of the diode type microbolometer is mainly determined by the number of diodes in series. In any case, enhancing the responsivity of small pixels comes back to a smaller feature size.
Besides, the spatial resolution is also affected by the array size. Although the XGA (extended graphics array) format (1024 × 768) has become popular in the VOx and silicon-derivative microbolometer IRFPAs, the QVGA (quarter video graphics array) format (320 × 240) is still popular with the CMOS microbolometer IRFPAs. Since achieving a larger array size is much easier than reducing the pixel size, the current low spatial resolution can be considered a trade-off between production cost and performance. It also implies that the market demand for performance improvement in low-end IR detectors is not pressing. However, the merit of pixel size reduction is significant: small pixels provide low production cost, high spatial resolution, and small device size. Although the pace of pixel size reduction in CMOS microbolometer IRFPAs has slowed in recent years because of insufficient market demand, smaller pixels with lower cost and better performance will arrive sooner or later as technology based on smaller feature sizes becomes practical.
10,799.6
2020-08-24T00:00:00.000
[ "Physics", "Engineering" ]
Free surface effects and the utility of a skim plate for experiments in a water towing tank at steady and unsteady model velocity A towing tank is utilized to investigate the flow field around a two-dimensional submerged foil model operating near the free surface. Free surface effects are analyzed for steady and unsteady model velocity. The model's submergence depth and angle of attack are varied. Tests are conducted for the model facing upside-up and upside-down. The surface deflection is recorded, and the experimental results are utilized to validate an analytic model that is deployed to predict wake wave patterns at arbitrary model velocity. The flow mechanism leading to load alterations when the foil is in the vicinity of the free surface is explored in detail using experimental and analytic results. The imposed wave-induced velocity perturbations alter the effective angle of attack experienced by the foil. Flow separation is delayed when the model is facing upside-up and promoted when facing upside-down. For test cases with unsteady sinusoidal model velocity, forward-traveling waves are generated, leading to a time-varying change in the inflow condition of the submerged foil. Increasing the model's submergence depth alleviates free surface effects. A skim plate is installed between the free surface and the model. It shows wave-alleviating effects similar to those obtained when increasing the model submergence depth, by locally blocking wave-induced velocities. The skim plate position is varied in the longitudinal direction to determine its most advantageous position. Surface wave effects at unsteady model velocity are alleviated most effectively when the skim plate protrudes upstream of the model.
Introduction
Towing tanks are a well-established means for the experimental investigation of watercraft and their components (e.g., ships, submarines, or hydrofoils). In addition, they are a viable alternative for studying the aerodynamic characteristics of land- and airborne bodies such as cars, trains, and airfoils (Schmidt et al. 2017; Tschepe et al. 2019; Kirk and Jones 2019). A towing tank may be the preferred choice over a wind tunnel if high Reynolds numbers or realistic boundary conditions (i.e., quiescent fluid and moving body) need to be obtained. Furthermore, unsteady experiments are conducted with more ease due to the increased time scales in water compared to air. In recent years especially, there has been a notable increase in experimental studies concerning surging airfoils (time-dependent variation in freestream velocity) (Smith and Jones 2020; Zhu et al. 2020; Müller-Vahl et al. 2020; Kirk and Jones 2019; Medina et al. 2018). Many of these studies are carried out in a towing tank or water tunnel due to the relative simplicity of generating the desired motion profile while achieving high non-dimensional parameters such as Reynolds number Re, reduced frequency k, and relative velocity amplitude. However, several challenges need to be overcome to obtain high-quality and unbiased experimental data (Jentzsch et al. 2021). Among these challenges, free surface effects may alter the hydrodynamic characteristics of the towed object near the free surface such that the desired transferability to aerodynamic forces is called into question. Some researchers report utilizing a skim plate placed atop or just below the free surface to alleviate free surface effects in a towing tank (Corkery et al. 2018). For a vertically mounted wing, this plate also acts as a symmetry plane, thereby doubling the effective aspect ratio AR.
Studies discussing the effectiveness of such devices are absent. Furthermore, guidelines regarding the proper sizing and installation of a skim plate have not been reported either. Overall, the available literature is very limited. When an airfoil is tested in water rather than in air, it essentially becomes a hydrofoil. The basic shape of both foils is similar, but they differ in geometric details such as camber or thickness depending on the desired characteristic and application. Hydrofoils are utilized to lift a boat out of the water in order to reduce the hull drag, leading to an increase in speed and fuel efficiency. Therefore, hydrofoils usually operate near the free surface. The steady lift coefficient C_L shows a dependence on the submergence depth ratio h/c and the submergence depth Froude number Fn_h = u/√(gh), where h stands for the submergence depth, c for the chord length, and u for the velocity of the hydrofoil (see Fig. 1). Generally, the lift coefficient C_L is reduced due to the presence of the free surface and shows the largest deviations in the range 1 < Fn_h < 5 (Hough and Moran 1969). At low Froude numbers Fn_h < 0.4, the lift coefficient C_L can exceed the value obtained in an unbounded fluid, where surface effects are absent. In such a case, the free surface acts as a rigid wall, and the foil experiences effects similar to those of a wing in ground proximity. Based on Airy's linear wave theory, it can be shown that foil-generated surface waves become prominent for Fn_h > 0.4 (Faltinsen 2006). Above this threshold value, the submergence depth of the foil is less than λ = 2πu²/g, which corresponds to the wavelength of the transverse waves in the wake. The orbital particle motion caused by the waves alters the effective velocity and angle of attack experienced by the foil section (van Walree 1999). Alterations of the effective angle of attack due to incident sea waves are reported in Faltinsen (2006) and Filippas et al. (2020). Wilson (1983) shows that the zero-lift angle α_0 is affected considerably for low submergence depth ratios h/c and low chord-based Froude numbers Fn_c = u/√(gc) on a thin symmetric hydrofoil of aspect ratio AR = 6. At a Froude number of Fn_c ≈ 1.2, the zero-lift angle is shifted by Δα_0 ≈ 2.4° at h/c = 0.25 and Δα_0 ≈ 0.5° at h/c = 1 due to the asymmetry below and above the foil. The change in zero-lift angle is negligible for Fn_c > 3 according to this study. Based on the experimental and numerical studies carried out by Hough and Moran (1969) and van Walree (1999), free surface effects on a hydrofoil are insignificant as long as h/c > 5, because the Froude number a hydrofoil usually operates at is Fn_h > 3. A hydraulic jump may occur at low submergence depth Froude number Fn_h and low submergence depth ratio h/c. In such an event, the pressure distribution around the hydrofoil is affected considerably and the minimum pressure coefficient C_p may even be observed at the trailing rather than the leading edge (Parkin et al. 1956). While all these studies analyze the interaction between the submerged foil and self-induced free surface effects, the waves generated by the test rig that is required to mount the model have to be taken into account as well. The surface wave deflection may be altered significantly depending on the test rig design. Although wake waves have been studied extensively for various objects at constant velocity, the reporting is limited for unsteady model velocities.
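To make these scaling relations concrete, the following minimal Python sketch evaluates the two Froude numbers and the transverse wake wavelength for a hypothetical operating point; all numerical values are illustrative assumptions rather than data from the studies cited above.

```python
import math

g = 9.81   # gravitational acceleration, m/s^2
u = 2.0    # towing velocity, m/s (assumed)
c = 0.5    # chord length, m (assumed)
h = 0.65   # submergence depth, m (assumed)

Fn_h = u / math.sqrt(g * h)      # depth-based Froude number
Fn_c = u / math.sqrt(g * c)      # chord-based Froude number
lam = 2.0 * math.pi * u**2 / g   # transverse wake wavelength (Airy, deep water)

print(f"Fn_h = {Fn_h:.2f}, Fn_c = {Fn_c:.2f}, lambda = {lam:.2f} m")
# Foil-generated surface waves become prominent for Fn_h > 0.4, i.e.,
# whenever the submergence depth h is smaller than lambda = 2*pi*u^2/g.
print("wave-making regime" if Fn_h > 0.4 else "rigid-wall (ground-effect) regime")
print("free surface effects insignificant" if h / c > 5 else "free surface effects possible")
```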
For ships, the practical relevance is limited to vessels decelerating, accelerating, or performing a turning maneuver. However, knowledge about the wave system at unsteady model velocity is particularly crucial for towing tank tests, such as when investigating surging models. For sinusoidal motion profiles in particular, a complex wave system forms that changes in space and time. Due to the cyclic variation in velocity, waves are generated that pass the model when the instantaneous model velocity is low, thus potentially affecting the flow over the submerged model. The resulting wake system depends on the parameters of the prescribed motion profile (i.e., mean velocity U_0, frequency f, and relative velocity amplitude σ). Towing tanks represent a viable alternative to wind tunnels to investigate the aerodynamic behavior of submerged models. However, asymmetric boundary conditions due to the deflection of the free surface may alter the forces acting on the submerged model. Waves introduce a fluid particle motion that changes the effective angle of attack and pressure. Wake waves at unsteady (sinusoidal) model velocity have not been investigated in the past. This paper discusses the effect of free surface waves on a submerged airfoil at relatively low submergence depth ratios ranging from h = 1.3c to h = 2.5c at steady and unsteady model velocity. The applicability of a skim plate as a tool to alleviate surface wave effects is discussed. The general properties of Airy's linear wave theory are discussed in Sect. 3.1. A mathematical description of the pressure patch method is provided, which is utilized to model the resulting steady and unsteady wake waves analytically. The analytic solution is compared to experimental results to demonstrate the validity of this approach (Sects. 3.1.1 and 3.1.2). Changes in pressure and effective angle of attack due to the wave-induced particle motion are addressed in Sect. 3.2 at steady (Sect. 3.2.1) and unsteady (Sect. 3.2.2) model velocity. In Sect. 3.3, the effect of utilizing a skim plate is discussed and its effectiveness assessed through surface pressure measurements on the submerged foil model. The effectiveness of a fixed-size skim plate is discussed when varied in the longitudinal direction relative to the submerged model.

Setup and instrumentation

The towing tank facility utilized to carry out the experiments is described in Sect. 2.1. It is one of the largest towing tanks in Europe and was built in 1903. The test rig, skim plate, and foil model that are towed through the water basin are described in Sect. 2.2.

Towing tank facility

The experiments are conducted in the large water towing tank at Technische Universität (TU) Berlin (Fig. 2 shows the towing tank and carriage train). The water basin measures 250 m in length, 8.1 m in width, and the average water depth is 4.8 m. The basin depth is not constant along the entire length and drops from 3.4 m to 5.2 m at a longitudinal position of 60 m. Wave breakers are installed at one end of the water tank and along one side of the basin. They accelerate wave dissipation and reduce waiting times in between test runs. A preparation area is utilized to install the test rig, skim plate, and model; it has a length of 17 m, a width of 2.1 m, and a depth of 2.4 m. It is drained using an electric pump and is decoupled from the main water basin using a bulkhead.
A carriage train with a weight of about 25 tons moves on two rails that are mounted along the walls of the towing tank. The rails are leveled with the water surface to retain a constant distance between the free surface and the measurement platform. The test rig and model are mounted to a truss that can be traversed in the vertical direction to change the submergence depth of the model. The maximum permitted loads in the longitudinal (drag) and vertical (lift) directions are 1 and 2 tons, respectively. The application of side loads is avoided to prevent damaging the truss. Therefore, foil models are limited to tests with horizontal alignment. The carriage train is powered by eight 55.5 kW DC motors and is computer-controlled to generate velocity profiles of arbitrary shape. The maximum acceleration is limited to 1 m s⁻² and the maximum velocity that can be reached is 12.5 m s⁻¹. The carriage train velocity is recorded using a rotary encoder with a resolution of 10 000 ticks per meter in combination with a frequency-to-current signal converter (WAS4 PRO Freq) by Weidmueller.

Test rig setup, skim plate, and foil model

A test rig is used to mount two-dimensional models with a maximum span of s = 1.0 m to the measurement platform (Fig. 3). Two splitter plates with a size of 1.25 m × 1.0 m × 0.035 m (l × w × t) are attached to both sides of the model to provide a quasi-two-dimensional flow field. The splitter plates are attached to four steel frameworks that consist of multiple welded steel plates with a thickness between 5 mm and 8 mm. All welded plates are oriented at an angle to each other but at zero incidence with respect to the longitudinal carriage train motion. The foil model is mounted onto angular steel mounts with a welded pin of 50 mm diameter on either side. The pins are inserted into bearings located on the outer part of either splitter plate such that the angle of incidence can be changed continuously. The point of rotation is located at x/c = 0.5. Additional end plates on the outer side are utilized to connect the pins with the test rig such that the angle of attack is fixed in place. The model's submergence depth h is controlled by traversing the truss, and thus the entire test rig, in the vertical direction. The submergence depth of the model below the free surface is measured from the upper side of the foil at an angle of attack of α = 0°. The submergence depth h ranges from 1.3c to 2.5c, where c is the chord length of the model. A flat plate serves as a generic surrogate to investigate the effect of free surface waves on a submerged foil model moving at steady and unsteady velocity. The foil has an aspect ratio of AR = 1.9, where the span measures s = 0.95 m and the chord c = 0.5 m. The foil's leading edge is elliptical and the trailing edge is blunt. The length of the semi-major axis of the elliptical leading edge is a = 0.06 m and the thickness of the model is t = 0.03 m. The foil is a composite of coated wood and aluminum. To increase the structural integrity, two aluminum plates (0.01 m and 0.005 m thick) are affixed on the top and bottom of the wood core. Multiple pressure sensors are embedded into the model. The pressure tubing length connecting the pressure sensor and tap is l_tube = 0.15 m for all sensors. All pressure taps incorporated into the model are on one side of the foil only. Thirteen pressure taps are located at midspan and distributed along the chord.
Two-dimensional flow around the model at steady velocity is confirmed at pre-stall angles of attack by utilizing two spanwise pressure taps. Detailed information regarding the spanwise measurements is provided by Jentzsch et al. (2021). Differential pressure sensors made by Honeywell (26PCBFA6D) with a range of ±35 kPa and an accuracy of 0.25% full scale are used together with custom-made amplifiers. The pressure reference side is connected to atmospheric pressure outside of the water. A pressure calibrator (KAL 84) can be connected to the tubing of the reference side to carry out a static sensor calibration. The other side of the differential pressure sensor is connected to the pressure tap via a silicone tube to record the surface pressure of the submerged foil. The tubing and sensor cavity are carefully filled with water. A triple-axis accelerometer (ADXL335) installed within the model is calibrated in situ and is utilized to set the model's angle of attack. The free surface wave deflection is measured using one ultrasound sensor (Balluff BUS0052) while towing the model and test rig. All time-resolved signals such as pressure, model velocity, acceleration, and surface deflection are acquired with a data acquisition system with 32 synchronized channels at a sampling rate of f_s = 10 000 Hz. The system consists of an NI 9188 mainframe by National Instruments equipped with eight analog input modules (NI 9215-BNC). A skim plate can be installed above the model. The skim plate is 2.44 m long, 0.96 m wide, and 0.02 m thick. Its position can be traversed in the longitudinal and vertical directions. The submergence depth of the skim plate below the undisturbed free surface is h_sp = 0.2c. Three different longitudinal positions are tested within this study (i.e., front, center, and aft). In the center position, the skim plate protrudes equally over the leading and trailing edge. Measured from this position, the skim plate is moved either 0.8c to the front or 1.16c aft, which is the maximum translation possible in either direction for the given setup. A principle sketch showing the relative position of the submerged model and skim plate is provided in the inset in Fig. 3. (Fig. 3: Test rig for two-dimensional models with skim plate and model installed. The inset shows a principle sketch of the three different skim plate positions, i.e., aft, center, and front, in the longitudinal direction relative to the model.)

Results

Experiments that are conducted in an open channel water tunnel or towing tank are prone to the generation of water waves that travel on the density discontinuity between the two fluids (i.e., water and air). The general properties of water waves are discussed in Sect. 3.1. Often, the experimental investigation of air- and land-borne vehicles (i.e., airfoils, trucks, or trains) is performed in facilities where the working fluid is water rather than air in order to increase the Reynolds number. Assuming that the effects of compressibility are negligible, an important requirement for the transferability of results to extract the pure aerodynamic coefficients is that results are not affected by the boundary conditions of the facility. A violation thereof exists if the model submergence is not sufficient such that free surface effects are apparent. The deflection of the free surface introduces a pressure and velocity perturbation. These perturbations attenuate with depth depending on the corresponding wavenumber and water depth.
Knowledge regarding the wavenumber spectrum, including the correct amplitudes and phases, is required to obtain the perturbation values at the corresponding model submergence depth. However, it is a challenging task to obtain this information experimentally. The resulting wave field is a superposition of multiple waves generated by various sources distributed in three dimensions. Thus, an analytic approach is utilized to model the steady and unsteady wave pattern with pressure patches, as described in Sect. 3.1.

Free surface wave profile

When towing an object through the water basin of a towing tank, waves are generated due to the pressure disturbance at and below the air-water interface. The two restoring forces are gravity and surface tension. Surface tension effects need to be considered for capillary waves, which are of small wavelength (λ ≲ 0.1 m) and often referred to as ripples. Because their wavelength is short compared to the submergence depth of the model in this study, changes to the hydrodynamic characteristic of the foil are negligible, and capillary waves are neglected in the following for simplicity. For all larger wavelengths, gravity is the dominant restoring force, such that these waves are often referred to as gravity waves. The particles under a wave describe an orbital motion that is elliptical in finite depth (shallow water and intermediate regime) and circular in deep water. A velocity and pressure oscillation is associated with the wave motion. Airy's wave theory (i.e., the deep water approximation) is utilized to visualize the oscillatory perturbations in velocity, u′ and w′, as well as pressure, p′, under a monochromatic wave in Fig. 4 (Kundu et al. 2016). The magnitudes of the perturbations introduced by the wave decline exponentially with depth in deep water. Additionally, Fig. 4 highlights that a submerged body such as a foil experiences an induced angle of attack α_ind and a change in pressure depending on its submergence depth but also on its relative position to the wave in the longitudinal direction. However, the wave field generated during a test run in a towing tank may become complex. A broadband range of wavenumbers is excited depending on the towing speed. The interaction between the submerged model and the free surface is altered by the foil's loading, which depends on the angle of incidence. Further, multiple wave-generating sources exist (i.e., test article and test rig) and their waves interact with each other. At unsteady towing velocity, an unsteady wake is generated and the steady assumption may not be applicable anymore. A suitable approach to model complex wake wave systems is the pressure patch approach. This method was first introduced by Havelock (1908). It is a computationally inexpensive method to predict the resulting wake waves in order to highlight wave features in the far field (Benzaquen). An external pressure source with arbitrary shape and magnitude is imposed onto the free surface and the resulting free surface deflection is solved analytically. Li et al. (2019) utilize this method to predict the wave field generated by a moving ship with shear currents of arbitrary direction. A complete set of derivations and formalism including the effect of shear currents is presented by Li and Ellingsen (2015). Their approach and formalism are adopted for the purpose of this study while omitting the effect of shear flow. In the following, only the most important equations utilized to model wake waves within this study are provided.
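As a quantitative illustration of the exponential attenuation described above, the following sketch evaluates the velocity and pressure perturbations under a single monochromatic deep-water wave according to Airy theory; wave amplitude, wavelength, and evaluation depth are assumed values chosen only for demonstration.

```python
import numpy as np

g, rho = 9.81, 998.0   # gravity, m/s^2; water density, kg/m^3
A = 0.02               # wave amplitude, m (assumed)
lam = 1.6              # wavelength, m (assumed)
k = 2 * np.pi / lam    # wavenumber, 1/m
om = np.sqrt(g * k)    # deep-water dispersion relation

z = -0.65              # evaluation depth below the mean surface, m (assumed)
x = np.linspace(0.0, 2 * lam, 200)
t = 0.0

decay = np.exp(k * z)  # exponential attenuation with depth (z <= 0)
u_p = A * om * decay * np.cos(k * x - om * t)        # horizontal velocity perturbation
w_p = A * om * decay * np.sin(k * x - om * t)        # vertical velocity perturbation
p_p = rho * g * A * decay * np.cos(k * x - om * t)   # dynamic pressure perturbation

print(f"attenuation factor exp(k z) = {decay:.3f}")
print(f"max |w'| = {np.abs(w_p).max() * 1000:.1f} mm/s")
```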
The pressure patch shape and magnitude are modeled with an arbitrary function. Ship hulls are often modeled with an elliptical super-Gaussian (Eq. 1) with m = 3. Equations 1 and 2 are utilized to model the test rig and foil model (see Fig. 3) as external pressure sources in a very simplified manner by creating Gaussian pressure patches that correspond to the actual length scales of the surface-piercing parts and the submerged foil. Nine external pressure sources are utilized to model the experimental setup. The test rig is simulated using eight pressure sources: four sources model the most inward surface of the steel frameworks and another four model the skim plate mounting struts. The exponent in Eq. 1 is chosen as m = 5 to create a fairly sharp pressure drop at the border of the pressure patch. The submerged foil is modeled using the same equation but with m = 2 to create a smoother pressure distribution. The magnitude of each pressure patch is chosen to scale with the dynamic pressure q_∞ of the instantaneous velocity, q_∞ = ½ρu(φ)². Additionally, a scaling factor S_exp is applied (Eq. 2) such that the resulting wave amplitudes from the pressure patch approach align with the experimentally determined amplitudes. Equation 3 is added to account for waves generated by a submerged body such as the foil model. Since all calculations are performed in Fourier space, the externally applied pressure p_ext(r, t) needs to be transformed before being utilized in Eqs. 5 and 6. The definition of the Fourier transformation in the xy-plane for quantities such as the externally applied pressure p_ext and the surface elevation η is given by Eq. 4, where r = (x, y) and k = (k_x, k_y) denote the position and wavenumber vectors in polar form. The surface deflection generated by a pressure source moving at constant velocity is readily obtained from Eq. 5. The phase velocities c₊ and c₋ correspond to the wave components traveling in the directions of k and −k. The term iε is necessary to circumvent the poles in the complex plane, with ε a small number approaching zero; this is also known as the radiation condition. Results at steady velocity are discussed in Sect. 3.1.1. Equation 6 is utilized to obtain the solution for unsteady model velocities as presented in Sect. 3.1.2. Implementing this equation results in ring waves created by a finite impulse pressure excitation in the laboratory reference frame. These ring waves spread out over time. All contributions at every time step since t = 0 are summed up over time, taking into account the distance the pressure source has traveled. Thus, any arbitrary motion profile can be constructed as a sum of finite impulses at different time steps and locations (Eq. 7). Obtaining the surface deflection allows calculating the velocity and pressure perturbations at arbitrary submergence depth in xy-space. These findings are utilized to conceptually show how the pressure coefficients of a submerged foil model are affected by the free surface, as presented in Sect. 3.2.

Steady velocity

Tests are carried out in the towing tank at constant velocity. A steady wake pattern forms, which moves with the model and test rig. The wave amplitudes are recorded along the centerline of the model using a single wave gauge. Multiple test runs are conducted and the wave sensor is repositioned after each measurement. The time-averaged data are acquired over 100 s, excluding the acceleration and deceleration phases.
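Before turning to the measurements, the impulse-superposition idea behind Eqs. 5-7 can be sketched compactly in Python. The snippet below is a strongly simplified stand-in, not the authors' implementation: a single super-Gaussian patch with assumed dimensions and magnitude is combined in Fourier space with the free ring-wave response of a deep-water surface, and the contributions of all time steps along the (here sinusoidal) source trajectory are summed.

```python
import numpy as np

g, rho = 9.81, 998.0

# Laboratory-frame grid (coarse, for illustration only)
nx, ny = 256, 128
x = np.linspace(-30.0, 50.0, nx)
y = np.linspace(-12.0, 12.0, ny)
X, Y = np.meshgrid(x, y, indexing="ij")

kx = 2 * np.pi * np.fft.fftfreq(nx, d=x[1] - x[0])
ky = 2 * np.pi * np.fft.fftfreq(ny, d=y[1] - y[0])
KX, KY = np.meshgrid(kx, ky, indexing="ij")
K = np.hypot(KX, KY)
w0 = np.sqrt(g * K)                       # deep-water dispersion relation

# Super-Gaussian pressure patch (Eq. 1 analog); all scales are assumptions
a, b, m, p0 = 0.5, 0.25, 2, 50.0
p_hat = np.fft.fft2(p0 * np.exp(-(((X / a) ** 2 + (Y / b) ** 2) ** m)))

# Sinusoidal surge (Eq. 8 analog): u(t) = U0 * (1 + sigma * sin(2*pi*f*t))
U0, sigma, f = 1.0, 0.5, 0.155
T, nt = 30.0, 600
t = np.linspace(0.0, T, nt)
xs = U0 * (t + sigma * (1.0 - np.cos(2 * np.pi * f * t)) / (2 * np.pi * f))

# Impulse superposition (Eq. 7 analog): every time step sheds a ring wave
w0_safe = np.where(K > 0, w0, 1.0)
gain = np.where(K > 0, K / (rho * w0_safe), 0.0)
eta_hat = np.zeros((nx, ny), dtype=complex)
dt = t[1] - t[0]
for tp, xp in zip(t, xs):
    # patch shifted to the instantaneous source position, plus the free
    # response sin(w0 * (T - tp)) it radiates until the observation time T
    eta_hat += -gain * np.sin(w0 * (T - tp)) * np.exp(-1j * KX * xp) * p_hat * dt

eta = np.real(np.fft.ifft2(eta_hat))      # surface deflection at time T, m
print(f"max |eta| = {np.abs(eta).max() * 1000:.2f} mm")
```

Here the patch magnitude is held constant, whereas in the study it scales with the instantaneous dynamic pressure; a steady run is recovered by setting σ = 0.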
Figure 5 shows experimental results for three different angles of attack at a submergence depth of h = 1.3c and a towing velocity of U_0 = 1.0 m s⁻¹ (Re_0 ≈ 440 000), corresponding to a depth-based Froude number of Fn_h ≈ 0.4. The wave amplitude in the vicinity of the model is affected by changing the model's angle of attack. Thus, an interaction between the submerged foil model and the free surface is apparent, and a reciprocal action from the free surface on the model is expected. These results suggest that the model's submergence depth is not sufficiently deep for this test configuration to be free from surface wave effects. The analytic solution showing the wake in xy-space generated by the submerged model at α = 0° and the test rig is depicted in Fig. 6. A comparison of the analytic and experimental results in the vicinity of the model and along its centerline (i.e., y = 0) is provided in Fig. 5. Good agreement is achieved for an angle of attack of α = 0° from any upstream point until the trailing edge of the model, which corresponds to x = −0.25 m. For x < −0.25 m, results differ where multiple wake waves superpose and interfere with each other. The deviations are caused by the simplified setup of pressure patches that generate the wake waves (as described in Sect. 3.1). Additionally, the superposition of the wake waves yields a pattern with many ripples over a short distance, as is seen in Fig. 6. The ultrasound sensor, which transmits a signal in the form of a beam, is not suitable to accurately detect these small-scale features. Furthermore, the spatial resolution (i.e., the measurement locations) selected with the wave gauge is not sufficiently high to capture these ripples of short wavelength. Having the analytic solution available allows calculating the corresponding fluctuation values w′ and p′ at the submergence depth of the model, not only along the centerline but in the entire xy-space. These qualitative results are utilized in Sect. 3.2.1 to conceptually show how the effective angle of attack and pressure on the submerged model are altered. It shall be noted that these analytic results are intended to demonstrate conceptually how the free surface interacts with the submerged model. The intention is not to correct the experimental results, which is beyond the capability of the pressure patch approach.

Unsteady velocity

As opposed to the steady velocity case, the wake waves at unsteady velocity are not stationary with respect to the model. The sinusoidal velocity profile is defined by Eq. 8, where U_0 denotes the mean velocity of the model, ω the angular frequency, and σ the relative velocity amplitude. When performing an oscillatory (sinusoidal) motion, waves of various wavelengths, and thus different phase velocities c_ph, are generated, which depend on the instantaneous towing velocity. The resulting wake is a superposition of all waves shed at different time instances and depends on the parameters of the sinusoidal motion (i.e., U_0, f, and σ). Therefore, the resulting wake wave pattern changes as a function of time. Similar to the measurements at steady velocity, the wave deflection is measured using one wave gauge that is repositioned in the longitudinal direction after each test run. The recordings are synchronized with the carriage train velocity. The unsteady waves exhibit a repetitive cyclic behavior in accordance with the periodic velocity profile. Therefore, each measurement is phase averaged; the phase averaged surface deflection is compiled in Fig. 7a.
The x-axis contains the longitudinal coordinates at y = 0 in the model reference frame. The location of the model's leading and trailing edge is confined between −0.25 m ⩽ x ⩽ 0.25 m, which is identical to the region highlighted in Fig. 5. Experimental results obtained at an angle of attack of α = 0° (Fig. 7b) agree well, at least qualitatively, with the analytic solution (Fig. 7c). However, quantitative differences in the wave amplitude are apparent. The wave amplitude is underestimated by the pressure patch approach, especially upstream of the model. Similar to the steady results presented in Sect. 3.1.1, a change in the angle of attack to α = +8° (Fig. 7d) affects the deflection of the free surface. Therefore, a reciprocal action between the submerged model and the free surface is apparent. Independent of the angle of attack α, all results (experimental and analytic) show a free surface deflection upstream of the model and test rig. The solution of the analytic approach corresponding to Fig. 7c is utilized to plot the wave deflection in xy-space in Fig. 8. (Fig. 8: Unsteady wave pattern (analytic) at four phase angles at a submergence depth of h = 1.3c, α = 0°, and velocity profile parameters U_0 = 1.0 m s⁻¹ (Re_0 ≈ 440 000), f = 0.155 Hz (k ≈ 0.24), and σ = 0.5.) Four different phase angles are presented to visualize the evolution of the unsteady wake in space at different time instances. Contrary to steady model velocities, the wake angle that forms is not fixed at the Kelvin angle of θ ≈ 19.47° and changes as a function of time. The mean velocity of the model for this particular test case is U_0 = 1.0 m s⁻¹, yet the phase velocity c_ph of some waves shed in the first half of the motion cycle (i.e., 0° < φ < 180°) is larger than U_0. These waves eventually pass the model, which explains the wave deflections upstream of the model and test rig. As a consequence, tests with unsteady sinusoidal velocity always generate waves that travel ahead of the model. Therefore, the water upstream of the model is set in motion before the model arrives. This may affect the inflow conditions experienced by the submerged model, leading to a change in the effective angle of attack, and introduces pressure fluctuations, which is discussed in detail in Sect. 3.2.2. Waves shed at higher velocity do not only travel faster but also exhibit larger wave amplitudes due to the increased pressure perturbation exerted onto the free surface. When passing the model while the instantaneous velocity, and therefore the dynamic pressure q_∞, is low, these waves affect the flow field around the model even more than would be the case for any (quasi-) steady scenario.

Wave-induced angle of attack and pressure offset

The geometric angle of attack α_g of a foil is defined as the angle between the foil's chord line and the freestream velocity vector. When towing a model through the basin of a towing tank, water waves are generated, setting the water particles into an orbital motion. The particle motion is superposed onto the towing velocity, leading to a possible alteration of the effective angle of attack. Additionally, a pressure variation associated with the wave motion may be experienced by the submerged model. Free surface effects may thus lead to discrepancies between results obtained in a towing tank and a wind tunnel. The wave-induced angle of attack at steady velocity is revealed experimentally and discussed in Sect. 3.2.1. At unsteady velocity, the complexity is increased due to the time dependence of the wake. These results are presented and discussed in Sect. 3.2.2.
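A small numerical illustration of the overtaking mechanism described above: in deep water, the transverse wave shed at a given instant travels at a phase velocity equal to the instantaneous model speed, so every wave shed while u(φ) > U_0 must eventually pass the model. The profile parameters below match the test case, while the formulas are standard deep-water relations.

```python
import numpy as np

g = 9.81
U0, sigma = 1.0, 0.5   # mean velocity, m/s, and relative velocity amplitude

phi = np.arange(0.0, 360.0, 30.0)                       # shedding phase angle, deg
u_shed = U0 * (1.0 + sigma * np.sin(np.radians(phi)))   # instantaneous model speed
lam = 2.0 * np.pi * u_shed**2 / g                       # wavelength of shed transverse wave
c_ph = np.sqrt(g * lam / (2.0 * np.pi))                 # deep-water phase speed (= u_shed)

for p, u, L, c in zip(phi, u_shed, lam, c_ph):
    tag = "overtakes the model" if c > U0 else "falls behind"
    print(f"phi = {p:5.1f} deg: u = {u:.2f} m/s, lambda = {L:.2f} m, "
          f"c_ph = {c:.2f} m/s -> {tag}")
```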
Complementary results are provided in both sections utilizing the pressure patch approach as discussed in Sect. 3.1. Once the free surface deflection at the air-water interface is obtained analytically, the vertical velocity component at arbitrary depth is readily derived by applying the linearized kinematic boundary condition at the free surface (Eq. 9) (Arzhannikov and Kotelnikov 2016). Thereafter, the scaling factor (Eq. 3) is applied to Eq. 9 to obtain the effective wave profile at arbitrary submergence depth. The scaling factor takes into account the attenuation of each wavenumber at the given depth, yielding the effective vertical velocity (Eq. 10). The validity of this approach can be verified using Airy's linear wave theory with the definitions provided in Sect. 3.1. The induced angle of attack is calculated based on Eq. 11. For measurements at steady velocity, the freestream velocity is constant, u = const. Similarly, the induced vertical velocity is steady relative to the model, such that w′ = w′(x)|_(z=−h). For tests with an unsteady model velocity, the time-varying freestream velocity is used to obtain the instantaneous induced angle of attack α_ind(φ). Wave-induced perturbations in the longitudinal direction, u′, are neglected in the calculations since u′ ≪ u(φ). This approximation for the induced angle of attack is valid for a single point in the flow field. Additional effects such as virtual camber of the foil occur if the vertical velocity perturbation along the chord is not constant (Sedky et al. 2020). Since the induced angle of attack calculations are based on the pressure patch approach, which highlights effects conceptually, no correction is applied to account for chordwise varying vertical velocities.

Steady velocity

Free surface effects are analyzed for the steady velocity case. The leading parameter (i.e., wave-induced effective angle of attack or hydrostatic pressure offset) responsible for biased aerodynamic results is determined. Experimental data are presented and compared to the solution of XFOIL (Drela 2000) as well as to the analytic pressure patch approach to reinforce the observations made. Subsequently, the obtained knowledge of the underlying flow mechanisms is applied to the more complex unsteady velocity case as discussed in Sect. 3.2.2. The gained knowledge is useful in developing a fundamental understanding of the working principle of a skim plate, which has not been widely utilized nor investigated for the application in towing tank tests. Surface pressure measurements on the foil model are conducted at a steady velocity of U_0 = 1.0 m s⁻¹ (Re_0 ≈ 440 000). Various angles of attack are tested. Since only one side of the symmetric model is equipped with pressure taps, the model is tested upside-up and upside-down to check for symmetry. Angles of attack denoted with the subscript '+' refer to measurements conducted with the pressure taps on the top surface, facing toward the free surface. The subscript '−' refers to the configuration where the model is flipped such that the pressure taps are located on the bottom of the model, facing toward the towing tank floor. For each configuration (i.e., model upside-up and upside-down), a positive change in the angle of attack is achieved by pitching the model in the opposite direction such that the pressure taps are located on the suction side of the model.
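The chain from surface elevation to induced angle of attack (Eqs. 9-11) can be illustrated with a one-dimensional sketch along the centerline. For a wake that is steady in the model frame, the linearized kinematic boundary condition reduces to w′(x, 0) = u ∂η/∂x; each wavenumber is then attenuated with depth before the induced angle is formed. The elevation signal below is a synthetic stand-in, not measured data.

```python
import numpy as np

g = 9.81
U0 = 1.0     # towing velocity, m/s
h = 0.65     # model submergence depth, m (h = 1.3c with c = 0.5 m)

# Synthetic surface elevation along the centerline in the model frame, m
x = np.linspace(-10.0, 10.0, 2048)
lam = 2.0 * np.pi * U0**2 / g                 # transverse wavelength of the steady wake
eta = 0.01 * np.cos(2 * np.pi * x / lam) * np.exp(-((x + 2.0) / 4.0) ** 2)

# Kinematic boundary condition in the steady model frame: w'(x, 0) = U0 * d(eta)/dx
kx = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
eta_hat = np.fft.fft(eta)
# attenuate each wavenumber with depth (deep-water factor exp(-|k| h), Eq. 3 analog)
w_hat = 1j * kx * U0 * eta_hat * np.exp(-np.abs(kx) * h)
w_p = np.real(np.fft.ifft(w_hat))

alpha_ind = np.degrees(np.arctan2(w_p, U0))   # induced angle of attack (Eq. 11 analog)
print(f"max |alpha_ind| = {np.max(np.abs(alpha_ind)):.3f} deg")
```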
The steady pressure coefficients C_p = (p_x − p_atm+hyd)/q_∞ are obtained by time averaging the pressure reading of each sensor over a time interval of 100 s, excluding the acceleration and deceleration phases. Figure 9 shows the pressure coefficients C_p measured along the chord for various angles of attack at a submergence depth of h = 1.3c. Solid lines represent measurements with the model upside-up (α₊) and dashed lines with the model upside-down (α₋). Comparing the coefficients at an angle of attack of α = ±8° (red), a large deviation between both configurations is noted. For the configuration with the model upside-up, a suction peak at the leading edge is apparent, reaching minimum pressure coefficients of less than C_p = −3. When flipping the model, no distinct suction peak is determined; the minimum value is C_p ≈ −1.2. This indicates a premature onset of flow separation compared to the model facing upwards. Reducing the angle of attack to α₋ = 7° (green) shows much better agreement with α₊ = 8°. The pressure readings match well at the leading edge, but differences are apparent in the range 0.1 ⩽ x/c ⩽ 0.6. An exact match of the pressure coefficient curves at an angle of attack of α₊ = 8° is not achieved. This could be caused by not finding the appropriate angle of attack iteratively, or the pressure readings could be affected by the free surface. In Fig. 5, it was shown that the free surface elevation is affected differently for the model upside-up and upside-down. The pressure coefficients obtained at an angle of attack of α = ±0° are also presented in Fig. 9 (blue lines). Since the foil model is symmetric, these measurements combined yield the pressure distribution of the foil on the suction and pressure side. If symmetry were unaffected and the effective angle of attack were zero, both curves would collapse onto each other. However, the pressure coefficients obtained on the bottom surface (blue dashed) are lower than the values measured on the top surface (blue solid). This suggests that the model is producing negative lift at α = 0°. The angle of attack is changed to α₋ = −2° with the model upside-down to achieve a good match with the pressure readings obtained with the model upside-up at α₊ = 0°. These results indicate that the zero-lift angle α_0 is shifted by approximately Δα ≈ 2° due to the non-symmetric boundary conditions introduced by the free surface. Evidence that the shift in the effective angle of attack is induced by the free surface is obtained by keeping the angle of incidence fixed but varying the submergence depth of the model. Thereby, any differences in the pressure readings obtained are solely attributable to the change in the distance of the model to the free surface. Ground effects are neglected since at the deepest submergence the ground is still ≈ 8c away from the model. Figure 10 shows pressure coefficient deltas ΔC_p at an angle of attack of α₊ = 8°. The pressure coefficients obtained at the deepest submergence of h = 2.5c serve as reference values since free surface effects are the least pronounced at this depth. These pressure coefficients are subtracted from those obtained at submergence depths of h = 1.3c and h = 1.9c. The overall differences in the pressure coefficient increase the shallower the submergence depth of the model. Further, the highest difference in the pressure coefficient is obtained at the leading edge in the suction peak region.
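The coefficient definition above can be turned into a small processing sketch: the hydrostatic column at the tap depth is subtracted from the gauge reading before normalizing with the dynamic pressure, and the average excludes the acceleration and deceleration phases. The record below is synthetic and all values are assumptions.

```python
import numpy as np

rho, g = 998.0, 9.81
U0 = 1.0                              # steady towing velocity, m/s
q_inf = 0.5 * rho * U0**2             # dynamic pressure, Pa

h_tap = 0.65                          # depth of the pressure tap below the surface, m (assumed)
p_hyd = rho * g * h_tap               # hydrostatic offset at the tap depth, Pa

# Synthetic gauge signal (sensor referenced to atmosphere), 140 s at 10 kHz
t = np.arange(0.0, 140.0, 1e-4)
p_gauge = p_hyd - 0.8 * q_inf + 5.0 * np.random.randn(t.size)

steady = (t > 20.0) & (t < 120.0)     # window excluding acceleration/deceleration
C_p = (p_gauge[steady].mean() - p_hyd) / q_inf
print(f"C_p = {C_p:.3f}")             # recovers approximately -0.8 here
```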
The deltas decrease gradually toward the trailing edge of the model, where the data of all submergence depths collapse onto each other. This reinforces the conclusion that changes in the flow field imposed by the free surface are dominated by the effects of an induced angle of attack α_ind rather than by a wave-induced pressure offset. The change in the effective angle of attack introduced by the free surface becomes apparent when comparing these findings to results obtained with XFOIL, an open-source tool based on a panel method algorithm. The geometric foil parameters from the experimental study are embedded into XFOIL and the pressure coefficients are calculated neglecting the effects of viscosity. The trailing edge of the foil geometry passed to XFOIL is sharpened in order to satisfy the Kutta condition. The reference case in this scenario is the data obtained at α = 8°, which is then subtracted from the results obtained at α = 8.55° and α = 8.2°, respectively, and plotted in Fig. 10. These angles of attack are chosen to match the same suction peak delta at the leading edge as in the experiments. The overall trend of the delta plots obtained experimentally and numerically is similar. The highest deltas are observed at the leading edge and decrease gradually toward the trailing edge. However, in the experiment, the deltas originate from the deflection of the free surface, whose effects on the submerged model attenuate with increasing submergence depth. Numerically, similar trends are obtained by changing the geometric angle of attack of the foil. Alterations in the effective angle of attack due to the presence of the free surface are thus verified via XFOIL. According to XFOIL, the difference in the effective angle of attack between the shallowest (h/c = 1.3) and deepest (h/c = 2.5) submergence tested is rather small for the test case presented and yields Δα ≈ 0.55°. However, it elucidates the underlying flow mechanism, which is detected with more ease at steady rather than unsteady velocity. Additionally, it highlights that wave-induced changes in the effective angle of attack dominate over the changes introduced by the perturbation pressure p′. The analytic approach is utilized in order to verify these findings conceptually. The results obtained in Sect. 3.1.1 are used to determine the wave-induced vertical velocity w′ at different model submergence depths. Results are discussed for a model angle of attack of α = 0° because this test case is validated experimentally. It reveals the impact of the wave-induced effects of the test rig, onto which the model-induced effects superpose once the angle of attack is changed. Equation 11 is then utilized to calculate the effective angle of attack under the wave. Figure 11 shows a case study of different submergence depths and the corresponding induced angle of attack along the centerline of the model (i.e., y = 0) below the wave. Changes in the angle of attack and pressure coefficient C_p at a velocity of U_0 = 1.0 m s⁻¹ are small. This is expected since the Froude number is Fn_h ≈ 0.4 at h = 1.3c, which is the threshold value below which surface waves at steady velocity become negligible. For Fn_h < 0.4, often the biplane approximation is applied, in which the free surface is replaced by a rigid surface such that the foil experiences effects similar to a wing in ground proximity.
These results also suggest that the pressure patch approach underestimates the wave-induced effects when compared to the solution obtained with XFOIL or the experimental results provided in Fig. 9. However, the overall trends are consistent. Increasing the submergence depth of the foil alleviates free surface effects. Further, the attenuation is not linear, since the submergence depth is changed in equidistant steps but the deltas obtained are not constant. The results presented in Fig. 11 also show that the changes introduced by the wave motion are not constant along the chord. As a result, the induced angle of attack α_ind may be underestimated when utilizing the flow field information of a single point. Note that at higher Fn_h, deviations along the chord reduce due to the increased wavelength of the generated waves.

Unsteady velocity

The steady lift coefficient C_L of a hydrofoil is a function of the depth-based Froude number (Hough and Moran 1969; van Walree 1999) (see Fig. 1). In the range 0.4 < Fn_h < 1.5, the gradient d[C_L(h/c)/C_L(h/c = ∞)]/dFn_h is large, such that small changes in Fn_h may introduce large changes in the lift coefficient C_L. Hydrofoils usually operate at Froude numbers Fn_h > 3, such that these large gradients in the low Froude number range are not of importance (Faltinsen 2006). When experiments are performed in a towing tank, especially for the investigation of aerodynamic properties, the submergence depth Froude number Fn_h is rather small. For example, in the large towing tank utilized for the presented investigation, the maximum velocity reached (steady and unsteady) is U_0 = 5.0 m s⁻¹. With a foil chord of c = 0.5 m and a minimum submergence depth of h = 0.65 m, the maximum Froude number obtained during this study is Fn_h ≈ 2. In the case of unsteady surge experiments, the airfoil performs a sinusoidal motion and the instantaneous Froude number becomes a function of the phase angle, Fn_h(φ). Therefore, even under quasi-steady conditions (i.e., low reduced frequency k), the resulting forces acting on the foil change dynamically if the foil is not submerged sufficiently deep. Figure 12 shows the instantaneous Froude number Fn_h(φ) as a function of the phase angle for two different mean velocities (i.e., U_0 = 1.0 m s⁻¹ and U_0 = 2.5 m s⁻¹) and a relative velocity amplitude of σ = 0.5 at three different submergence depth ratios. The black line at Fn_h ≈ 0.4 denotes the limit below which free surface effects become negligible and the free surface acts as a rigid wall. Therefore, according to the findings presented in Fig. 1, large deviations in the forces acting on the foil are expected due to the change in Froude number even under quasi-steady conditions. At high reduced frequencies k, the wake becomes unsteady, leading to the observed effects of waves passing the model. Upstream traveling waves, as discussed in Sect. 3.1.2, alter the inflow condition such that the foil may experience an additional effective pitch motion. Unsteady loads induced by sea waves, as experienced by a hydrofoil translating at steady velocity, are described by van Walree (1999) and Faltinsen (2006). In the following, unsteady results for two test cases are presented and discussed (i.e., model upside-up at α₊ = 8° and upside-down at α₋ = 7°). These angles of attack are chosen because the static polars at Re_0 ≈ 440 000 yield a close match at steady velocity, as presented in Fig. 9.
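The instantaneous Froude number shown in Fig. 12 follows directly from the velocity profile. The short sweep below reproduces the parameter ranges named in the text, assuming the sinusoidal profile u(φ) = U_0(1 + σ sin φ).

```python
import numpy as np

g, c = 9.81, 0.5        # gravity, m/s^2; chord, m
sigma = 0.5             # relative velocity amplitude

phi = np.linspace(0.0, 2.0 * np.pi, 721)
for U0 in (1.0, 2.5):                          # mean velocities, m/s
    for h_c in (1.3, 1.9, 2.5):                # submergence depth ratios h/c
        u = U0 * (1.0 + sigma * np.sin(phi))   # instantaneous velocity (Eq. 8 analog)
        Fn_h = u / np.sqrt(g * h_c * c)        # instantaneous depth Froude number
        print(f"U0 = {U0} m/s, h/c = {h_c}: "
              f"Fn_h in [{Fn_h.min():.2f}, {Fn_h.max():.2f}]")
```

For U_0 = 1.0 m s⁻¹ and h = 1.3c this yields 0.20 ≲ Fn_h ≲ 0.59, consistent with the range quoted in Sect. 3.3.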
Experimental data at U_0 = 1.0 m s⁻¹ (Re_0 ≈ 440 000), f = 0.155 Hz (k ≈ 0.24), and σ = 0.5 at a shallow submergence depth of h = 1.3c are presented in Fig. 13. The phase averaged pressures obtained with ten pressure taps distributed along the chord are presented. Comparing both configurations (i.e., Figs. 13a, b), significant discrepancies are detected at unsteady model velocities. Figure 13a shows the phase averaged pressure for the model upside-up at α₊ = 8°. A noticeable pressure undulation of all sensors is detected in the phase angle range 160° < φ < 300°. It occurs at all pressure tap locations simultaneously, without significant phase shift. The magnitude of the undulation declines toward the trailing edge of the foil because the surface pressure close to the leading edge is affected by the superposing effects of the induced angle of attack and wave-induced pressure changes. The surface pressure closer to the trailing edge is not susceptible to changes in the effective angle of attack as long as the flow remains attached. No convective behavior of the local pressure minima at φ ≈ 200° and φ ≈ 280° is observed. Therefore, it is precluded that flow separation and the shedding of a vortex are causing this pressure undulation. Further, the difference in the pressure coefficient between different pressure tap locations remains almost constant throughout the entire motion cycle. Only the pressure coefficient C_p obtained at x/c = 0.04 shows a sudden decrease starting at φ ≈ 180°. Steady and quasi-steady experiments, which are not presented, reveal that the surface pressure at this x/c-location is affected considerably by the occurrence of a laminar separation bubble whose location and extent are Reynolds number dependent. Even though the reduced frequency of the surge motion is k ≈ 0.24 and marks a high degree of unsteadiness, the flow remains nominally attached throughout most parts of the motion cycle. On the contrary, flow separation is observed for the model upside-down at α₋ = 7° at identical velocity profile parameters and submergence depth (Fig. 13b). Up to approximately φ ≈ 150°, the flow remains attached. The phase averaged pressure coefficients obtained closest to the leading edge (i.e., x/c = 0.01 and x/c = 0.02) reach values of C_p ≈ −3.6. A sudden increase in the pressure coefficient at both tap locations is observed at φ ≈ 150°. For φ ⩾ 180°, the four following pressure tap readings show the occurrence of a successive pressure minimum. It is identified as the footprint of a vortex that rolls up at the leading edge, grows in size, and convects downstream. The remaining pressure sensors closer to the trailing edge do not experience a vortex-induced pressure drop, indicating that the vortex detaches from the foil surface. Flow reattachment occurs for φ > 270°, once the acceleration of the model turns from negative to positive (see Fig. 13c). In order to assess whether these results obtained at a submergence of h = 1.3c are influenced by free surface effects, the submergence depth of the model is increased to h = 2.5c while keeping all other parameters fixed. Any differences occurring when changing the model's submergence depth are tied back to free surface effects.
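Phase averaging of the cyclic pressure records, as used for Fig. 13, can be sketched as follows; the phase is assumed to be derived from the synchronized carriage train velocity, and the input signal here is synthetic.

```python
import numpy as np

fs = 10_000            # sampling rate, Hz
f = 0.155              # surge frequency, Hz

def phase_average(sig, t, f, n_bins=360):
    """Bin a cyclic signal by instantaneous phase angle and average each bin."""
    phi = (2.0 * np.pi * f * t) % (2.0 * np.pi)       # phase in [0, 2*pi)
    edges = np.linspace(0.0, 2.0 * np.pi, n_bins + 1)
    idx = np.digitize(phi, edges) - 1
    avg = np.array([sig[idx == b].mean() for b in range(n_bins)])
    return np.degrees(edges[:-1]), avg

# Synthetic record: 20 surge cycles of a mean plus a cyclic component plus noise
t = np.arange(0.0, 20.0 / f, 1.0 / fs)
cp = -2.0 + 0.5 * np.sin(2.0 * np.pi * f * t) + 0.1 * np.random.randn(t.size)

phi_deg, cp_avg = phase_average(cp, t, f)
print(f"phase-averaged C_p at phi = 180 deg: {cp_avg[180]:.2f}")
```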
The effect of submergence is studied for the model upside-up at α₊ = 8° and upside-down at α₋ = 7°, since flow topological differences (i.e., attached and separated flow) at the shallow submergence depth of h = 1.3c are identified even though the steady measurements are in good agreement. The phase averaged pressure coefficient of only one representative pressure tap location at x/c = 0.01 is presented in Fig. 14 to increase the readability of the plot. Results at shallow (h = 1.3c) and deep (h = 2.5c) model submergence are provided. Blue lines depict the phase averaged pressure for the model upside-up. Up to a phase angle of φ ≈ 160°, the phase averaged pressure coefficients for both submergence depths are similar. However, apart from minor deviations in the pressure coefficient magnitude, a different trend in the pressure coefficient gradient dC_p/dφ is observed between 60° < φ < 160°. At shallow submergence, the phase averaged pressure curve exhibits an inflection point that is not apparent at deep submergence. Furthermore, the pressure undulation in the range 160° < φ < 300° observed at a submergence depth of h = 1.3c is absent when increasing the submergence depth to h = 2.5c. For the model upside-up, free surface effects are significant in the second half of the motion cycle. The largest deviation in the pressure coefficient is ΔC_p ≈ 0.6 at φ ≈ 200°. Based on these findings and the results presented in Fig. 13a, it is concluded that for nominally attached flow, free surface effects alter the magnitude of the phase averaged pressure. These changes are imposed by affecting the effective angle of attack and the perturbation pressure. The results for the model upside-down at α₋ = 7° are also presented in Fig. 14 and highlighted with red line plots. The pressure coefficients obtained for both submergence depths are identical up to φ ≈ 160°. A deviation is observed for 160° < φ < 300°. Both curves coincide again for φ > 300°. At shallow foil submergence depth, premature flow separation at φ ≈ 160° is observed compared to the tests carried out at deep model submergence. The pressure coefficient at the foil's leading edge increases from C_p ≈ −3.6 to C_p ≈ −1.3 during the process of flow separation. A similar trend is obtained at the deep model submergence of h = 2.5c, but flow separation is delayed by Δφ ≈ 30°. In comparison with the results obtained with the model upside-up, no inflection point occurs within the first half of the motion cycle. Further, a pressure undulation in the second half is not apparent since the critical angle of attack at which flow separation occurs is reached at earlier phase angles. After flow separation, the flow field is dominated by the vortex dynamics. Changes in the effective angle of attack imposed by the free surface are ineffective in the second half of the cycle. Hence, no pressure undulation is observed for φ > 180° at this angle of attack and velocity profile. One may question whether the differences in the underlying flow physics (i.e., attached vs. separated flow) are due to the orientation of the model. While it is true that free surface effects are responsible for these discrepancies, since the foil operates near the static stall angle, flipping the model does not necessarily lead to flow separation and the rollup of a vortex. Results at α₋ = 3° (model upside-down), which are not presented, show a similar pressure undulation at similar phase angles as is the case for the model upside-up.
However, the gradient of the phase averaged pressure coefficient is of opposite sign for the model upside-up compared to upside-down. Whether flow separation takes place or not depends on the effective angle of attack that is imposed during the unsteady motion. Changes in the effective angle of incidence at unsteady model velocity are elaborated in more detail in the following, utilizing the analytic approach. The findings obtained in the experimental study correlate well with those obtained by the analytic approach, as presented in Fig. 15. The induced angle of attack α_ind and the changes in pressure over one cycle of oscillation are calculated analytically at the leading edge location of the model at α = 0°. This angle is chosen since it elucidates the underlying mechanisms with more ease. However, note that the majority of the induced effects are attributed to the test rig design. Model-induced effects are superposed once the angle of attack is altered. The angle of attack and pressure fluctuations introduced by the wave, calculated with the pressure patch approach, are rather small. However, it conceptually captures the physics and agrees with the experimental data. At the shallow submergence depth of h = 1.3c, the induced free surface effects are largest. With increasing submergence depth, the magnitude of the imposed free surface effects attenuates due to the exponential decay with depth and the dependency on the wavenumber. The induced free surface effects depicted in Fig. 15 are discussed on the basis of the shallow submergence depth of h = 1.3c because the occurring effects are more pronounced. Within the first half of the cycle, changes in the induced angle of attack occur gradually. The induced angle of attack α_ind is monotonically decreasing until φ ≈ 180°, reaching a local minimum. For the remaining 180° of the cycle, the induced angle of attack imposed by the gravity waves undulates, reaching two more local maxima at φ ≈ 250° and φ ≈ 315° and a local minimum around φ ≈ 275°. Pressure deviations ΔC_p introduced by the free surface are almost constant within the first 180°. A pressure undulation is observed in the second half of the cycle featuring three local extrema, similar to the induced angle of attack α_ind. However, the phase angles at which these extrema occur are different. The two local maxima are obtained at φ ≈ 220° and φ ≈ 300°; the local minimum is apparent at φ ≈ 265°. A superposition of both effects is experienced by the foil model and verifies the experimental findings. Wake waves that pass the model in the second half of the cycle introduce an unsteady change in the instantaneous induced angle of attack α_ind and pressure ΔC_p. Especially for φ > 180°, when the instantaneous model velocity is low, wave-induced effects are dominant, exhibiting an undulation similar to the experimental results. The following interim conclusions are drawn for the model upside-up and upside-down. When the model is oriented upside-up, the effective angle of attack is continuously decreasing for φ < 180°, leading to a stabilizing effect that delays flow separation and leads to the formation of an inflection point in the phase averaged pressure coefficient curve. For φ > 180°, the undulation of the pressure coefficient C_p (see Fig. 13a) is the result of free surface effects that introduce changes in the effective angle of attack and pressure. Since the flow remains attached at this incidence, the largest differences are obtained at the pressure tap locations closest to the leading edge.
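Because the induced angle is formed with the instantaneous velocity (Eq. 11), a given vertical perturbation produces a larger angle when the model is slow. The toy calculation below, with an assumed w′ history, illustrates why wave-induced effects dominate in the second half of the cycle.

```python
import numpy as np

U0, sigma = 1.0, 0.5
phi = np.radians(np.arange(0, 360))
u = U0 * (1.0 + sigma * np.sin(phi))        # instantaneous towing velocity (Eq. 8 analog)
w_p = 0.02 * np.sin(2.0 * phi)              # toy wave-induced vertical velocity, m/s (assumed)

alpha_ind = np.degrees(np.arctan2(w_p, u))  # instantaneous induced angle (Eq. 11 analog)
i = int(np.argmax(np.abs(alpha_ind)))
print(f"max |alpha_ind| = {abs(alpha_ind[i]):.2f} deg at phi = {np.degrees(phi[i]):.0f} deg")
```

Even with identical w′ amplitudes in both half-cycles, the maximum induced angle occurs in the low-velocity half, consistent with the experimental observations above.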
The surface pressure measured in the trailing edge region is not susceptible to large magnitude fluctuations introduced by a change in the effective angle of attack because of the Kutta condition that has to be satisfied for attached flow. Therefore, any undulation observed close to the trailing edge is caused by changes in the perturbation pressure only. Increasing the submergence depth from h = 1.3c to h = 2.5c alleviates the magnitude of free surface effects substantially (Fig. 15). The delay of flow separation for a hydrofoil operating close to the free surface and facing up is confirmed by findings presented by Ni et al. (2021) utilizing experimental and numerical data. However, their explanation for the delayed stall characteristics is the increase of the boundary layer's momentum on the suction side due to the wave-induced velocities. While this may be true, Fig. 18 in their publication also suggests that the effective angle of attack is increased by increasing the submergence depth of the model. Comparing the streamlines of the foil at α = 21° reveals that the stagnation point at the leading edge moves aft when increasing the submergence depth. Additionally, comparing the lift coefficients at different submergence depths as depicted in Fig. 1 shows that the lift coefficient is generally lower when the model is close to the free surface, possibly caused by a decrease in the effective angle of attack. In contrast, for the model upside-down the induced angle of attack α_ind acts in the adverse direction, leading to a destabilizing effect that promotes flow separation for φ < 180°. Therefore, as presented in Fig. 14 and compared to the model upside-up, no inflection point is apparent and the pressure coefficient is generally lower for the model facing upside-down. The critical condition at which flow separation occurs is reached at a lower geometric angle of attack compared to the model upside-up (Fig. 14). Increasing the submergence depth again mitigates these adverse effects.

In summary, it is observed that free surface effects are most pronounced in the second half of the motion cycle for a sinusoidal velocity profile. Wake waves pass the model while the instantaneous model velocity is low. Changes in the effective angle of attack act in opposite directions for the model upside-up and upside-down, or are at least altered in magnitude for non-zero angles of incidence. A stabilizing effect for the model upside-up is observed because the effective angle of attack is reduced and flow separation is delayed. Concomitantly, for the model upside-down, flow separation is promoted. Contrary to the steady velocity tests discussed in Sect. 3.2.1, at unsteady model velocity the wave-induced pressure offset is not negligible. These findings are verified through experimental and analytic results.

The utility of using a skim plate

Free surface effects are apparent in facilities where the top boundary is a layer of density discontinuity (e.g., an air-water interface) rather than a rigid surface. Examples of such facilities are towing tanks and water tunnels with a free surface. In some experiments, it is reported that a skim plate is utilized to alleviate wave-induced effects (e.g., Stephens et al. 2016; Stevens et al. 2016; Corkery et al. 2018). In these experiments, the skim plate is placed onto the air-water interface. Within the current study, the amplitudes of the waves generated during a test run exceed the thickness of the skim plate by an order of magnitude. Therefore, the skim plate is submerged below the air-water interface to ensure submergence at all times.
Studies regarding the proper installation and size of a skim plate are not available. Furthermore, the effectiveness of such a device has not been validated. The idea of a skim plate is to change the boundary condition on top of the submerged body by introducing a rigid wall, similar to wind tunnel tests. It is believed to locally block water waves and prevent wave-induced particle motion and pressure fluctuations. Tests are carried out at steady and unsteady velocities to quantify the effectiveness of a skim plate. For the steady test cases considered, the impact on the foil's surface pressure when installing the skim plate is generally small. A change in the effective angle of attack is observed with the skim plate installed. This agrees well with the findings obtained at steady model velocity, as discussed in Sect. 3.2.1, where small changes in the effective angle of incidence are observed when changing the model's submergence depth. An increased effect is observed at unsteady model velocities due to the nature of waves passing the model, as described in Sect. 3.1.2. Thus, the applicability of a skim plate is discussed based on unsteady measurements only. The velocity profile of the results presented is identical to those already discussed in the previous sections, where U_0 = 1.0 m s⁻¹ (Re_0 ≈ 440 000), f = 0.155 Hz (k ≈ 0.24), and σ = 0.5. The Froude number varies in the range 0.2 ≲ Fn_h ≲ 0.6 (see Fig. 12). The presented study yields some insight into the effects of a fixed-size skim plate. The submergence depth of the model is varied between 1.3c ⩽ h ⩽ 2.5c. The skim plate is varied in the longitudinal direction while keeping the relative distance to the free surface and the model constant. Three longitudinal positions are tested (i.e., front, center, and aft) to explore the relative location of the skim plate to the model. The exact positions of the skim plate and model are described in Sect. 2.2. The question arises whether it is possible to alleviate surface effects at the shallow submergence of h = 1.3c, where the effects are most prominent. Alleviating wave effects at shallow submergence is important because sufficiently deep submergence of the foil cannot be guaranteed in every setup (e.g., due to the integration of measurement equipment or optical access). Figure 16 shows the unsteady phase averaged signal of a pressure sensor located at x/c = 0.01, identical to Fig. 14, at an angle of attack of α₊ = 8°. The pressure sensor close to the leading edge is chosen since the surface pressure in the suction peak is most sensitive to changes. The unsteady pressure coefficients of the baseline data without skim plate mounted at the shallowest (i.e., h = 1.3c) and deepest depth (i.e., h = 2.5c) are shown again for comparison. As discussed in Sect. 3.2.2, the results without the skim plate installed are influenced by the free surface, showing large deviations between the two submergence depths tested, especially in the second half of the cycle. Installing the skim plate, irrespective of the longitudinal mounting position, eliminates the hump that is apparent for the baseline case at h = 1.3c, with its pressure minimum at φ ≈ 200°. However, the configuration with the skim plate mounted aft (green line) exhibits a dip in the pressure coefficient curve in the same phase angle range of 180° < φ < 235°. The configurations front and center do not show such behavior. Further, moving the skim plate in the longitudinal direction introduces an offset.
Although a pressure offset is apparent for the unsteady pressure signals, comparing the phase averaged pressure signal of all configurations with the skim plate installed at h = 1.3c to the baseline configuration at the deepest submergence h = 2.5c shows good agreement. It indicates that a skim plate is capable of alleviating free surface effects at shallow submergence. Results that are usually obtained at deeper depth can be replicated. It is postulated that the skim plate positions front and center perform better compared to the installation aft, since the former two do not show a pressure dip at the beginning of the second half of the motion cycle. Because forward traveling waves are generated each cycle, it is suggested that the skim plate needs to be positioned sufficiently far forward to be effective. Additionally, comparing the deep submergence test case at h = 2.5c (black dashed line) without skim plate installed to the front (red solid line) and center (blue solid line) configurations, data alignment is observed for partial phase angle sections. In the phase angle range of 0° < φ < 130°, the black dashed and blue solid lines collapse onto each other. In the range of 270° < φ < 360°, the black dashed and red solid lines coincide. Consequently, since the phase averaged pressure obtained at h = 2.5c collapses partially with the configuration at shallow submergence and the skim plate mounted front or center, it indicates that the longitudinal position of the fixed-size skim plate does matter. It is believed that the dimension of the skim plate will affect the wave-alleviating characteristics. However, no study has been performed to determine the optimal size. It is assumed that a larger skim plate will perform better, especially for tests with unsteady model velocity. Note that at the deep submergence of h = 2.5c, installing the skim plate has a negligible effect on the phase averaged surface pressure, irrespective of the skim plate position (not shown). Therefore, it is concluded that free surface effects are negligible for the specific test case with the underlying velocity profile at h = 2.5c. Measurements at unsteady velocity are conducted for the model upside-down at α = −7° with the skim plate mounted in the front position. As discussed in Sect. 3.2.2, flow separation and vortex shedding are detected for this configuration. The phase averaged pressures obtained at x/c = 0.06 at shallow and deep submergence, with and without skim plate, are presented in Fig. 17. This sensor location is chosen because it shows a distinct vortex footprint. A sudden pressure drop is detected at φ ≈ 160° at the shallow submergence depth of h = 1.3c when no skim plate is installed. Installing the skim plate at h = 1.3c in its front position, or increasing the submergence depth to h = 2.5c, delays the onset of the pressure drop as well as the location of the pressure minimum by approximately Δφ = 20°-30°. Even though deviations are apparent, the pressure reading at the shallow submergence of h = 1.3c with the skim plate installed agrees better with the results obtained at deep submergence, where free surface effects are naturally reduced. The remaining discrepancies between the deep submergence test case (blue dashed lines) and the configuration where the skim plate is installed at shallow submergence (red solid line) may be reduced even further if a larger skim plate is utilized.
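All of the comparisons above are made on phase averaged pressure coefficients. For readers who want to reproduce this kind of processing, the following is a minimal sketch of phase averaging by binning samples on the cycle phase angle; the sampling rate, signal shape, and noise level are synthetic stand-ins, not the experimental values.

```python
import numpy as np

fs, f_cycle, n_cycles = 1000.0, 0.155, 20       # sample rate (Hz), cycle freq (Hz), cycles
t = np.arange(0, n_cycles / f_cycle, 1 / fs)

# Synthetic pressure coefficient: cycle-locked component plus noise (illustrative only).
cp = -1.5 + 0.3 * np.sin(2 * np.pi * f_cycle * t) + 0.05 * np.random.randn(t.size)

phase = (360.0 * f_cycle * t) % 360.0           # phase angle phi in degrees
bins = np.arange(0, 361, 5)                     # 5-degree phase bins
idx = np.digitize(phase, bins) - 1

# Phase average: mean of all samples falling into each phase bin.
cp_phase_avg = np.array([cp[idx == i].mean() for i in range(len(bins) - 1)])
print(cp_phase_avg[:5])
```

Averaging over many cycles in this way suppresses the turbulent, non-cycle-locked part of the signal while retaining the deterministic response to the carriage motion.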
In summary, the application of a skim plate at shallow submergence shows similar effects on the instantaneous phase averaged surface pressure as increasing the submergence depth of the model. The skim plate locally blocks the gravity waves such that velocity perturbations are prevented from influencing the flow field around the foil. Thus, the effects of the induced angle of attack are diminished. Additionally, a fixed-size skim plate is tested. Based on the results presented, it is assumed that a larger skim plate will be less dependent on the longitudinal position and will diminish the effects of gravity waves more effectively. It is expected that the proper size of the skim plate depends on the parameters of the foil model, the submergence depth, and the parameters of the velocity profile. Conclusion When towing a submerged body through the basin of a towing tank, waves are generated by the submerged parts of the test rig and the model. A complex wave system forms due to the superposition of wake waves from these various wave-making sources. The analytic pressure patch approach is a useful tool to model wake waves generated at steady and unsteady model velocity. It can be utilized to obtain wave-induced velocity and pressure fluctuations below the free surface for a complex setup and for arbitrary motion histories. The resulting unsteady wake pattern differs from the steady wake pattern and depends on the parameters of the velocity profile. During the sinusoidal motion, waves with large amplitude, phase velocity, and wavelength are generated in the first half of the cycle as long as the model velocity is high (i.e., above U_0). These waves travel faster than the model within the second half of the motion cycle and thus pass it. Therefore, these waves alter the inflow condition upstream of the model and locally affect the flow field around the model. Free surface effects introduce pressure and velocity perturbations. At steady velocity, changing the foil submergence depth shows effects that are similar to changes in the effective angle of attack for a foil in an unbounded fluid. These findings are validated using experimental, numerical (XFOIL), and analytic (pressure patch approach) data. At steady velocity, changes in the effective angle of attack are the dominant effect imposed by the free surface. At unsteady velocity, free surface effects are more pronounced compared to (quasi-)steady experiments. The largest surface pressure coefficient deviations are introduced by the free surface in the second half of the cycle, when the instantaneous dynamic pressure is low. For the model upside-up, wave-induced effects stabilize the flow at unsteady model velocity. The wave-induced motion reduces the effective angle of attack that the foil experiences, and flow separation is delayed. On the contrary, wave-induced effects destabilize the flow when the model is facing downwards. Premature separation is observed when flipping the model upside-down due to the adverse wave-induced effects. A skim plate is utilized to alleviate free surface effects. Installing a skim plate at shallow submergence depth achieves similar effects as increasing the submergence depth of the model. It locally blocks the wave-induced velocity fluctuations, and thus the magnitude of the wave-induced angle of attack α_ind is diminished.
Due to the forward traveling waves at unsteady model velocity, it is postulated that the skim plate needs to protrude sufficiently upstream of the leading edge of the model. Thus, the skim plate is an effective tool to alleviate free surface effects and supports the use of a towing tank to investigate the aerodynamics of airfoils, wings, ground vehicles, or wind turbines. Subjects for future studies are the influence of free surface waves and the applicability of a skim plate on finite wings at steady and unsteady model velocity. Obtaining detailed flow information experimentally is challenging. Therefore, a joint numerical and experimental study shall be conducted to gain a more in-depth understanding of the underlying flow physics. Funding Open Access funding enabled and organized by Projekt DEAL. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
16,710.4
2022-10-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Voltage source control of offshore all-DC wind farm The fast growth of offshore all-DC wind farms will bring problems such as the weakening of grid frequency stability and the increase of equivalent grid impedance. To overcome this, a coordination control strategy for offshore all-DC wind farms is proposed in this study with two salient features: better performance under weak grid conditions, and real-time frequency support from the wind farm. This strategy consists of three parts: the inertia synchronising control of the receiving end converter, the constant ratio control of the DC transformer, and the frequency response of the wind farm. With the proposed strategy, the all-DC wind farm operates like a synchronous generator towards the onshore grid, providing fast frequency support when the onshore grid frequency changes. The effectiveness of the proposed method is validated in PSCAD/EMTDC. Introduction The rapid growth of offshore wind farms and the increasing need for long distance transmission accelerate the development of all-DC offshore wind farms [1,2]. Compared with a conventional HVAC transmission system, VSC-HVDC has several superiorities such as lower losses and less capacitor charging effect; moreover, the flexibility of active and reactive power flow control makes VSC-HVDC an attractive means of offshore wind farm integration [3,4]. However, the conventional AC collection of wind turbines still causes higher power losses, particularly where the distance between the HVDC platform and the turbines is likely to be increased. Accordingly, the concept of the all-DC wind farm, which utilises both DC collection and DC transmission, is becoming the focus of recent studies [5,6]. However, wind farms are immune to grid frequency variation under normal control methods due to the decoupled nature of VSC-HVDC and DC transformers; such immunity means the wind farm provides no inertial response to the grid, which will deteriorate frequency stability if wind power penetration is large. Therefore, to replace conventional power plants, all-DC wind farms are required to provide ancillary services such as primary frequency regulation and inertia response to help maintain the stability of power grids. Recent studies in terms of frequency support have usually focused on AC collection wind farms. Inertial response and primary frequency regulation of wind turbines are discussed in [7,8]. A communication-free coordinated control strategy was proposed in [9] to allow the frequency support of both the wind farm and the HVDC system. However, few attempts have been made to achieve the frequency control service of an all-DC wind farm. On the other hand, the increasing penetration of wind power has also increased the equivalent grid impedance, thus weakening the grid. Under this circumstance, the control ability may deteriorate when a conventional vector-controlled receiving end converter (REC) is utilised to integrate wind power [10,11], resulting in stability issues such as grid voltage distortions and harmonic oscillations. Applying voltage-source control is an effective way to solve this problem. A typical example is the virtual synchronising generator (VSG) [12], which imitates the rotor motion equation of a synchronous generator (SG) to realise self-synchronisation and replace the PLL. However, it is not suitable for a REC which delivers wind power, since the output power of wind turbines is always changing.
Hence, a coordination control strategy for offshore all-DC wind farms is proposed in this paper, including the inertial synchronising control (ISC) of the REC, the constant ratio control of the DC transformers, and the frequency response of the wind turbines. With this strategy, the grid frequency information is transmitted to the DC wind turbines with only a small time delay. Therefore, the DC wind turbines may realise rapid inertial response and primary regulation. Finally, the all-DC wind farm performs as an SG towards the onshore grid. A simulation model of an all-DC wind farm is constructed based on PSCAD/EMTDC and the effectiveness of the proposed control strategy is validated. Benchmark of all-DC wind farm and its voltage-source control A typical offshore all-DC wind farm is shown in Fig. 1. The system mainly consists of three parts: the DC wind turbines, the DC transformers and the onshore REC. The DC turbines are built up with directly driven permanent magnet generators and AC/DC converters. Usually, their output DC voltage is 30-60 kV. A cluster of DC turbines is connected in parallel to the low-voltage side of a DC transformer. The high-voltage sides of the DC transformers are connected to a DC bus (usually ±150-500 kV). Finally, the wind power is collected by the DC transformers and transmitted to the onshore converter station by the HVDC line. In the proposed coordination control strategy, a DC voltage is chosen as the medium to transmit frequency information. The variation of the onshore grid frequency is reflected on the HVDC voltage by the inertia synchronising control of the REC. The equivalent DC capacitor is controlled to simulate the rotor of an SG by utilising its natural response, achieving the capability of self-synchronising and accomplishing a real-time link between the HVDC voltage and the grid frequency. Moreover, the self-synchronising characteristic of ISC endows it with enhanced performance under weak grid conditions. At the same time, the constant ratio control is applied to the DC transformers. The voltage of the DC collection bus is regulated by the DC transformers according to the HVDC voltage, delivering the grid frequency information from the HVDC side to the DC collection bus. Therefore, the wind turbines are able to obtain the grid frequency variation by detecting the voltage of the DC collection bus. For the wind turbines, a combined frequency supporting strategy is proposed, including the rotor-speed-based inertial response and the pitch-angle-based primary frequency regulation. Therefore, the all-DC wind farm operates like an SG, which has better performance under weak grid conditions, and provides real-time frequency support to the onshore grid. Inertial synchronising control of REC Neglecting the loss of the DC cable, the natural response of the HVDC bus voltage to power variation can be described as C_eq U_dc (dU_dc/dt) = P_WF − P_rec_grid (1), with P_rec_grid = (U_rec U_g / X) sin δ (2), where P_WF is the wind power and P_rec_grid is the REC output power. U_dc is the DC voltage, and C_eq is the equivalent DC bus capacitance. U_rec is the output RMS voltage of the REC (line-to-line), and U_g is the RMS voltage of the AC grid (line-to-line). m is the modulation ratio, and δ is the power angle. X is the sum of the grid synchronous reactance, the leakage reactance of the transformer, and the transmission line reactance, whereas resistance is neglected because of the high voltage and power level conditions. Equation (1) is similar to the motion equation of SG rotors, J ω_m (dω_m/dt) = P_m − P_e (3), whereas (2) is similar to the SG output power equation, P_e = (E_f U_g / X) sin δ (4). As observed from (1)-(4), U_dc is equivalent to the rotor speed ω_m, and E_f is the electromotive force of the SG.
Modulation ratio m is equivalent to the air-gap flux, and P_WF and P_rec_grid are equivalent to the SG's mechanical power P_m and electrical power P_e. To simulate the natural relationship between the rotor speed ω_m and the electrical frequency ω_e of an SG, a link between U_dc and the REC output frequency ω_rec is established as ω_rec = ω_nom + K (U_dc − U_dc_nom) (5), where U_dc_nom and ω_nom are the nominal values of U_dc and ω_rec, respectively. K is introduced to scale the coupling strength between the DC bus voltage U_dc and the REC output frequency ω_rec. The substitution of (5) into (1), combined with (2), presents the dynamics of the REC with ISC as the resulting set of equations (6). Considering a small grid frequency variation, i.e. U_dc → U_dc_nom, the correlation Δω_rec ≈ K ΔU_dc (7) follows; since ω_rec settles to the grid frequency in steady state, ω_grid and U_dc are intrinsically coupled. As observed from (6) and (7), several beneficial characteristics are achieved in the REC: (i) Similar to the SG rotor, the DC bus voltage U_dc and the REC output frequency ω_rec tend to track the grid frequency autonomously. Given that the inertia of the equivalent DC bus capacitor is usually minimal, this tracking can be rapid. (ii) The impedance of the REC seen from the PCC is purely inductive, and thus the loop circuit has no resonance, unlike the current-vector-based control method, i.e. enhanced stability performance under a weak grid. Because the physical inertia of the DC capacitor is utilised for synchronising, just as an SG uses its rotor inertia, this control strategy is called inertial synchronisation control (ISC) in this paper. With the developed model in (6), the overall control blocks of the REC are presented in Fig. 2. The reactive power is controlled by manipulating the output AC voltage. In addition, a damping compensation unit is designed to improve the dynamic response performance of the proposed strategy. Constant ratio control of DC transformer In the conventional control of an all-DC wind farm, the voltage reference of the DC collection bus is constant. Therefore, it is decoupled from the HVDC voltage, which represents the variation of the grid frequency when inertial synchronising control is applied in the REC. The proposed constant ratio control of the DC transformer regulates the voltage reference of the DC collection bus according to the HVDC voltage. Hence, the grid frequency information is delivered to the wind turbines. The block diagram of the constant ratio control is shown in Fig. 3. The DC voltage of the HVDC side is measured as U*_dc, in order to eliminate the influence of wind power fluctuation. The voltage drop on the HVDC transmission line is calculated as the product of the output current I_dc and the line resistance R_L, and added to U*_dc. The result is then divided by a constant ratio n and becomes the reference of the DC collection bus voltage U_dc2. The control of U_dc2 is realised by the bottom-level control of the DC transformer, which is mainly decided by the topology of the converter, e.g. a two-level or MMC structure. The bottom-level control is not introduced in detail since it is not the focus of this paper. Frequency response of wind turbine When the variation of the DC voltage is detected by a DC wind turbine, the grid frequency information can be derived from ω_grid ≈ ω_nom + K (n U_dc2 − U_dc_nom) (8), where n is the constant ratio of the DC transformer, U_dc2 is the DC voltage at the DC wind turbine, and U_dc is the DC voltage of the HVDC line (the line voltage drop is compensated by the constant ratio control). Hence, the inertial response and primary regulation can be realised by the DC wind turbines. The capability of wind turbines to provide inertial response is investigated in [7].
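Before detailing the turbine-side power response, the voltage-frequency signal chain described in the preceding two subsections can be sketched numerically. The snippet below implements the reconstructed link (5), the constant ratio control, and the turbine-side frequency recovery (8); every parameter value here is hypothetical and chosen for illustration only, not taken from the paper.

```python
import math

U_dc_nom = 300e3                    # nominal HVDC voltage, V (hypothetical)
w_nom = 2 * math.pi * 50.0          # nominal grid angular frequency, rad/s
K = 5e-4                            # voltage-frequency coupling gain (hypothetical)
n = 10.0                            # DC transformer constant ratio (hypothetical)
R_L, I_dc = 1.0, 500.0              # line resistance (ohm), DC current (A), hypothetical

def rec_frequency(U_dc):
    # Reconstructed ISC link (5): REC frequency tracks the DC bus voltage.
    return w_nom + K * (U_dc - U_dc_nom)

def collection_bus_ref(U_dc):
    # Constant ratio control: compensate the line drop, then divide by n.
    return (U_dc + I_dc * R_L) / n

def grid_frequency_at_turbine(U_dc2):
    # Turbine side inverts the chain to recover the grid frequency, cf. (8).
    return w_nom + K * (n * U_dc2 - I_dc * R_L - U_dc_nom)

U_dc = U_dc_nom - 2e3               # DC voltage sags when grid frequency drops
U_dc2 = collection_bus_ref(U_dc)
print(rec_frequency(U_dc), grid_frequency_at_turbine(U_dc2))
```

Because the constant ratio control compensates the line drop, the frequency recovered at the turbine matches the REC output frequency exactly in this idealised chain, which is the property the proposed strategy relies on.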
An additional value associated with the rate-of-change-of-frequency (RoCoF) is attached to the active power reference (P_MPPT) given by the MPPT control. Additional power P_add is provided by accelerating or decelerating the wind turbine and utilising the kinetic energy stored in the rotating blades. Assuming that the virtual inertia of the wind farm is H_WF, the value of the additional power is P_add = −2 H_WF (dω_grid/dt) (9). Substituting (8) into (9) gives P_add = −2 H_WF K n (dU_dc2/dt) (10). Given that the kinetic energy stored in the rotating blades is limited, if primary regulation of the wind farm is needed, then a power source such as energy storage should be added. Another option is the utilisation of de-loading strategies that preserve a generation margin. Because of the extra cost brought by an additional power source, especially for offshore all-DC wind farms, a de-loading strategy is applied in the proposed scheme. Two de-loading strategies have been discussed in [13,14]: pitch-angle-based de-loading and rotor-speed-based de-loading. In order to decouple from the inertial response, which is rotor-speed-based, the pitch-angle-based de-loading strategy is utilised for primary frequency regulation. The deviation of the grid frequency can be obtained from (8). The control block diagram of wind turbines with frequency response capability is shown in Fig. 4. The inertial response is realised by the variation of the rotor speed: an extra power P_add is calculated by (10) and added to the original power reference P_MPPT given by the MPPT control. On the other hand, the primary frequency regulation is realised by pitch angle control. Under normal circumstances, the wind turbine works under a small pitch angle to reserve part of the wind power. The amount of reserved power is decided by the demand of the local grid (usually 5-10%). When a grid frequency variation is detected, the wind turbines change the pitch angle according to the deviation of the grid frequency from its rated value. Simulations To validate the effectiveness of the proposed coordination control strategy, an all-DC wind farm model is constructed in PSCAD/EMTDC based on Fig. 1. The parameters of this model are shown in Tables 1-4. The onshore power grid is equivalent to a single SG with a load of 500 + j100 MVA. There are four clusters in this model and each cluster consists of five permanent magnet SGs (PMSGs). The rated power of a wind turbine is 10 MW. 10% of the rated power is reserved for primary frequency regulation. Therefore, the total output power of the wind turbines is 180 MW, i.e. 36% of the total load. Neglecting the voltage drop on the collection lines, the wind turbines in each cluster can be represented by an aggregated 50 MW PMSG model. The system simulation diagram is shown in Fig. 5. Based on this model, the capability of the proposed control strategy to provide inertial response and primary frequency regulation is validated under two different scenarios. Scenario I: A step increase of 20% active power and 5% reactive power of Load 1 is simulated at 2.0 s to cause a grid frequency drop. The simulation results are shown in Fig. 6. Scenario II: A step decrease of 20% active power and 5% reactive power of Load 1 is simulated at 5.0 s to cause a grid frequency increase. The simulation results are shown in Fig. 7. It can be observed from Figs. 6a and 7a that with the proposed control strategy, the grid frequency variation caused by the sudden change of load can be delivered to the wind turbines timely and precisely through the HVDC voltage and the DC collection bus voltage.
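As a companion to the control block description above, here is a minimal sketch of the combined turbine-side response that the simulation results illustrate: a virtual-inertia term proportional to the RoCoF per the reconstructed (9), plus a pitch-based droop limited by the reserved margin. The inertia constant, droop value, frequency trajectory, and ratings are all assumed illustrative values, not the paper's simulation parameters.

```python
import numpy as np

H_WF = 4.0        # virtual inertia constant of the wind farm, s (hypothetical)
R_droop = 0.05    # primary-regulation droop (hypothetical)
P_rated = 10.0e6  # rated turbine power, W
f_nom = 50.0      # nominal grid frequency, Hz

t = np.linspace(0.0, 10.0, 1001)
# Illustrative frequency excursion after a load step (synthetic trajectory).
f_grid = f_nom - 0.3 * (1 - np.exp(-t / 1.5))

rocof = np.gradient(f_grid, t)                       # df/dt, Hz/s

# Reconstructed (9): inertial power proportional to the RoCoF.
P_inertial = -2.0 * H_WF * (rocof / f_nom) * P_rated

# Pitch-angle-based primary regulation: droop on the frequency deviation,
# limited by the 10% reserved generation margin.
P_primary = np.clip(-(f_grid - f_nom) / (R_droop * f_nom) * P_rated,
                    -0.1 * P_rated, 0.1 * P_rated)

print(P_inertial[:3], P_primary[-1])
```

The inertial term dominates immediately after the disturbance, when the RoCoF is large, and the droop term takes over once the frequency settles at its new value, mirroring the two phases observed in Figs. 6b and 7b.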
After detecting the deviation of the grid frequency, the wind farm realises fast inertial response and primary frequency regulation with the proposed control strategy (see Figs. 6b and 7b). At the beginning of the grid frequency variation, the support power from the wind farm is dominated by the inertial response and is proportional to the RoCoF. After t = 5 s, the grid frequency becomes stable. The support power is then dominated by the primary frequency regulation, and is proportional to the deviation of the grid frequency. From Figs. 6a and 7a, the frequency nadir/peak and the RoCoF of the onshore grid are improved with the proposed control strategy, which validates its effectiveness. With the conventional control strategy, by contrast, the wind turbines cannot sense the grid frequency deviation and thus provide no frequency support. Conclusion A multi-timescale coordination control strategy is proposed in this paper, including the inertial synchronising control of the REC, the constant ratio control of the DC transformers, and the frequency response of the wind turbines. With the proposed control strategy, the grid frequency deviation is delivered to the wind turbines through the DC link timely and precisely. Meanwhile, the rotor-speed-based inertial response and the pitch-angle-based primary frequency regulation are applied in the wind turbines. Therefore, the all-DC wind farm operates like an SG towards the onshore grid, providing fast frequency support when the onshore grid frequency changes. The effectiveness of the proposed method is also validated by the simulation results. Acknowledgments This work was supported by the National Natural Science Foundation of China (51707118) and the State Key Laboratory of Operation and Control of Renewable Energy & Storage Systems (NYB51201801481).
3,470.6
2019-03-15T00:00:00.000
[ "Engineering" ]
Hadamard Upper Bound (HUB) on Optimum Joint Decoding Capacity of Wyner Gaussian Cellular MAC This paper presents an original analytical expression for an upper bound on the optimum joint decoding capacity of the Wyner Circular Gaussian Cellular Multiple Access Channel (C-GCMAC) for uniformly distributed Mobile Terminals (MTs). This upper bound is referred to as the Hadamard Upper Bound (HUB) and is a novel application of the Hadamard inequality, established by exploiting the Hadamard operation between the channel fading matrix H and the channel slow gain matrix Ω. This paper demonstrates that the theoretical upper bound converges to the actual capacity under constraints such as a low range of signal to noise ratios and limiting channel slow gain among the MTs and the Base Station (BS) of interest. The behaviour of the theoretical upper bound is critically observed when the inter-cell and the intra-cell time sharing schemes are employed. In this context, we employ an approximation approach to evaluate the effect of the MT distribution on the optimal joint decoding capacity for a variable user density in the C-GCMAC. This paper demonstrates that the analytical HUB based on the proposed approximation approach converges to the theoretical upper bound results for the medium to high range of signal to noise ratios and shows a comparably tighter bound on the optimum sum-rate capacity. Introduction Recent milestones in the information theory of wireless communication systems with multiple antennas and multiple users have offered newfound hope to meet the growing demand for capacity [3]-[11]. Multiple Input Multiple Output (MIMO) technology provides substantial gains over single antenna communication systems; however, the cost of deploying multiple antennas at the mobile terminals (MTs) in a network can be prohibitive, at least in the immediate future [3], [8]. In this context, the distributed MIMO approach is a means of realizing the gains of MIMO with single antenna terminals in a network, allowing a gradual migration to a true MIMO network. This approach requires some level of cooperation among the network terminals, which can be accomplished through suitably designed protocols [4]-[6], [12]-[16]. Towards this end, in the last few decades, numerous papers have been written to analyze various cellular models using information theoretic arguments to gain insight into the implications of the system parameters on performance. For an extensive survey of this literature, the reader is referred to [5], [6], [17]-[19] and references therein. The analytical framework of this paper is inspired by the analytically tractable model for multicell processing (MCP) proposed in [7], where Wyner incorporated the fundamental aspects of cellular networks into the framework of the well known Gaussian multiple access channel (MAC) to form the Gaussian cellular MAC (GCMAC). The majority of multi-cell decoding cellular models preserve the fundamental assumptions which initially appeared in Wyner's model, namely: (i) only interference from the two adjacent cells is considered; (ii) random user locations, and therefore path loss variations, are ignored; (iii) the interference intensity from each neighboring base station (BS) is characterized by a single fixed parameter 0 ≤ Ω ≤ 1, i.e. the collocation of MTs. Although this model is mathematically tractable, it remains unrealistic with respect to current practical cellular systems. A.
Background and Related Work The Wyner model was first used in [7] to derive the capacity of uplink cellular networks with MCP, where it is shown that intra-cell time division multiple access (TDMA) is optimal and achieves the capacity. It was generalized in [20] to account for flat fading, where it is proved that wideband transmission is advantageous over intra-cell TDMA and that fading increases capacity when the number of users is sufficiently large. In [5] and [6], the Wyner model is used to analyze the throughput of cellular networks under single-cell processing (SCP) and two-cell-site processing (TCSP). Later on, scaling results for the sum capacity were derived under the Wyner model with MIMO links in [21]. Recently, the Wyner model was extended to incorporate shadowing in [22]. Despite the fairly large amount of literature based on the Wyner model, to our knowledge no effort has been made to validate this simple model for a realistic cellular environment, namely by deriving information theoretic bounds on the Wyner model that exploit variable user density across the cells for a finite number of cooperative BSs. A similar attempt has been made in [23], where random matrix theory (RMT) was used to derive the sum-rate capacity, whereas the main contribution of our paper is to offer a non-asymptotic approach to deriving information theoretic bounds on the Wyner C-GCMAC model. B. Contributions In this paper, we consider a circular version of the Wyner GCMAC (obtained by wrapping the linear Wyner model around to form a circle), which we refer to as the Circular GCMAC (C-GCMAC). We consider an architecture where the Base Stations (BSs) can cooperate to jointly decode all users' data (macro-diversity). Thus, we dispense with the cellular structure altogether and consider the entire network of BSs and users as a network-MIMO system. Each user has a link to each BS, and the BSs cooperate to jointly decode all users' data. In the first part of this paper, we study the derivation of the Hadamard inequality and its application to derive the Hadamard upper bound (HUB) on the optimum joint decoding capacity, which we refer to as the theoretical Hadamard upper bound throughout this paper. The theoretical results of this paper are exploited further to study the effect of the variable path gains offered by each user in the adjacent cells to the BS of interest (due to variable user density) and to derive an analytical form of the upper bound. The performance analysis of the first part of this paper includes the presentation of capacity expressions over multi-user and single user decoding strategies, with and without intra-cell and inter-cell TDMA schemes, to determine the existence of such an upper bound. In the second part of this paper, we approximate the probability density function (PDF) of the Hadamard product of the channel fading matrix G and the channel slow gain matrix Ω and derive the analytical form of the HUB. The closed form representation of the HUB is given in terms of Meijer's G-Function. The performance and comparison analysis of the analytical work includes the presentation of information theoretic bounds over the range of signal to noise ratios (SNRs) and the calculation of the mean area spectral efficiency (ASE) over a range of cell radii for the system under consideration. The main contributions of this paper are: 1) to derive theoretical and analytical upper bounds on the optimum joint decoding capacity of the Wyner C-GCMAC by exploiting the Hadamard inequality for a finite cellular network-MIMO setup; and 2) to alleviate Wyner's original assumption by assuming variable user density across the cells, i.e. the MTs are uniformly distributed across the cells in the C-GCMAC model. This paper is organized as follows. In Section II, the system model for the Wyner C-GCMAC is recast in a Hadamard matrix framework. In Section III, the Hadamard inequality is derived as Theorem 3.3 based on Theorem 3.1 and Corollary 3.2. In Section IV, a novel application of the Hadamard inequality is employed to derive the theoretical upper bound on the optimum joint decoding capacity. This is followed by several simulation results for single user and multi-user scenarios that validate the analysis and illustrate the effect of various time sharing schemes on the performance of the optimum joint decoding capacity for the system under consideration. In Section V, the theoretical results on the Hadamard upper bound are further exploited to derive a novel analytical expression for the upper bound on the optimum joint decoding capacity by using the Hadamard inequality. This is followed by numerical examples and discussions in Section VI that validate the simulation and analytical results, and illustrate the accuracy of the proposed approximation based approach for realistic cellular network-MIMO systems. Conclusions are presented in Section VII.
Notation: Throughout the paper, R^{N×1} and C^{N×1} denote N dimensional real and complex vector spaces, respectively. Furthermore, P^{N×1} denotes an N dimensional permutation vector which has a 1 at some specific position in each column. Moreover, matrices are represented by upper boldface letters; as an example, an N×M matrix A with N rows and M columns is represented as A_{N×M}. Similarly, vectors are represented by the lowercase boldface italic version of the original matrix; as an example, an N×1 column vector a is represented as a_{N×1}. An element of a matrix or a vector is represented by the non-boldface letter of the respective structure with subscripted row and column indices; as an example, a_{n,m} refers to the element in row n and column m of a matrix A_{N×M}. Similarly, a_k refers to element k of the vector a_{N×1}. A. System Model Consider a Circular Gaussian Cellular MAC (C-GCMAC) where N = 6 cells are arranged in a circle as shown in Fig. 1 [24], [25]. Assume each cell contains K users, such that there are M = NK users in the network-MIMO system. Wyner's model of a cellular network used a single parameter to represent the signal strength of inter-cell interference, where the path gain to the closest BS is 1, the path gain to the adjacent BSs is Ω, and it is zero elsewhere [7]. Wyner considered optimal joint processing of all BSs by exploiting BS cooperation. Later, Shamai and Wyner considered a similar model with a frequency flat fading scenario and more conventional decoding schemes [5] and [6]. Thus, assuming perfect symbol and frame synchronization, and K users in each cell, at a given time instant the received signal at each BS is [12] y_j = Σ_{l=1}^{K} h^l_{B_j T_j} x^l_j + Σ_{i=±1} Σ_{l=1}^{K} h^l_{B_j T_{j+i}} x^l_{j+i} + z_j, (1) where {B_j}_{j=1}^N are the BSs in each cell, {T_j}_{j=1}^N are the Mobile Terminals (MTs) located in the j-th cell for j = 1, 2, ..., N, x^l_j is the transmitted complex symbol from the l-th transmitter in the j-th cell, and each z_j ∼ CN(0, σ_z²). Each transmitted symbol is subject to the average power constraint E[|x^l_j|²] ≤ P for all (j, l) ∈ (1, ..., N) × (1, ..., K). Also, h^l_{B_j T_j} is the intra-cell channel gain between the l-th MT T_j and the BS B_j in the j-th cell, and h^l_{B_j T_{j+i}} is the inter-cell channel gain between the l-th MT T_{j+i} in the (j+i)-th cell for i = ±1 and the BS B_j. In general, we model the intra-cell and inter-cell channel gains as a Hadamard product of two terms, h^l_{B_j T_{j+i}} = Ω^l_{B_j T_{j+i}} g^l_{B_j T_{j+i}} for i = 0, ±1, where Ω^l_{B_j T_{j+i}} ∈ U(0, 1) denotes the frequency flat path gain that depends on the distance between the BSs and the MTs, calculated according to the normalized path loss model, where d_{B_j T_j} and d^l_{B_j T_{j+i}} are the distances along the line of sight of the transmission path between the intra-cell and inter-cell MTs and the respective BS of interest, and η is the path loss exponent, assumed to be 4 for an urban cellular environment [2]. The gain g^l_{B_j T_{j+i}} ∼ CN(0, 1) is the small scale fading coefficient that depends on the local scattering environment around the MTs, such that the fading coefficients are assumed to have unit power. It is to be noted that these two components of the resultant composite channel are mutually independent, as they are due to different propagation effects. Therefore, the C-GCMAC model in (1) can be extended to account for fading, and for notational convenience the entire signal model over the C-GCMAC can be more compactly expressed as a vector memoryless channel of the form y = H x + z, (2) where H = G ∘ Ω is defined as the Hadamard product of the channel fading matrix G and the channel slow gain matrix Ω. The modeling of the channel slow gain matrix Ω_{N,K} for single and multi-user environments can be well understood from the following Lemma. Lemma 2.1: (Modeling of Channel Slow Gain Matrix) Let S be the circular permutation operator, viewed as an N × N matrix relative to the standard basis for R^N. For the given circular cellular setup, initially assume K = 1 and N = 6, such that there are M = NK = 6 users in the system. Let
{e_1, e_2, ..., e_6} be the standard row basis vectors for R^6, such that S e_i = e_{i+1} for i = 1, 2, ..., N (indices taken modulo N). Therefore, the circular shift operator matrix S relative to the defined row basis vectors can be expressed as the corresponding 6 × 6 cyclic permutation matrix [26], [27]. The matrix S is real and orthogonal, hence S^{−1} = S^T, and the basis vectors are orthogonal for R^6. In the symmetrical Wyner model, the variable slow gain between the MTs T_{j+i} for i = 0, ±1 and the respective BS B_j can be viewed as a row of the resultant N × M circular channel slow gain matrix Ω and can be expressed as Ω(1, :) = [Ω_{B_j T_j}, Ω_{B_j T_{j+1}}, 0, 0, 0, Ω_{B_j T_{j−1}}], where Ω_{B_j T_j} is the slow gain between the intra-cell MT and the respective BS, and Ω_{B_j T_{j+1}} and Ω_{B_j T_{j−1}} are the channel slow gains between the MTs T_{j±1} in the adjacent cells on the right and left sides of the BS of interest, respectively. In this setup, it is known that the circulant matrix Ω can be expressed as a linear combination of powers of the shift operator S [26], [27]. Therefore, the resultant circular symmetrical channel slow gain matrix in this scenario can be expressed as Ω_{N,1} = I_N + Ω (S + S^T). Similarly, the channel slow gain model can be extended to the unsymmetrical scenario as Ω_{N,1} = I_N + Ω_{B_j T_{j+1}} S + Ω_{B_j T_{j−1}} S^T, where I_N is the N × N identity matrix, S is the shift operator, and Ω_{B_j T_{j±1}} ∈ U(0, 1). Furthermore, for the multi-user scenario the symmetric model may be formulated as Ω_{N,K} = (I_N + Ω (S + S^T)) ⊗ 1_K, and similarly the unsymmetrical model as Ω_{N,K} = (I_N + Ω_{B_j T_{j+1}} S + Ω_{B_j T_{j−1}} S^T) ⊗ 1_K, where 1_K denotes the 1 × K all-ones vector and ⊗ denotes the Kronecker product. B. Terminology In this paper, we consider different system settings, which are explained as follows: i. Intra-cell TDMA: one user per cell is allowed to transmit at any time instant, while the users in different cells can transmit simultaneously. ii. Inter-cell TDMA: one cell is active at any time instant and all the local users inside that cell are allowed to transmit simultaneously, while the users in the other cells are inactive at that time instant. iii. Channel Slow Gain (Ω): the normalized path loss offered by MTs in adjacent cells to the BS of interest. iv. Multi-cell Processing (MCP): for the uplink, a joint receiver has access to all the received signals and an optimal decoder decodes all the transmitted signals jointly; for the downlink, the transmit signal from each BS contains information for all users. v. Single-cell Processing (SCP): for the uplink, BSs only process transmit signals from their own cells and treat inter-cell interference as Gaussian noise; for the downlink, BSs transmit signals with information only intended for their local users.
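Lemma 2.1's construction is easy to check numerically. The following numpy sketch builds the shift operator, the symmetric slow gain matrix, the multi-user Kronecker extension, and one composite channel realization; the adjacent-cell gain value is an illustrative choice.

```python
import numpy as np

N, K = 6, 1
S = np.roll(np.eye(N), 1, axis=1)     # cyclic shift operator S

# Symmetric Wyner slow-gain matrix: unit intra-cell gain, adjacent-cell gain Omega.
omega_adj = 0.5                        # illustrative value
Omega = np.eye(N) + omega_adj * (S + S.T)

# Multi-user extension via Kronecker product with a 1xK all-ones vector.
Omega_NK = np.kron(Omega, np.ones((1, K)))

# Composite channel: Hadamard product of Rayleigh fading and slow gains.
G = (np.random.randn(N, N * K) + 1j * np.random.randn(N, N * K)) / np.sqrt(2)
H = G * Omega_NK
print(H.shape)
```

Each row of Omega has the banded circulant structure of the Wyner model: a unit diagonal entry flanked by the two adjacent-cell gains, with zeros elsewhere, wrapped around at the ends.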
III. INFORMATION THEORY AND HADAMARD INEQUALITY In this section, a novel expression for the upper bound on the sum-rate capacity based on the Hadamard inequality is derived [12]. The upper bound is referred to as the Hadamard upper bound (HUB) throughout the paper in the discussions and analysis. Let us first assume that the receiver has perfect channel state information (CSI), while the transmitter knows neither the statistics nor the instantaneous CSI. In this case, a sensible choice for the transmitter is to split the total amount of power equally among all data streams and, consequently, an equal-power transmission scheme takes place [4]-[6]. The justification for adopting this scheme, though not optimal, originates from the so-called max-min property, which demonstrates the robustness of the above mentioned technique for maximizing the capacity of the worst fading matrix [3]-[6]. Under these circumstances, the most commonly used figure of merit in the analysis of MIMO systems is the normalized total sum-rate constraint, which in this paper is referred to as the optimum joint decoding capacity. Following the argument in [8], one can show that the sum-rate capacity of the system of interest is C_opt(p(H), γ) = (1/N) E_{p(H)}[ log₂ det( I_N + γ H H† ) ], (12) where p(H) signifies that the channel matrix is ergodic with density p(H), I_N is an N × N identity matrix, and γ is the SNR. Here, the BSs are assumed to be able to jointly decode the received signals in order to detect the transmitted vector x. Applying the Hadamard decomposition (5), the Hadamard form of (12) may be expressed as C_opt(p(H), γ) = (1/N) E[ log₂ det( I_N + γ (G ∘ Ω)(G ∘ Ω)† ) ]. (13) Theorem 3.1: (Hadamard Product) Let G and Ω be arbitrary N × M matrices. Then [28], [29] G ∘ Ω = P_N^T (G ⊗ Ω) P_M, (14) where P_N and P_M are N² × N and M² × M partial permutation matrices, respectively. The j-th column of P_N and P_M has a 1 in its ((j − 1)N + j)-th and ((j − 1)M + j)-th position respectively, and zeros elsewhere. The following corollary lists several useful properties of the partial permutation matrices P_N and P_M. Corollary 3.2: (Hadamard Product) For brevity, the partial permutation matrices P_N and P_M are denoted by P unless it is necessary to emphasize their order. In the same way, the partial permutation matrices Q_N and Q_M, defined below, are denoted by Q. i. P_N and P_M are the only matrices of zeros and ones that satisfy (14) for all G and Ω. ii. P^T P = I and P P^T is a diagonal matrix of zeros and ones, so 0 ⪯ P P^T ⪯ I. iii. There exists a zeros-and-ones matrix Q such that Π = [P Q] is a permutation matrix. The matrix Q is not unique, but for any choice of Q the following holds: Q^T Q = I and Q Q^T = I − P P^T. iv. Using the properties of a permutation matrix together with the definition of Π in (iii), we have Π Π^T = P P^T + Q Q^T = I. Theorem 3.3: (Hadamard Inequality) Let G and Ω be arbitrary N × M matrices. Then [28]-[30] (G ∘ Ω)(G ∘ Ω)† = (G G†) ∘ (Ω Ω†) − Γ_PQ, (15) where Γ_PQ = P_N^T (G ⊗ Ω) Q_M Q_M^T (G ⊗ Ω)† P_N, which we call the Gamma equality function. From (15), since Γ_PQ is positive semi-definite, we can deduce [28] (G ∘ Ω)(G ∘ Ω)† ⪯ (G G†) ∘ (Ω Ω†). (16) This inequality is referred to as the Hadamard inequality, and it will be employed to find the theoretical Hadamard upper bound on the capacity (13). Proof: Using the well known property of the Kronecker product, (A C†) ⊗ (B D†) = (A ⊗ B)(C ⊗ D)†, and Corollary 3.2, expand (G ∘ Ω)(G ∘ Ω)† via (14), insert I = P P^T + Q Q^T between the factors, and multiply each term by the partial permutation matrix P of the appropriate order to apply Theorem 3.1; this yields (15).
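Theorem 3.1 and the inequality (16) can be verified numerically in a few lines. The sketch below constructs the partial permutation matrices exactly as defined above and checks both the identity (14) and the positive semi-definiteness of the gap in (16) for random matrices (the slow gains are taken real and uniform, as in the model).

```python
import numpy as np

def partial_perm(n):
    # n^2 x n selection matrix: column j has a 1 in row (j-1)*n + j (1-indexed),
    # i.e. its columns are the vectors e_j (x) e_j.
    P = np.zeros((n * n, n))
    for j in range(n):
        P[j * n + j, j] = 1.0
    return P

N, M = 4, 6
G = np.random.randn(N, M) + 1j * np.random.randn(N, M)
Om = np.random.rand(N, M)                 # real uniform slow gains
PN, PM = partial_perm(N), partial_perm(M)

# Theorem 3.1: Hadamard product as a compressed Kronecker product.
lhs = G * Om
rhs = PN.T @ np.kron(G, Om) @ PM
print(np.allclose(lhs, rhs))              # True

# Hadamard inequality (16): (GG^H) o (Om Om^T) - (G o Om)(G o Om)^H is PSD.
gap = (G @ G.conj().T) * (Om @ Om.T) - lhs @ lhs.conj().T
print(np.linalg.eigvalsh(gap).min() >= -1e-10)   # True (PSD up to round-off)
```

The PSD gap here is exactly the Gamma equality function Γ_PQ of (15), which is why it vanishes whenever G and Ω are diagonal.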
IV. THEORETICAL HADAMARD UPPER BOUND (HUB) In this section, we first introduce the theoretical upper bound by employing the Hadamard inequality (16). Later in this section, we demonstrate the behaviour of the theoretical upper bound when various time sharing schemes are employed. The simple upper bound on the optimum joint decoding capacity using the Hadamard inequality (Theorem 3.3) is derived as C_opt(p(H), γ) ≤ Ĉ_HUB(p(H), γ) = (1/N) E[ log₂ det( I_N + γ (G G†) ∘ (Ω Ω†) ) ]. (18) Now, in the following sub-sections, we analyze the validity of the HUB on the optimum decoding capacity with respect to single and multi-user environments under limiting constraints. i. Low Inter-Cell Interference For the single user case, as the inter-cell interference level among the MTs and the BSs becomes negligible, i.e. Ω → 0, the Hadamard upper bound on the optimum joint decoding capacity approaches the actual capacity, since G and Ω become diagonal matrices and (16) holds with equality, such that (G ∘ Ω)(G ∘ Ω)† = (G G†) ∘ (Ω Ω†). (19) Proof: To arrive at (19), we first notice from (15) that P_N^T (G ⊗ Ω) Q_M Q_M^T = 0 when G and Ω are diagonal matrices. Using Corollary 3.2, Q Q^T = I − P P^T; subsequently, we have P_N^T (G ⊗ Ω)(I − P P^T) = 0, such that P_N^T (G ⊗ Ω) = P_N^T (G ⊗ Ω) P P^T. Multiplying both sides by (G ⊗ Ω)† P_N and using the well known property of the Kronecker product, (A C†) ⊗ (B D†) = (A ⊗ B)(C ⊗ D)†, together with Theorem 3.1, we finally arrive at (19). Therefore, by using (19), the upper bound (18) coincides with the actual capacity (13). The summary of the theoretical HUB on the optimum joint decoding capacity over the flat faded C-GCMAC for K = 1 is shown in Fig. 2. The curves are obtained over 10,000 Monte Carlo simulation trials of the channel matrix H. It can be seen that the theoretical upper bound is relatively loose at medium to high SNRs compared to the bound at low SNRs for Ω ∈ U(0, 1) (compare the solid curve using (13) with the red dashed curve using (18)). This upper bound using (16) is a consequence of the fact that the determinant is increasing on the space of positive semi-definite matrices G and Ω. It can be seen that in a limiting environment, such as when Ω → 0, the theoretical upper bound approaches the actual optimum joint decoding capacity (compare the curve with square markers and the dashed-dotted curve). It is to be noted that the channel slow gain Ω among the MTs in the adjacent cells and the BS of interest may be negligible when users are located at the edges of the adjacent cells. ii. Tightness of HUB - Low SNR Regime In this sub-section, we show that the theoretical HUB on the optimum joint decoding capacity converges to the actual sum-rate capacity at the lower range of signal to noise ratios, whereas at the higher range of signal to noise ratios the offset from the actual sum-rate capacity is almost constant. In general, let Δ be the absolute gain inserted by the theoretical upper bound over C_opt, which may be expressed as Δ = Ĉ_HUB(p(H), γ) − C_opt(p(H), γ), (22) and which asymptotically tends to zero as γ → 0. Proof: Using (22) together with the property det(I + γA) = 1 + γ tr A + O(γ²) [31] and (15), the gap becomes Δ ≈ (1/N) E[ log₂( (1 + γ tr[(G G†) ∘ (Ω Ω†)]) / (1 + γ tr[(G ∘ Ω)(G ∘ Ω)†]) ) ]. In the limiting case, using the Taylor series expansion and ignoring higher order terms of γ, we have Δ ≈ (γ / (N ln 2)) E[ tr Γ_PQ ] → 0 as γ → 0. It is demonstrated in Fig. 2 that as γ → 0, the gain inserted by the upper bound is Δ ≈ 0. It can be seen from the figure that the theoretical HUB on the optimum capacity is loose at the higher range of signal to noise ratios and comparably tighter at the lower range of signal to noise ratios, and hence Ĉ_HUB(p(H), γ) ≈ C_opt(p(H), γ) at low SNR.
iii. Inter-Cell TDMA Scheme Note that (19) holds if and only if Γ_PQ = 0, which is mathematically equivalent to G_{N,1} and Ω_{N,1} being diagonal matrices for the single user case, i.e. K = 1, when inter-cell TDMA is employed, i.e. Ω = 0. This is considered as a special case of Circular-GCMAC decoding, when each BS decodes only its own local users (intra-cell users) and there is no inter-cell interference from the adjacent cells. Hence, the resultant channel matrix H_{N,1} = G_{N,1} ∘ Ω_{N,1} is a diagonal matrix, such that for the given G_{N,1} and Ω_{N,1}, (19) holds. The same is shown in Fig. 2 (compare the curve with square markers, the curve with plus markers and the dashed-dotted curve). B. Multi-User Environment In this sub-section, we demonstrate the behaviour of the theoretical HUB when two implementation forms of time sharing schemes are employed. One is referred to as the inter-cell TDMA, intra-cell narrowband scheme (TDMA, NB), and the other is the inter-cell time sharing, intra-cell wideband scheme, which we will refer to as (ICTS, WB) throughout the discussion. It is to be noted that SCP is employed to determine the application of the new bound for a realistic cellular environment. i. Inter-Cell TDMA, Intra-Cell Narrow-band scheme (TDMA, NB) In the multi-user case, when there are K active users in each cell, the channel matrix is no longer diagonal; hence (19) is not valid and Γ_PQ ≠ 0. However, the result of the single user case is still valid when intra-cell TDMA is employed in combination with inter-cell TDMA (TDMA, NB). If the multi-user channel matrix H_{N,K} is expressed as in (5), then by exploiting the TDMA, NB scheme the rectangular channel matrix H_{N,K} may be reduced to H_{N,1} = G_{N,1} ∘ Ω_{N,1}, where G_{N,1} and Ω_{N,1} are exactly the diagonal matrices discussed earlier in the single user case. The capacity in this case becomes C_{TDMA,NB} = (1/N) E[ log₂ det( I_N + γ H_{N,1} H_{N,1}† ) ]. Using the Hadamard inequality, the upper bound on the TDMA, NB sum-rate capacity is then equal to the actual sum-rate capacity offered by this scheduling scheme. The scenario is simulated and shown in Fig. 3. It is to be noted that the capacity in this figure is normalized with respect to the number of users and the number of cells. It can be seen that the actual sum-rate capacity and the upper bound on the optimum capacity are identical for K = 5 and K = 10 (compare the curves with circle markers with the black solid curves). ii. Inter-Cell Time Sharing, Wide-band scheme (ICTS, WB) It is well known that an increase in the number of users to be decoded jointly increases the channel capacity [5], [6], [13]-[16]. Let us consider a scenario without intra-cell TDMA, i.e. there are K active users in each cell and they are allowed to transmit simultaneously. Mathematically, the local intra-cell users are located along the main diagonal of the rectangular channel matrix H_{N,K}, and the capacity when only the inter-cell TDMA scheme (ICTS, WB) is employed follows from (13) with this reduced channel matrix. The capacity obtained by employing the ICTS, WB scheme for K = 5 and K = 10 is shown in Fig. 3(a) and Fig.
3(b), respectively. The theoretical upper bound on the capacity using the Hadamard inequality with the ICTS, WB scheme is also shown in this figure (compare the solid curve with the dashed curve). It is observed that the theoretical upper bound on the ICTS, WB capacity increases with the number of intra-cell users to be jointly decoded in the multi-user case. As an example, at γ = 20 dB and for K = 5, the relative increment in sum-rate due to the Hadamard upper bound is 6.5%, and similarly for K = 10 the relative increment rises to 12%. Thus, using the inequality (16), multi-user decoding yields an upper bound that exceeds the actual capacity offered by this scheme by an excess on the order of log₂(K), and this excess is not achievable. Also, the overall performance of ICTS scheduling is superior to the TDMA scheme due to the wideband intra-cell transmission (compare the dashed-dotted curves with the solid curves). The results are summarized in Table I to clearly validate the existence of the HUB. V. ANALYTICAL HADAMARD UPPER BOUND (HUB) In this section, we approximate the PDF of the Hadamard product of the channel fading matrix G and the channel slow gain matrix Ω via the PDF of the trace of the Hadamard product of the two matrices. The closed form expression of the new HUB is expressed in terms of a Meijer's G-Function representation. Recall from (18) (Section IV) that the simple upper bound on the optimum joint decoding capacity (13) is derived using the Hadamard inequality (Theorem 3.3); making use of the property det(I + γA) = 1 + γ tr A + O(γ²) and ignoring the terms of higher order in γ, the bound can be written as Ĉ_HUB ≈ E[ log₂( 1 + γ̄ tr(G ∘ Ω) ) ], (32) where tr(G ∘ Ω) denotes the trace of the Hadamard composite channel matrix G ∘ Ω; this is the Shannon transform of the trace of the random square Hadamard composite matrix, distributed according to a cumulative distribution function (CDF). Here γ̄ is the appropriately scaled SNR and γ = P/σ_z² is the ratio of the mobile terminal transmit power to the receiver noise. Using the trace inequality [32], we obtain an upper bound on (32). If u = xy, where x = tr G and y = tr Ω, then (34) can be expressed as an expectation with respect to f_{G∘Ω}(u), the joint PDF of tr G and tr Ω, which is evaluated in the following sub-section. VI. NUMERICAL EXAMPLES AND CONCLUSION In this section, we present Monte Carlo simulation results in order to validate the accuracy of the analytical analysis, based on the approximation approach, of the upper bound on the optimum joint decoding capacity for the C-GCMAC with uniformly distributed MTs. In the context of the Monte Carlo finite system simulations, the MTs' gains towards the BS of interest are randomly generated according to the considered distribution, and the capacity is calculated by evaluation of the capacity formula (13). Using (18), the upper bound on the optimum capacity is calculated [12]. It can be seen from Fig. 4 that the theoretical upper bound converges to the actual capacity under constraints such as low SNRs (compare the dashed curve with the solid curve) [12]. In the context of the mathematical analysis, which is the main contribution of this paper, (45) is utilized to compare the upper bound on the optimum joint decoding capacity based on the proposed analytical analysis with the upper bound based on simulations. It can also be seen from Fig. 4 that the proposed approximation shows comparable results over the entire range of SNR (compare the dotted and dashed curves). However, it is to be noted that the new HUB on the optimum joint decoding capacity of the multi-cell setup is tighter for the higher range of SNRs as compared to the low range of SNRs.
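The Monte Carlo procedure described above is straightforward to reproduce in outline. The sketch below compares the simulated optimum joint decoding capacity (13) with the theoretical HUB (18) for the symmetric circular model; the adjacent-cell gain, trial count, and SNR grid are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def wyner_channel(N, omega_adj, rng):
    # Symmetric circular Wyner model: unit diagonal, adjacent-cell gain omega_adj.
    S = np.roll(np.eye(N), 1, axis=1)
    Om = np.eye(N) + omega_adj * (S + S.T)
    G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    return G, Om

rng = np.random.default_rng(0)
N, trials = 6, 2000
for snr_db in (-10.0, 0.0, 10.0, 20.0):
    gamma = 10 ** (snr_db / 10)
    c_opt = c_hub = 0.0
    for _ in range(trials):
        G, Om = wyner_channel(N, 0.5, rng)
        H = G * Om                                          # Hadamard composite channel
        c_opt += np.log2(np.linalg.det(np.eye(N) + gamma * H @ H.conj().T).real)
        c_hub += np.log2(np.linalg.det(
            np.eye(N) + gamma * (G @ G.conj().T) * (Om @ Om.T)).real)
    print(f"SNR {snr_db:+.0f} dB: C_opt {c_opt/trials/N:.3f}, HUB {c_hub/trials/N:.3f}")
```

Consistent with the tightness analysis of Section IV, the two per-cell rates should essentially coincide at low SNR and separate by a roughly constant offset at high SNR.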
The proposed approximation based approach is useful for representing the sum-rate capacity of a realistic multi-cell setup, i.e. with variable user density and therefore variable channel slow gain towards the BS of interest. A further figure of merit utilized in cellular communication is the mean area spectral efficiency (ASE), averaged over a large number of fading realizations g^l_{B_j T_{j+i}} and channel slow gains Ω^l_{B_j T_{j+i}} for all (j, i) = (1, ..., N) × (0, ±1) and K users [34]. Further, we assume that the cell radius R ranges over 0.1-1 km for the system under consideration. Fig. 1: Uplink of the C-GCMAC where the BSs cooperate to decode all users' data (the solid lines illustrate intra-cell users and the dotted lines show inter-cell users); for simplicity, in this figure there is only K = 1 user in each cell. Fig. 2: Summary of the theoretical HUB on the optimum joint decoding capacity; the solid curve illustrates the capacity using (13); the dashed curve illustrates the capacity using (18); the curves with square markers and plus markers illustrate the capacity using (19) when Ω → 0 and when inter-cell TDMA is employed, respectively. Fig. 3: Summary of the optimum joint decoding capacity and the theoretical upper bound on the optimum capacity for the multi-user case when the TDMA, NB and ICTS, WB schemes are employed.
Fig. 4: Summary of the Hadamard Upper Bound (HUB) on the optimum joint decoding capacity of the C-GCMAC for variable user density across the cells; the solid curve illustrates the actual capacity using (13) obtained by Monte Carlo simulations; the dashed and dotted curves illustrate the HUB obtained by Monte Carlo simulations and by the analytical analysis using (18) and (32), respectively. The simulation curves are obtained after averaging 10,000 Monte Carlo trials of the composite channel H.
7,155.4
2010-10-15T00:00:00.000
[ "Computer Science", "Mathematics", "Engineering" ]
Investigation of transmembrane proteins using a computational approach Background An important subfamily of membrane proteins is the transmembrane α-helical proteins, in which the membrane-spanning regions are made up of α-helices. Given the obvious biological and medical significance of these proteins, it is of tremendous practical importance to identify the location of transmembrane segments. The difficulty of inferring the secondary or tertiary structure of transmembrane proteins using experimental techniques has led to a surge of interest in applying techniques from machine learning and bioinformatics to infer secondary structure from primary structure in these proteins. We are therefore interested in determining which physicochemical properties are most useful for discriminating transmembrane segments from non-transmembrane segments in transmembrane proteins, and for discriminating intrinsically unstructured segments from intrinsically structured segments in transmembrane proteins, and in using the results of these investigations to develop classifiers to identify transmembrane segments in transmembrane proteins. Results We determined that the most useful properties for discriminating transmembrane segments from non-transmembrane segments and for discriminating intrinsically unstructured segments from intrinsically structured segments in transmembrane proteins were hydropathy, polarity, and flexibility, and used the results of this analysis to construct classifiers to discriminate transmembrane segments from non-transmembrane segments using four classification techniques: two variants of the Self-Organizing Global Ranking algorithm, a decision tree algorithm, and a support vector machine algorithm. All four techniques exhibited good performance, with out-of-sample accuracies of approximately 75%. Conclusions Several interesting observations emerged from our study: intrinsically unstructured segments and transmembrane segments tend to have opposite properties; transmembrane proteins appear to be much richer in intrinsically unstructured segments than other proteins; and, in approximately 70% of transmembrane proteins that contain intrinsically unstructured segments, the intrinsically unstructured segments are close to transmembrane segments. Background Membrane proteins account for roughly one third of all proteins and play a crucial role in processes such as cell-to-cell signaling, transport of ions across membranes, and energy metabolism [1][2][3], and are a prime target for therapeutic drugs [2,[4][5][6]. One important subfamily of membrane proteins is the transmembrane proteins, of which there are two main types: • α-helical proteins, in which the membrane-spanning regions are made up of α-helices, and • β-barrel proteins, in which the membrane-spanning regions are made up of β-strands. β-barrel proteins are found mainly in the outer membrane of gram-negative bacteria, and possibly in eukaryotic organelles such as mitochondria, whereas α-helical proteins are found in eukaryotes and the inner membranes of bacteria [7]. Given the obvious biological and medical significance of transmembrane proteins, it is of tremendous practical importance to identify the location of transmembrane segments.
There are difficulties with obtaining the three-dimensional structure of membrane proteins using experimental techniques: • Membrane proteins have both a hydrophilic part and a hydrophobic part, and hence are not entirely soluble in either aqueous or organic solvents; this makes them difficult to crystallize, and hence difficult to analyze using X-ray crystallography, which requires crystallization of the sample. • Membrane proteins tend to denature upon removal from the membrane, making their three-dimensional structure difficult to analyze. Another interesting class of proteins is the intrinsically unstructured proteins, proteins that need not be folded into a particular configuration to carry out their function, existing instead as dynamic ensembles in their native state [21-24]. Intrinsically unstructured proteins have been associated with a wide range of functions including molecular recognition, molecular assembly/disassembly and protein modification [21,22,25]. We are interested in investigating the physicochemical properties of various classes of protein segments. In particular, we are interested in determining which properties are useful for discriminating transmembrane segments from non-transmembrane segments in transmembrane proteins, and for discriminating intrinsically unstructured segments from intrinsically structured segments in transmembrane proteins. We are further interested in any similarities or differences in physicochemical properties across these four classes of segments. We will then apply the results of this analysis to construct classifiers to discriminate transmembrane from non-transmembrane segments in transmembrane proteins. Physicochemical properties We are interested in determining which physicochemical properties are most useful for discriminating transmembrane segments from non-transmembrane segments in transmembrane proteins, and for discriminating intrinsically unstructured segments from intrinsically structured segments in transmembrane proteins. We are further interested in any similarities or differences in physicochemical properties across these four classes of segments. Certain properties, such as hydropathy and polarity, can be measured in different ways; this results in different scales. We are also interested in determining which scales are the most effective in discriminating transmembrane segments from non-transmembrane segments, and in discriminating intrinsically unstructured from intrinsically structured segments in transmembrane proteins. Our interest is in properties that can be easily computed given only a sequence of amino acids; we therefore considered properties that depend only on the type of each amino acid in a sequence, including: • Hydropathy, a measure of the relative hydrophobicity of an amino acid. There are four hydropathy scales in common use: the Kyte-Doolittle [26], Eisenberg-Schwarz-Komaromy-Wall [27], Engelman-Steitz-Goldman [28], and Liu-Deber [29] scales. • Polarity, a measure of how charge is distributed over an amino acid, affects how amino acids interact, and helps to determine protein structure. There are two polarity scales in common use: the Grantham [30] and the Zimmerman-Eleizer-Simha [31] scales. • Flexibility, a measure of the amount to which an amino acid residue contributes to the flexibility of a protein. • Polarizability, a measure of the extent to which positive and negative charge can be separated in the presence of an applied electric field.
• van der Waals volume, a measure of the volume occupied by an amino acid. • Bulkiness, a measure of the volume occupied by an amino acid, is correlated with hydrophobicity [32]. • Electronic effects, a measure that takes into account steric factors, inductive effects, resonance effects, and field effects [33]. • Helicity, the propensity of an amino acid to contribute to the formation of helical structures in proteins [34]. Given a sequence of amino acids, the "pointwise" property value associated to a particular position in the sequence depends only on which of the 20 amino acids occurs at that position. To increase the robustness of our results, we work with average property values instead of pointwise property values. The average of a given property associated to a particular amino acid A in the sequence is the average of the pointwise property values associated to the amino acids contained in a window of length L centered at A. The effectiveness of each property at discriminating transmembrane from non-transmembrane segments and intrinsically unstructured from intrinsically structured segments was assessed based on two criteria: (1) For a given property X, the degree to which the class-conditional distributions for the two classes overlap, that is, the degree to which p_X(x|class 1) and p_X(x|class 2) overlap. The less these two probability distributions overlap, the more easily the two classes can be separated. Knowledge of these probability distributions forms the basis for a Bayesian classifier, which classifies an instance having a value x for property X to "class 1" if and only if p_X(x|class 1) P{class 1} > p_X(x|class 2) P{class 2}, where P{class 1} is the probability of observing a class 1 instance and P{class 2} is the probability of observing a class 2 instance. The class-conditional probability distributions for the above properties are plotted in Figures 1, 2 and 3. (2) The Overlap Ratio, defined in the Methods section, is a numerical measure of the overlap between the conditional probabilities P{class 1|X = x} and P{class 2|X = x}. The smaller the Overlap Ratio, the more easily the two classes can be discriminated. The Overlap Ratios for discriminating transmembrane from non-transmembrane segments are shown in Table 1, while the Overlap Ratios for discriminating intrinsically unstructured from intrinsically structured segments are shown in Table 2. It turns out that the discriminating power of a given property depends on the length L of the window over which property values are averaged; Overlap Ratios are given in Tables 1 and 2 for all odd values of the window length L between 9 and 31. Our conclusions were as follows: • Whereas all four hydropathy scales can be used for discriminating transmembrane segments from non-transmembrane segments in transmembrane proteins, the Liu-Deber scale is the best scale for this task. • Whereas all four hydropathy scales can be used for discriminating intrinsically unstructured segments from intrinsically structured segments in transmembrane proteins, the Eisenberg-Schwarz-Komaromy-Wall scale is the best scale for this task. • Whereas both polarity scales can be used for discriminating transmembrane from non-transmembrane segments and for discriminating intrinsically unstructured from intrinsically structured segments in transmembrane proteins, the Grantham scale is slightly better for these tasks.
• For both classification problems (discriminating transmembrane from non-transmembrane segments and discriminating intrinsically unstructured from intrinsically structured segments), flexibility provided some degree of discriminating power, and bulkiness provided still less; neither property was as effective as hydropathy or polarity at discriminating between the two classes. • For both classification problems, polarizability, van der Waals volume, electronic effects, and helicity did not discriminate well between the two classes. Transmembrane segment classifiers We tested four classification techniques on the problem of discriminating transmembrane segments from non-transmembrane segments in transmembrane proteins: • C4.5 [35], a decision tree algorithm. • A support vector machine algorithm [36]. • Two variants of the Self-Organizing Global Ranking (SOGR) algorithm [37], SOGR-I [38,39] and SOGR-IB [38,39], which are described in detail in the Methods section. These algorithms depend on a number of parameters: the length L of the window used to extract features, the number of neurons m, the learning rate η_t, and the neighborhood size R. The performance of these algorithms depends on the choice of these parameters: for example, the performance of the SOGR-I algorithm as a function of the length of the window used to extract features is shown in Figure 4. Based on a series of experiments, we settled on a feature window length L of 10, a network size m of 16 neurons, a fixed learning rate η_t of 0.05, and a neighborhood size R of 2. Since the length of the window used to extract features was chosen to maximize the performance of the SOGR-I algorithm, the results will be slightly biased in favor of the SOGR-I and SOGR-IB algorithms. Designing a classifier also involves selecting the features that are most useful for the problem of interest. Based on our investigations of physicochemical properties, we based the classification on three features: • Hydropathy (Liu-Deber scale) • Polarity (Grantham scale) • Flexibility The performance of the above four classification techniques under ten-fold cross-validation when hydropathy (Liu-Deber scale), polarity (Grantham scale), and flexibility are used as features is shown in Table 3, while the performance when only polarity (Grantham scale) and flexibility are used as features is shown in Table 4. It is interesting that performance drops only slightly when two features are used instead of three. All four classifiers exhibited good performance, with out-of-sample accuracies of approximately 75%. While this may seem low, the substantial overlap of the transmembrane and non-transmembrane classes seen in Figures 1, 2 and 3 makes this a nontrivial classification problem. Filtering strategies can be used to improve the performance of these classifiers [38,39]. Conclusions We determined that the most useful properties for discriminating transmembrane segments from non-transmembrane segments and for discriminating intrinsically unstructured segments from intrinsically structured segments in transmembrane proteins were hydropathy, polarity, and flexibility, and based on these properties, constructed a number of classifiers to identify transmembrane segments with an out-of-sample accuracy of approximately 75%. Figure 4: Performance of the SOGR-I classifier as a function of the length of the window used to extract features, based on threefold cross-validation (fixed learning rate η_t = 0.05, neighborhood size R = 2, number of neurons = 16). Reproduced with permission from [38].
Several interesting observations emerged from our study: • Intrinsically unstructured segments and transmembrane segments tend to have opposite properties, as summarized in Table 5. For example, unstructured segments tended to have a low hydropathy value, whereas transmembrane segments tended to have a high hydropathy value. These results are in agreement with previous work that found that transmembrane segments tend to be more hydrophobic than non-transmembrane segments, due to the fact that transmembrane α-helices require a stretch of 12-35 hydrophobic amino acids to span the hydrophobic region inside the membrane [26]. • Transmembrane proteins appear to be much richer in intrinsically unstructured segments than other proteins; about 70% of transmembrane proteins contain intrinsically unstructured regions, as compared to about 35% of other proteins. • In approximately 70% of transmembrane proteins that contain intrinsically unstructured segments, the intrinsically unstructured segments are close to transmembrane segments. These observations may provide insight into the structural and functional roles that intrinsically unstructured segments play in membrane proteins, and may also aid in the identification of intrinsically unstructured and transmembrane segments from primary protein structure. Physicochemical properties The Overlap Ratio, a quantitative measure of how well two classes (referred to generically as "class 1" and "class 2") can be discriminated based on a property X, was calculated as follows. 1. We construct a graph such that: (a) The horizontal axis corresponds to the property X. We divide this axis into bins. (b) The y-value associated with the bin corresponding to X values between x and x + ε is the fraction of all instances in the training set that belong to class 1 and have a value for the feature X in the range [x, x + ε), where ε > 0 is small. The graph represents an approximation to the function P{class 1|X = x}. 2. We define the complementary function P{class 2|X = x} analogously from the class 2 instances. The Overlap Ratio is then defined as a measure of the overlap between these two functions: the smaller the Overlap Ratio, the more easily the two classes can be discriminated (see the sketch below). The SOGR-I and SOGR-IB classification algorithms Overview The Self-Organizing Global Ranking (SOGR) algorithm [37] was inspired by Kohonen's Self-Organizing Map (SOM) algorithm [40]. In the SOM algorithm, each neuron has associated with it a topological neighborhood, and the algorithm is such that neighboring neurons in the topological space tend to arrange themselves over time into a grid in feature space that mimics the neighborhood structure in the topological space. The SOGR algorithm differs from the SOM algorithm by dropping the topological neighborhood and replacing it with the concept of a global neighborhood generated by ranking. We consider two variants of the SOGR algorithm: • The first variant, SOGR-I [38,39], modifies the initialization scheme of SOGR. • The second variant, SOGR-IB [38,39] ("B" stands for "Batch update"), removes the dependence on the order in which instances are presented by only updating the weights after each cycle, where a cycle involves presenting the entire training set to the network, one instance at a time. This variant also uses the modified initialization procedure described above.
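As a concrete illustration of the windowed property averaging and the binned class curves described above, the following Python sketch (not the authors' code; the property scale values and the bin count are illustrative assumptions) computes window-averaged profiles and the per-bin class fractions from which an overlap measure can be derived:

```python
# A minimal sketch (not the authors' code) of two steps described above:
# (1) window-averaging a per-residue property over a window of length L, and
# (2) binning the averaged values to approximate the class curves used for
# the Overlap Ratio. The values in SCALE are illustrative placeholders,
# not a published hydropathy scale.
import numpy as np

SCALE = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "L": 3.8,
         "K": -3.9, "I": 4.5, "F": 2.8, "G": -0.4, "S": -0.8}

def windowed_average(seq, scale, L=11):
    """Average property value in a window of length L centered at each residue."""
    vals = np.array([scale.get(aa, 0.0) for aa in seq])
    half = L // 2
    out = np.full(len(seq), np.nan)
    for i in range(half, len(seq) - half):
        out[i] = vals[i - half:i + half + 1].mean()
    return out

def binned_class_fractions(x_class1, x_class2, n_bins=50):
    """For each bin, the fraction of all training instances that belong to
    the given class and fall in that bin (the curves of the Methods)."""
    x_all = np.concatenate([x_class1, x_class2])
    edges = np.linspace(x_all.min(), x_all.max(), n_bins + 1)
    h1, _ = np.histogram(x_class1, bins=edges)
    h2, _ = np.histogram(x_class2, bins=edges)
    n = len(x_all)
    return h1 / n, h2 / n, edges
```

An overlap measure can then be computed bin by bin from the two returned curves.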
Before we describe the above modifications in detail, we describe the SOGR algorithm itself. The SOGR classification algorithm We assume that m neurons are used; each neuron j has a weight vector w_j in feature space. 1. Initialization: each neuron is assigned an initial weight vector (the initialization scheme is the part modified by SOGR-I below). 2. Training: for each training instance with feature vector x, the neurons are ranked by their distance to x, and the R nearest neurons form the winning set Γ. • Updating Weights: Adjust the positions of each of the R winning neurons using the update rule w_j ← w_j + η_t (x − w_j), j ∈ Γ, where η_t is the learning rate. The learning rate is chosen to decrease with time in order to force convergence of the algorithm. In [37] it is suggested that the learning rate be decreased at an exponential rate, and that it should be smaller for larger neighborhood sizes R. 3. Assigning Classes to Neurons: Associated with each neuron j is a count of the number of instances belonging to each class that are closer to neuron j than any other neuron. This count is calculated as follows: • For each neuron, initialize the counts to zero. • For each instance (x_i, y_i) in the training set, find the closest neuron to the feature vector x_i, that is, find the neuron with the index j* = arg min_j ||x_i − w_j||, and increment the count in neuron j* corresponding to class y_i by 1. • After all instances in the training set have been considered, each neuron is assigned to the class corresponding to the largest count for that neuron. After the training process has been completed, a test instance can be classified by assigning it the class label of the nearest neuron. The SOGR-I classification algorithm The first variant, SOGR-I [38,39], modifies the initialization scheme of SOGR. Specifically, assume that the feature space is d-dimensional, so that the feature vectors belong to R^d. For each feature k, we find the largest and smallest values of that feature over the entire training set, which are respectively U_k = max_i x_ik and L_k = min_i x_ik, where x_ik is the k-th element of the feature vector x_i. Then the initial positions of the m neurons are chosen as w_j = L + [(j − 1)/(m − 1)] (U − L), j = 1, …, m, with L = (L_1, L_2, …, L_d) and U = (U_1, U_2, …, U_d). Thus the m neurons are evenly distributed along the line connecting (L_1, L_2, …, L_d) to (U_1, U_2, …, U_d). This approach has several advantages over other initialization methods: • It guarantees that the neurons will be in some sense evenly distributed throughout the feature space. Random initialization, on the other hand, does not guarantee this. If one has a large feature space, say of 60 dimensions, and comparatively few neurons, say 50, then with random initialization those neurons will with high probability not be evenly distributed throughout the feature space. • Even a small number of neurons can be used to populate the feature space. If we consider an alternate initialization procedure in which one populates the feature space with a d-dimensional grid of neurons, and there are q grid points along each feature space axis, then the total number of neurons required to populate this grid is q^d. For example, if q = 3 and the feature space has 60 dimensions, then the number of neurons required is 3^60 ≈ 4.2 × 10^28, which is clearly infeasible. The SOGR-IB classification algorithm The second variant, SOGR-IB [38,39], addresses two problems with the original SOGR algorithm: • The SOGR algorithm updates the weights after each new instance is presented to the network; as a result, the neuron trajectories can oscillate wildly. • The SOGR algorithm specifies that the learning rate should be decreased during the course of training, for example at an exponential rate. The problem is that if the learning rate is decreased too rapidly, then the neurons may get stuck before they have reached their optimal positions.
SOGR-IB ("B" stands for "Batch update") addresses these problems in two ways: • It uses a "batch update" strategy for updating the positions of the neurons in feature space. This eliminates the dependence of the results on the order in which instances are presented to the network, and also stabilizes the trajectories of the neurons. • The batch update strategy allows the use of a fixed, but small, learning rate η t , which eliminates the problem of
4,450.6
2008-03-20T00:00:00.000
[ "Biology", "Computer Science" ]
Widespread use of incorrect PCR ramp rate negatively impacts multidrug-resistant tuberculosis diagnosis (MTBDRplus) The scale-up of rapid drug resistance testing for TB is a global priority. MTBDRplus is a WHO-endorsed multidrug-resistant (MDR)-TB PCR assay with suboptimal sensitivities and high indeterminate rates on smear-negative specimens. We hypothesised that widespread use of an incorrect thermocycler ramp rate (speed of temperature change between cycles) impacts performance. A global sample of 72 laboratories was surveyed. We tested 107 sputa from Xpert MTB/RIF-positive patients and, separately, dilution series of bacilli, both at the manufacturer-recommended ramp rate (2.2 °C/s) and the most frequently reported incorrect ramp rate (4.0 °C/s). Detection of Mycobacterium tuberculosis-complex DNA (TUB-band), indeterminate results, accuracy, and inter-reader variability (dilution series only) were compared. 32 respondents did a median (IQR) of 41 (20-150) assays monthly. 78% used an incorrect ramp rate. On smear-negative sputa, 2.2 °C/s vs. 4.0 °C/s improved TUB-band positivity (42/55 vs. 32/55; p = 0.042) and indeterminate rates (1/42 vs. 5/32; p = 0.039). The actionable results (not TUB-negative or indeterminate; 41/55 vs. 28/55) hence improved by 21% (95% CI: 9-35%). Widespread use of incorrect ramp rate contributes to suboptimal MTBDRplus performance on smear-negative specimens and hence limits clinical utility. The number of diagnoses (and thus the number of smear-negative patients in whom DST is possible) will improve substantially after ramp rate correction. The latest iteration of MTBDRplus (version 2) was designed to have improved sensitivity on specimens, irrespective of smear status, and culture isolates. MTBDRplus's follow-on test for second-line resistance (MTBDRsl; Hain Lifescience, Germany) is based on similar principles and is also WHO-endorsed 9,16,21,22. MTBDRplus requires thermocycling to amplify DNA. The manufacturer recommends a ramp rate (speed of temperature change between cycles) of ≤ 2.2 °C/s 8, which the thermocycler they sell (the GTC-cycler) can achieve. Laboratories can use their own thermocyclers; however, these thermocyclers may have different default ramp rates or, in cheaper models, may not permit the ramp rate to be changed. None of the studies in a recent systematic review and meta-analysis of MTBDRplus report ramp rate, and few studies reported rates of TUB-band positivity 16,23. If an assay is TUB-band-negative, susceptibility results cannot, per the manufacturer's recommendation, be reported 8, and studies that do not report TUB-band positivity rates do not provide a complete characterisation of test performance. We hypothesised that the suboptimal sensitivities and high indeterminate rates reported for MTBDRplus on smear-negative specimens 12-17 were partly associated with incorrect ramp rate. If this phenomenon is widespread, it may explain a major limitation in the routine diagnosis of MDR-TB, for which MTBDRplus is the only commercially available molecular assay. This could result in large numbers of possible MDR-TB diagnoses being missed, exacerbate diagnostic delay, and have implications for diagnostic algorithms (e.g., confirmation of Xpert-indicated rifampicin-resistance, detection of rifampicin or isoniazid mono-resistance), clinical practice (e.g., detection of acquired resistance during treatment monitoring), and research studies (e.g., MDR-TB drug trials that need to rapidly screen patients). Ethics statement.
This study was approved by the Health Research Ethics Committee of Stellenbosch University (N09-11-296) and done in accordance with the relevant guidelines and regulations. Permission was granted by the institutional review board (IRB) to access anonymised residual specimens collected as part of routine diagnostic practice, and thus patient informed consent was waived. Survey of diagnostic and research laboratories. An invitation to an online survey was sent to 74 laboratories using MTBDRplus identified from a recent systematic review and meta-analysis 16, expert consultation, the Global Laboratory Initiative, the Global Health Delivery network, and FIND. We placed no restrictions on the type of facility or country that could respond. Initial non-responders were emailed at least a further three times. Questions included country, average number of MTBDRplus assays per month, primary purpose of the assay, specimen smear status, models of thermocyclers, whether the thermocycler permitted ramp rate to be changed, and the MTBDRplus ramp rate used (the full questions are listed in the supplement). Permission was obtained from respondents to use their anonymised data for publication. Specimen collection and decontamination. 107 de-identified sputa consecutively submitted to an accredited government quality-assured (South African National Accreditation System) laboratory in Cape Town, South Africa were collected. Sputa were from patients with symptoms of TB who were, using a separate paired specimen, Xpert MTB/RIF (Xpert)-positive for TB and rifampicin-susceptible or -resistant. Sputa were decontaminated with NaOH-N-Acetyl-L-Cysteine (1% final concentration) 24. From each decontaminated sediment, ~50 µl was used for Auramine-O smear microscopy 25 and, if the paired specimen was Xpert rifampicin-resistant, ~500 µl was used for culture for DST. 52 sputa were smear-positive (26 Xpert-rifampicin resistant, 26 Xpert-rifampicin susceptible) and 55 smear-negative (39 Xpert-rifampicin resistant, 16 Xpert-rifampicin susceptible). The sediments were stored at 4 °C for 2-3 days prior to transport to Stellenbosch University for DNA extraction. Impact of thermocycler ramp rate on MTBDRplus performance in clinical specimens. DNA was extracted from sediments using the GenoLyse kit (Hain Lifescience, Germany) 8. DNA was amplified using two ramp rates: the manufacturer-recommended ramp rate (2.2 °C/s) and 4.0 °C/s, the most frequently used incorrect ramp rate in the survey, using a CFX96 (Bio-Rad, United States), which was the only machine available with a customisable ramp rate. This instrument undergoes annual servicing and calibration by the manufacturer. Hybridisation was done with the GT-Blot 48 (Hain Lifescience, Germany) 26. An experienced reader interpreted bands in a blinded manner. Impact of thermocycler ramp rate on MTBDRplus performance in a dilution series. A drug-susceptible strain (H37Rv, ATCC 25618) and a phenotypically-confirmed clinical MDR strain (with known rpoB, katG, and inhA promoter SNPs) were grown to mid-exponential phase in Middlebrook 7H9 media (Becton Dickinson, United States) supplemented with Middlebrook Oleic Albumin Dextrose Catalase supplement (Becton Dickinson, United States). Colony counts after incubation on Middlebrook 7H10 media (Becton Dickinson, United States) for 21 days at 37 °C were done. This experiment was done in triplicate. MTBDRplus was done on dilutions of 10², 10³ and 10⁴ CFU/ml in phosphate buffer with 0.025% Tween 80.
10⁴ CFU/ml corresponds approximately to smear-positivity 27, and the lower concentrations in the dilution series correspond to paucibacillary smear-negative disease (i.e., the patients we hypothesise ramp rate to impact the most). The CFX96 machine with ramp rates of 2.2 °C/s or 4.0 °C/s was used. An experienced reader interpreted bands in a blinded manner. Assessment of inter-reader variability. MTBDRplus strips from the dilution series were interpreted by two experienced technicians in a blinded manner. Variability between readers (individual banding patterns, final diagnostic classifications) was assessed. When a strip is interpreted, a banding call determination is made if a specific band is present or absent, whereas a diagnostic call (susceptibility or resistance to rifampicin and/or isoniazid) is based on the overall banding pattern. Hence, banding patterns may change but not the diagnostic call. Definitions. TUB-band positivity was defined as the presence of the TUB-band together with the amplification and conjugation control bands. Sensitivity for M. tuberculosis-complex DNA was calculated using a paired MGIT960 liquid culture (Becton Dickinson, United States) result from the national laboratory as a reference standard. A strip was classified as indeterminate if the amplification or conjugate control bands were absent but any other bands were present. A drug indeterminate result was defined as the absence of any locus control band (rpoB, katG and inhA) on a TUB-band-positive strip. A result was classified as actionable if the strip was TUB-band-positive and not indeterminate for any drugs. Statistical analyses. The two-sample test of proportions was used for comparisons between proportions, and McNemar's test was used to calculate differences in sensitivity or indeterminate rates across ramp rates for paired data. We used the percent improvement in actionable results (calculated from our clinical specimen experiment, 21%) to estimate the number of additional TUB-band-positive diagnoses (and MDR-TB diagnoses) in survey respondents who said they tested smear-negative specimens. For this calculation, we assumed 1) the volume of assays done by the respondent was evenly spread across input material types (e.g., a respondent doing 100 MTBDRplus assays per month does ~33 each on smear-positive specimens, smear-negative specimens, and isolates; we unfortunately did not retrieve specific data on the monthly volume of smear-negative specimens only), 2) the MDR-TB prevalence in smear-negative specimens corresponded to the overall WHO estimate for the respondent's country, and 3) ramp rate changes would equally affect resistance and susceptibility detection. We used GraphPad Prism version 6.0 (GraphPad Software) and Stata version 14 (StataCorp) software. All statistical tests are 2-sided at α = 0.05. Data availability. The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. Performance of MTBDRplus at different ramp rates on dilution series of bacilli. Each of the three technical replicates for each strain in the dilution series (10², 10³ and 10⁴ CFU/ml) was TUB-band-positive and there were no indeterminate results, irrespective of ramp rate. At 4.0 °C/s, the drug-susceptible strain gave a false-positive rifampicin-resistance result in a 10² CFU/ml replicate; at higher concentrations, however, all results were true-susceptible. Overall, bands at 2.2 °C/s were subjectively interpreted by the experienced readers as being darker, clearer, and more distinct than those at 4.0 °C/s.
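As an illustration of the paired analysis described under "Statistical analyses", the sketch below runs an exact McNemar test on discordant-pair counts. The counts are hypothetical stand-ins, not the study data.

```python
# A minimal sketch of the paired comparison described above: an exact McNemar
# test on paired results obtained at the two ramp rates. The discordant
# counts below are hypothetical, not the study data.
from scipy.stats import binomtest

def mcnemar_exact(b, c):
    """b: pairs positive only at 2.2 deg C/s; c: pairs positive only at 4.0 deg C/s.
    Under H0 the discordant pairs split 50:50, so the exact test is binomial."""
    return binomtest(min(b, c), b + c, 0.5, alternative="two-sided").pvalue

print(f"McNemar exact p = {mcnemar_exact(b=11, c=1):.4f}")  # hypothetical counts
```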
Assessment of inter-reader agreement on dilution series. Banding and diagnostic calls differed between readers and were most pronounced at 10² CFU/ml (Table 3). Discussion Our key findings are: 1) the vast majority of survey respondents, who are globally diverse and do a large volume of MTBDRplus assays, use an incorrect ramp rate and this, 2) decreases sensitivity for TB (and hence precludes resistance detection), 3) increases indeterminate rates in smear-negative specimens, and 4) likely increases false-resistance calls and banding pattern disagreement between readers. These findings are of clinical relevance as most respondents used this assay routinely, indicating that incorrect ramp rate usage is likely affecting patient diagnoses. To the best of our knowledge, ours is the first evaluation of ramp rate on commercial assay performance in the clinical diagnostics literature. Ramp rate has been previously documented to be important: techniques such as "slowdown PCR", which are optimised to amplify GC-rich regions with complex secondary structures, use different rates for heating and cooling to improve primer annealing and amplification. Here, ramp rate is critical for the performance of this technique 28. As M. tuberculosis is GC-rich and rpoB can form secondary structures 29, it is possible that slower ramp rates help reduce secondary structure formation (e.g., during the transition from denaturation to annealing phases) and thereby result in better detection. Our survey found the majority of laboratories to use an incorrect ramp rate, despite a lower ramp rate being recommended. About half of respondents could change the ramp rate. Together, this illustrates that incorrect ramp rate usage is likely widespread but, importantly, easily fixable without the purchase of new thermocyclers, which may be prohibitively expensive in high burden settings. TUB-band detection on smear-negative sputa failed more frequently at incorrect ramp rates. As this band is required before a susceptibility result is reported, drug-resistant diagnoses are more likely to be missed at the incorrect ramp rate. Differences in ramp rate may hence partly explain previously reported variation in performance in smear-negative specimens 12-15,17; however, we only received responses to our queries regarding ramp rate from two studies in the systematic review that used smear-negative specimens. Although Xpert is often the initial first-line test for rifampicin-resistance, MTBDRplus is used for MDR-TB in several high TB-burden countries and to confirm isoniazid-susceptibility. Isoniazid can be included in the new WHO-endorsed MDR-TB second-line regimen 30. In response to the WHO's endorsement of the regimen, laboratories are scaling up MTBDRsl capacity for second-line drug resistance testing. MTBDRsl is thus of increasing importance; however, we did not include MTBDRsl for reasons of cost and feasibility. MTBDRsl is nevertheless similar to MTBDRplus, has the same recommended ramp rate, and is hence likely similarly adversely impacted. We will validate this in future work. We did not assess the impact of several ramp rates or thermocyclers, for reasons of cost and limited clinical specimens, but chose to use the most frequently reported incorrect ramp rate and a machine commonly used in our setting (the survey results showed a large diversity in thermocycler models used, with no predominant model).
We did not spike sputa with bacilli, as clinical specimens from patients, which we also included, are more suitable (bacilli from patients' sputum are suspended in a mucous matrix rather than in bubbles, as they are in spiked sputa). Furthermore, spiking was not done at very low concentrations (< 10² CFU/ml), where the incorrect ramp rate might have more of an impact; however, such concentrations of bacilli are clinically rare and often Xpert-negative (and hence unlikely to be tested by MTBDRplus). Examination of the impact of ramp rate at lower concentrations might be required for tests that succeed MTBDRplus and have higher sensitivity. Finally, despite repeated attempts to survey a wide range of laboratories, it is possible that non-respondents may have different ramp rate usage patterns (e.g., due to less TB or research expertise), which may limit generalisability. Since such laboratories are, if anything, more likely to use incorrect settings, this implies our estimated extent of incorrect ramp rate usage is an underestimate. Our study is the first to investigate ramp rate as a cause of suboptimal MTBDRplus performance. We recommend that 1) laboratories switch to the manufacturer-recommended ramp rate, 2) the manufacturer makes the recommended ramp rate more prominent in the documentation accompanying the assay, and 3) studies on the line probe assays publish the ramp rate used. Furthermore, we suggest that diagnostic laboratories who have conducted pilot evaluations of MTBDRplus on smear-negative specimens and found MTBDRplus to have unsatisfactorily high rates of non-actionable results repeat the evaluation if an incorrect ramp rate was originally used. In conclusion, incorrect ramp rate usage is a widespread problem that negatively affects the diagnostic accuracy of potentially thousands of MTBDRplus assays each month. New molecular tests for drug-resistance are critical; however, if they are not done using the correct manufacturer-recommended conditions, performance is compromised and recent promising technical advances (e.g., the ability to test smear-negative specimens) will not be fully capitalised upon. Laboratories doing MTBDRplus should hence ensure they use the correct thermocycler ramp rate of ≤ 2.2 °C/s.
3,283
2018-02-16T00:00:00.000
[ "Biology", "Medicine" ]
Quark cluster expansion model for interpreting finite-T lattice QCD thermodynamics We present a unified approach to the thermodynamics of hadron-quark-gluon matter at finite temperatures on the basis of a quark cluster expansion in the form of a generalized Beth-Uhlenbeck approach with a generic ansatz for the hadronic phase shifts that fulfills the Levinson theorem. The change in the composition of the system from a hadron resonance gas to a quark-gluon plasma takes place in the narrow temperature interval of $150 - 185$ MeV where the Mott dissociation of hadrons is triggered by the dropping quark mass as a result of the restoration of chiral symmetry. The deconfinement of quark and gluon degrees of freedom is regulated by the Polyakov loop variable that signals the breaking of the $Z(3)$ center symmetry of the color $SU(3)$ group of QCD. We suggest a Polyakov-loop quark-gluon plasma model with $\mathcal{O}(\alpha_s)$ virial correction and solve the stationarity condition of the thermodynamic potential (gap equation) for the Polyakov loop. The resulting pressure is in excellent agreement with lattice QCD simulations up to high temperatures. I. INTRODUCTION Since continuum-extrapolated lattice QCD (LQCD) thermodynamics results for physical quark masses became available [1][2][3][4], it has been a major goal to construct an effective low-energy QCD model that would reproduce them in the finite-temperature and low-chemical-potential domain to high accuracy, as a basis for extrapolations to the region of low temperatures and high baryochemical potentials where the sign problem still prevents LQCD from obtaining benchmark solutions. To this end we construct here a cluster expansion model which reproduces the hadron resonance gas at low temperatures and the quark-gluon plasma (QGP) with O(α_s) virial corrections at high temperatures. We postulate a generic behaviour of the scattering phase shifts in the hadronic channels which is temperature dependent and embodies the main consequence of chiral symmetry restoration in the quark sector: the lowering of the thresholds for the two- and three-quark scattering-state continuous spectrum, which triggers the transformation of hadronic bound states into resonances in the scattering continuum. The phase shift model is in accordance with the Levinson theorem, which results in the vanishing of hadronic contributions to the thermodynamics at high temperatures. We suggest a Polyakov-loop quark-gluon plasma model with O(α_s) virial correction in order to obtain a satisfactory agreement with lattice QCD simulations up to high temperatures, and solve the stationarity condition of the thermodynamic potential (gap equation) for the Polyakov loop. II. CLUSTER VIRIAL EXPANSION TO QUARK-HADRON MATTER The main idea for unifying the description of the quark-gluon plasma (QGP) and the hadron resonance gas (HRG) phase of low-energy QCD matter is the fact that hadrons are strong, nonperturbative correlations of quarks and gluons. In particular, mesons and baryons are bound states (clusters) of quarks and should therefore emerge in a cluster expansion of interacting quark matter as new, collective degrees of freedom.
For the total thermodynamic potential of the model, from which all other equations of state can be derived, we make the ansatz Ω(T; φ) = Ω_QGP(T; φ) + Ω_MHRG(T), where Ω_QGP(T; φ) = Ω_PNJL(T; φ) + Ω_pert(T; φ) describes the thermodynamic potential of the quark and gluon degrees of freedom, with a perturbative part Ω_pert(T; φ) and a nonperturbative mean-field part Ω_PNJL(T; φ) = Ω_Q(T; φ) + U(T; φ) that can be decomposed into the quark quasiparticle contribution Ω_Q(T; φ) and the gluon contribution, which is approximated by a mean-field potential U(T; φ). Note that all these contributions to the QGP thermodynamic potential are intertwined by the traced Polyakov loop φ as the order parameter for confinement. The correlations beyond the mean-field approximation, which correspond to the hadronic bound states and their scattering-state continuum, are described by the Mott-HRG pressure P_MHRG(T). This is a HRG pressure that takes into account the dissociation of hadrons by the Mott effect, when their masses would exceed the mass of the corresponding continuum of unbound quark states. A detailed description and numerical evaluation of these contributions will be given in the following. A. Beth-Uhlenbeck model for HRG with Mott dissociation For the MHRG part of the pressure of the model, we have P_MHRG(T) = −Ω_MHRG(T) = Σ_i P_i(T), where the sum extends over all mesonic (M) and baryonic (B) states from the particle data group (PDG), comprising an ideal mixture of hadronic bound and scattering states in the channel i that are described by a Beth-Uhlenbeck formula. The partial pressure P_i of the hadron species i is then given by a Beth-Uhlenbeck integral (3) over the in-medium phase shift of channel i, with d_i the degeneracy factor. For the phase shift of the bound states of N_i quarks in the hadron i we adopt a simple ansatz (4) that is in accordance with the Levinson theorem; inserting (4) into (3) gives the MHRG partial pressure in closed form. The temperature-dependent threshold mass of the 2-(3-)quark continuum for mesonic (baryonic) bound-state channels i is M_thr,i(T) = √2 [(N_i − N_s) m_l(T) + N_s m_s(T)], (6) where N_s = 0, 1, ..., N_i is the number of strange quarks in hadron i. The factor √2 originates from quark confinement in the following way: in the confining vacuum, the quarks are not simple plane waves with arbitrarily long wavelength; due to the presence of bag-like boundary conditions, their wavelength shall not exceed a certain length scale. Therefore, a minimal quark momentum applies in the quark dispersion relation, E_q,min(T) = √(m_q²(T) + p²_q,min), which for the choice p_q,min = m_q(T) results in E_q,min(T) = √2 m_q(T). For details, see [5]. The chiral condensate is an order parameter for the dynamical breaking of chiral symmetry, reflected in the corresponding temperature dependence of the dynamical quark masses m_q(T); here m_l (m_s) denotes the current-quark mass in the light (strange) quark sector, l = u, d.
In our present model, we do not treat the dynamical quark mass as an order parameter that should follow from the solution of an equation of motion (gap equation) minimizing the thermodynamic potential, as in the case of the Polyakov-loop variable φ; instead, we use the quantity Δ_l,s(T) from simulations of 2+1 flavor lattice QCD as an input. This quantity has been introduced in [6] as the subtracted chiral condensate, Δ_l,s(T) = [⟨q̄q⟩_{l,T} − (m_l/m_s) ⟨q̄q⟩_{s,T}] / [⟨q̄q⟩_{l,0} − (m_l/m_s) ⟨q̄q⟩_{s,0}], and was used later on, e.g., in [1,2]. Further, we assume that the temperature-dependent light quark mass m_l(T) follows the chiral condensate according to Eq. (9), with m_l = 5.5 MeV being the current-quark mass, and for the strange quark mass we adopt the analogous relation (10) with m_s = 100 MeV. The LQCD result for the temperature dependence of the chiral condensate [1,2] can be fitted by Δ_l,s(T) = (1/2) [1 − tanh((T − T_c)/δT)], (11) where T_c = 154 MeV is the common pseudocritical temperature of the chiral restoration transition of both LQCD collaborations and δT = 26 MeV is its width for the data from Ref. [1], while δT = 22.7 MeV for those from Ref. [2], see Fig. 1. For our present applications in modelling the QCD thermodynamics, we will use the fit of the chiral condensate (11), but with the modern value of T_c = 156.5 ± 1.5 MeV [7]. We have checked that the results for the total pressure of our model are practically inert against changing the value of δT within the above range of variation. Inserting (9) and (10) into (6) yields the temperature-dependent continuum thresholds of all hadronic channels explicitly in terms of Δ_l,s(T). The underlying quark and gluon thermodynamics is divided into a perturbative contribution Ω_pert(T), which is treated as a virial correction in two-loop order following Ref. [8], and a nonperturbative part described within a PNJL model, Ω_PNJL(T; φ) = Ω_Q(T; φ) + U(φ; T), where the quark quasiparticle contribution Ω_Q is the standard PNJL quasiparticle integral and the Polyakov-loop potential U(φ; T) takes into account the nonperturbative gluon background in a mean-field approximation, using the polynomial fit of Ref. [9], U(φ; T)/T⁴ = −(b₂(T)/2) φ̄φ − (b₃/6)(φ³ + φ̄³) + (b₄/4)(φ̄φ)², where the temperature-dependent coefficient b₂(T) is given by b₂(T) = a₀ + a₁ (T₀/T) + a₂ (T₀/T)² + a₃ (T₀/T)³, and the coefficients are given in Table I. C. Perturbative contribution It is well known that the lattice QCD thermodynamics at high temperatures T ∼ 1 GeV does follow a Stefan-Boltzmann-like behaviour ∝ T⁴, but with a 15-20% reduction of the effective number of degrees of freedom. It has been observed, e.g., in Ref. [8], that this deviation can be described by the virial correction to the pressure due to quark-gluon scattering at O(α_s), shown in Fig. 3. Here we modify the standard expression [10] by introducing a modified momentum integral in which the generalized Fermi distribution function of the PNJL model, for the case of vanishing quark chemical potential considered here, f_φ(p) = [φ Y_q (1 + 2 Y_q) + Y_q³] / [1 + 3 φ Y_q (1 + Y_q) + Y_q³], with Y_q = exp[−√(p² + m_q²(T))/T], replaces the ordinary Fermi function, and Λ = m_l(T) is the momentum range below which nonperturbative physics dominates and is accounted for by the dynamically generated quark mass. We use here a temperature-dependent, regularized running coupling α_s(T) [11][12][13] with r = 3.2 T, c = 350 MeV and N_c = N_f = 3. FIG. 3. Two-loop diagram for the contribution of the one-gluon exchange interaction to the thermodynamic potential of quark matter.
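As a small numerical illustration of the condensate fit and the Mott thresholds just described, the Python sketch below evaluates the tanh parametrization of Eq. (11) with the values quoted in the text. The linear relation between the dynamical quark mass and the condensate, and the 350 MeV vacuum light-quark mass, are assumptions standing in for the paper's Eqs. (9)-(10).

```python
import numpy as np

T_C, DELTA_T = 156.5, 26.0      # MeV, values quoted in the text
M_L0, M_S0 = 5.5, 100.0         # current-quark masses, MeV
M_VAC = 350.0                   # assumed vacuum dynamical light-quark mass, MeV

def delta_ls(T):
    return 0.5 * (1.0 - np.tanh((T - T_C) / DELTA_T))   # Eq. (11)

def m_light(T):
    return M_L0 + (M_VAC - M_L0) * delta_ls(T)          # assumed form of Eq. (9)

def m_strange(T):
    return M_S0 + (M_VAC - M_L0) * delta_ls(T)          # assumed form of Eq. (10)

def mott_threshold(T, n_quarks, n_strange=0):
    """sqrt(2) times the sum of in-medium quark masses, Eq. (6)."""
    return np.sqrt(2.0) * ((n_quarks - n_strange) * m_light(T)
                           + n_strange * m_strange(T))

for T in (140.0, 156.5, 185.0):
    print(f"T = {T:6.1f} MeV: 2-quark threshold = {mott_threshold(T, 2):6.1f} MeV")
```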
III. STATIONARITY CONDITION FOR THE POLYAKOV LOOP The pressure follows from the thermodynamic potential under the condition of stationarity with respect to variations of the order parameters. Since the chiral condensate is fixed by the fit (11) to the numerical result from lattice QCD, the Polyakov loop φ is the only free order parameter in the system to be varied, and this condition means ∂Ω(T; φ)/∂φ = 0. (23) It is realized by demanding that the separate contributions from the variations of the Polyakov-loop potential, ∂U/∂φ, the quark quasiparticle pressure, ∂Ω_Q/∂φ, with Y_q = exp[−√(p² + m_q²(T))/T], and the O(α_s) quark loop contribution, ∂Ω_pert/∂φ, add up to zero. The equation resulting from the stationarity condition (23) can be dubbed the "gap equation" for φ, since it has a structure similar to the quark mass gap equation known from Nambu-Jona-Lasinio models. The solution of this gap equation gives the temperature dependence of the traced Polyakov loop φ that is shown in Fig. 4 in comparison to the lattice QCD data for the renormalized Polyakov loop from the TUMQCD Collaboration [14] and the Wuppertal-Budapest Collaboration [1]. IV. RESULTS A. Pressure The main result of this work is a unified approach to the pressure of hadron-quark-gluon matter at finite temperatures that is in excellent agreement with lattice QCD thermodynamics, see Fig. 5. The nontrivial achievement of the presented approach is that the Mott dissociation of the hadrons described by the MHRG model pressure conspires with the quark-gluon pressure described by the Polyakov-loop quark-gluon model with O(α_s) corrections in such a way that the resulting pressure as a function of temperature yields a smooth crossover behaviour. By virtue of the Polyakov-loop improved perturbative correction, the agreement with the lattice QCD thermodynamics extends to the high temperatures of T = 1960 MeV reported in Ref. [15], see Fig. 6. B. Quark number susceptibilities In the present work we did not yet consider the generalization of the approach to finite chemical potentials, which would allow evaluating the (generalized) susceptibilities as derivatives of the pressure with respect to the corresponding chemical potential in appropriate orders. On that basis, ratios of susceptibilities can be formed, as they indicate different aspects of the QCD transition between the limiting cases of a HRG and a QGP. Here we would like to discuss, as an outlook to these extensions of the approach, one of the simplest susceptibility ratios, namely the dimensionless ratio of quark number density to quark number susceptibility, R₁₂(T) = n_q(T)/(μ_q χ_q(T)), (29) where n_q(T) = ∂P(T, μ_q)/∂μ_q |_{μ_q=0} and χ_q(T) = ∂²P(T, μ_q)/∂μ_q² |_{μ_q=0}. This ratio (29) has two well-known limits: at low temperatures, in the hadron resonance gas phase, it is given by the HRG expression (30), while in the QGP phase for massless quarks it approaches the ideal-gas limit (31). An evaluation of (29) for the present model for the QCD pressure would require its extension to finite μ_q, which we will perform in a subsequent work. FIG. 7. The dimensionless ratio of quark number density to quark number susceptibility R₁₂(T) = n_q(T)/(μ_q χ_q(T))|_{μ_q=0} as a function of temperature for μ_q/T = 0.4 (red solid line) and μ_q/T = 0.8 (blue dash-dotted line), compared to the lattice QCD data [16]: μ_q/T = 0.4 (red band), μ_q/T = 0.8 (blue band). For details, see text.
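The gap equation of section III can be illustrated numerically. The sketch below solves a pure-glue simplification: it minimizes only the polynomial potential U(φ; T)/T⁴, dropping the quark and O(α_s) terms that enter the full condition (23). The coefficients are the widely used polynomial-fit values associated with Ref. [9]; whether they match the paper's Table I is an assumption.

```python
import numpy as np
from scipy.optimize import minimize_scalar

A = (6.75, -1.95, 2.625, -7.44)   # a0..a3 (standard polynomial-fit values)
B3, B4, T0 = 0.75, 7.5, 270.0     # T0 in MeV

def b2(T):
    x = T0 / T
    return A[0] + A[1] * x + A[2] * x**2 + A[3] * x**3

def U_over_T4(phi, T):
    # At mu = 0 we take phibar = phi, so the potential reduces to a polynomial in phi.
    return -0.5 * b2(T) * phi**2 - (B3 / 3.0) * phi**3 + (B4 / 4.0) * phi**4

def polyakov_loop(T):
    """Stationary (minimizing) value of phi for the pure-glue potential."""
    return minimize_scalar(U_over_T4, bounds=(0.0, 1.2),
                           args=(T,), method="bounded").x

for T in (150.0, 200.0, 300.0, 500.0):
    print(f"T = {T:5.0f} MeV: phi = {polyakov_loop(T):.3f}")
```

In the full model, the quark quasiparticle and perturbative terms shift the stationary φ relative to this pure-glue estimate, which is why the solution of (23) is compared against lattice data in Fig. 4.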
In the present model we will use our knowledge of the composition as a function of temperature to define a proxy for (29) by interpolating between the two known limits (30) and (31) with the partial pressure fraction of the HRG, x_HRG(T) = P_MHRG(T)/P_tot(T), as R₁₂(T) ≈ x_HRG(T) R₁₂^HRG + [1 − x_HRG(T)] R₁₂^QGP. The result is shown in Fig. 7 for two values of μ_q/T for which lattice QCD results in the two-flavor case [16] are shown for comparison. V. DISCUSSION AND CONCLUSIONS The main result of the present work is a unified approach to the thermodynamic potential of hadron-quark-gluon matter at finite temperatures that is in excellent agreement with lattice QCD thermodynamics on the temperature axis of the QCD phase diagram. The key ingredient of this approach is the quark cluster decomposition of the thermodynamic potential within the Beth-Uhlenbeck approach [17], which allowed us to implement the effect of Mott dissociation in the hadron resonance gas phase of low-temperature/low-density QCD. Such a MHRG model description includes, in principle, the information about the spectral properties of all hadronic channels with their discrete and continuous parts of the spectrum, encoded in the hadronic phase shifts. Instead of selfconsistently solving the equations of motion, a coupled hierarchy of Schwinger-Dyson equations in the one-, two-, and many-quark channels (a formidable task of finite-temperature quantum field theory!), we applied here a very schematic model for the in-medium phase shifts that is in accordance with the Levinson theorem and sufficiently general to be applicable to all multiquark cluster channels. This phase shift model requires just the knowledge of the vacuum mass spectrum, which can come from the particle data group tables or from relativistic quark models, and the medium dependence of the multi-quark continuum threshold. The latter requires the knowledge of the quark mass (i.e., the chiral condensate) with its medium dependence as an order parameter of chiral symmetry breaking and restoration. Since a quark mean-field model of the (P)NJL type is not sufficient, as it lacks the backreaction of the hadron resonance gas on the quark propagator properties, we employ here the chiral condensate measured in continuum-extrapolated, full lattice QCD with physical current quark masses as an input. This procedure restricts the applicability of the present model to small chemical potentials only, where lattice QCD data for the chiral condensate are available. In a further development of the model, a beyond-mean-field derivation of the quark selfenergy shall be given. Furthermore, at the same level of approximation, the corresponding sunset-type diagrams for the Φ functional of the 2PI approach should be derived and evaluated. This allows one to calculate the generalized polarization-loop integrals which determine the analytic properties of the multi-quark states. These can be equivalently encoded in the corresponding medium-dependent phase shifts of the generalized Beth-Uhlenbeck approach, as has been demonstrated in particular examples for pions, diquarks [18,19] and nucleons [20] within the Polyakov-loop generalized NJL model. Another important aspect of the present approach is that it leads to a relativistic density functional theory for QCD matter in the QCD phase diagram, with the known limits of the HRG and pQCD manifestly implemented. Such an approach allows one to predict the existence and location of critical endpoints in the QCD phase diagram, as had been demonstrated, e.g., in Ref.
[21], where, in dependence on a free parameter, the model could have, besides the critical endpoint of the liquid-gas transition in the nuclear matter phase, another endpoint for the deconfinement transition, or none. This "crossover all over" case of the QCD phase diagram is impossible to address with two-phase approaches that use a Maxwell construction for the phase transition. Other models that are in use for analyses of the critical behaviour of QCD (see, e.g., [22,23]) do impose it by assuming a so-called "switch function" between HRG and QGP phases. They are valuable tools but do not have predictive power. With these perspectives for the further development of the approach developed here, we conclude this work. FIG. 1. Comparison of the fit (11) for the temperature dependence of the chiral condensate Δ_l,s(T) and the lattice QCD data for it from the Wuppertal-Budapest Collaboration [1] and the hotQCD Collaboration [2]. FIG. 2. Pressure as a function of the temperature for the hadron resonance gas (HRG) model with stable hadrons (red line) and for the HRG with Mott dissociation of hadrons (MHRG) according to the simple phase shift model (4) employed in the present work. These results are compared to the lattice QCD data from the HotQCD Collaboration [4] (green band) and the Wuppertal-Budapest Collaboration [3] (blue band). FIG. 4. The traced Polyakov loop φ from the solution of the stationarity condition (23) on the thermodynamic potential as a function of temperature (magenta solid line), compared with the lattice results for the renormalized Polyakov loop from the TUMQCD Collaboration [14] (green band) and the Wuppertal-Budapest Collaboration [1] (blue symbols).
4,026.4
2020-12-23T00:00:00.000
[ "Physics" ]
Textured multilayered piezoelectric structures for energy conversion Piezoelectric materials are essential for the conversion between mechanical and electrical energy, for example in ultrasound imaging and vibrational energy harvesting. Here, we make and explore the effects of a new design: co-sintered multilayers with texture (grains of a preferential crystallographic direction). The motivation is the combination of increased piezoelectric response in certain crystallographic directions; multilayer structures where thick films rather than bulk materials can allow higher-frequency operation and large area; and co-sintering to avoid detrimental effects from gluing layers together. Samples of the lead-free piezoelectric material Li0.06(K0.52Na0.48)0.94Nb0.71Ta0.29O3 with 0.25 mol% Mn (KNNLTM) were made by tape casting and co-sintering. NaNbO3 platelets with (100) orientation were used as templates to introduce texture, and polymethyl methacrylate (PMMA) was used as a pore-forming agent for making porous substrates. The electrical impedances of the co-sintered samples were recorded and analyzed by equivalent electrical circuit modelling. A texture of up to 85% in the [100] crystallographic direction was obtained. The samples displayed ferro- and piezoelectricity, with a maximum thickness coupling coefficient (kt = 0.18) between mechanical and electrical energy in the most textured sample. This demonstrates that the introduction of texture in multilayered, co-sintered piezoelectrics shows promise for improving devices for ultrasound imaging or energy harvesting. Introduction Energy conversion between the mechanical and the electrical domain is important in many existing, as well as upcoming, technologies. Piezoelectric materials, due to their coupling between mechanical deformation and charge generation, are well suited for this purpose. Many piezoelectrics based on the perovskite-structured Pb(Zr,Ti)O3 ceramics [1,2] have been developed with high efficiency for energy conversion [3]. Since these materials are also ferroelectric, alignment of the polarization found in each part of the polycrystalline material is possible, enabling efficient transduction between mechanical and electrical energy: electrical surface charge in response to mechanical deformation, and strain in response to electric field. For example, in energy harvesters, the piezoelectric material can convert mechanical energy to electricity with a high energy density [3,4], and in ultrasound transducers, the rapid generation of ultrasound waves and the conversion of back-reflected waves to an electrical signal can occur with limited energy loss to heat [5,6]. A few challenges are encountered in the area of piezoelectric transducers for energy conversion. By far the most applied and highest-performing materials are based on Pb(Zr,Ti)O3 and contain a significant amount of lead oxide, toxic to humans and the environment [7][8][9][10]. With the prospect of the widespread use of piezoelectrics as energy harvesters for consumer wearables, internet-of-things devices etc., where the anticipated rates of recycling and proper disposal are low, the use of lead-containing materials is especially critical. The use of lead oxides in consumer electronics is also addressed in upcoming legislation, e.g. in the European Union [11]. However, lead-free alternatives generally suffer from lower performance than those that are lead-based [9]. The drop in performance can to some extent be overcome by materials engineering, e.g.
alignment of the crystallographic directions of the grains in the ceramics (texturing) [12][13][14], such that the direction of largest piezoelectric response is utilized [15,16]. For bulk materials, texturing does however require several extra, cumbersome processing steps compared to simple pressing and sintering of powder. The processing must now introduce shear forces (e.g. via tape casting) that align template particles and thereby steer the growth into the desired crystallographic direction (templated grain growth) [13], followed by the lamination of many thin layers to form the desired bulk material thickness. For certain systems, such as energy harvesters [3,17,18], large-scale structural health monitoring [19], and high-frequency ultrasound transducers [5,20], a large area and low dimensionality in the form of a thick film (1-100 μm thickness) is the desired shape rather than bulk. For these shapes, the more complex geometry already requires complex shaping, and achieving texture should only require the addition of template particles for texture development. However, thick films have limited strength and are usually not self-supporting, such that a support layer (substrate) is required. So far, the co-sintering of textured piezoelectric thick films and supports has not been realized. This work therefore develops the synthesis of, and investigates energy conversion in, this new type of textured, multilayer piezoelectric transducer made by tape casting. The motivation behind this is that tape casting makes it possible to induce texture in the thick film by adding templates that steer the direction of grain growth [13]. Furthermore, porosity can be introduced by adding pore formers [21], such that porous backing layers can be made to adjust the density, acoustic impedance and attenuation of sound waves, important for applications in ultrasound transducers. Several layers of tape can be laminated (or potentially co-cast) to the desired layer configuration or to increase the thickness. Co-sintering this entire structure could make it possible to avoid the weak interfaces and limited temperature stability that result from gluing layers together after sintering. The glue layers (typically epoxy resins) can significantly modify the behavior of the device, in particular the electroacoustic response of an ultrasound transducer if the thickness of this layer is of the same order of magnitude as the thickness of the thick film [22]. This work shows how such textured, multilayered transducers can be synthesized and how the texture affects the energy conversion between the electric and mechanical domains and the physical properties of the transducers. The test material is the lead-free piezoelectric composition Li0.06(K0.52Na0.48)0.94Nb0.71Ta0.29O3 with 0.25 mol% Mn (KNNLTM). This is a modified version of the promising lead-free system based on K0.5Na0.5NbO3 (KNN) [9,23,24]. The specific KNNLTM composition was recently found to have large coupling coefficients for mechanical-electrical energy transfer when made as (100)-oriented single crystals [25]. We observe that a high degree of texture can be obtained in KNNLTM thick films co-sintered on a porous KNNLTM support after careful process control. Four multilayered, co-sintered samples are characterized in detail in this work, two textured and two non-textured, showing the highest piezoelectric response in the most textured sample.
Although the coupling coefficient between the mechanical and electrical energy in these first samples is low, this work shows the potential of introducing texture in co-sintered multilayered piezoelectric transducers. Multilayer transducer fabrication Slurries for tape casting were produced following a procedure by Foghmoes et al [26]. The binder system from [26] was added and the slurry was rolled carefully for 16-24 h before filtering, evacuation and tape casting. To introduce texture, platelet-shaped (100)-oriented NaNbO3 (made in-house according to [27]) corresponding to 10 vol% of the KNNLTM was added to the slurry after filtering, and homogenized by 10 min of mixing with a magnetic stirrer and 10 min of ultrasonication before evacuation and tape casting. For the slurries for the porous support, 50 vol% of the KNNLTM was replaced by polymethyl methacrylate (PMMA) (type 10G (d50 of 9 μm), Esprix, USA) or graphite (type UF1 (d50 of 3 μm), Graphit Kropfmühl AG, Germany) as pore formers [28][29][30]. The pore formers were added together with the KNNLTM powder at the start of the process. The tape casting speed was 20 cm min−1 for all tapes. Dense-layer tapes were cast with a doctor blade gap of 200-400 μm and porous-layer tapes with a gap of 500-1000 μm; all tapes were air-dried overnight. Dried tape cast sheets were screen printed with Pt paste (product #5542, ESL ElectroScience, PA, USA) and laminated at 110°C while the electrode paste was wet. Debinding and sintering were performed in air with the samples kept flat by two alumina plates. Characterization Debinding and densification of the tapes were studied with thermogravimetric analysis (TGA) and differential thermal analysis (DTA) (STA 409PC/PG, Netzsch, Germany), contact dilatometry (DIL 402, Netzsch, Germany) and optical dilatometry (TOMMI, Fraunhofer Institut für Silicatforschung ISC, Würzburg, Germany). Scanning electron microscopy (SEM) was conducted with a Hitachi TM3000 (Hitachi High-Technologies Europe GmbH, Germany) to investigate the microstructure of the sintered multilayers. The samples were mounted in epoxy with the cross-section up and polished with a 0.25 μm diamond suspension before SEM. The software ImageJ (version 1.48v, Wayne Rasband, National Institutes of Health, USA) was used to analyze the porosity in the multilayers from the micrographs of their cross-sections. X-ray diffraction (XRD) was conducted on as-sintered top surfaces of the multilayers with a Bruker D8 (Bruker, USA) to investigate the structure, phase purity and texture. Texture was calculated based on the intensity of the {h00} reflections relative to the other reflections according to the Lotgering equation [31]. The sintered multilayer samples were contacted with Ag paste (product AGG3790, Agar Scientific, UK) painted as one electrode on the top, and one electrode on the side in contact with the middle, co-sintered Pt electrode. Four co-sintered, multilayered samples were characterized in detail, two textured and two non-textured. The ferroelectric response was measured with a TF2000 ferroelectric characterization system (aixACCT, Germany), after which the samples were poled at 3 kV mm−1 for 30 min at room temperature. The electromechanical thickness-mode properties of the thick films were deduced from measurements of the complex electrical impedance as a function of frequency with a spectrum analyzer (Agilent 4395A, Palo Alto, CA, USA). The one-dimensional KLM equivalent electrical circuit [32][33][34] was used to calculate the theoretical behavior of the electrical impedance of the multilayer structure.
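For reference, the Lotgering factor used above to quantify texture is F = (P − P0)/(1 − P0), where P is the summed intensity of the {h00} reflections divided by the summed intensity of all reflections for the sample, and P0 is the same ratio for a randomly oriented reference. A minimal Python sketch; the peak intensities are illustrative, not measured data:

```python
def lotgering_factor(i_h00, i_all, i_h00_ref, i_all_ref):
    """Lotgering factor F = (P - P0) / (1 - P0) for (h00) texture.

    i_h00:   summed XRD intensities of the {h00} reflections (sample)
    i_all:   summed intensities of all reflections (sample)
    i_*_ref: the same sums for a randomly oriented reference powder
    """
    p = i_h00 / i_all            # oriented fraction in the sample pattern
    p0 = i_h00_ref / i_all_ref   # the same fraction in a random pattern
    return (p - p0) / (1.0 - p0)

# Illustrative numbers only (not data from this work):
print(lotgering_factor(i_h00=820.0, i_all=1000.0,
                       i_h00_ref=180.0, i_all_ref=1000.0))
# -> 0.78, i.e. 78% texture by the Lotgering measure
```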
From the experimental data, a fitting process was implemented to deduce several thickness-mode parameters of the piezoelectric thick films [22,35]. The studied structures are composed of four layers: the porous substrate, the bottom Pt and top Ag electrodes, and the piezoelectric thick film. Parameters of the electrodes were taken from Selfridge [36] and the acoustic parameters of the porous substrate were measured with two ultrasound transducers according to the procedure described in Bakarič et al [37]. All the physical and dimensional properties of the three inert layers were considered constant in the KLM model, along with the thickness and density of the thick films. Finally, five thickness-mode parameters of the thick films were deduced: the longitudinal wave velocity C_L, the relative dielectric constant at constant strain ε_33^S/ε_0, the effective thickness coupling factor k_t and the loss factors (mechanical: tan δ_m, electrical: tan δ_e). Fabrication of slurries, tapes and multilayers The results of the thermal analysis (TGA and DTA) are shown in figure 1. The dense tape contains no pore former and therefore has the lowest mass loss (35 wt%). Its mass loss occurs gradually from 100°C to 400°C, with a marked exothermic DTA signal around 400°C from the binder burning out. The tape with graphite as a pore former also shows a similar exothermic reaction, although shifted a little to higher temperatures, followed by another exothermic signal as the graphite starts to burn at around 550°C. The total mass loss of the graphite-based tape is very high (60 wt%). Using PMMA as a pore former results in an endothermic debinding as the PMMA decomposes around 400°C, masking the exothermic signal from the binder. This tape has a total weight loss of 50 wt%. Figure 2 shows the shrinkage and shrinkage rate of the tapes during debinding and sintering, measured with both contact and optical dilatometry. In the contact dilatometer (figure 2(a)) all samples shrink significantly already at 50°C-150°C, then more at 700°C. The dense tape shows a further shrinkage at 1100°C. The second shrinkage step, around 700°C, is a little delayed in the PMMA-based tape compared to the dense tape, and spans a wider temperature range in the graphite-based tape (easiest to observe from the shrinkage rates in figure 2(b)). The final shrinkage around 1100°C is also less pronounced in these two tapes with pore formers than in the dense tape. All samples have a high total shrinkage: 39% in the dense tape, 66% in the PMMA tape and 79% in the graphite tape. The largest factor in these different total shrinkages is the extent of shrinkage at low temperature (50°C-150°C), which is strongly enhanced in the tapes with pore formers compared to the dense tape. Shrinkage at such low temperatures cannot be due to densification, but can be an effect of the load from the contact dilatometer pushrod on the soft tapes. Measurements with an optical dilatometer (only available up to 1000°C) were therefore performed to avoid the samples being mechanically deformed. Here, we see that the overall shrinkage is lower, and especially the significant shrinkage at low temperatures is avoided (figure 2(c)). Still, the overall order of total shrinkage is the same as for the measurements with the contact dilatometer. The shrinkage observed in real samples after the debinding step only, and after the complete debinding and sintering program (see Methods section), is in good agreement with the predictions from the optical dilatometer.
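As an illustration of the kind of fitting described above, simplified to a single free-resonating piezoelectric plate rather than the paper's full four-layer KLM stack, the thickness-mode electrical impedance of a lossless plate can be written Z(f) = (1/(j2πfC0))·[1 − k_t²·tan(x)/x] with x = πf/(2f_p), where f_p is the parallel resonance frequency; fitting |Z| then yields k_t. A minimal Python sketch with synthetic data; every parameter value is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import curve_fit

def thickness_mode_absZ(f, c0, kt, fp):
    """|Z| of a lossless piezoelectric plate in thickness mode.

    Z(f) = 1/(j*2*pi*f*C0) * (1 - kt^2 * tan(x)/x),  x = pi*f/(2*fp)
    c0: clamped capacitance [F], kt: thickness coupling, fp: parallel resonance [Hz]
    """
    x = np.pi * f / (2.0 * fp)
    z = (1.0 - kt**2 * np.tan(x) / x) / (2.0j * np.pi * f * c0)
    return np.abs(z)

# Synthetic "measurement" (illustrative values, not data from this work)
f = np.linspace(5e6, 25e6, 2000)
z_meas = thickness_mode_absZ(f, c0=1.2e-10, kt=0.18, fp=15e6)
z_meas *= 1.0 + 0.01 * np.random.default_rng(0).standard_normal(f.size)

popt, _ = curve_fit(thickness_mode_absZ, f, z_meas, p0=[1e-10, 0.3, 14e6])
print("fitted C0, kt, fp:", popt)  # recovers ~1.2e-10 F, ~0.18, ~15 MHz
```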
Microstructures of co-sintered transducers of dense layers on porous supports are shown in figure 3. Please note that the right-hand column is a higher-magnification version of the image in the left-hand column. Figure 3(a) shows the textured, dense layer on the PMMA-based porous support. From top to bottom in this micrograph are the outer Ag electrode, the thick KNNLTM film, the inner Pt electrode and the porous KNNLTM substrate. The outer Ag electrode (applied after sintering to avoid diffusion into the KNNLTM) is 10-20 μm thick, quite porous (55%-75% dense) and adheres well to the dense KNNLTM layer. The textured KNNLTM layer is 50-75 μm thick, 81%-86% dense and has some distinct, large (∼10 μm) brick-shaped grains from the templated grain growth process. The Pt electrode is close to 100% dense and ∼5 μm thick, with a few discontinuities in the shown 2D section of the sample. The adherence between the dense KNNLTM tape and the inner Pt electrode is very good, while in samples made on very thick supports (985 μm, not shown), a tiny gap (<1 μm) between the dense KNNLTM tape and the Pt could be observed. Laminating samples after the Pt electrode had dried caused a macroscopically visible delamination in all samples (not shown). The microstructure of the PMMA-based porous support contains the typical cube-shaped grains (<10 μm) of KNN-based piezoelectrics, with some irregular, spherical pores (∼10 μm) left after pyrolysis of the PMMA spheres. The porous PMMA-based support has a density of around 55%. Figure 3(b) shows a sample of the same layer configuration as (a), but without texture in the dense layer. The overall structure is similar, but the dense KNNLTM layer is denser (92%). Initial studies of co-sintering the dense KNNLTM tapes with porous tapes made with graphite as the pore former resulted in a completely delaminated and curled porous tape, as seen in figure 4(a). From the microstructure image in figure 3(c) (only the graphite-based layer visible), we can also see that the graphite pore former resulted in a less pronounced pore morphology, smaller pores (<5 μm) and very low porosity (83% dense). PMMA-based tapes were therefore chosen as the porous support for the remainder of this study, due to their higher porosity and compatibility in co-sintering. An example of a successful co-sintering and contacting for electromechanical testing is shown in figure 4(b). Figure 5 shows the XRD patterns of co-sintered KNNLTM multilayer transducers recorded from the sample top surface. Some peaks from parts of the Ag electrode are visible in one of the samples at 38° and 44° 2θ. The main phase of all samples is KNNLTM. Some reflections between 20° and 30° 2θ are visible to various extents, typical of Nb-rich secondary phases (e.g. K2Nb4O11 or K4Nb6O17) [38,39]. The largest differences between the samples are the relative intensities of the KNNLTM reflections. An increase in the {h00}-type reflections relative to the other {hkl} reflections indicates that the desired (100)-type texture has been successfully introduced in the sample (the degree of texture quantified by the Lotgering calculation is shown in the figure). Characterization of co-sintered multilayers The results of the electromechanical performance are summarized in table 1 at the end of this section. When analyzing the properties, it should be kept in mind that the non-textured samples have significantly higher density (92%-95%) than the textured samples (81%-84%), as can be seen in figure 6.
The ferroelectric response of the multilayers is shown in figure 7. From the hysteresis loops in figure 7(a), we can see that the polarization-electric field response of the samples shows ferroelectric hysteresis. The magnitudes of the saturated (P_s) and remanent (P_r) polarization, the coercive field (E_c) and the gap between the start and end polarization values differ strongly between the samples. For example, P_r varies from 6 to 20 μC cm−2, which is in the typical range for KNN-based samples [8], while the specific KNNLTM composition is expected to have ∼17 μC cm−2 when non-textured [40], and ∼7 μC cm−2 as a (100)-oriented single crystal [25]. The polarization decreases in the textured compared to the non-textured samples, expected both from the alignment of the polar axis away from the direction of the electric field and from the lower density of these samples compared to the non-textured ones [41]. An unexpectedly high polarization response is a typical sign of leakage current contributing to the polarization signal [41]. From the current signal in figure 7(b) it is easier to distinguish the contributions of leakage current and ferroelectric switching current, as the latter is a peak occurring at the coercive field, while the former occurs at all field strengths, sometimes increasing with the field strength magnitude. All samples display signs of ferroelectric switching, but it is obvious that sample 4-non-textured is dominated by leakage current rather than switching current. We can also observe a strong asymmetry between the positive and negative field directions, which is common for asymmetric samples and electrode configurations, like the different top and bottom electrodes used in this work [41]. The real and imaginary parts of the electrical impedance (Z), recorded from the poled samples and fitted to the KLM model, are shown in figure 8 (for two samples). The thickness of the porous substrate was in the same range as the ratio of the longitudinal wave velocity in this substrate to the fundamental resonance frequency of the piezoelectric thick film (in free-resonator conditions), which leads to several peaks (coupled resonance) [42]. Moreover, the bottom platinum electrode must be taken into account, since its thickness is also of the same order as that of the piezoelectric thick films. The results from the KLM model applied to the impedance spectra are shown in figure 9 as a function of the degree of texture in the samples. The dielectric constant at constant strain (figure 9(a)) is around 200-400 for the non-textured samples, and decreases to 250 in the textured samples. This is in agreement with both their higher porosity and previous studies on textured KNN-based samples [43]. The elastic constant C_33^D (figure 9(b)), deduced from the longitudinal wave velocity and density, decreases with texture, probably related to the higher porosity of the textured samples compared to the non-textured ones. Figure 9(c) shows the thickness coupling coefficient, k_t, of the samples. Here, we can see that the highest k_t value (18%) is found in the sample with the highest degree of (100) texture. The KLM model also gives values for the mechanical and dielectric losses, but both the mechanical loss values (20%-30%) and the electrical loss values (2.2%-3.7%) are unreliably high in our materials, probably related to inhomogeneous thickness that the 1D model cannot account for.
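The two derived quantities used here and at the start of the next paragraph follow directly from the fitted longitudinal wave velocity C_L and the density ρ: the elastic constant C_33^D = ρ·C_L² and the acoustic impedance Z_ac = ρ·C_L. A short sketch; the numerical values are assumptions for illustration, not entries from table 1:

```python
def derived_acoustic_quantities(rho, c_l):
    """Elastic constant C33D = rho * cL^2 and acoustic impedance Zac = rho * cL.

    rho: density [kg/m^3], c_l: longitudinal wave velocity [m/s]
    """
    c33d = rho * c_l**2   # [Pa]
    z_ac = rho * c_l      # [Rayl]; divide by 1e6 for MRayl
    return c33d, z_ac

# Illustrative values for a porous KNN-based thick film (assumed):
c33d, z_ac = derived_acoustic_quantities(rho=3800.0, c_l=4800.0)
print(f"C33D = {c33d/1e9:.1f} GPa, Zac = {z_ac/1e6:.1f} MRayl")
# -> ~87.6 GPa and ~18.2 MRayl, inside the 16-21 MRayl range reported below
```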
The acoustic impedance, Z_ac, of the samples can be calculated from the longitudinal wave velocity, C_L, obtained from the KLM model, combined with the density, ρ, as Z_ac = ρ·C_L. The acoustic impedance of the multilayer samples is in the range of 16-21 MRayl (table 1). Fabrication of multilayered KNNLTM transducers The thermal analysis shows several aspects that need to be considered for successful multilayer fabrication. The TGA results (figure 1(a)) show that the co-sintering, especially of the porous part, requires careful control of the thermal program and access to air, such that the debinding and mass loss can occur without damaging the samples. This was successfully achieved with the debinding program presented in the Methods section, with isothermal holds where the mass loss is strongest, and the use of a furnace with flowing air. The flowing air is especially crucial since the samples had to be kept between alumina plates to stay flat. Differences in sintering shrinkage are also a potential cause of bending and delamination during sintering [44], which is observed to some extent in the slight curvature of the multilayers (the direction of bending agrees with a higher shrinkage of the porous layer versus the dense one, figure 3). The overall shape of the tapes' dilatometry curves resembles that of powder of the same composition [40], but the differences in the magnitude of shrinkage are related to the softness and compressibility at low temperature, and to the extra shrinkage related to the use of pore formers [21]. Here, the choice of PMMA as the pore former made it possible to limit the difference in total shrinkage to ∼10% between the dense and the porous layers, and to successfully co-sinter this combination, in contrast to the delamination seen with graphite (20% shrinkage difference) (figures 3(c) and 4(a)). Since the dense and the porous layers are both KNNLTM, no cracking or delamination due to differences in thermal expansion coefficients (TEC) is expected. Although the TEC of Pt is quite low (∼9 × 10−6 K−1) [45] compared to other metals, the difference from the TEC of KNNLTM is probably more than 1 × 10−6 K−1 (the TEC of KNN is ∼7 × 10−6 K−1) [46]. In our system, the stresses due to sintering or TEC mismatches might be somewhat alleviated by the porosity (decreasing stiffness and thus increasing crack resistance) [47] and the low thickness (decreasing the total energy) of the Pt layer [48]. Still, the TEC mismatch or sintering stress could be the origin of the minor delaminations between Pt and KNNLTM observed when the support layer thickness was increased. Densification of KNN-based ceramics is generally challenging [40,49], which is also reflected in the density of the textured multilayer transducers made in this work. A density of 81%-84% in the textured layers is significantly lower than what can be expected in bulk ceramics [24,40]. Low density is known to cause a reduction of the dielectric permittivity and of piezoelectric coefficients such as d33 in the transducers, but does not have such a strong effect on the k_t coupling coefficient [50]. In the non-textured samples, the density is higher, implying that the templated grain growth into texture hinders the densification. Combined with the development of a sufficient degree of texture in the thick films, without any limitations in the support thickness, these are important contributions to the research on textured ceramics.
First of all, it challenges the conception that templated grain growth only occurs at >95% density [51]. Secondly, it demonstrates that templated grain growth can also occur in thick films, where the feature size is lower than in bulk samples, and also when co-sintered in a multilayered structure. This is promising for the introduction of texture in other novel geometries with fine feature size, such as 3D-printed structures. Electromechanical performance of the multilayered KNNLTM transducers The results from the KLM model based on the electrical impedance measurements show that the texture, as expected, improves the piezoelectricity, in the form of the thickness coupling coefficient k_t. Due to the manufacturing technique (several manual process steps and thin layers), a higher sample-to-sample variation can be expected for thick-film multilayered samples compared to bulk samples made by pressing powders and sintering pellets followed by machining and polishing of surfaces. We do observe some variation in performance between otherwise similar samples (3-non-textured and 4-non-textured), especially in the dielectric permittivity and coupling coefficient (figure 9). Despite the higher density of 4-non-textured, this sample has a lower dielectric permittivity than 3-non-textured. When evaluating the ferroelectric response, it is obvious that sample 4-non-textured has a higher conductivity than 3-non-textured (figure 7). This could indicate chemical impurities, secondary phases, variations in thickness and electrode coverage, or other microstructural features that promote lower dielectric permittivity and pathways of higher conductivity through the thick film. The small differences between samples 1-textured and 2-non-textured can be ascribed to the small difference in the degree of texture, but also here, microstructural or chemical effects could come into play. Still, such defects should be possible to limit if the multilayered synthesis is carefully controlled, as all the techniques utilized (with the exception of the synthesis of NaNbO3 particles for templated grain growth) are up-scalable and have been shown to be ideal for the fabrication of e.g. multilayered ceramic capacitors [52] once transferred to industrial environments. This is not the scope of this study; rather, it demonstrates that the introduction of texture increases the performance for energy conversion, and that tape casting and co-sintering of multilayers with textured piezoelectrics are possible. As a lead-free material system, the KNNLTM multilayers have an advantage for energy harvesting for consumer electronics. The most promising application would be as large-area harvesters at elevated temperature, in order to best utilize the scalability of the processing techniques applied and the higher temperature stability of the co-sintered system compared to glued transducers, which can mechanically deform above 70°C. The low mechanical quality factor corresponds to a large frequency bandwidth, advantageous for energy harvesting from environments with a large variation in vibration sources [3]. The low permittivity is also an advantage for use in a non-resonant application, where d_ij²/ε_ii,r is the figure of merit [9]. Still, the low value of k_t is usually connected with a low transverse coupling coefficient, k_31, and a low transverse piezoelectric charge coefficient, d_31, important for energy harvesting in a cantilever structure.
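To make these figures of merit concrete, the non-resonant one is d²/(ε0·ε_r) (units of m³ J−1), while the resonant one, discussed in the next paragraph, is k²·Q_m. A short sketch comparing a low-permittivity lead-free film with a generic high-permittivity material; all numerical values are assumed for illustration, not data from this work:

```python
EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def fom_off_resonance(d, eps_r):
    """Off-resonance harvesting figure of merit d^2 / (eps0 * eps_r) [m^3/J]."""
    return d**2 / (EPS0 * eps_r)

def fom_resonant(k, q_m):
    """Resonant figure of merit k^2 * Qm (dimensionless)."""
    return k**2 * q_m

# Low permittivity partly compensates a lower charge coefficient:
print(fom_off_resonance(d=80e-12, eps_r=300))    # ~2.4e-12 m^3/J
print(fom_off_resonance(d=300e-12, eps_r=3000))  # ~3.4e-12 m^3/J
# Resonant case, using kt = 0.18 and tan(delta_m) = 25% -> Qm = 4:
print(fom_resonant(k=0.18, q_m=1 / 0.25))        # ~0.13
```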
Higher coupling coefficients and charge coefficients could potentially be obtained by optimizing the poling parameters, for example by increasing the time, the electric field strength, and especially the temperature during poling. Furthermore, increasing the density of the dense layer could be very important for an energy harvester operating at resonance, where the figure of merit is k²·Q_m (Q_m = 1/tan δ_m) [3]. Since a denser material has lower mechanical losses (because porosity increases acoustic attenuation, which is related to mechanical losses in piezoelectric materials), denser materials should have a higher figure of merit at the resonance frequency. Improving the density would require careful optimization of all processing steps, from powder preparation to the sintering program. As thick films supported on porous substrates, these transducers have many advantages for use in high-frequency medical ultrasound. The low acoustic impedance (<20 MRayl) compared to traditional lead-based systems (>30 MRayl) [53] is very favorable, since an impedance value closer to that of biological tissue (1.5 MRayl) [53] reduces the loss of ultrasonic energy across the interfaces between the transducer and the biological tissue to be examined. The relatively low dielectric permittivity of the transducers and the values of the sound velocity are suitable for a single-element transducer operating at high frequency [9]. Still, the thickness coupling coefficient should be increased to values above 30%. Improving the poling conditions, as discussed above, is the best pathway towards this. The substrate layer should also be made thicker or more porous, such that the coupled-resonator behavior is suppressed and this layer can be considered as a backing with semi-infinite-medium behavior. The co-sintering process has the advantage of producing layers of similar acoustic impedance, in contrast to gluing the piezoelectric to a backing, typically with a low-acoustic-impedance polymer-based glue [22]. In general, the integrated, co-sintered multilayer structure with texture demonstrated for the first time in this work shows promise for use in ultrasound transduction once slightly higher coupling coefficients are realized. Conclusion Piezoelectric transducers of the lead-free composition Li0.06(K0.52Na0.48)0.94Nb0.71Ta0.29O3 with 0.25 mol% Mn (KNNLTM) were studied as multilayers made by tape casting and co-sintering. By using PMMA as the pore former, porous support layers could successfully be co-sintered with a dense KNNLTM thick film and a Pt inner electrode, while tapes with graphite as the pore former could not be co-sintered due to larger differences in shrinkage. Texture of up to 85% in the [100] crystallographic direction could be developed by templated grain growth from NaNbO3 templates. The sample with the highest degree of texture displayed the highest piezoelectric response in the form of the highest thickness coupling coefficient k_t of 0.18. Our work demonstrates that the introduction of texture and increased piezoelectric response is possible via co-sintering on porous substrates, and that this method shows promise for applications in ultrasound transducers and energy harvesting.
6,728.4
2019-12-23T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Research on Stability Control Technology of Hazardous Chemical Tank Vehicles Based on Electromagnetic Semi-Active Suspension : Liquid sloshing in the tank can seriously affect the stability of hazardous chemical tanker trucks during operation. To this end, this paper proposes a solution based on an electromagnetic semi-active suspension system to prevent chemical spills and ensure safe driving of hazardous chemical tank vehicles. A comprehensive investigation was conducted across four domains: theoretical research, simulation model establishment, co-simulation platform construction, and simulation data analysis. Three fuzzy controllers were used to suppress the vibration of the tank vehicles. Introduction Road transportation using tank vehicles, distinguished by its large loading capacity and low transportation cost, serves as the main mode of transporting hazardous liquid chemicals in China [1]. The tanker truck is a heavy-duty transportation vehicle with a specialized tank structure, with high bearing capacity, a high center of gravity, and large volume. Under non-full-load conditions and in complex operational scenarios, the tanker truck's internal liquid is susceptible to significant oscillations, interacting intensively with the tank body. This interaction can alter the vehicle's center of gravity, precipitating a drastic shift in the axle load and posing risks of tilting or even rolling over. These dynamics seriously undermine the operational safety and stability of the vehicle. In order to reduce the occurrence of rollover accidents, many measures have been proposed to improve the lateral stability of tank vehicles. Yim et al. [2][3][4][5] controlled the active lateral stabilizer bar based on different control algorithms, using the lateral load transfer rate as the control objective to reduce vehicle oscillation. Xu et al. [6,7] established an active-steering-based anti-rollover control system for vehicles, which can effectively reduce the vehicle's roll angle according to experimental results. Hu et al. [8][9][10][11] used the lateral sway angular speed as the control variable, and determined the additional lateral sway moment using different control calculation methods. They employed differential braking to apply the lateral sway moment, thereby controlling the vehicle's stability. Through simulation experiments, they confirmed that such a method effectively suppresses the influence of the vehicle's oscillation. However, the methods mentioned above have not effectively resolved the contradiction between vehicle comfort and handling stability. Lateral stabilizer bars cannot adjust the roll-angle stiffness in real time, which may cause excessive vehicle roll during high-speed turning. Using differential braking and active steering not only introduces safety hazards during high-speed driving, but also contributes to driver fatigue and insecurity, negatively impacting the driving experience. To address these issues, this paper proposes the use of controllable suspension technology to improve the driving stability of liquid tankers. The proposed controllable suspension can directly control the body sway of the liquid tanker, adjust the body posture in real time, and diminish the tank sway, all without compromising the driver's experience. The proposition considers both the comfort and the handling stability of the vehicle. Among controllable suspensions, the semi-active suspension has the advantages of low energy consumption and low cost, with a control effect similar to that of the active suspension.
Moreover, the electromagnetic semi-active suspension, which adopts the electromagnetic principle to change the damping characteristics of the damper, has a faster response speed and higher reliability. The electromagnetic semi-active suspension is composed of a sensor, an actuator, a controller, and a power supply. The sensors are designed to detect body posture and road information. The actuators, composed of electromagnetic actuators and dampers, along with the controller, are tasked with calculating the damping of the electromagnetic semi-active suspension. The controller sends control signals to the actuators, ensuring semi-active control of the suspension. When the electromagnetic linear actuator is inactive, the damper leverages hydraulic oil from the working cylinder and damping springs to achieve damping. Conversely, when the electromagnetic linear actuator is operational, the controller modifies the device's electromagnetic impedance by altering the circuit equivalent resistance. This flexible adjustment of the suspension damping creates a controlled damping force, enabling semi-active control of the vehicle. By adopting electromagnetic semi-active suspension technology, the liquid tanker can attain enhanced driving stability and an enriched driving experience for the driver, while ensuring the comfort and handling stability of the vehicle. Such an advancement contributes significantly to reducing the risk associated with dangerous chemical transport accidents, thereby improving overall road safety. Structural Principle of Electromagnetic Semi-Active Suspension The suspension system is a key component of the vehicle and an important device to ensure the smooth running and stable handling of the vehicle. Passive suspension refers to a suspension whose stiffness and damping coefficient do not change with the external state. Semi-active suspension is a controllable suspension system that can adjust its damping parameters to improve vehicle ride comfort and stability. Compared with semi-active suspension, passive suspension has the advantages of a simple structure and low cost. However, there is no energy supply device in the passive suspension system, and its stiffness and damping cannot be artificially controlled and adjusted during driving, so it is difficult for the passive suspension to balance the requirements of vehicle driving, comfort, and handling stability, and it is increasingly unable to meet the high-performance and high-energy-efficiency needs of rapidly developing vehicle technology. Therefore, electromagnetic semi-active suspension technology has gradually become a research hotspot. As shown in Figure 1a, the electromagnetic semi-active suspension device is mainly composed of an outer magnetic yoke, a permanent magnet, a moving coil, and an inner core. The shock absorber piston rod is used as the inner yoke of the electromagnetic semi-active suspension, wherein the coil skeleton is fixed to the piston rod. When the suspension vibrates and the shock absorber piston reciprocates, the coil moves synchronously with the shock absorber and cuts the magnetic induction lines, generating an induced current. The generated induced current can be used to supply the semi-active control of the device, and can also be stored to supply other electrical equipment. The electromagnetic semi-active suspension used in this paper is based on a cylindrical damper structure, with the addition of an electromagnetic linear actuator. The electromagnetic linear actuator is embedded in the suspension damper, as shown in Figure 1a [12]. The permanent magnet of the electromagnetic actuator adopts the Halbach array structure [13,14], which can effectively improve the electromagnetic characteristics of the device, as shown in Figure 1b. The left side of the Halbach array is the region where the field is enhanced, which is also the region where the coil is active and a greater induced current can be generated; on the right side is the region where the field is weakened. The electromagnetic semi-active suspension retains the traditional suspension's piston hydraulic cylinder, which can passively absorb shock. In addition, the added electromagnetic linear actuator can also provide semi-active control of the vehicle body. During semi-active control, the vehicle control module executes a preset suspension control strategy based on the vehicle's posture as detected by sensors. By controlling a supercapacitor to provide a corresponding current to the electromagnetic linear actuator, the electromagnetic damping force of the actuator is adjusted to improve the vehicle's posture and enhance driving stability. Therefore, the electromagnetic damping force can be expressed by the thrust coefficient and the back electromotive force coefficient of the electromagnetic actuator. In the electromagnetic semi-active suspension, the thrust coefficient of the electromagnetic actuator is denoted as k_i = B_i L_i, and the back electromotive force coefficient of the electromagnetic actuator is denoted as k_δ = B_δ L_δ. Let the damping of the electromagnetic actuator be denoted as C_a = k_i k_δ / (R + r), where R is the equivalent resistance of the external circuit and r is the internal resistance of the coil. Then, the electromagnetic damping force of the electromagnetic semi-active suspension can be written as F_a = C_a v, where v is the relative velocity between the piston and the damper cylinder.
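A minimal numerical sketch of this damping law; the parameter values are illustrative assumptions, not the actuator specification from the paper:

```python
def em_damping_force(k_i, k_delta, R, r, v_rel):
    """Electromagnetic damping force of the semi-active actuator.

    k_i:     thrust coefficient (k_i = B_i * L_i)               [N/A]
    k_delta: back-EMF coefficient (k_delta = B_delta * L_delta) [V*s/m]
    R, r:    external circuit resistance and coil resistance    [ohm]
    v_rel:   relative velocity between piston and cylinder      [m/s]
    """
    c_a = k_i * k_delta / (R + r)  # equivalent damping C_a [N*s/m]
    return c_a * v_rel             # F_a = C_a * v

# Varying the external resistance R is the semi-active "knob":
for R in (0.5, 2.0, 8.0):
    f = em_damping_force(k_i=45.0, k_delta=45.0, R=R, r=0.5, v_rel=0.3)
    print(f"R = {R:4.1f} ohm -> F_a = {f:6.1f} N")
# Larger R -> smaller C_a -> softer damping, as described in the text.
```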
Control Strategy for Electromagnetic Semi-Active Suspension For a more refined analysis of the driving dynamics of a tanker truck, the vehicle is abstracted into a seven-degree-of-freedom model, as depicted in Figure 2. This model thoroughly considers the vibration characteristics of the tanker truck body in the vertical, pitch, and roll directions. It also takes into account the vertical vibration characteristics of the four electromagnetic semi-active suspensions bridging the vehicle body. To mitigate body vibrations in the vertical, pitch, and lateral-tilt directions, this paper employs a fuzzy control method [15,16] for adjusting the damping of the electromagnetic semi-active suspension. Characterized by its adaptability and robustness, fuzzy control effectively addresses the challenges posed by the electromagnetic semi-active suspension in maintaining liquid tanker stability. According to Newton's second law, the dynamic equation of the sprung mass of the vehicle and the dynamic equations for the four unsprung masses are formulated (one common form is sketched below), where m_s and m_ui (i = 1, 2, 3, 4) represent the sprung mass of the vehicle and the unsprung masses of the four wheels, respectively; z_c represents the vertical displacement of the vehicle body; φ and θ represent the pitch and roll angles, respectively; J_φ and J_θ represent the pitch and roll moments of inertia of the vehicle, respectively; F_si (i = 1, 2, 3, 4), F_ti (i = 1, 2, 3, 4) and K_ti (i = 1, 2, 3, 4) represent the passive damping force, spring force, and tire stiffness coefficient of each suspension, respectively; z_i (i = 1, 2, 3, 4) represents the displacement of each unsprung mass; q_i (i = 1, 2, 3, 4) represents the road roughness excitation; U_z represents the vibration control force in the vertical direction; M_θ and M_φ represent the roll and pitch vibration moments of the vehicle body, respectively; B and L represent the wheelbase and track width of the tanker truck, respectively; b represents the distance from the center of mass of the tanker truck to the front axle; a represents the distance from the center of mass of the tanker truck to the rear axle; u_i (i = 1, 2, 3, 4) represents the control force of each electromagnetic semi-active suspension.
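Because the equation bodies were lost with the original figures, the sketch below assembles one common form of such a seven-degree-of-freedom ride model; the sign conventions, small-angle assumptions, and parameter names are mine, chosen for illustration, and the paper's exact equations may differ in detail:

```python
import numpy as np

def corner_offsets(a, b, L):
    """Corner positions relative to the CoG: i = 0..3 -> FL, FR, RL, RR.
    x: longitudinal offset (+ front), y: lateral offset (+ left)."""
    return np.array([b, b, -a, -a]), np.array([L / 2, -L / 2, L / 2, -L / 2])

def seven_dof_rhs(state, u, q, p):
    """Derivatives of the 14 states [z_c, phi, theta, z_1..z_4, and their rates].

    u: four semi-active control forces [N], q: four road inputs [m],
    p: parameter dict (ms, mu, Jphi, Jtheta, ks, cs, kt, a, b, L).
    state must be a length-14 ndarray."""
    state = np.asarray(state, dtype=float)
    x, y = corner_offsets(p["a"], p["b"], p["L"])
    z_c, phi, theta = state[0], state[1], state[2]
    z_u = state[3:7]
    dz_c, dphi, dtheta = state[7], state[8], state[9]
    dz_u = state[10:14]

    # Body displacement/velocity at each corner (small angles).
    z_b = z_c - x * phi + y * theta
    dz_b = dz_c - x * dphi + y * dtheta

    # Suspension force on the body: spring + passive damper + control force u.
    f_s = p["ks"] * (z_u - z_b) + p["cs"] * (dz_u - dz_b) + u
    f_t = p["kt"] * (q - z_u)  # tire spring force on the unsprung mass

    ddz_c = f_s.sum() / p["ms"]            # heave
    ddphi = (-x * f_s).sum() / p["Jphi"]   # pitch
    ddtheta = (y * f_s).sum() / p["Jtheta"]  # roll
    ddz_u = (f_t - f_s) / p["mu"]          # four unsprung masses

    return np.concatenate(([dz_c, dphi, dtheta], dz_u,
                           [ddz_c, ddphi, ddtheta], ddz_u))
```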
Given the interference between the control forces of the four suspensions on the vehicle body during actual driving, and the intertwined control objectives during the control process, direct adjustments to the control forces of the four suspensions may not yield significant control effects. Therefore, this paper first designs three fuzzy controllers to suppress the vibration of the liquid tanker truck body in the vertical, pitch, and roll directions, respectively. Subsequently, the requisite control forces and moments of the fuzzy controllers are equivalently calculated and allocated as the damping forces for the four electromagnetic semi-active suspensions. Finally, these suspensions feed the corresponding damping forces into the vehicle model, enabling comprehensive control of the vehicle's motion. The principle of this control strategy is shown in Figure 3. The damping forces of the four suspensions can be calculated from the demanded force U_z and moments M_θ and M_φ using a moment-allocation formula, as sketched below.
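The paper's exact allocation formula is not reproduced above; one standard choice consistent with the geometry already defined is a minimum-norm (pseudoinverse) allocation that distributes U_z, M_θ and M_φ over the four actuators. This is a sketch of that standard technique, not the authors' formula:

```python
import numpy as np

def allocate_forces(U_z, M_theta, M_phi, a, b, L):
    """Minimum-norm allocation of body force/moments to four corner forces.

    Solves A @ u = [U_z, M_theta, M_phi] for u = [u_fl, u_fr, u_rl, u_rr]
    with the Moore-Penrose pseudoinverse (minimum sum of squared forces).
    a, b: CoG-to-rear/front axle distances [m], L: track width [m]."""
    x = np.array([b, b, -a, -a])           # longitudinal offsets (+ front)
    y = np.array([L / 2, -L / 2, L / 2, -L / 2])  # lateral offsets (+ left)
    A = np.vstack([np.ones(4),             # row 1: total vertical force
                   y,                      # row 2: roll moment
                   -x])                    # row 3: pitch moment
    return np.linalg.pinv(A) @ np.array([U_z, M_theta, M_phi])

# Example: 2 kN of heave demand plus a corrective roll moment (illustrative):
print(allocate_forces(U_z=2000.0, M_theta=1500.0, M_phi=0.0,
                      a=2.5, b=1.5, L=1.9))
```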
Construction of the Vehicle Model for the Hazardous Chemical Liquid Tanker Truck This paper studies the driving stability of the liquid tanker truck under different road conditions. For the precise simulation of complex road conditions, we utilized the TruckSim heavy-vehicle simulation software and established a pendulum equivalent model of the tank body in Simulink to account for the effect of liquid sloshing. This integrative modeling approach enhances the accuracy of the simulation, yielding more realistic and dependable results, thereby better fulfilling the research requirements. Analysis of Liquid Sloshing in the Tank Body This paper aims to study the characteristics of liquid sloshing in the liquid tanker truck during driving, where the filling ratio is an important factor affecting the liquid sloshing. A tank with a filling ratio of 60% was selected for the study, as it demonstrates the dynamics of liquid sloshing and is a common configuration in actual transportation. The tank body of the liquid tanker truck was modeled in Fluent and features an elliptical cross-section, with a major axis of 1 m, a minor axis of 0.8 m, and a length of 6 m. Liquid sloshing under the same longitudinal excitation with different filling ratios was simulated. The tank body with a filling ratio of 60% was selected to simulate the lateral excitation of the liquid tanker truck during turning, and the simulation time was 5 s. The longitudinal sloshing force and moment of the tank body over time were obtained, and the simulation results are shown in Figure 4. This paper uses an equivalent pendulum model [17,18] to simulate the liquid sloshing inside the tank. The schematic diagram of the equivalent pendulum model is shown in Figure 5. The dynamic equation of the pendulum, the lateral sloshing force of the liquid, and the lateral tilting moment of the liquid sloshing about the center of the tank bottom take the form m_p l_p² γ̈ + c_l γ̇ + m_p g l_p γ = −m_p l_p a_p, F_y = (m_0 + m_p) a_p + m_p l_p γ̈, and M_y = (m_0 h_0 + m_p h_p) a_p + m_p l_p g γ. (8) In the liquid-equivalent pendulum model diagram shown in Figure 5, m_0 is the fixed mass of the liquid, in kg; m_p is the mass of the equivalent pendulum of the liquid, in kg; h_0 is the height from the center of mass of the liquid's fixed mass to the tank bottom, in m; h_p is the height from the center of mass of the pendulum to the tank bottom, in m; c_l is the equivalent damping of the liquid; γ is the swing angle of the equivalent pendulum; l_p is the length of the equivalent pendulum, in m; a_p is the lateral acceleration of the tank body.
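A compact numerical sketch of this equivalent pendulum model follows; the parameter values are illustrative assumptions, not the identified values from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 9.81  # gravitational acceleration [m/s^2]

def pendulum_rhs(t, y, p, a_p):
    """Equivalent pendulum: m_p*l_p^2*ddgamma + c_l*dgamma + m_p*g*l_p*gamma
    = -m_p*l_p*a_p(t), small-angle form."""
    gamma, dgamma = y
    ddgamma = (-p["cl"] * dgamma - p["mp"] * G * p["lp"] * gamma
               - p["mp"] * p["lp"] * a_p(t)) / (p["mp"] * p["lp"] ** 2)
    return [dgamma, ddgamma]

# Assumed parameters for a 60%-filled tank (illustrative only):
p = {"m0": 2500.0, "mp": 1500.0, "h0": 0.35, "hp": 0.55, "lp": 0.6, "cl": 120.0}
a_p = lambda t: 2.0 * np.sin(2 * np.pi * 0.5 * t)  # lateral excitation [m/s^2]

sol = solve_ivp(pendulum_rhs, (0.0, 5.0), [0.0, 0.0], args=(p, a_p), max_step=0.01)
gamma, dgamma = sol.y
ddgamma = np.gradient(dgamma, sol.t)

# Sloshing force and moment, as in the equations above:
F_y = (p["m0"] + p["mp"]) * a_p(sol.t) + p["mp"] * p["lp"] * ddgamma
M_y = (p["m0"] * p["h0"] + p["mp"] * p["hp"]) * a_p(sol.t) \
      + p["mp"] * p["lp"] * G * gamma
print(f"peak |F_y| = {abs(F_y).max():.0f} N, peak |M_y| = {abs(M_y).max():.0f} N*m")
```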
Construction of a Complete Vehicle Model for the Liquid Tanker Truck Based on TruckSim In order to visually obtain a dynamic simulation of the vehicle, this paper uses TruckSim to establish a complete vehicle model. Based on the parameters of a selected liquid tanker truck model, the models for the vehicle body, suspension system, tires, steering system, powertrain system, braking system, and aerodynamics are set in TruckSim. The main parameters of the complete vehicle model are shown in Table 1. Establishment of the Simulink Suspension Control Model Based on the analysis of the forced sloshing of the liquid inside the tank in Section 2.1, MATLAB was employed to identify the parameters of the pitch sloshing force and lateral-tilting moment curves of the tank body, and to derive the parameters of the equivalent pendulum model. Following this, the tank-equivalent pendulum model was established in Simulink. The lateral sloshing force and lateral tilting moment of the liquid on the tank bottom, obtained from the Fluent numerical simulation, were compared with those derived from the tank-equivalent pendulum model in Simulink, as shown in Figure 6. From Figure 6, it can be seen that the simulation results of the liquid sloshing pendulum model built in Simulink fit well with those of the Fluent numerical model, which verifies the reliability and accuracy of the established Simulink liquid-equivalent pendulum model, laying a basis for the next step: establishing a joint model of the liquid tanker truck. Based on the designed suspension control strategy, the suspension control module was established in Simulink, as shown in Figure 7. Construction of the TruckSim-Simulink Co-Simulation Platform TruckSim can be connected to Simulink models through a data interface, and the established TruckSim complete vehicle model can be connected to the Simulink suspension control module and tank-equivalent pendulum model through the input and output variables. The variables output from TruckSim to Simulink include the vertical vibration velocity of the vehicle body, the roll angle θ, and the lateral acceleration a_y.
The variables input from Simulink to TruckSim include the damping forces of the four electromagnetic semi-active suspensions u_i, the liquid sloshing force F_y, and the sloshing moment M_y of the tank-equivalent pendulum model. The established TruckSim-Simulink co-simulation platform of the liquid tanker truck is shown in Figure 8. Validation of the Complete Vehicle Model To verify whether the TruckSim-Simulink co-simulation model established in this paper can accurately portray the vehicle's dynamic characteristics, we opted for a step input test as a means of model validation. In compliance with the regulations of the GB/T 12534 Road Vehicle Test Method General Rules, we conducted a step input test for the steering wheel angle. The vehicle speed was set at 60 km/h, the steering wheel angle was adjusted to 180°, the road adhesion coefficient was set at 0.8, and the duration was fixed at 10 s. The resulting steering-wheel-angle step input curve is illustrated in Figure 9. For ease of analysis, a seven-degree-of-freedom vehicle model was established in Simulink as a reference model for conducting the same steering-wheel-angle step input simulation test. The roll angle and yaw angle of the liquid tanker truck obtained from the co-simulation model were compared with those of the reference model, as shown in Figure 10. As inferred from Figure 10, although there exists some discrepancy between the Simulink reference model and the co-simulation model, and their respective peak values differ, the overall trends of their parameter curves are essentially consistent. Moreover, the steady-state value error between the two models remains less than 5%. This indicates that the established co-simulation model of the liquid tanker truck can accurately simulate the basic motion characteristics of the liquid tanker truck.
Stability Control Simulation of the Hazardous Materials Tank Truck To verify the feasibility of using the electromagnetic semi-active suspension to control the stability of the liquid tanker truck, this paper implements the double-lane-change test condition. The input steering wheel angle is shown in Figure 11, where the vehicle speed is set to 60 km/h and the road adhesion coefficient is 0.85. The simulation results for the vertical vibration acceleration, pitch angle acceleration, and roll angle acceleration of the liquid tanker truck body are comparatively analyzed between the passive suspension system and the electromagnetic semi-active suspension control system. The curves of the vertical vibration acceleration and pitch angle acceleration of the liquid tanker truck body are shown in Figures 12 and 13. They show that the vertical vibration and pitch of the body change sharply from 0 to 4 s, indicating that the startup acceleration condition of the liquid tanker truck has a significant effect on the vertical vibration and pitch of the body. Figure 14 shows that the startup acceleration of the liquid tanker truck has little effect on the roll angle, but the roll angle of the body starts to change sharply when the liquid tanker truck changes lanes at 3 s. By comparing the root-mean-square values in Table 2, it can be concluded that the liquid tanker truck controlled by the electromagnetic semi-active suspension has significantly improved vibration performance in the vertical, pitch, and roll directions of the vehicle body. The roll angle exhibited a remarkable performance improvement of 27.86%, highlighting the significant impact of the electromagnetic energy-fed suspension in suppressing the roll angle of the liquid tanker truck. The root-mean-square values of the vertical vibration acceleration, the pitch angle acceleration, and the roll angle acceleration are computed as RMS(z̈_c) = sqrt((1/T) ∫_0^T z̈_c²(t) dt), RMS(φ̈) = sqrt((1/T) ∫_0^T φ̈²(t) dt), and RMS(θ̈) = sqrt((1/T) ∫_0^T θ̈²(t) dt), respectively.
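A short sketch of how such root-mean-square values can be computed from sampled acceleration signals; the traces here are synthetic, for illustration only:

```python
import numpy as np

def rms(signal, dt):
    """Discrete RMS of a sampled signal: sqrt((1/T) * integral of x^2 dt)."""
    signal = np.asarray(signal, dtype=float)
    t_total = dt * len(signal)
    return np.sqrt(np.trapz(signal ** 2, dx=dt) / t_total)

# Synthetic roll-angle-acceleration traces (illustrative only):
t = np.arange(0.0, 10.0, 0.01)
passive = 0.8 * np.sin(2 * np.pi * 0.7 * t) * np.exp(-0.2 * t)
semi_active = 0.58 * np.sin(2 * np.pi * 0.7 * t) * np.exp(-0.3 * t)

r_p, r_s = rms(passive, 0.01), rms(semi_active, 0.01)
print(f"RMS improvement: {100 * (r_p - r_s) / r_p:.1f}%")
```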
Conclusions This paper aims to solve the problem of liquid tank vehicles being prone to rollover during transportation due to liquid sloshing in the tank. To this end, the proposed solution is to use an electromagnetic semi-active suspension to reduce vehicle body sway. By analyzing the characteristics of liquid sloshing in the tank, a TruckSim vehicle model and a single-pendulum equivalent model for simulating liquid sloshing are established. An electromagnetic semi-active suspension control strategy is developed, and variables are set to connect the two software models. A TruckSim-Simulink co-simulation model is constructed with the goal of improving the stability of the liquid tanker truck. Moreover, simulation studies of the electromagnetic semi-active suspension control technology are conducted. The following conclusions are mainly obtained through the above research:
6,921.6
2023-08-17T00:00:00.000
[ "Engineering" ]
Industry 4.0. Technique for ranking vector estimates when choosing business partners. The fourth industrial revolution is affecting companies and leading to new strategic thinking. The changes brought about by the requirements of Industry 4.0 are forcing restructuring in many areas of management or the building of new business models. Introduction The fourth industrial revolution is a term that refers to the social, industrial and technological changes brought about by the digital transformation of industry. A characteristic feature of the fourth industrial revolution is knowledge of customer needs, which constitutes the competitive advantage of enterprises, allowing them to correctly identify their opportunities, challenges and problems, which, in turn, guarantees the conscious use of new market opportunities. One of the subsets of the fourth industrial revolution is the concept of Industry 4.0, which was adopted to denote the tasks of identifying and analyzing upcoming changes that are of strategic importance to the economy. In essence, Industry 4.0 is a trend towards automating data exchange in production systems, including cyber-physical systems, the Internet of things, cloud computing, cognitive computing and artificial intelligence, which is achieved by integrating intelligent machines and systems with business processes to improve production efficiency [1]. With the introduction of the above technologies, through intelligent monitoring and decision making, companies and all their networks will be able to monitor and optimize their activities in near real time. Therefore, Industry 4.0 involves the introduction of modern IT solutions throughout the value chain, which makes it possible to create personalized products for a specific customer and the related value chains. Advanced information and communication technologies make it possible to accurately adapt production to customer expectations while maintaining low costs, high quality and efficiency [2]. Modern technological business models are accelerating the transformation of the industry, changing the structure of the market. This poses new challenges for many areas of management, which are forced to adapt to the architecture of the digital world.
Progressive globalization and networking of the economy necessitates the creation of new business concepts. Dynamic technological development and solutions implemented in modern companies lead to a change in management paradigms and the need to build new business models based on maintaining a balance between the development of intelligent technologies and the quality of life. As a rule, the company's business model is presented as a set of activities, methods and time frames and reflects the implementation of the strategy in terms of economic effects [3]. The role of strategy in the model is most important, as current and future revenues are generated by the products offered to customers and the competitive approach to the market. This results in a revenue stream and return on investment through a combination of profits and an appropriate cost structure. Thus, the business model is a configuration of the strategy, taking into account the sources of income and profit. Innovation can be applied to all elements of a business model and is necessary to create value for the customer [4]. Business models developed and implemented by companies determine their profitability and competitiveness. New strategic behavior is determined by the changes that can be observed in modern business. Nowadays, managers are required to use more and more sophisticated management methods and concepts. The analysis of the essence, structure and types of strategic models is an essential cognitive element in the sphere of development and operation. An important element of detailing business models are business processes, which, to some extent, are a way to realize value in the form of relationships with customers, in particular, providing them with products that meet specific needs [5]. As a result, companies will have to redefine their strategies and business models in the coming years, not in relation to traditional market competitors, but in relation to emerging consumer ecosystems. Industry 4.0 technologies are creating new business opportunities by significantly facilitating open business models based on open innovation. These models are the strategic and operational basis for changing the configuration of products and processes in the enterprise, the basis of competitive market advantage, determined by the rules of the Industry 4.0 concept, where customers and business partners are directly involved in business processes and value creation. They also allow you to get more value by using key assets, resources or positions of the company not only in its own business, but also in the business of other companies [6]. Thus, companies with an open business model are actively looking for innovative ways to collaborate with all business partners: suppliers, customers or general partners, to expand their business, for example, through servitization [7].
In the process of implementing an open business model, the customer company is constantly faced with the need to evaluate its business partners. The assessment under consideration is multi-criteria (vector), while it should be noted that, in the general case, the target function of the customer's company may not coincide with the target functions of the partners, who are also active participants in business processes seeking to realize their own target functions. To formalize the description of the model for choosing business partners by the customer company, it is advisable to designate the latter as the Center, and the partners as active systems for which vector estimates are generated [8]. Let the division of the set into classes have already been obtained, so that it is known in advance to which class the presented vector estimate will be assigned. Thus, it is possible to determine the minimum number of questions to the Center required to build a given partition, if one uses the previous procedure but determines Φi according to the formula Φi = g_li, where l is the number of the class to which the vector estimate yi belongs in this partition [9]. This procedure was called the "reference" procedure (i.e., the best within the framework of the proposed approach), since at each step of the procedure a vector estimate is presented that, in accordance with the relation P0, determines the class membership of the maximum number of vector estimates. Materials and methods To generate the initial partition of the set of alternatives into decision classes, the following procedure is proposed. The researcher sets the number of criteria Q, the number of gradations w on the scale of each criterion and the number of classes N. A set of alternatives (vector estimates) Y is formed, representing all possible combinations of assessments on the criteria scales. It is known that a vector estimate that has the first scores on the scales of all criteria belongs to the first class, and a vector estimate that has the last scores on the scales of all criteria belongs to the N-th class. Then, in accordance with the proposed survey procedure, the vector estimate yi ∈ Y is determined that must be presented to the Center in order for it to be assigned to one of the classes [10]. The answer of the Center is modeled using a pseudo-random number generator: the coefficients of proximity of a given vector estimate to the different classes, determined in accordance with the procedure, are treated as the probabilities of attributing the presented vector estimate yi to the corresponding class. The value of the next pseudo-random number R determines the number of the class to which the presented vector estimate will be assigned: the class number l is the smallest one for which R ≤ P_i1 + … + P_il, where P_i0 = 0. After that, in accordance with the dominance relation, the sets of numbers Gj are corrected for each vector estimate yj ∈ Y whose membership in a certain class has not yet been determined. Next, the coordinates of the centers of the classes are recalculated, and the procedure is repeated until the membership of every vector estimate of the set Y in one of the classes is established [11]. The number of calls to the pseudo-random number generator characterizes the number of questions to the Center for constructing this partition based on the proposed rational procedure for polling the Center.
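Since the Center's answer is modeled by interpreting proximity coefficients as class probabilities and drawing a pseudo-random number, the sampling step can be sketched as follows. This is a hedged illustration only; the class count, seed and proximity values are assumptions, not the authors' implementation.

```java
import java.util.Random;

public final class CenterSimulator {
    private final Random rng = new Random(42); // fixed seed for reproducible trials

    /**
     * Assigns a vector estimate to a class. proximities[k] is the proximity
     * coefficient of the estimate to class k+1, interpreted as a probability;
     * the coefficients are normalized here so they sum to 1.
     */
    int assignClass(double[] proximities) {
        double total = 0.0;
        for (double p : proximities) total += p;
        double r = rng.nextDouble() * total; // the next pseudo-random number R
        double cumulative = 0.0;             // P_i0 = 0
        for (int k = 0; k < proximities.length; k++) {
            cumulative += proximities[k];
            if (r <= cumulative) return k + 1; // classes are numbered from 1
        }
        return proximities.length; // guard against rounding at the upper end
    }

    public static void main(String[] args) {
        CenterSimulator center = new CenterSimulator();
        double[] proximities = {0.2, 0.5, 0.3}; // illustrative values for N = 3
        System.out.println("Assigned to class " + center.assignClass(proximities));
    }
}
```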
Research and results The scheme of statistical testing of the behavior of the proposed algorithm consists of the following stages. I. Generation of the Center's answers using a random number generator and determination of the initial partition of the set of alternatives into decision classes. II. Determination of a "reference" sequence of questions to the Center (in this case, the Center is the partition of the set of alternatives generated at stage I) to split the initial set of alternatives into classes. III. Multiple repetition of stages I-II and comparison of the results. Stages I and II are carried out in accordance with the described procedures. Stage III consists of repeated repetition of stages I and II for different numbers of criteria Q, gradations on the scales of criteria w and decision classes N. The data obtained at each stage on the number of vector estimates presented are averaged, and the values of this indicator obtained at stage I (N0) and stage II (Ñ) are compared. This ratio characterizes the effectiveness of the proposed rational procedure for polling the Center [12]. For each variant of the number of criteria, gradations on the scales of criteria and the number of decision classes, about 500 realizations of the procedures were carried out, on each of which an estimate was determined for the number of presentations of vector estimates, both when the partition is known in advance and when it is not known. The average values of these estimates are given in Table 1 for the case of four and five criteria with three and four gradations on their scales and two, three and four classes of solutions [13]. The given data show that the proposed procedure requires the presentation of no more than 2.8 times more vector estimates than the reference algorithm. Moreover, this ratio decreases with the growth of the number of classes of solutions. The absolute values of the number of presented vector estimates are much smaller than the cardinality of Y, which indicates the expediency of using the proposed procedure for surveying the Center [14]. Note that the proposed algorithm for determining the vector estimates that should be presented to the Center ensures the construction of a partition of the set Y into decision classes for a relatively small number of calls to the Center. At the same time, its use significantly reduces the possibility of checking the correctness of the Center's answers based on the relation P0; thus, for the case N = 2, when applying the algorithm the possibility of contradictions in the partition of Y is completely excluded. On the one hand, this is a positive phenomenon, since we need to construct a consistent partition. On the other hand, a random error of the Center when assigning the presented vector estimate to one of the classes can lead to a partition that does not meet the real preferences of the Center [15]. A practical decision-making method should provide the possibility of verifying the information received [16]. In this case, the requirement is put forward that each vector estimate of the set Y not presented to the Center be evaluated, directly or indirectly (based on the relation P0), at least two times. Therefore, after constructing a partition using the proposed algorithm, an additional presentation of a part of the vector estimates for which this condition was not satisfied is provided [17,18]. The following procedure for additional polling of the Center is proposed. Vector estimates whose membership in a certain class of solutions was determined on the basis of the relation P0 just one time (i.e., which the proposed algorithm assigned to a class Yl) are distinguished into a subset Z ⊆ Y [19]. For this subset, the "reference" procedure is used to determine the sequence of presented vector estimates.
Thus, the number of vector estimates presented to the Center for constructing a partition of the set Z into classes of solutions will not exceed the number of vector estimates given in Table 1 for the corresponding values of the parameters Q, w and N. In the resulting partition of the set Y into classes of solutions, the vector estimates of the set Z are referred to the classes into which they fell when the set Z was partitioned into classes [20]. The division of the set Y into classes of solutions constructed in this way, even for N = 2, does not guarantee the asymmetry of the relation P*. In this regard, one should use the procedure for reducing the resulting partition of the set Y to a consistent form [21]. Conclusions Thus, the proposed method of ranking vector estimates makes it possible to determine the class of business partners and to make decisions about further interaction within the framework of an open business model. In addition, the proposed methodology can be used as the basis for automating business processes associated with the need to rank active systems, which is consistent with the basic requirements for open business models characteristic of Industry 4.0.
2,963.8
2022-01-01T00:00:00.000
[ "Business", "Computer Science" ]
Diversity of putative archaeal RNA viruses in metagenomic datasets of a Yellowstone acidic hot spring Two genomic fragments (5,662 and 1,269 nt in size, GenBank accession no. JQ756122 and JQ756123, respectively) of novel, positive-strand RNA viruses that infect archaea were first discovered in an acidic hot spring in Yellowstone National Park (Bolduc et al., 2012). To investigate the diversity of these newly identified putative archaeal RNA viruses, global metagenomic datasets were searched for sequences that were significantly similar to those of the viruses. A total of 3,757 associated reads were retrieved solely from the Yellowstone datasets and were used to assemble the genomes of the putative archaeal RNA viruses. Nine contigs with lengths ranging from 417 to 5,866 nt were obtained, 4 of which were longer than 2,200 nt; one contig was 204 nt longer than JQ756122, representing the longest genomic sequence of the putative archaeal RNA viruses. These contigs revealed more than 50% sequence similarity to JQ756122 or JQ756123 and may be partial or nearly complete genomes of novel genogroups or genotypes of the putative archaeal RNA viruses. Sequence and phylogenetic analyses indicated that the archaeal RNA viruses are genetically diverse, with at least 3 related viral lineages in the Yellowstone acidic hot spring environment. Electronic supplementary material The online version of this article (doi:10.1186/s40064-015-0973-z) contains supplementary material, which is available to authorized users. Background Almost all life forms can be infected by viruses. To date, thousands of viruses have been identified (King et al. 2012). However, most of these viruses infect bacteria or eukaryotes. Compared to the more than 6,000 viruses that infect bacteria (Ackermann 2007; Ackermann and Prangishvili 2012), there are fewer than 100 viruses of archaea (Pina et al. 2011), all of which harbor DNA genomes (Prangishvili 2013). Viruses in the environment are abundant, and viral communities are incredibly diverse (Breitbart et al. 2002; Breitbart and Rohwer 2005; Angly et al. 2006; Breitbart 2012). There are an average of 10^7 virus-like particles per milliliter of surface seawater (Bergh et al. 1989), an estimated 5,000 viral genotypes in 200 liters of seawater (Breitbart et al. 2002) and at least 10^4 viral genotypes in one kilogram of marine sediment (Breitbart et al. 2004). The presence of archaeal RNA viruses in the environment is likely, considering both the large number of various RNA viral types infecting eukaryotes and bacteria (Culley et al. 2006; Prangishvili et al. 2006; Lang et al. 2009) and the fact that archaea comprise up to one-third of the ocean's prokaryotes (Karner et al. 2001). Recently, sequences of putative archaeal RNA viruses were obtained using a metagenomic approach (Bolduc et al. 2012). Viral samples were collected from high-temperature, acidic hot springs in Yellowstone National Park, and viral RNA was extracted and transcribed into cDNA for metagenomic sequencing. Two contigs were assembled and were demonstrated to be genomes of putative archaeal RNA viruses (GenBank accession no. JQ756122 and JQ756123) (Bolduc et al. 2012). The nucleotide sequence JQ756122, which is 5,662 nt in length, is thought to be a near-full-length genome of the putative archaeal RNA viruses and contains a single open reading frame that encodes a putative viral polyprotein encompassing an RNA-dependent RNA polymerase and a putative capsid protein (Bolduc et al. 2012).
The second sequence, JQ756123, with a length of 1,269 nt, encompasses three overlapping short ORFs, each of which shows approximately 70% amino acid sequence identity with the predicted RNA-dependent RNA polymerase of JQ756122 (Bolduc et al. 2012). Here, we investigate the genetic diversity of the putative archaeal RNA viruses in global metagenomic datasets based on sequence assembly. Sequence and phylogenetic analyses indicate that at least three lineages of the putative archaeal RNA viruses may be present in Yellowstone hot springs. Sequence assembly The nucleotide sequence of the putative archaeal RNA viruses (GenBank accession no. JQ756122) was downloaded from GenBank and was searched (BLASTN, E-value < 10^−5) against the NCBI non-redundant nucleotide database. Hits at a significant level (E-value < 10^−5) included the two nucleotide sequences JQ756122 and JQ756123, which were identified as nucleotide sequences of putative archaeal RNA viruses, suggesting that JQ756122 was archaeal RNA virus-specific and well conserved, making it easy to map reads in metagenomic databases. Subsequently, JQ756122 was used to search (TBLASTX, E-value < 10^−5) all of the databases on the CAMERA 2.0 portal (http://camera.calit2.net). Hits were obtained from four databases (Additional file 1: Table S1). The broad phage metagenome database contained the largest number (n = 3,763) of matched reads, including all of the reads that were detected in both the metagenomic 454 whole genome shotgun reads and the metagenomic 454 reads databases (Additional file 1: Table S1). Only one hit, JQ756122, was found by searching the NCBI environmental sample nucleotide database. Subsequently, these 3,763 reads, which had significant similarity to JQ756122, were downloaded from the CAMERA 2.0 portal (Additional file 1: Table S1) and further analyzed for their RNA source based on information regarding the nucleotide samples. As a result, 6 reads originating from natural DNA samples were removed, while the remaining 3,757 reads of RNA samples (Additional file 2: Table S2), all from an acidic hot spring in Yellowstone National Park, were used for de novo assembly to obtain JQ756122-related contigs. Each contig was searched separately (TBLASTX, E-value < 10^−5) against the broad phage metagenome database in the CAMERA 2.0 portal. Reads that were significantly similar to the contig were downloaded from the CAMERA 2.0 portal and checked for RNA origin. The contig then served as a reference sequence to assemble these retrieved reads. Once an extended contig with a relatively longer size and higher coverage was obtained after reference assembly, it was used to search the broad phage metagenome database again. This procedure was repeated until the assembled sequence stopped extending. All of the sequence assemblies were generated using Geneious Pro (version 5.6.2; Biomatters Ltd.). A schematic presentation of the sequence assembly procedure is shown in Figure 1.
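The iterative recruitment loop just described (search with the current contig, recruit similar RNA-origin reads, reassemble against the contig, and stop once it no longer extends) can be summarized schematically. The sketch below is only an outline in Java; the search and assembly calls stand in for the TBLASTX queries and Geneious assemblies and do not correspond to any real API.

```java
import java.util.List;

public final class IterativeRecruitment {
    interface Pipeline {
        List<String> searchSimilarRnaReads(String contig); // TBLASTX-style search, RNA-origin reads only
        String assembleAgainstReference(String reference, List<String> reads);
    }

    /** Extends a contig until reference-guided assembly stops lengthening it. */
    static String extendUntilStable(String contig, Pipeline pipeline) {
        while (true) {
            List<String> reads = pipeline.searchSimilarRnaReads(contig);
            String extended = pipeline.assembleAgainstReference(contig, reads);
            if (extended.length() <= contig.length()) {
                return contig; // no further extension: done
            }
            contig = extended;
        }
    }

    public static void main(String[] args) {
        // Mock pipeline: each round appends a fixed flank until a length cap.
        Pipeline mock = new Pipeline() {
            public List<String> searchSimilarRnaReads(String contig) { return List.of(); }
            public String assembleAgainstReference(String ref, List<String> reads) {
                return ref.length() < 12 ? ref + "ACGT" : ref;
            }
        };
        System.out.println(extendUntilStable("ACGT", mock)); // prints the stable contig
    }
}
```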
Phylogenetic analysis A conserved genomic fragment of 464 nt was identified in contigs 1, 3 and 4; JQ756122; and JQ756123 by sequence alignment using Geneious Pro (version 5.6.2) and used to reconstruct the phylogenetic trees. Maximum likelihood analyses were performed using PhyML (Guindon et al. 2010) with the HKY85 model and 1,000 replicates. Figure 1. Schematic presentation of the sequence assembly procedures. Nucleotide sequence accession numbers The nucleotide sequences of the nine contigs were deposited in DDBJ under the accession numbers AB979436-AB979444. Results After the de novo and reference assemblies, nine archaeal RNA-virus-related contigs were obtained. The data regarding the metagenomic assembly of these nine contigs are provided in Table 1. The longest contig was 5,866 nt in length, being longer than the JQ756122 sequence (5,662 nt) by approximately 40 nt at the 5' end and 170 nt at the 3' end, while the remaining length was almost identical to the JQ756122 sequence, with only a 4-nt difference. The G + C contents of these nine contigs ranged from 49.6 to 54.9% and were very similar to those of the putative archaeal RNA viruses (JQ756122 and JQ756123), whose G + C contents were 50.7 and 52.2%, respectively. A pairwise sequence similarity comparison indicated that the assembled contigs in this study shared a similarity of 50 to 99% with JQ756122 or JQ756123 (Figure 2), suggesting the genetic diversity of the putative archaeal RNA viruses in the Yellowstone hot spring. In total, five reverse-repeat and three palindromic sequences were identified from the nucleotide sequences of 7 contigs and of a putative archaeal RNA virus (JQ756122) using the REPuter program (Table 2) and checked manually. JQ756122 and contigs 1 and 2 shared two types of reverse-repeat sequences (Figure 2) with >97% sequence similarity. All of the repeat sequences were searched (BLASTN, E-value < 0.1) against the virus database, but no significant hit was found. The functions of these repeat sequences remain unknown. BLASTN (E-value < 10^−5) and BLASTX (E-value < 10^−3) analyses showed that all 9 contigs were significantly similar to the sequences of the putative archaeal RNA viruses (JQ756122 or JQ756123) (Additional file 3: Table S3 and Additional file 4: Table S4). These results further confirm that these contigs are the partial or complete genomes of putative novel archaeal RNA virus isolates that are closely or distantly related to the reported isolates (Bolduc et al. 2012). Phylogenetic analyses indicate 3 lineages of the putative archaeal RNA viruses (Figure 3); contig 1 was closely related to JQ756122, and contig 4 was closely related to JQ756123. Contig 3 represented the third genogroup. Given the relatively low sequence similarity between the other contigs and JQ756122 or JQ756123, it is reasonable to speculate that putative archaeal RNA viruses are genetically diverse in the Yellowstone hot spring. Discussion To investigate the worldwide diversity of the putative archaeal RNA viruses, the nucleotide sequence JQ756122 was used to search against global metagenomic databases to retrieve significantly similar reads. Subsequently, based on both the de novo and reference sequence assemblies of these retrieved reads, nine novel partial or nearly complete genomes of the putative archaeal RNA viruses were successfully obtained.
Similar mapping methods have been used by our group to assemble the genomic sequences of novel virophages in the CAMERA metagenomic datasets, through which seven complete virophage genomic sequences were obtained (Zhou et al. 2013; Zhou et al. 2015). Consequently, the established sequence assembly procedures provide a better understanding of the genetic diversity of enigmatic viruses and can be applied to similar studies. Interestingly, all 3,757 of the putative archaeal RNA virus-related RNA-origin sequences were detected in the metagenomic dataset of sample NL10 (GPS coordinate: N44.7535, W-110.7238) collected by Bolduc et al. (2012) in the acidic hot spring in Yellowstone National Park. This indicates that the associated archaeal RNA viruses may be unique to this location. Similar archaeal RNA viruses may also exist in other environments. The absence of related reads in other metagenomic datasets may result from the relatively small number of RNA metagenomic datasets compared to the number of DNA metagenomic datasets. In addition, other environments may also possess archaeal RNA viruses whose genomes are quite different from the putative archaeal RNA viruses that were identified in Yellowstone National Park. The genome sequencing of archaeal viruses has revealed very few genes whose products have significant sequence similarity to any known proteins (Prangishvili et al. 2006; Pina et al. 2011), and only a few homologous genes are shared between the members of different families of crenarchaeal viruses (Prangishvili 2013). Accordingly, archaeal RNA viruses in different or even in the same environment may have different genome contents. Bolduc et al. identified CRISPRs from cellular metagenomes (Bolduc et al. 2012). Direct repeats and spacers were extracted from the identified CRISPRs, and the CRISPR spacers were then compared against the viral RNA metagenome. In their paper, these authors reported that "Forty-six spacers, associated with 4 types of direct repeats, were identical to RNA sequences within the
2,627.8
2015-04-18T00:00:00.000
[ "Biology", "Environmental Science" ]
Elevated levels of erythrocyte-conjugated dienes indicate increased lipid peroxidation in schistosomiasis mansoni patients Schistosoma mansoni causes liver disease by inducing granulomatous inflammation. This favors formation of reactive oxygen species, including superoxide ions, hydrogen peroxide and hydroxyl radicals, all of which may induce lipid peroxidation. We have evaluated lipid peroxidation in 18 patients with hepatosplenic schistosomiasis mansoni previously treated with oxamniquine followed by splenectomy, ligature of the left gastric vein and auto-implantation of spleen tissue, by measuring levels of erythrocyte-conjugated dienes and plasma malondialdehyde (MDA). Age-matched, healthy individuals (N = 18) formed the control group. Erythrocyte-conjugated dienes were extracted with dichloromethane/methanol and quantified by UV spectrophotometry, while plasma MDA was measured by reaction with thiobarbituric acid. Patient erythrocytes contained two times more conjugated dienes than control cells (584.5 ± 67.8 vs 271.7 ± 20.1 μmol/l, P < 0.001), whereas the increase in plasma MDA concentration (about 10%) was not statistically significant. These elevated conjugated dienes in patients infected by S. mansoni suggest increased lipid peroxidation in cell membranes, although this was not evident when a common marker of oxidative stress, plasma MDA, was measured. Nevertheless, these two markers of lipid peroxidation, circulating MDA and erythrocyte-conjugated dienes, correlated significantly in both patient (r = 0.62; P < 0.01) and control (r = 0.57; P < 0.05) groups. Our data show that patients with schistosomiasis have abnormal lipid peroxidation, with elevated erythrocyte-conjugated dienes implying dysfunctional cell membranes, and also imply that this may be attenuated by the redox capacity of antioxidant agents, which prevent accumulation of plasma MDA. Correspondence Schistosomiasis mansoni is endemic in northeast Brazil, mainly in rural areas of Pernambuco. The disease is also extending to new areas, for example the vacation island of Itamaracá, 50 km to the north of Recife (1), as infected people migrate. Patients with severe infections frequently develop periportal fibrosis, portal hypertension and hepatosplenomegaly and, classically, are surgically treated by splenectomy, accompanied by obliterating suture of esophageal varices and ligature of the left gastric vein (2). In the last decade, auto-implantation of spleen tissue in children has been carried out with the surgery, and this has improved liver function and increased survival time (3). Among tropical diseases, Schistosoma mansoni is the second major cause of morbidity and mortality worldwide (2) and persists mainly in developing countries, with significant economic and public health consequences (1). Morbidity depends on genetic and environmental factors, as well as severity of infection, and all three influence the granulomatous inflammation of the liver and later fibrosis around the eggs (2).
Eosinophil cells associated with schistosome-induced granulomas form oxygen free radicals, such as superoxide and hydroxyl radicals (4), and release active eosinophil peroxidase around the egg granulomas (5). However, the consequences of free radical generation in schistosomiasis mansoni are still unknown. This also applies to the effects on lipid peroxidation, where polyunsaturated fatty acids and other lipids are oxidized by intermediate free radicals to form conjugated dienes, malondialdehyde (MDA) and lipid hydroperoxides, among other products (6). MDA is the most abundant aldehyde produced during lipid peroxidation, and its assay is often used as a marker for oxidative stress in several diseases (7), including Alzheimer's disease, atherosclerosis and diabetes (8). In liver disease, free radicals have been implicated in the inflammatory process, and increased lipid peroxidation has been found (9). Lipid peroxidation starts with hydrogen abstraction by radicals such as OH (6). In mice, the antioxidant capacity of livers damaged by S. mansoni is reduced, and this results in generation of lipid peroxides (10). The aim of the present study was to evaluate lipid peroxidation in patients with schistosomiasis mansoni who were submitted to clinical and surgical treatment, measuring levels of plasma MDA and erythrocyte-conjugated dienes. Thiobarbituric acid, MDA and methanol were obtained from Sigma (St. Louis, MO, USA), and dichloromethane and trichloroacetic acid from Vetec (Rio de Janeiro, RJ, Brazil). All other solvents and chemicals were of analytical grade. Young patients (11-20 years; N = 18) of both genders with hepatosplenic schistosomiasis mansoni, who had had upper digestive bleeding but no renal, cardiac, hepatitis, or other parasite/microbial-associated disease, were included in this study. They were outpatients at the Clinical Hospital, Federal University of Pernambuco (UFPE), Recife, and had been treated with the antischistosomal drug oxamniquine (a single dose of 20 mg/kg) followed by splenectomy, ligature of the left gastric vein and auto-implantation of spleen tissue into an omental pouch of the major omentum. The study was conducted on patients 3 to 6 years after the surgical procedure. Patients had generally developed normally, although development improved after treatment, as did their indicators of liver function, especially serum albumin concentration and prothrombin time (3). Body mass index (BMI) was calculated as the body weight (kg) per height squared (m^2). None of the women in either group were pregnant or used oral contraceptives. The study was approved by the Ethics Committee of the University Hospital, UFPE (Process number 193/99-CEP/CCS), and written informed consent was obtained from all patients and volunteers used as controls.
Early morning blood samples were collected into ice-cold tubes containing EDTA (1 mg/ml), and erythrocytes and plasma were isolated by centrifugation at 3,000 g for 15 min at 4ºC. As storage at -20ºC may increase lipid peroxidation, duplicate samples were analyzed immediately. Plasma thiobarbituric acid reactive substances (TBARS) were measured by the method of Buege and Aust (11), mixing aliquots with 15% (w/v) trichloroacetic acid and 0.375% (w/v) thiobarbituric acid and heating at 100ºC for 15 min. Samples were cooled to room temperature, centrifuged at 3,000 g for 5 min, and their absorbance was measured at 535 nm against a reaction mixture lacking plasma but subjected to the same procedure. As in other studies (12), levels of TBARS are reported as nmol of MDA per liter of plasma, using a standard curve prepared with MDA bis (diethyl acetal) as the MDA reference source. Conjugated dienes in washed erythrocytes were extracted with dichloromethane/methanol (2:1, v/v) and measured by the method of Buege and Aust (11). A solution of 50 mM potassium chloride was then added and, after the preparation was left to stand overnight at 4ºC, the concentration of conjugated dienes in the lower phase was measured by absorbance at 233 nm against dichloromethane (11). Results were analyzed by the unpaired Student t-test and Pearson's correlation coefficient; both were calculated using the Origin software program version 5.0, and differences were considered significant for P < 0.05. Data in the text are reported as means ± SEM. No gender-related difference was observed in the correlation analysis, and so data for both genders were used as a single group. Most studies exclude patients considered to be malnourished, usually based on BMI scores or on plasma protein and hemoglobin levels. Here, we used BMI and dietary history to exclude malnourished patients. All patients included lived at home with their mother or a caregiver responsible for preparing food, and no significant changes in their dietary habits were noted as a result of the disease. In addition, the BMI of our patients (19.3 ± 1.2 kg/m^2) did not differ from the control group (21.7 ± 0.5 kg/m^2; P > 0.05). One indicator of lipid peroxidation, plasma MDA concentration, was increased by 10% in patients with schistosomiasis mansoni (Figure 1A), but the rise was not significant. However, levels of conjugated dienes extracted from patient erythrocytes were significantly increased, by about 100% (P < 0.001), in comparison to the control group (Figure 1B). Moreover, these two markers of lipid peroxidation, circulating MDA and erythrocyte-conjugated dienes, were significantly correlated in both patient (r = 0.62; P < 0.01) and control (r = 0.57; P < 0.05) groups (Figure 2). The clinical manifestations of schistosomiasis range from the mild intestinal form to severe hepatosplenic forms associated with esophageal varicose veins and upper digestive bleeding. Infection caused by S.
mansoni can induce granulomatous inflammation of the liver and lead to formation of certain reactive oxygen species, such as superoxide ions, hydrogen peroxide and hydroxyl radicals. These promote lipid peroxidation (4), an adverse event which contributes to the pathology associated with schistosomiasis. Lipid peroxidation is a complex and relatively imprecise series of reactions leading to a diverse array of bioactive molecules, many still ill-defined (6). Perhaps not surprisingly, given its complexity, there is no clear agreement on how best to quantify lipid peroxidation, and the techniques used range from relatively crude measures to more sophisticated analyses of individual products using HPLC and/or mass spectrometry. As our study is the first investigation of lipid peroxidation in patients with schistosomiasis, we chose two practical assays for oxidative stress, i.e., plasma TBARS and erythrocyte-conjugated dienes. These markers correlate closely with other methods commonly used to detect lipid peroxidation and are well-characterized and reliable indicators (7,13). However, while the TBARS assay is frequently used as a measure of MDA, with data expressed as nmol MDA per liter plasma (12,14) as we did in the present study, the assay is not specific for MDA, since other compounds including sugars, amino acids and bilirubin may cross-react with thiobarbituric acid. Thus, in future studies it may be preferable to use HPLC, a more precise though complex analysis (14). In the present study, we found clear evidence of lipid peroxidation in erythrocytes from patients with schistosomiasis mansoni who had undergone clinical and surgical treatment, as judged by their significantly high content of conjugated dienes, products which reflect the initial phase of lipid peroxidation. On the other hand, when the degradative phase of lipid peroxidation was examined in the plasma of these patients, assaying TBARS as a measure of MDA levels, there was no significant increase compared to normal subjects. The increased lipid peroxidation in patient erythrocytes seems to be a consequence of the disease rather than of clinical treatment with oxamniquine. Indeed, plasma MDA levels were not altered in S. mansoni-infected mice treated with praziquantel, a drug used to kill the parasite (15). Furthermore, our patients did not have granulomatous inflammation, which is known to increase lipid peroxidation and raise levels of conjugated dienes (4). However, our patients do still have moderate liver disease, and in a study of mild alcohol-induced liver damage lipid peroxidation was detected by various indicators of oxidative stress, even though plasma MDA levels were normal (16). Erythrocytes constitute a well-established model to study cytotoxic damage to membranes by chemical and physical free radical promoters. As shown here, schistosomiasis mansoni may induce chemical damage to erythrocyte membranes through an oxidative stress pathway. Superoxide anion, singlet oxygen and H2O2 can all contribute to erythrocyte lipid oxidation, with the formation of intermediate conjugated dienes preceding that of aldehyde products (TBARS).
The reactions begin when the main endogenous lipophilic antioxidant, vitamin E, is extensively consumed, and in experimental schistosomiasis mansoni vitamin E supplementation lowers the activity of catalase and glutathione peroxidase, liver enzymes involved in antioxidant mechanisms (17). In the present study, schistosomiasis appeared to promote lipid peroxidation, as measured by increased erythrocyte-conjugated diene concentration. However, excessive plasma MDA was not produced, perhaps because reduced glutathione (GSH) and protein thiols were preserved, although, as yet, the redox capacity of GSH/oxidized glutathione (GSSG) and antioxidant levels of vitamins E and C have not been measured in human schistosomiasis. Thus, although we have examined two blood markers of oxidative stress in this preliminary study of patients with schistosomiasis, it will be important in future studies to determine how changes in lipid peroxidation markers relate to the disease process and to reductions in antioxidant defenses, including the consumption of ascorbate and other antioxidant nutrients during free radical scavenging. For example, in elderly people blood antioxidant defenses are significantly reduced, and in patients with unstable hemoglobin disorders or low levels of glucose-6-phosphate dehydrogenase the erythrocytes are susceptible to oxidative attack, predisposing them to drug- or infection-mediated hemolytic crises. Furthermore, when hepatic granulomas develop in experimental schistosomiasis, they trigger production of reactive oxygen species which alter the antioxidant defense profile (5). Our finding of a positive correlation between plasma MDA and erythrocyte-conjugated dienes supports the use of these routine and practical assays as a means of identifying and monitoring patients susceptible to severe lipid peroxidation in schistosomiasis. The correlation is also consistent with a mechanism of reactive oxygen species-induced lipid peroxidation in which polyunsaturated fatty acids undergo hydrogen abstraction by free radical attack to form conjugated dienes (6,9), which are further attacked to generate MDA or lipid hydroperoxide. Thus, conjugated dienes are intermediates during MDA production, and this could explain the positive and significant correlation between them, both in the patient and control groups. However, we recently reported a significant reduction in activity of lecithin-cholesterol acyltransferase, the plasma enzyme which esterifies cholesterol and helps regulate cell membrane lipid composition, in patients with schistosomiasis who had undergone the same clinical and surgical treatments as the patients in the present study (18). This thiol-containing enzyme is highly sensitive to several oxidizing species, either directly or by crosslinking of its co-factor, apoAI (19), and so oxidative stress may contribute to the acquired lecithin-cholesterol acyltransferase deficiency and membrane disturbances (20) seen in human schistosomiasis. Figure 1. Concentration of lipid oxidation products in plasma and erythrocytes of patients with hepatosplenic schistosomiasis mansoni. Plasma malondialdehyde (A) and erythrocyte-conjugated diene (B) levels in 18 patients with schistosomiasis, who had been treated by splenectomy, obliterative suture of the esophageal varices, ligature of the left gastric vein, and auto-implantation of spleen tissue. *P < 0.05 compared to controls (unpaired Student t-test).
Figure 2. Correlation between plasma malondialdehyde and erythrocyte-conjugated dienes. A, Patients with schistosomiasis mansoni. B, Healthy individuals (control group). Data were analyzed by the Pearson correlation method.
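For reference, the Pearson coefficient behind Figure 2 is a standard computation; the minimal Java sketch below uses made-up paired values, not the patient data.

```java
public final class PearsonCorrelation {
    /** Pearson's r between two equal-length samples. */
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double meanX = 0, meanY = 0;
        for (int j = 0; j < n; j++) { meanX += x[j]; meanY += y[j]; }
        meanX /= n; meanY /= n;
        double cov = 0, varX = 0, varY = 0;
        for (int j = 0; j < n; j++) {
            double dx = x[j] - meanX, dy = y[j] - meanY;
            cov += dx * dy; varX += dx * dx; varY += dy * dy;
        }
        return cov / Math.sqrt(varX * varY);
    }

    public static void main(String[] args) {
        // Illustrative paired measurements: plasma MDA vs erythrocyte-conjugated dienes.
        double[] mda    = {3.1, 2.8, 3.6, 4.0, 3.3};
        double[] dienes = {520, 480, 610, 660, 555};
        System.out.printf("r = %.2f%n", pearson(mda, dienes));
    }
}
```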
3,085.6
2004-06-22T00:00:00.000
[ "Biology", "Medicine", "Chemistry" ]
Java Based Computer Algorithms for the Solution of a Business Mathematics Model — A novel approach is proposed as a framework for working out uncertainties associated with decisions between the choices of leasing and procurement of capital assets in a manufacturing industry. The mathematical concept of the tool is discussed, while the technique adopted is much simpler to implement and initialize. The codes were developed in the Java programming language and test-run and executed on a computer system running the Windows 7 operating system. This was done in order to solve a model that illustrates a case study in actuarial mathematics. Meanwhile, the solution obtained proves to be stable and promises to suit the growing demand for software for similar recurring cases in business. In addition, it speeds up the computational results. The results obtained using the empirical method are compared with the output and adjudged excellent in terms of accuracy and adoption. INTRODUCTION This paper is directly geared towards presenting mathematics as a veritable tool in financial investments, of which an average businessman must know the fundamental principles involved in the use of these tools [1]. However, the businessman ought not only to have the traditional personal qualities expected of a leader and expert in the field, but he must also know the fundamental principles involved in the use of the most modern tools for financial management [2]. Actuarial mathematics, or rather the mathematics of finance, is amongst these tools. This branch of mathematics is certainly not new, but with the proliferation and increased capacity of financial institutions today, it becomes very pertinent that the tools of the trade be advanced to match the growing trend [3]. An organization need not concern itself with the ways in which a certain model or formula is obtained, but must know as much as possible about the use of them in terms of their adaptability to its problems [2]. Hence an optimum advantage of this tool could be reached when there is the possibility of achieving a faster and more accurate solution whenever it is necessary [4]. Secondly, office automation has been in vogue among business communities. The acquisition of computers of all sizes and capacities often seems to be used only to measure the success of an establishment, yet such organizations hardly know that computers can be used extensively for solving problems in this field of mathematics. Therefore, it becomes imperative to make a tentative survey of what is essential for an organization to know about writing or using computer programs in solving problems of actuarial mathematics. Hence, part of this endeavor is to prepare and present a model with a computer program package that handles some of the inherent problems. This will go a long way to make the operations faster, more accurate and more reliable. A business model is a sustainable way of doing business, and sustainability stresses the ambition to survive over time and create a successful, perhaps even profitable, entity in the long run [5]. II.
PROBLEM STATEMENT The widespread use of computers in business organizations creates the need for some of the mathematical problems associated with these organizations to be solved with the aid of computers. Moreover, some of these problems are routine, and their solutions, if obtained, can easily be stored in a computer memory. With the above scenario in mind, several questions arise as regards the use of computers to solve such problems in actuarial mathematics via computer programs in order to achieve a faster and more accurate solution. This paper, however, seeks to provide an answer to a similar problem by developing and test-running a computer algorithm, written in the Java programming language, for the solution of a model case. III. METHOD OF ATTACK Programming is not only coding. Primarily it implies structuring the solution to a problem and then refining the solution step by step [6]. In the first instance, the application of mathematical and experimental techniques is facilitated, as compared to a manual procedure, when computer facilities are used. Furthermore, it will be instructive to outline the complete process of setting up a representative technical problem for computer solution, to see just what a person does and what a computer does. This is where programming helps, since a computer cannot follow direct orders. Effective answers to prevailing queries in actuarial mathematics make it necessary to achieve a precise and unambiguous statement of exactly what we want the computer to do in terms of operations of which it is capable [7]. Meanwhile, the stability of computer algorithms depends on the language used and on clearly defining what is to be done. The computer language imposes restrictions of its own in terms of what kinds of orders it can interpret and execute. Besides, there is much likelihood of making mistakes in programming. The mistakes must be located and corrected, and the program must be thoroughly tested to prove that it actually does what the writer meant it to do. Another thing that matters in ensuring stability is the correct interpretation of the output or result from the computer. The computer is faster and more accurate than a human being, but it cannot decide how to proceed or what to do with the result. On the general method of attack, there are several procedures involved in solving a problem with a computer. Moreover, in placing today's computing power in perspective, there is much more to solving a problem with a computer than the part the computer does [8]. In solving a problem, you have to define the problem itself. This is necessary since the computer cannot do it on its own. Secondly, we formulate the mathematical description of the process under study. It is also appropriate to use numerical analysis if the need arises. The algorithm is then formulated in a graphical form, which is the flowchart. Program checkout, or debugging, is carried out in order to minimize the chances of inputting garbage into the computer. Finally, the program can be combined with problem data and run. From here, the computer produces results or output, which should be interpreted correctly. For this paper, the Java programming language is used for the codes. IV.
THE TYPICAL PROBLEM This paper looks into the model problem as proposed by [1] and attempts to solve the same using a different programming approach. Suppose a company is considering either buying machinery for x or leasing it for y per month. Assume that money is worth i% for a given conversion period (which may be annual, semiannual, quarterly or monthly) and that the life of the machinery is M years, after which the salvage value becomes z. Suppose further that the company could purchase a maintenance contract for K per month. Then advise the company on whether they should buy or lease the machinery. Let the number of interest conversion periods per year be j; then the total number of interest periods is n = jM, where M is the asset's life span, and i is the interest rate per period in decimal form. If the total cost of buying is less than the present value of the lease payments, then the company will just buy the machinery for x instead of leasing it, which would be costlier. V. DERIVATION OF THE MODEL To solve the problem, find the present value of z, which is to be received at the end of M years, and deduct the result from x. The formula for the present value at compound interest is P = z(1 + i)^(−n) (1). Subtracting this from x gives the real present value, C = x − z(1 + i)^(−n) (2), which represents the present value of the cost of owning the machinery. The present value of the rent is required for M years. The present value of an ordinary annuity of payment R per period is A = R[1 − (1 + i)^(−n)]/i (3). However, this formula does not include a payment at the beginning of the term, and then the rental price would be L = y(1 + i)[1 − (1 + i)^(−n)]/i (4). Let K be the maintenance cost of the machinery, which may also be included in the rental price for the same period of time M years. Suppose the company could purchase a maintenance contract or cover other miscellaneous expenses for servicing the machinery; then the present value of the maintenance would be V = K[1 − (1 + i)^(−n)]/i (5). K may also be found by the use of the formula for depreciation. Generally we have T = C + V (6), which is the total cost of buying and maintaining the machinery. Then the following conclusions are drawn: 1) If T < L, then the company would be advised to buy the machinery. 2) If T > L, then the company would be advised to lease the machinery. 3) If T = L, then the company can take any of the options since they are equal. VI. APPLICATION OF THE MODEL Suppose the Company considers buying machinery for Ninety Thousand Pounds (£90,000.00) or leasing it for Three Thousand Pounds (£3,000.00) per month. Assume that the money is worth 12% compounded monthly and the life of the Machinery is Five (5) years, after which time the salvage value will be Twenty Thousand Pounds (£20,000.00). Suppose the Company could purchase a maintenance contract for One Thousand Pounds (£1,000.00) per month. Then advise the company on whether they should buy or lease the machinery. VII. SOLUTION TO THE REAL-LIFE CASE Finding the present value of Twenty Thousand Pounds (£20,000.00), which is to be received in five years: since the money is worth 12% compounded monthly, i = 0.12/12 = 0.01 and n = 12 × 5 = 60.
Using equation (1) gives P = 20,000(1.01)^(−60) ≈ £11,008.99. Meanwhile, the difference between the purchase cost and this present value, C = 90,000 − 11,008.99 ≈ £78,991.01, represents the present value of the cost of owning the machinery. Using equations (3) and (4) to find the present value of the rent for five years gives L = 3,000(1.01)[1 − (1.01)^(−60)]/0.01 ≈ £136,213.76; this is in line with the fact that the formula for an ordinary annuity does not include a payment at the beginning of the term. This certainly seems to indicate that the company should buy the machinery. This does not, however, consider other factors such as maintenance, which would be included in the rental price. Suppose the Company could purchase a maintenance contract for £1,000.00 per month. Then the present value of the maintenance contract would be V = 1,000[1 − (1.01)^(−60)]/0.01 ≈ £44,955.04. Therefore T = C + V ≈ £123,946.05, which is the total cost of buying and maintaining the machinery. Since T < L, which is translated as £123,946.05 < £136,213.76, the company would be advised to buy the machinery. VIII. COMPUTER ALGORITHM USING JAVA PROGRAMMING LANGUAGE To compute is to determine by mathematics, which does not generally depend on the programming language [9]. The point is that the algebraic expressions are directed towards providing a few specific answers to the posed questions. However, business programs tend to be oriented towards reading or accessing a great deal of data for processing information. Hence the use of a computer for solving and processing bulky data is justified if the solutions must be repeated a number of times, if the computer can perform such bulky calculations more rapidly and accurately than the manual procedure, if the solutions require a large amount of storage, and if the problem-solving process can facilitate a clearer understanding of the given problem. Generally, these instances outlined above can also be identified as the advantages of computer methods of solution over the manual procedures. The world is fast evolving, especially in the field of computers and computing. The development of applications has also evolved along these lines using sophisticated programming languages, one of which is Java [10]. Java is a general-purpose, object-oriented language that runs on billions of computers and mobile devices (cell phones, smartphones and handheld devices) worldwide. Java is used in a wide spectrum of applications and has three different editions, namely: Java Standard Edition 7 (Java SE 7), which was employed in developing the program for this paper; Java Enterprise Edition (Java EE), which is geared towards the development of large-scale, distributed networking applications and web-based applications [11]; and Java Micro Edition (Java ME), which is geared towards developing applications for small, memory-constrained devices such as BlackBerry smartphones and devices running Google's Android operating system, used on numerous smartphones and tablets (small, lightweight mobile computers with touch screens) [12]. Java has revolutionized how things work and will continue to do so for many more years to come. Java enables the development of applications that mimic how "objects" exist in the real world, thus making it more natural and easier to program. Importantly, Java is also an open-source development platform. IX. DISCUSSION This paper, which involves vital formulas essential for solving financial problems that often arise in business and industry, was diligently treated. Software codes to solve the model problem were developed in the Java SE 7 programming language, adopted and applied satisfactorily in the solution of the model problem.
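Since the paper's original listing is not reproduced, the following minimal, self-contained Java sketch implements the buy-versus-lease comparison exactly as derived in Section V; the class and variable names are chosen for illustration, and the values in comments are rounded.

```java
public final class BuyOrLease {
    // Present value of a single amount z received after n periods at rate i per period.
    static double presentValue(double z, double i, int n) {
        return z / Math.pow(1.0 + i, n);
    }

    // Present value of an ordinary annuity of payment r per period, equation (3).
    static double annuityPv(double r, double i, int n) {
        return r * (1.0 - Math.pow(1.0 + i, -n)) / i;
    }

    public static void main(String[] args) {
        double x = 90_000.0;   // purchase price (GBP)
        double y = 3_000.0;    // monthly rent (GBP)
        double z = 20_000.0;   // salvage value (GBP)
        double k = 1_000.0;    // monthly maintenance (GBP)
        double i = 0.12 / 12;  // 12% compounded monthly
        int n = 12 * 5;        // 5-year life

        double costOfOwning = x - presentValue(z, i, n);  // equation (2), ~78,991
        double rentPv = annuityPv(y, i, n) * (1.0 + i);   // equation (4), annuity-due, ~136,214
        double maintenancePv = annuityPv(k, i, n);        // equation (5), ~44,955
        double totalBuy = costOfOwning + maintenancePv;   // equation (6), ~123,946

        System.out.printf("Total cost of buying:  £%,.2f%n", totalBuy);
        System.out.printf("Present value of rent: £%,.2f%n", rentPv);
        System.out.println(totalBuy < rentPv ? "Advice: buy the machinery."
                                             : "Advice: lease the machinery.");
    }
}
```

Running the sketch reproduces the advice of Section VII: the total cost of buying is below the present value of the lease payments, so the company buys.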
The user interface was also designed using the NetBeans IDE (Integrated Development Environment), which simplified and eased the design process by generating code while graphical objects were dragged and dropped (most of the code as developed by the IDE is not reproduced in this paper because of space constraints). The program was executed on a computer system running the Windows 7 operating system. Using computers definitely improves the speed, accuracy and reliability of the solutions developed. The output is in good agreement with the results obtained using the empirical method. Thus it is a good approach to achieving feasible and stable solutions to management problems in business and industry. Emphatically, using a computer facilitates a faster means of solving challenging problems generally. X. CONCLUSION This paper brings to the limelight the importance of adopting mathematical models and computer programming in solving problems in businesses and industries. Actuarial mathematics as a veritable tool has been adjudged excellent for good and meticulous fiscal appropriations in business and industry. More so, the Java programming language proves in this research to be most friendly and versatile in use. These points justify that both tools are of great benefit to business and industry if they are fully utilized, proving that successes do not always depend on just plain luck.
3,089.8
2014-01-01T00:00:00.000
[ "Business", "Mathematics", "Computer Science" ]
Developing low-cost nanohybrids of ZnO nanorods and multi-shaped silver nanoparticles for broadband photodetectors Photodetectors are essential elements for various applications like fiber optic communication systems, biomedical imaging, and so on. Thus, improving the performance and reducing the material costs of photodetectors would act as a motivation toward the future advancement of those applications. This study introduces the development of a nanohybrid of zinc oxide nanorods (ZnONRs) and multi-shaped silver nanoparticles (MAgNPs) through a simple solution process, in which ZnONRs are hybridized with MAgNPs to enable visible absorption through the surface plasmon resonance (SPR) effect. The photodetector based on ZnONRs/MAgNPs is responsive to visible light with representative wavelengths of 395, 464, 532 and 640 nm, and it exhibits high responsivity (R), photoconductive gain (G) and detectivity (D). The maximum R is calculated from the fitting curve of the responsivity-power relation with the value of 5.35 × 10^3 mA W^−1 at 395 nm excitation. The highest G and D reach 8.984 and 3.71 × 10^10 Jones at that wavelength. This reveals the promise of our innovative broadband photodetector for practical usage. Introduction Photodetectors (PDs) play essential roles in almost every aspect of human life such as industry, the military, imaging, communication, etc.; therefore, studies in this field have continuously attracted scientists over the years. [1][2][3] These devices, operating mainly on the principle of light detection in suitable regions, can be classified into several groups: ultraviolet (UV), visible (Vis), infrared (IR) and sometimes broadband (UV to Vis, Vis to IR, even UV to IR), depending on the optical properties of their active materials. 4 For instance, while β-Ga2O3 is appropriate for UV PDs, as reported by the group of J. Yu, (In,Ga)N nanowires were exploited to fabricate a device used in the visible region. 5,6 Otherwise, Zeng and partners demonstrated that an epitaxially grown PtTe2-based PD had the ability to detect light in the IR region at wavelengths up to 10.6 μm. 7 However, researchers are now moving towards broadband photodetectors, which can operate in a wide range of wavelengths, in order to take advantage of the various components of the solar spectrum. In 2021, a two-dimensional layered Ta2NiSe5 photodetector reported by Y. Zhang et al. showed a noticeably high performance, with a responsivity of 198.1 A W^−1. 8 Around the world, this type of device has been investigated so far by many groups, indicating the high demand for a device that is sensitive to different light sources. [9][10][11] In terms of photodetector fabrication, various low-dimensional (0D, 1D, 2D) materials can be applied. 12 Among them, 1D metal oxides (CeO2, Cu2O, SnO2, etc.) with unique morphologies possess superior sensitivity to light, thus making this type of material prevalent in photodetector fabrication. 13 Up to now, 1D zinc oxide (ZnO), particularly ZnO nanorods (NRs), has been investigated deeply because of outstanding properties such as durability, a large exciton binding energy of 60 meV and simple synthesis processes. 14-16 Moreover, the morphology and orientation of ZnONRs (1D), which have a big impact on the performance of ZnO-based photodetectors, can be easily controlled by changing the preparation conditions.
17 However, ZnONRs have never been a suitable material for broadband detection due to several problems, especially the large band gap of 3.3 eV, which means that devices based on this material can operate only in the UV region, which accounts for only 4% of the solar spectrum. [18][19][20] Therefore, modifying ZnONRs to improve their optical properties has been of research interest for many years. 21,22 For years, scientists have sought solutions to overcome the limitations of ZnO's absorption, and several modification methods, such as doping with transition metals and decorating with noble metals, have been intensively surveyed. [23][24][25][26] The former method utilizes transition metal elements including Cu, Ti, Co, Mn, etc., on the basis of replacement of the host atoms with the dopant metal atoms. 27 For instance, by doping copper into the ZnO lattice, energy levels of Cu+ and Cu2+ localize inside the band gap of the host material, leading to a narrowed optical band gap. 18,28 For the latter, metal nanoparticles loaded onto ZnO provide the resonant oscillation of electron clouds under visible excitation, known as the surface plasmon resonance (SPR) effect, which can effectively contribute to the visible detection of the photodetectors. 29 Both of these modification approaches have been widely investigated by scientists, and significant enhancements in the performance of ZnO-based photodetectors have been reported; 30,31 nevertheless, the discussed methods still have some aspects that need to be improved further. Indeed, although the doping solution has been proven to be an effective way to shift the absorption edge to the visible region, some demerits such as long photo-response times and the difficulty of controlling the defects make the industrial manufacture of ZnO-doped photodetectors difficult. 28,32 Besides, the decoration procedure usually requires state-of-the-art facilities such as physical deposition systems, which is high-cost and time-consuming. 33 Finally, the most problematic challenge is that the modified ZnO-based photodetectors in previous reports were only sensitive to a specific wavelength of just around 400 nm, which is not ideal for the desired broadband devices. 28,29,34 In this study, for the first time, we employed a solution-processed nanohybrid of ZnONRs and multi-shaped silver nanoparticles (MAgNPs) as the active channel of a resistive-type photodetector and report the device's outstanding response towards a wide range of wavelengths. Through various characterizations and measurements, our photodetector is confirmed to be sensitive to light of various wavelengths, including purple (395 nm), blue (464 nm), green (532 nm) and red (640 nm), with relatively good responsivities of 5.35 × 10^3, 9.84 × 10^2, 5.0 × 10^2 and 5.92 mA W^−1, respectively. Our simple and low-cost process can be applied in the manufacture of ZnO-based broadband PDs and enables a new domain of optoelectronic devices operating at a diversity of wavelengths. Methods The hydrothermal synthesis method of ZnONRs in this study followed the reported work. 18 At first, a seed layer of ZnO NPs (dispersed in ethanol) was spin-coated onto a glass substrate at the rate of 3000 rpm for 30 seconds, followed by a heat treatment at 95°C for 1 hour in order to evaporate the solvent. Next, ZnONRs were hydrothermally grown on the prepared substrates at 95°C for 3 hours with a nutrient solution containing 50 mM of Zn(NO3)2·6H2O and 50 mM of HMTA.
Aer gently rinsing with water and drying by N 2 , decorating MAgNPs on the ZnONRs sample was carried out by a photoreduction procedure. Particularly, the ZnONRs sample was immersed in the MAgNPs solution and then irradiated toward ultraviolet light for 1 hour and then dried at 40°C. Here, the synthesis process of MAgNPs was described in Fig. S1 (ESI †). Device fabrication and characterizations At rst, silver electrodes were patterned on glass substrate by a sputtering process using a shadow mask. Then, a layer of ZnO NPs seed solution was formed on active channel with the area of 0.6 mm 2 , followed by the growth of ZnONRs through a hydrothermal step. Finally, the photodetector was completed by immersing the as-grown ZnONRs substrate in the MAgNPs solution under UV irradiation to decorate the nanoparticles onto the nanorods. The crystal structure of ZnONRs and ZnONRs/MAgNPs was recorded by X-ray diffraction (XRD) spectroscopy performed on the D8 Advance-Bruker diffractometer with the monochromatic Cu-Ka radiation (l = 1.54 Å). The surface morphology and elemental composition (EDX) of the assynthesized samples were examined using a eld emission scanning electron microscope (Model JSM-6500F, JEOL Co. Ltd). The shape of MAgNPs were investigated by a transmission electron microscopy system (TEM, JEOL, JEM-1400) while their size are evaluated through hydrodynamic size by a nanoparticle analyzer (HORIBA, SZ-100). The optical properties were measured through Ultraviolet-Visible (UV-Vis) spectrophotometer (JASCO V670). In terms of the photodetector, the device's performance was assessed through the currentvoltage (I-V) relation and current depending on time (I-t) curves which were recorded by the system Keithley 2400. 35,36 Noticeably, we found a small peak at 2q = 38.09°a ppeared in the XRD pattern of the ZnONRs/MAgNPs sample, which can be attributed to the (111) plane of Ag. 37,38 Here, a small signal of Ag in the EDX spectrum (Fig. 2b) conrmed the existence of MAgNPs in the ZnONRs/MAgNPs sample (without Ag electrodes). Besides, a small loading of these nanoparticles onto ZnONRs does not affect the structure of the host material as the XRD peaks remain unchanged. Structural properties Regarding the materials morphologies, TEM image of MAgNPs (Fig. 2c) shows various shapes of the Ag nanoparticles ranging from circle to triangle, oval with the sizes of several dozens of nanometers. The diversity in shape and size of MAgNPs may be the element contributing to the visible absorption enhancement of the ZnONRs/MAgNPs hybrid structure, which will be discussed later. The surface morphology of the hybrid sample is shown by FE-SEM image in Fig. 2d. Overall, the ZnONRs were grown into a hexagonal structure with high density. Around the nanorods, there are some particles attributed to MAgNPs. Along with XRD and EDX results, this FE-SEM image once again conrms the successful decoration of MAgNPs onto the ZnONRs. Fig. 3a shows the UV-vis absorption spectra of a pristine ZnONRs sample and a nanohybrid ZnONRs/MAgNPs sample. It can be clearly observed that both samples demonstrate absorption peaks at nearly 368 nm due to the excitonic absorption of ZnO. 39,40 It can be clearly observed that both samples demonstrate absorption peaks at nearly 368 nm due to the excitonic absorption peaks of ZnO. 
For pristine ZnO, some absorption in the visible region is detected, which may be a result of band tail formation inside the ZnONRs during the synthesis process; 41,42 however, this absorption is small in comparison with that of the ZnONRs/MAgNPs sample. Furthermore, in the ZnONRs/MAgNPs sample, a wide absorption region ranging from just under 400 nm to more than 500 nm is observed, which is assigned to the surface plasmon resonance (SPR) effect of the MAgNPs. 43,44 In fact, since the sizes of the silver nanoparticles have a certain effect on the SPR wavelengths, the wide SPR band can be explained by the size diversity of the MAgNPs, which is confirmed by the dynamic light scattering (DLS) analysis in Fig. 3b. 45,46 Therefore, the TEM and SEM images (Fig. 2c and d) and the UV-Vis and DLS spectra (Fig. 3a and b) confirm the successful synthesis and decoration of MAgNPs onto the ZnONRs. Owing to its appropriately wide absorption, the hybrid ZnONRs/MAgNPs structure was chosen to fabricate a broadband photodetector, and 395, 464, 532 and 640 nm light was employed as the excitation sources to evaluate its performance.

Photodetector characteristics
The I-V characteristics of the photodetector under 395 nm light at different intensities (P) are measured and presented in Fig. 4a. Here, the linear I-V relations, with high symmetry under forward and reverse bias, reveal a good ohmic metal-semiconductor contact. 47 Accordingly, as the light intensity rises, the current goes up remarkably, and the ZnONRs/MAgNPs photodetector exhibits a comparatively good on/off ratio of 1.744 × 10³ at P = 37 mW cm⁻² under 1 V bias. Notably, there is almost no disparity between the dark and recovery lines in the I-V characteristics, meaning that the device possesses stable performance. To gain deeper insight into the operation of the hybrid photodetector, time-dependent photocurrent measurements (I_ph-t), in which I_ph = I_light − I_dark, 48 are taken into consideration. The photocurrent data at a constant bias of 1 V are shown in Fig. 4b. Clearly, the current rises when the 395 nm light is turned on and falls when the light is turned off. In particular, the photocurrent climbs with increasing light intensity, explained by the relationship between I_ph and P according to the formula I_ph = A × P^q, 49 in which A is a wavelength-dependent constant and q is the exponent. The recorded photocurrents match well with the I-V characteristics presented previously. The device's response time is determined as the time for the photocurrent to reach 63% of its maximum, and the recovery time is defined as the time for it to return to ca. 37% of its highest value. 50,51 Here, we found that the response time and recovery time were about 27.04 and 15.81 seconds, respectively, under 395 nm illumination. Although the response and recovery times are still comparatively long, they are shorter than those of some ZnO-based photodetectors, 29,52,53 indicating the potential of using our nanohybrid device for practical applications. Other crucial parameters of a photodetector, such as the responsivity (R), photoconductive gain (G) and detectivity (D), are calculated to evaluate the device's performance.
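The two quantitative analyses just described — the power-law fit I_ph = A × P^q and the 63%/37% response/recovery-time extraction — can be sketched in a few lines of Python. The trace below is synthetic, built from the reported 27.04 s and 15.81 s time constants; the intensity and photocurrent arrays are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch of the two analyses described above, on synthetic data:
# (i) fitting the photocurrent-intensity relation I_ph = A * P**q, and
# (ii) extracting the 63%/37% response and recovery times from an I-t trace.
import numpy as np
from scipy.optimize import curve_fit

# (i) power-law fit of photocurrent vs intensity (hypothetical values)
P = np.array([5.0, 10.0, 20.0, 37.0])          # light intensity, mW/cm^2
I_ph = np.array([0.8, 1.3, 2.1, 3.2])          # photocurrent, a.u.
popt, _ = curve_fit(lambda p, A, q: A * p**q, P, I_ph)
print(f"A = {popt[0]:.3f}, q = {popt[1]:.3f}")

# (ii) response/recovery times from a single on/off cycle
t = np.linspace(0, 120, 2401)                   # seconds
tau_r, tau_d, t_off = 27.04, 15.81, 60.0        # reported time constants set the scale
I = np.where(t < t_off, 1 - np.exp(-t / tau_r), np.exp(-(t - t_off) / tau_d))

i_rise = np.argmax(I >= 0.63)                   # first crossing of 63% of the on-state value
response_time = t[i_rise]
i_fall = np.argmax((t > t_off) & (I <= 0.37))   # decay back to ~37% of the highest value
recovery_time = t[i_fall] - t_off
print(f"response ~ {response_time:.1f} s, recovery ~ {recovery_time:.1f} s")
```

For a single-exponential rise/decay, the 63% and 37% crossings reproduce the time constants themselves, which is why this operational definition is convenient.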
First, R is dened as the ratio between the generated photocurrent and the incident light intensity, as described by: Then, the photoconductive gain, determined as the number of carriers detected per an absorbed photon can be obtained by applying the equation: And nally, detectivity that represent the ability to detect weak signals of light, is assessed by: where I ph , P and A are the photocurrent, the incident light intensity and the effective device area (0.6 mm 2 ) in the given order, h stands for the Planck's constant, c represents the velocity of light, l and e are the wavelength and the electron charge, respectively. [54][55][56] According to the mentioned platform, R, G, D of the ZnONRs/ MAgNPs device under 395 nm-light exposure are measured and presented in Fig. 5. It can be seen in this gure that R, G and D decline as the light intensity goes up. Particularly in Fig. 5a, the collected experimental data of R tted relatively well with the function RðPÞ ¼ c þ b P þ a ; therefore, the maximum R achieved at very low excitation power (P / 0) is ca. 5.35 A W −1 . Besides, Fig. 5b indicates that the highest recorded value of G and D under the same 395 nm-light exposure are 8.98 and 3.71 × 10 10 Jones, respectively. Interestingly, similar behaviors in both I-V and I ph -t relations were also observed when the photodetector was exposed to other wavelengths including 464 nm ( Fig. S2a and b †), 532 nm ( Fig. S2c and d †) and 640 nm ( Fig. S2e and f †). Besides, photocurrent of both pristine ZnONRs and ZnONRs/MAgNPs photodetectors were measured and plotted versus wavelengths (Fig. S3 †). Clearly, pristine ZnONRs device does not exhibit response towards light in invisible region, except a small rise in photocurrent under 395 nm illumination, which may be the result of band tail formation. 41,42 However, this value is trivial compared with that of ZnONRs/MAgNPs device. The response and recovery times of the device toward each wavelength are evaluated and listed in Table S1. † The R, G, and D values as a function of P toward each wavelength can be assessed through Fig. S4. † Clearly, the values of R, G, and D toward 464 nm or 532 nm wavelengths ( Fig. S4a and d †) slightly decreased when P increased but its R values was also t with the same function as exposed toward 395 nm light. However, when excited by the 640 nm light source, the device witnesses a rise in all three mentioned parameters at the high intensity ( Fig. S4e and f †). We assumed that this was partly due to the thermal effect, which arose under the exposure condition of long wavelength (640 nm) at high intensity and contributes to the generation of charge carriers. Thus, the highest R of our photodetector under this wavelength is reported at P = 1.65 mW cm −2 . Broadband photodetection property of our hybrid device is demonstrated in Fig. 6a. Herein, under several excited wavelengths, the values of the photocurrent climb, revealing the sensitivity of wide-range excitations. The photodetector's stability is also investigated and represented by Fig. 6b. Apparently, under repeated stimulation when the light is continuously turned on and off, the changes in the device's performance are negligible, exhibiting the prospect of longterm operation. Table 1 shows the summary of the ZnONRs/MAgNPs photodetector and comparison with several studies. Clearly, our device exhibits sensitivity to longer wavelengths under lower bias voltage compared with the others. 
Furthermore, the photodetector's typical parameters, such as R, G and D, are comparable to those of devices reported by other research groups. Although the values of R, G and D obtained in our study are not superior, the simple solution-processed ZnONRs/MAgNPs material is still promising for practical broadband detection applications.

Sensing mechanism
Energy band diagrams of the hybrid structure are depicted in Fig. 7. Under dark conditions (Fig. 7a), a metal-semiconductor contact is formed at the interface between the ZnONRs and the MAgNPs due to the alignment of their Fermi levels. 51 Under light exposure (Fig. 7b), the SPR effect of the MAgNPs occurs. Indeed, the incident photons are absorbed by the MAgNPs, which leads to the oscillation of their electron clouds at a characteristic frequency. If the frequency of the excitation light matches the specific frequency of the electron clouds, these clouds undergo resonant oscillation and the electrons inside the MAgNPs become highly energetic, known as "hot electrons". Consequently, because the excited state of the hot electrons (the SPR state) lies higher than the conduction band (CB) of the ZnONRs, these hot electrons easily move towards the ZnONRs and transfer to the electrodes, generating the photocurrent through the device. 24,29,64,65 Therefore, the response of the ZnONRs/MAgNPs hybrid structure to visible illumination is strongly attributed to the SPR effect occurring in the MAgNPs. Moreover, since it has been demonstrated that the SPR wavelength becomes longer as the size of silver nanoparticles increases, 46 the use of MAgNPs in this work plays a significant role. Indeed, because the synthesized MAgNPs possess different shapes (triangles, spheres, ovals, etc.), which leads to various sizes, the SPR effect can occur at many wavelengths; therefore, the broadband response of the ZnONRs/MAgNPs photodetector is observed.

Conclusions
In summary, a ZnONRs/MAgNPs hybrid broadband photodetector was fabricated by a simple solution procedure. Due to the novel utilization of MAgNPs with different shapes and sizes, the device reveals a noticeable sensitivity to a wide range of wavelengths, including 395 nm, 464 nm, 532 nm and 640 nm, with maximum responsivities of 5.35 × 10³, 9.84 × 10², 5.0 × 10² and 5.92 mA W⁻¹, respectively. The device's other parameters also exhibit acceptable values, from 0.011 to 8.984 for the photoconductive gain and from 3.3 × 10⁷ Jones to 3.71 × 10¹⁰ Jones for the detectivity. This simple hybrid structure is believed to pave the way for studies into a new domain of high-performance broadband photodetectors in the future.

Conflicts of interest
There are no conflicts to declare.
4,377.8
2023-07-12T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
The Myriad Uses of Instantons

In quantum chromodynamics (QCD), the role which topologically non-trivial configurations play in splitting the singlet pseudo-Goldstone meson, the $\eta^\prime$, from the octet is familiar. In addition, such configurations contribute to other processes which violate the axial $U(1)_A$ symmetry. While the nature of topological fluctuations in the confined phase is still unsettled, at temperatures well above that for the chiral phase transition they can be described by a dilute gas of instantons. We show that instantons of arbitrary topological charge $Q$ generate anomalous interactions between $2 N_f |Q|$ quarks, which for $Q = 1$ make the $\eta^\prime$ heavy. For two flavors we compute an anomalous quartic meson coupling and discuss its implications for the phenomenology of the chiral phase transition. A dilute instanton gas suggests that for cold, dense quarks, instantons do not evaporate until very high densities, when the baryon chemical potential is $\gtrsim 2$ GeV.

In quantum chromodynamics (QCD), the up, down and strange quarks are relatively light, and there is an approximate global flavor symmetry of SU(3)_L × SU(3)_R × U(1)_A. When the hadronic vacuum spontaneously breaks chiral symmetry, a flavor octet of light pseudo-Goldstone bosons is generated, which are the π, K, and η mesons of broken SU(3)_L × SU(3)_R. When QCD first emerged, it was a puzzle why there isn't an associated ninth pseudo-Goldstone boson in the flavor singlet channel, the η′, from the breaking of the axial U(1)_A symmetry. This occurs because, while classically there is an axial U(1)_A symmetry, it is not valid quantum mechanically because of an anomaly [1]. There are topologically nontrivial fluctuations which violate the U(1)_A symmetry [2] and make the η′ heavy [3]. Classically these configurations are instantons: they have a topological winding number equal to an integer Q, and a (Euclidean) action equal to 8π²|Q|/g², where g is the coupling constant of QCD. Instantons split the singlet η′ from the octet of pseudo-Goldstone bosons, and also generate the θ parameter of QCD [5].

There are several open questions regarding the nature of topological fluctuations in the QCD vacuum. In the absence of a large energy scale to cut off the size of the instantons, fluctuations on all length scales become relevant and the integration over their contribution blows up. This is cured non-perturbatively through confinement, where dense topologically non-trivial fluctuations may form an instanton liquid [15][16][17]. Furthermore, it is expected that QCD behaves smoothly as the number of colors, N_c, goes to infinity [38,39]. In this limit, the contribution of a single instanton vanishes exponentially, while current algebra can be used to show that the η′ is still split from the octet of pseudo-Goldstone bosons [38]. This could occur if there are topologically non-trivial fluctuations whose topological charge is not an integer, but an integer times 1/N_c; in certain limits, such as for adjoint QCD on a femto-torus, this can be shown semi-classically [29,30]. However, if the effective coupling is small, e.g. at high temperature or quark density, then a semi-classical analysis is valid, and topologically non-trivial fluctuations can be approximated as a dilute instanton gas [11,12]. Numerical simulations of lattice QCD provide insight into how the topological structure changes with temperature [31][32][33][34][35][36][37].
Remarkably, these demonstrate that the overall power dependence of the topological susceptibility on the temperature T is given by a dilute instanton gas above temperatures as low as a few hundred MeV [33][34][35]. In this Letter we address a modest problem and consider quantities which are nonzero only because of topologically nontrivial configurations, using a dilute instanton gas as an illustrative example. Studies of the phenomenological implications of the axial anomaly, including the effects mentioned above, have been based on effective quark interactions that are generated in a dilute gas of instantons of unit topological charge [4]. Here we generalize this by demonstrating that effective 2N_f|Q|-quark interactions are generated in a dilute gas of instantons of arbitrary topological charge Q [7][8][9]. Even though semi-classically such topological field configurations are suppressed exponentially, these interactions can give rise to novel anomalous effects related uniquely to fluctuations of higher topological charge. We explicitly work out the local effective interaction for Q = ±2 for the case where the color orientations of the instantons are aligned. At low energies and for two quark flavors this is a quartic meson interaction. We study its qualitative impact on the mass spectrum within a simple mean-field picture. An appendix includes technical details of the computation.

Multi-instanton-induced interactions. We start with an analysis for arbitrary topological charge, generalizing that of 't Hooft [4]. We consider the generating functional of QCD for Gaussian fluctuations around a background of instantons with topological charge Q, which we term Q-instantons. For a Q-(anti-)instanton background, massless quarks have N_f|Q| (right-) left-handed zero modes [6]. We show that the functional zero mode determinant of quarks has the structure of a 2N_f|Q|-quark correlation function and compute its coupling constant in a dilute gas of Q-instantons. The zero modes of gauge fields arise from symmetries, such as translations, that yield inequivalent instanton solutions. This defines a moduli space which is parametrized by the collective coordinates of the instantons. The general Q-instanton has been constructed by Atiyah, Drinfeld, Hitchin and Manin (ADHM) [8,9]. It can be viewed as a superposition of Q instantons with unit charge, where each constituent is described by a location z_i, a size ρ_i and an orientation in the gauge group U_i. There are then 4N_c collective coordinates for each constituent-instanton, so the moduli space of the Q-instanton has dimension 4N_c|Q|. Schematically, the generating functional is

Z^(Q)[J] = ∫ Dχ exp( −S[χ + χ^(Q)] + ∫ ψ̄ J ψ )
         = ∫ dC_Q n_Q det₀(J) ,    (1)

where χ = (A_μ, c, c̄, ψ, ψ̄) contains the fluctuating gluon, ghost and quark fields and χ^(Q) = (A^(Q)_μ, 0, 0, 0, 0) contains the Q-instanton background field A^(Q)_μ. S[χ] is the gauge-fixed action of QCD in Euclidean spacetime. In the second line we integrate the path integral over the nonzero modes to leading order in the saddle-point approximation, leaving only the integration over the collective coordinates C_Q. The instanton density n_Q contains the functional determinants of the zero and non-zero modes of gluons and ghosts, the non-zero mode determinant of the quarks and the Jacobian from changing the integration over zero modes to collective coordinates [40]. Our main ingredient is the zero modes of massless quarks [10].
Due to the axial anomaly, the Dirac operator in the presence of the Q-instanton, /D^(Q) = γ_μ(∂_μ + A^(Q)_μ), has N_f|Q| zero modes ψ^(Q)_{f i}(x), where f = 1, ..., N_f is an index for flavor and i = 1, ..., |Q| is a topological charge index. Because of the zero modes, the generating functional is only nonzero in the presence of a source J, which generates the quark zero mode determinant, det₀(J), in Eq. (1). The generating functional in Eq. (1) has first been computed for Q = 1 and N_c = 2 [4] and arbitrary N_c [41]. For |Q| > 1, the generating functional at one loop order is only known in certain limits [19]. One limit where one can compute is when the distances between the locations of the constituent-instantons are much larger than their sizes, i.e. |R_ij| ≡ |z_i − z_j| ≫ ρ_i for all i ≠ j. In this case, at leading order, the Q-instanton can be viewed as |Q| instantons of unit charge which are well separated. Expanding the general ADHM solution in this dilute limit [9], the path integral factorizes into a product of constituent-instanton contributions,

Z^(Q)[J] = (1/Q!) ∏_{i=1}^{Q} ∫ dC_i n₁(ρ_i) det₀^(i)(J) .    (2)

For ease of notation, we assume Q > 0, as anti-instantons with Q < 0 can be treated similarly. The factor of Q! arises because the single instantons can be treated as identical particles. The collective coordinate measure for the i-th constituent-instanton is dC_i = dρ_i d⁴z_i dU_i. dU_i is the Haar measure of the coset space SU(N_c)/I_{N_c}, where the stability group of the instanton, I_{N_c}, is given by all SU(N_c)-transformations that leave the instanton unchanged. We emphasize that in the dilute limit the instanton density only depends upon the sizes ρ_i. Deriving the quark zero modes at leading order for the dilute Q-instanton using the methods of [10], one finds that they are simply given by the corresponding zero modes for Q = 1, so that the quark zero mode determinant factorizes and Z^(Q)[J] = (Z^(1)[J])^Q/Q!. Thus, for a dilute gas of Q-instantons, the resulting effective Lagrangian is the Q-th power of the 't Hooft determinant, with each determinant integrated separately over space-time. What we require, however, is a local interaction, given by a single integral over space-time of the Q-th power of the 't Hooft determinant. To find this, one needs to account for the overlap between the constituent-instantons. To order ρ⁴/(R²)² the only change we need to account for is the difference in the quark zero modes [9]. The zero mode for the Q = 1 instanton in singular gauge, Eq. (3), contains a factor γ_μ(x − z)_μ acting on a right-handed spinor ϕ_R, so that the zero mode itself is left-handed. It will be useful later to note that far from the instanton the quark zero mode is proportional to the free quark propagator ∆(x) = γ_μx_μ/(2π²(x²)²). For simplicity we consider instantons with charge two, assuming that the constituent-instantons are aligned in color space. Using the zero modes of Ref. [10], the 2N_f zero modes for Q = 2 can be expressed in terms of the Q = 1 zero modes plus an overlap term X_i: for dilute instantons the Q = 2 zero modes decompose into separate Q = 1 zero modes, connected by the overlap term X_i. In general, the determinant depends on the locations of both constituent-instantons, z₁ and z₂, which can be rewritten as an average position z = (z₁ + z₂)/2 and their separation R₁₂. Integrating over R₁₂, the zero mode determinant becomes proportional to an overlap factor I_{N_f}, Eq. (6), which measures the overlap of the zero modes. For one flavor the overlap integral is infrared-divergent, requiring a cutoff at large distances |x − z_i|. Presumably, this cutoff is set by the average separation between an instanton and an anti-instanton.
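The statement that the zero mode far from the instanton is proportional to the free propagator ∆(x) = γ_μx_μ/(2π²(x²)²) rests on the scalar kernel 1/x² being harmonic in four Euclidean dimensions, since ∆(x) = −(1/(4π²)) γ_μ∂_μ(1/x²). The sympy check below is our own sanity check, not part of the original derivation:

```python
# Minimal sketch: verify that 1/x^2 is harmonic in 4d Euclidean space away from
# the origin. Harmonicity of 1/x^2 implies gamma.d Delta(x) = 0 for x != 0,
# i.e. Delta(x) = gamma_mu x_mu / (2 pi^2 (x^2)^2) is annihilated by the free
# Dirac operator there, as used in the text.
import sympy as sp

x1, x2, x3, x4 = sp.symbols("x1 x2 x3 x4", real=True)
coords = (x1, x2, x3, x4)
r2 = sum(xi**2 for xi in coords)

laplacian = sum(sp.diff(1 / r2, xi, 2) for xi in coords)
print(sp.simplify(laplacian))  # -> 0
```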
For two or more flavors, a local interaction is generated when all quark zero modes are close to the same constituent-instanton [42]. In this case we find that, because zero modes approach free quark propagators at large distances (3), the zero mode determinant (6) has the form of a 2N_fQ-quark correlation function. Hence, in direct generalization of [4], the generating functional in the presence of dilute 2-instantons gives rise to an effective interaction between 4N_f quarks. Assuming that the topological fluctuations are described by a dilute gas of instantons, the contribution from dilute Q = 2 instantons and anti-instantons generates an anomalous contribution, Eq. (8), to the local effective Lagrangian in the color-singlet channel [43], where P_{R/L} = (1 ± γ₅)/2 are the right-/left-handed projection operators and K_{Q,N_f} = (Q!)^{N_f}/(N_fQ)! is a combinatorial factor. The effective coupling κ₂ in this semi-classical analysis is given in Eq. (9). This result generalizes the instanton-induced local interaction to topological charge Q = 2. We note that, while the effective action induced by a single instanton breaks U(1)_A down to the cyclic group Z_{N_f}, the Q = 2 contribution has a larger residual Z_{2N_f} symmetry. The computation outlined here can be generalized to arbitrary topological charge and will be discussed in a future publication [44].

A low energy model. To illustrate the physical effect of interactions induced by higher topological charge, we consider a linear sigma model for N_f = 2 that includes all anomalous interactions up to quartic order. These are generated in a dilute gas of instantons and anti-instantons with Q = 1 and 2. Classically, the global chiral symmetry is SU(2)_L × SU(2)_R × U(1)_A; the anomalous terms break U(1)_A explicitly. Effective mesons are given by Φ = (σ + iη) + (a₀ + iπ)·τ, with the Pauli matrices τ. The resulting Lagrangian, Eq. (11) (cf. e.g. [23]), is the sum of a classical part and the anomalous terms proportional to χ₁ and χ₂ discussed below.

[Fig. 1: meson masses as functions of the reduced temperature, shown for two values of the anomalous quartic coupling (legend values χ₂ = 0 and χ₂ = 10).]
We emphasize that taking into account the contributions from both instantons and anti-instantons is necessary to ensure CP-invariance. The term ∼ χ₁ arises from bosonizing the usual 't Hooft determinant from instantons with Q = ±1, while the term ∼ χ₂ is generated by bosonizing the interactions with Q = ±2 in Eq. (8) [45]. We focus on the mass spectrum of mesons in the mean-field approximation. We use the σ-, η- and a₀-meson masses and f_π to fix four of the five parameters of L in the vacuum. Chiral symmetry breaking is controlled by the mass parameter m². By varying m² relative to its vacuum value, in Eq. (H12) we define a reduced temperature t = t(m²), where t = 0 corresponds to the vacuum and t = 1 to the chiral phase transition. By choosing χ₂ as a free parameter, we can study the impact of the topological charge-two term on the masses in the phases with broken and restored chiral symmetry. The resulting mass spectrum is shown in Fig. 1. The splitting between the pion and eta masses is due exclusively to the axial anomaly in the chiral limit. Since χ₂ is a quartic coupling, its contribution to the masses is proportional to the chiral condensate. As the condensate melts, this contribution vanishes, so that χ₁ is the only anomalous contribution to the masses in the symmetric phase. The larger we choose χ₂, the smaller χ₁ has to be to reproduce the correct vacuum masses. In the chirally symmetric phase m_σ = m_π and m_η = m_{a₀}, but m_σ ≠ m_η when χ₁ ≠ 0. Even when χ₁ is small, however, we stress that there are still anomalous effects in the chirally symmetric phase from nonzero χ₂. These manifest themselves in correlation functions of quartic and higher order.

Needless to say, the effects generated by the anomalous coupling from instantons with Q = ±2 depend upon how large it is in vacuum and how rapidly it decreases with temperature T and quark chemical potential μ. In vacuum, the nature of the dominant fluctuations in topological charge is certainly a formidable problem in non-perturbative physics. To estimate these effects, we use a simple gas of dilute instantons. To this end, we adopt a crude bosonization scheme, which yields simple relations between the anomalous meson couplings in (11) and the corresponding quark couplings in the dilute instanton gas, where κ₂ is given in (9). We introduce a mass scale, M, which is a fundamental parameter of our effective theory. Motivated by the complete computation at one loop order [4], and the partial computation at two loop order [18], for three colors and two massless flavors we take the density of a single instanton in the vacuum to be

n₁(ρ) = (d_MS / ρ⁵) (8π²/g²)⁶ exp(−8π²/g²) ,    (14)

where g² = g²(ρΛ_MS) is the running coupling constant at two loop order and d_MS is a renormalization-scheme-dependent constant.
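The shape of n₁(ρ) discussed next can be reproduced numerically. The sketch below assumes the standard one-loop 't Hooft form quoted above, with the (8π²/g²)⁶ factor from the collective coordinate Jacobian for N_c = 3, and a generic two-loop running coupling; the scheme constant d_MS is set to 1, so only the location of the maximum, not the normalization, is meaningful.

```python
# Minimal numerical sketch of the instanton size distribution, assuming
# n1(rho) ~ rho^-5 (8 pi^2/g^2)^(2 Nc) exp(-8 pi^2/g^2) with a two-loop running
# coupling (our generic parametrization, not the paper's exact expression).
import numpy as np

Nc, Nf = 3, 2
b0 = 11 * Nc / 3 - 2 * Nf / 3                        # one-loop beta coefficient
b1 = 34 * Nc**2 / 3 - Nf * (13 * Nc / 3 - 1 / Nc)    # two-loop beta coefficient

def action(rho_lambda):
    """8 pi^2 / g^2 at scale 1/rho, a standard two-loop approximation (assumption)."""
    t = np.log(1.0 / rho_lambda)
    return b0 * t + (b1 / (2 * b0)) * np.log(2 * t)

rho = np.linspace(0.05, 0.65, 2000)                  # rho * Lambda_MS
S = action(rho)
n1 = rho**-5.0 * S**(2 * Nc) * np.exp(-S)            # size distribution, d_MS = 1

print(f"n1(rho) peaks at rho*Lambda ~ {rho[np.argmax(n1)]:.2f}")  # close to 1/2
```

The competition in the exponent and the S⁶ prefactor produces the pronounced maximum near ρΛ ≈ 1/2 described in the following paragraph.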
The apparent simplicity of our form for the instanton density belies a major assumption: that everywhere the coupling g² appears, we can replace it with g²(ρΛ_MS). This assumption, while admittedly extreme, is both simple and useful. Owing to the interplay between the running coupling from the classical action in the exponential and the factor ∼ g⁻¹² from the collective coordinate Jacobian, n₁(ρ) develops a pronounced maximum at ρΛ_MS ≈ 1/2. For typical values of Λ_MS ≈ 300 MeV [46], this implies typical instanton sizes of ρ ≈ 1/3 fm, which is consistent with the value in an instanton liquid [15][16][17]. Of course we cannot compute reliably at large ρ, since inevitably the instanton size is comparable to the confinement scale, and semi-classical approximations break down. Since the two anomalous couplings χ_{1,2} now only depend on a single free parameter M, we can redo the mean-field analysis of the meson masses and find a unique value for M in the vacuum. From the dilute instanton gas with Λ_MS = 300 MeV we find χ₁ = 0.33 GeV² and χ₂ = 0.57 in vacuum. However, χ₂ and all other anomalous effects are very sensitive to the value chosen for Λ_MS.

We conclude by discussing how the dilute instanton gas evaporates as T and μ increase. For a single instanton we approximate the change of the instanton density for three colors and two flavors by a medium suppression factor involving the Debye mass m_D²(T, μ) at leading order and the function A(x) determined in [11,12]. Owing to the screening of the color-electric field in the medium, the instanton density decreases both with increasing T and μ. We find that instanton effects are decreased to 10% of their vacuum strength at about T ≈ 0.7 Λ_MS for μ = 0, and at μ ≈ 2.4 Λ_MS for T = 0. Using realistic values for the critical temperature T_c [47] and Λ_MS [46], we find that instanton effects are significantly suppressed at temperatures T ≳ 1.5T_c for μ = 0, consistent with lattice results [31][32][33][34][35][36][37]. As discussed in App. I, at zero temperature in a dilute instanton gas, instantons evaporate only at extremely high densities, μ ≳ 1.5πT_c. Using T_c = 156 MeV, this corresponds to baryon chemical potentials of μ_B ≳ 2 GeV.

Summary & outlook. We demonstrated that novel effective interactions are generated by instantons of higher topological charge. In general, instantons of topological charge Q give rise to 2N_f|Q|-quark interactions. This opens up the possibility to study the effects of the axial anomaly directly for higher correlation functions of quarks or hadrons. Besides the example studied here, it is especially interesting to study QCD with one light flavor, where instantons with Q = ±2 generate a mass for the η meson. These methods can also be used to compute anomalous couplings for heterochiral mesons with J ≥ 1 [28] and tetraquark mesons [27].

Appendix A: Conventions
We use the chiral representation for the Euclidean gamma matrices: with the Pauli matrices σ_i we define the matrices σ_μ and σ̄_μ, from which γ_μ is built as off-diagonal blocks. The fifth gamma matrix is then given by γ₅ = γ₁γ₂γ₃γ₄. Left- (right-) handed fields are defined by having eigenvalue −1 (+1) with respect to γ₅. Thus, the projection operators onto left- and right-handed fields are given by P_{L/R} = (1 ∓ γ₅)/2. We also define the matrices σ_μν and σ̄_μν from antisymmetrized products of σ_μ and σ̄_μ, which are selfdual and antiselfdual, respectively. They are related to the 't Hooft symbols η_{aμν} through the SU(2) color generators T_a. In terms of the Pauli matrices τ_a, the generators are T_a = −iτ_a/2. For N_c > 2 these generators are given by an appropriate embedding of SU(2) into SU(N_c).
For instance, one may use the first three Gell-Mann matrices instead of the Pauli matrices for SU(3). Note that we use σ_a for the Pauli matrices in spinor space and τ_a in color space. The 't Hooft symbols are given by

η_{aμν} = ε_{aμν} + δ_{aμ}δ_{ν4} − δ_{aν}δ_{μ4} ,
η̄_{aμν} = ε_{aμν} − δ_{aμ}δ_{ν4} + δ_{aν}δ_{μ4} ,

where ε_{aμν} is understood to vanish if μ or ν equals 4. They inherit the (anti-)selfduality from the σ's.

Appendix B: Dilute instantons
We start with the gauge field configurations with topological charge Q in the dilute limit. The most general form of the Q-instanton can be obtained by means of the ADHM construction [8,9]. In general, the Q-instanton solution is described by a superposition of 1-instantons, where each of these constituent-instantons is parametrized by a position z_i, a size ρ_i and an orientation in the gauge group U_i. For the special case where all these constituent-instantons are aligned in the gauge group, a Q-instanton solution was first discovered by 't Hooft [7]. It is based on a superpotential Π(x) satisfying □Π(x)/Π(x) = 0, where □ = ∂_μ∂_μ is the d'Alembertian in Euclidean space. With this, the Q-instanton can be written in terms of derivatives of ln Π(x) contracted with σ̄_μν, defined in Eq. (A5). 't Hooft's solution for the superpotential is given by

Π(x) = 1 + Σ_{i=1}^{|Q|} ρ_i² / (x − z_i)² .

This solution only depends on the locations and sizes of the constituent instantons. There is no relative orientation in the gauge group. A global orientation U is indirectly contained in the corresponding gauge field A^(Q)_μ (B4), where the collective coordinates z_i, ρ_i and U_i have a physical interpretation in terms of positions, sizes and gauge group orientations of the Q constituent-instantons that make up the Q-instanton. Here, we are interested in dilute Q-instantons. This means that the separation between the constituent-instantons is large compared to their sizes, |z_i − z_j| ≫ ρ_i for all i ≠ j. A key feature of this limit is that, at leading order, the Q-instanton (B4) vanishes everywhere except for x close to one of the constituent-instanton locations z_i. Hence, in the vicinity of each z_i, (B4) looks like a Q = 1 BPST instanton in singular gauge. To leading order, the dilute Q-instanton is given by a chain of independent 1-instantons. As a result, the generating functional in the saddle-point approximation about this instanton configuration factorizes into a product of generating functionals in a 1-instanton background, as shown in Eq. (2). We refer to [10,19] for a more detailed discussion of this factorization. Following [9], one can derive the general Q-instanton solution to leading order in the dilute/small-instanton limit. This facilitates the generalization of the present discussion to arbitrary orientations in the gauge group and will be discussed in a forthcoming publication [44].

Appendix C: Quark zero modes for the dilute Q-instanton
In the presence of Q-instantons, quarks have zero modes, /D^(Q) ψ^(Q) = 0, where /D^(Q) is the Dirac operator in the Q-instanton background. It follows from the Atiyah-Singer index theorem that gauge field configurations with topological charge Q give rise to N_f|Q| left-handed (for Q > 0) or right-handed (for Q < 0) quark zero modes [6,48]. For 't Hooft's solution for the aligned Q-instanton (B4), they are obtained from the superpotential via an expression (C6) [10] involving a normalization constant C and a right-handed spinor ϕ_R with components (ϕ_R)_{αc}, where α is a spinor index and c is an SU(2) color index. Note that, owing to the gamma matrix in the zero mode in singular gauge, the zero mode ψ is left-handed for Q > 0, as required by the index theorem.
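The algebraic properties quoted in this appendix are easy to verify numerically. The numpy sketch below fixes one standard explicit choice, σ_μ = (−iσ_i, 1) and σ̄_μ = (+iσ_i, 1) (an assumption; the text does not spell out the signs), and checks the Clifford algebra, the diagonal form of γ₅, and the (anti-)selfduality of the 't Hooft symbols as defined above:

```python
# Minimal numpy check: {gamma_mu, gamma_nu} = 2 delta_mu_nu, gamma5 is diagonal
# with entries +-1, and eta (etabar) is selfdual (anti-selfdual). The explicit
# sigma_mu convention below is one standard choice, assumed for illustration.
import numpy as np
from itertools import permutations

s = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]])]                      # Pauli matrices
I2 = np.eye(2)
sigma = [-1j * p for p in s] + [I2]                    # sigma_mu  = (-i s_i, 1)
sigbar = [1j * p for p in s] + [I2]                    # sigbar_mu = (+i s_i, 1)
gamma = [np.block([[np.zeros((2, 2)), sm], [sb, np.zeros((2, 2))]])
         for sm, sb in zip(sigma, sigbar)]

for m in range(4):                                     # Clifford algebra
    for n in range(4):
        anti = gamma[m] @ gamma[n] + gamma[n] @ gamma[m]
        assert np.allclose(anti, 2 * (m == n) * np.eye(4))

g5 = gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
print(np.real_if_close(np.diag(g5)))                   # diagonal, entries +-1

# 't Hooft symbols as defined in the text
eps = np.zeros((3, 3, 3))
for a, b, c, v in [(0,1,2,1),(1,2,0,1),(2,0,1,1),(0,2,1,-1),(2,1,0,-1),(1,0,2,-1)]:
    eps[a, b, c] = v
eta = np.zeros((3, 4, 4)); etabar = np.zeros((3, 4, 4))
eta[:, :3, :3] = etabar[:, :3, :3] = eps
for a in range(3):
    eta[a, a, 3], eta[a, 3, a] = 1, -1
    etabar[a, a, 3], etabar[a, 3, a] = -1, 1

def perm_sign(p):
    q = list(p); sgn = 1
    for i in range(4):
        while q[i] != i:
            j = q[i]; q[i], q[j] = q[j], q[i]; sgn = -sgn
    return sgn

eps4 = np.zeros((4, 4, 4, 4))                          # 4d Levi-Civita symbol
for p in permutations(range(4)):
    eps4[p] = perm_sign(p)

dual = lambda h: 0.5 * np.einsum("mnrs,ars->amn", eps4, h)
print(np.allclose(dual(eta), eta), np.allclose(dual(etabar), -etabar))  # True True
```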
For an anti-instanton, Q < 0, one simply has to replace ϕ_R by a left-handed spinor. We first discuss the explicit form for Q = 2. For a dilute 2-instanton the separation of the two constituent-instantons is always far larger than their respective sizes (C7). Furthermore, in order to be insensitive to the extended nature of the instanton, we consider the zero modes far away from the constituent-instanton locations, i.e. |x − z_i| ≫ ρ_i (C8). The reason for this limit is that the resulting effective interaction is generated by quarks scattering off the instanton. Due to the extended nature of the instanton, this interaction is in general non-local. In the limit (C8), however, the size of the instanton can be neglected. This allows us to rewrite the zero mode (C6) in a suggestive way (C9), and analogously for the second zero mode ψ^(2)_{f2}(x). In the first step, we dropped the term ρ₁²ρ₂² in the denominator in the second line, since it is subleading. In the second step we used the definition (C10) of the Q = 1 zero modes ψ_{f i}.

[Fig. 2: Comparison between the form of the exact Q = 2 quark zero mode ψ^(2)_{f1}(x) and our approximation in (C9). We used the parameters z₁ = 3, z₂ = 6 and ρ₁ = ρ₂ = 0.21 for this plot. Note that we added a small offset to facilitate a logarithmic plot. The baseline of the zero mode is close to zero. The scalar function plotted here is defined in Eq. (C12).]

Hence, the Q = 2 quark zero mode looks like the sum of the zero modes corresponding to the two constituent-instantons and the term X_i, which quantifies their overlap. We emphasize that ψ_{f2}, while being located at z₂ and of size ρ₂, still has the same gauge group orientation, U₁, as the first Q = 1 zero mode. This follows from 't Hooft's solution, where the gauge group orientations of the constituent-instantons are aligned. Also note that the leading contribution to ψ^(2)_{f i} in the dilute limit is just the corresponding Q = 1 zero mode. We show a comparison between the form of the exact quark zero mode (C6) and our approximation (C9) in Fig. 2. For this figure, we project onto the scalar part of the zero mode ψ for the configuration (x − z₁)·(x − z₂) = |x − z₁||x − z₂|, in order to have a function that only depends on the relative distances. We find that our approximation is very good even close to z₁ and z₂ for ρ/|R| ≲ 0.3. The normalization constant C is determined at leading order from the normalization of the zero mode, which yields C = −1/√2π.

To make the following computations more transparent, we use a graphical representation of the zero modes (C9), in which the left peak is located at z₁ and the right peak at z₂. In the limit of small constituent-instantons, Eq. (C8), we can further simplify the zero modes by taking only the leading terms in ρ_i/|x − z_i|. Then, (C10) becomes proportional to the free propagator of a massless quark,

∆(x − z) = γ_μ(x − z)_μ / (2π²((x − z)²)²) .    (C16)

Hence, the Q = 2 quark zero modes can be represented in terms of free quark propagators and the overlap term. This property facilitates the identification of the quark zero mode determinant with an effective correlation function of quarks, which we will do explicitly below. For arbitrary topological charge, we only mention the leading order in the dilute limit. The corresponding gauge field configuration is given by (B4). With this, the leading contribution to the quark zero modes in the dilute limit for any topological charge is given by the corresponding set of 1-instanton zero modes (C18). Thus, the N_fQ quark zero modes of the dilute Q-instanton are, to leading order, given by a collection of 1-instanton-induced zero modes. This is consistent with our solution for Q = 2 in Eq. (C9), since the term ∼ X_i is a subleading correction in the dilute limit.
Appendix D: Generating functional in a Q-instanton background: leading order in the dilute limit
We first briefly discuss the generating functional of QCD in a Q-instanton background, with a focus on the quark zero mode determinant to leading order in the dilute-instanton limit. This will set the stage for the subsequent detailed analysis for Q = 2. The general strategy is to evaluate the QCD generating functional in the saddle-point approximation to leading order, where the stationary point is given by an instanton of topological charge Q. This is, to some extent, natural, since (anti-)self-dual topological gauge field configurations indeed minimize the classical Yang-Mills action under the assumption that it is finite [2]. Because of this, such an analysis is called semi-classical. The general form of the generating functional is as in Eq. (1), where χ = (A_μ, c, c̄, ψ, ψ̄) is the fluctuating multi-field containing gluons, ghosts and quarks, S[χ] is the gauge-fixed action of QCD in Euclidean spacetime, and χ^(Q) = (A^(Q)_μ, 0, 0, 0, 0) is the Q-instanton background field. We only introduced a source J for quark-antiquark pairs, since this is the only relevant case for the present purposes. To leading order in the saddle-point approximation, one expands the action S[χ + χ^(Q)] about vanishing fluctuating field to quadratic order. The linear terms vanish on the equations of motion. The leading term is given by the action of the Q-instanton, 8π²|Q|/g². The quadratic terms give rise to the well-known functional determinants. Denoting them as ½A_μ M_A^{μν} A_ν + c̄ M_c c + ψ̄ M_ψ ψ, the generating functional becomes a product of the corresponding determinants; the exact form of the other terms is irrelevant here. To renormalize the contributions of large eigenvalues, it is understood that all nonzero-mode determinants are normalized with the determinant at vanishing gluon background field. In the presence of the Q-instanton, all fields have zero modes related to the invariance of the action under certain translations, dilatations and global gauge rotations that lead to inequivalent instanton configurations. These symmetries give rise to the 4N_c|Q| instanton collective coordinates describing their positions (z_i), sizes (ρ_i) and orientations in the gauge group (U_i). Fluctuations in the directions of zero modes cannot be assumed to be small, so strictly speaking the saddle-point approximation is only valid for the non-zero modes, while the zero modes have to be treated exactly. To this end, one changes the integration over zero modes to an integration over collective coordinates. We define the Q-instanton density n_Q, where C_Q is the set of all collective coordinates of the Q-instanton, J is the Jacobian of the coordinate change from zero modes to collective coordinates, and the non-zero mode determinant of the quarks enters as well. We assume that the quark source J is only a small perturbation and can be neglected in the non-zero mode determinant. With this, the generating functional becomes

Z^(Q)[J] = ∫ dC_Q n_Q det₀(J) ,

where det₀(J) is the determinant of the source J in the space of quark zero modes. All this has been discussed in detail in [4], where the generating functional in Eq. (1) was first computed for Q = 1 and N_c = 2. The generalization to SU(N_c) is discussed in [41]. For |Q| > 1, solutions are only known in certain limits, see e.g. [10,19]. For Q-instantons with aligned gauge group orientation at leading order in the dilute limit, the gauge field configuration is given by (B4) and the zero modes are given by (C18).
The zero mode determinant then is a determinant in the space of zero modes, where we do not sum over the flavor and zero mode indices. The source J is an (N_fQ × N_fQ)-matrix in the space of zero modes. It is sufficient to only consider the contribution from the diagonal of J. We will match the zero mode determinant to an effective multi-quark interaction, so the different contributions to the determinant can be obtained by permutations of the quark fields (cf. Eq. (G2)). We denote the diagonal elements as J^{ff}_{ii} ≡ J_{fi} and find that, for |x_{fi} − z_i| ≫ ρ_i, the determinant can be expressed in terms of free quark propagators (C16). The quark zero modes have mass-dimension one, so that J has dimension 2. It is convenient to introduce a source with the canonical mass-dimension one; hence we introduce J̃_{fi} = ρ_i J_{fi}. Since the collective coordinates z_i, ρ_i and U_i are integrated over in the generating functional, we can write this in factorized form; i.e., to leading order in the dilute limit, the quark zero mode determinant factorizes into independent Q = 1 contributions. We call this determinant non-local, as it depends on all independent instanton locations. As we will show below, if we go beyond leading order, there is also a local contribution, where the determinant only depends on a single location. As discussed e.g. in [9,10,19], the functional determinants of the gluons and ghosts also factorize, even to order ρ⁴/|R|⁴ in the dilute limit. Hence, the generating functional factorizes completely, with the Q = 1 collective coordinate integration measure dC_i. Note that the zero mode determinant depends on all collective coordinates, while the instanton density only depends on the instanton sizes. We used that the instanton density and the collective coordinate integration measure for arbitrary Q also factorize to leading order in the dilute limit [10,19], i.e. n_Q = ∏_{i=1}^{Q} n₁(ρ_i) and dC_Q = (1/Q!) ∏_{i=1}^{Q} dC_i, where Q! is a combinatorial factor related to the permutation symmetry of the 1-instanton contributions.

Appendix E: Generating functional in a 2-instanton background: N_f = 1
Next, we use the form of the quark zero modes in Eq. (C9) to compute the zero mode determinant of the quarks. We start with N_f = 1, so we can drop the flavor index. Using that the zero mode in the dilute case decomposes into a sum of one contribution centered at z₁ and one at z₂, (C9) and (C14), the (diagonal part of the) quark zero mode determinant (E1) contains various contributions. We drop the integrations over the source locations, but include the integrations over the instanton locations here for convenience. One term contains the dominant pieces of each zero mode. Since this term has no overlap between the contributions at z₁ and z₂, it completely decomposes into two separate 1-instanton contributions. This is the leading-order, non-local contribution in the dilute limit. Hence, the corresponding generating functional is given by Eq. (D9). Beyond leading order, there are corrections to this non-local contribution. But there are also two local terms, in the sense that they can be written as contributions solely from terms centered around the same point, times an overlap term that can be integrated out. These terms are given by four Q = 1 zero modes centered around a single z_i. The remaining overlap integral integrates out the 'leakage' from the contributions of ψ_{f i}(x) around z_j to z_i (for i ≠ j).
This integral can be carried out analytically (E7). For N_f = 1, the overlap integral is dominated by large distances |x − z_i|. We therefore introduced an infrared cutoff R₀. Presumably, this is generated by repulsive instanton-anti-instanton interactions [16]. For the generating functional, we use that the gauge contribution factorizes also at next-to-leading order in the dilute limit [9]. Only corrections to the quark zero mode determinant have to be taken into account. The local part of the Q = 2 partition function Z^(2)[J] for N_f = 1 therefore follows, where we defined the ρ_i-independent function F_i (E9). For |x − z_i| ≫ ρ_i this function can be expressed in terms of free quark propagators (C16). Since the expression in (E8) is symmetric under the exchange of the topological charge indices 1 and 2, we finally arrive at the local contribution, where the Q = 1 instanton density n₁ is given by (14).

Appendix F: Generating functional in a 2-instanton background: N_f ≥ 2
The discussion for N_f ≥ 2 is a straightforward generalization of the N_f = 1 case. Again, it is sufficient to take only the contribution from the diagonal of the (2N_f × 2N_f)-matrix J into account. As for N_f = 1, there are numerous non-local contributions. Focusing on the integration over the instanton locations, the leading non-local contribution is a product of two independent terms, each involving 2N_f Q = 1 quark zero modes. This is discussed in App. D. Most of the terms of the determinant give corrections to this non-local term. However, there are again two local terms, (F3) and (F4). The overlap integral for any N_f in general depends on the different, arbitrary source locations. However, for a dilute 2-instanton, where we assume that the constituent-instantons are far apart, there naturally arises a contribution that is independent of the source locations. It is precisely given by the limit where the non-local contributions are suppressed. To this end, we note that the zero modes ψ^(2)_{f2}(x) are responsible for the overlap term in Eq. (F3). This overlap stems from configurations where ψ^(2)_{f2} is probed close to the location z₁ of the first constituent-instanton. This limit is consistent with our initial assumptions for the dilute 2-instanton in Eqs. (C7) and (C8) as long as |x_{f2} − z₁| ≫ ρ₁, ρ₂. Furthermore, the non-local terms, which are dominated by configurations where at least one of the zero modes ψ^(2)_{f i} is probed far from both constituent-instantons, are suppressed in this limit. The analogous statement is true for the overlap from ψ^(2)_{f1}(x_{f1}) in Eq. (F4) and the corresponding non-local corrections. Hence, in this case the overlap term only depends on the instanton sizes and the distance between the instantons, and the quark zero mode determinant is dominated by the local contribution. The overlap integral for N_f ≥ 2 then becomes (F8), and the Q = 2 partition function for any N_f follows (F9), with F_i defined in (E9). We emphasize that since at large distances F_i contains two free quark propagators (E10), the generating functional has the form of a 2N_fQ-quark correlation function.

Appendix G: The effective interaction
We now discuss the details of the derivation of the effective action from the quark zero mode determinant computed in the previous sections. The main trick is to exploit the fact that far away from the instanton the quark zero mode determinant can be expressed in terms of quark propagators, cf. Eq. (E10). With this it is possible to find a quark correlation function that mimics the zero mode determinant without a topological background field. The location of the effective vertex then coincides with the instanton location.
Any topological charge at leading order
Before we discuss the local interaction for Q = 2, we start with the leading-order dilute Q-instanton. We make an ansatz (G1) for the effective generating functional for arbitrary topological charge and flavor, where we note again that, without loss of generality, we assume Q > 0. The index LO indicates that this ansatz is specifically for the leading order in the dilute limit. The ω_i are constant tensors carrying spin and color which will be determined explicitly below. K_{Q,N_f} = (Q!)^{N_f}/(N_fQ)! is a combinatorial factor. The pre-exponential factor V^(Q)+_{eff,LO} generates a non-local 2N_fQ-quark correlation function with coupling strength κ_Q. The superscript + indicates that this is the contribution from instantons. We will use − for anti-instantons with Q < 0. We note that, due to Fermi statistics, this term can be rewritten as a determinant (G2). This justifies why we only took the diagonal contribution of the zero mode determinant into account. All other contributions are given by permutations of the quark fields. The correlation function generated by V^(Q)+_{eff,LO} can be computed by expressing the exponential as a power series in J̃ and using Wick's theorem to contract the quarks from the sources with the ones in V^(Q)+_{eff,LO}. Note our suggestive notation for the vertex locations in (G1) and the source locations in (G3). The dilute Q-instanton limit corresponds to the assumption that the generating functional is dominated by configurations where the z_i in (G1) are widely separated. As a result, all contractions of quark fields are suppressed except for the ones where all quarks sourced at x_{fi} are contracted with all quarks at the z_j in V^(Q)+_{eff,LO}. All other contractions involve at least one propagator ∆(z_i, z_j), with i ≠ j, which is highly suppressed in the dilute limit. Hence, only the term of order N_fQ in J̃ in (G3) can contribute to the correlation function. Then, for fixed f, there are Q! equivalent ways to contract the quarks at x_{fi} with the ones at z_j. Since this can be done for each f, there are (Q!)^{N_f} equivalent contributions. All other contractions are suppressed, since they contain at least one ∆(z_i, z_j). Hence, by expanding the exponential in powers of the sources and using Wick's theorem in the dilute limit, the generating functional follows; to compensate for this combinatorial factor, we introduced the factor 1/K_{Q,N_f} in the effective coupling in (G1). Taking all of this into account, we find the 2N_fQ-quark correlation function (G4). We can now compare this to the generating functional in the dilute Q-instanton background in (D9), where we use the representation of the quark zero mode determinant in (D7); i.e., we demand that the effective generating functional (G4) and the generating functional in the Q-instanton background (D9) are identical (G5). From this, we read off the effective coupling (G6). From the integrands on both sides of Eq. (G5) we infer that the tensor ω obeys an identity in which we now make the color indices (a, b, c) and spinor indices (α, β) explicit. Regarding the color structure of the ω_i, we see that they are required to carry the global color orientation U_i, which allows us to define the tensor ω accordingly. Furthermore, from the explicit form of the spinor ϕ_R (C4) it follows for the sum over color indices, with P_R the right-handed projection operator defined in (A4), that ω^a_α ω̄^a_β = P^{αβ}_R.
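The contraction counting used above can be confirmed by brute force. The sketch below enumerates, for each flavor, the bijections between Q sources and Q vertices that survive in the dilute limit, and checks the total against (Q!)^{N_f} and the normalization K_{Q,N_f} = (Q!)^{N_f}/(N_fQ)!:

```python
# Minimal sketch of the combinatorics above: in the dilute limit the surviving
# Wick contractions are, per flavor, the bijections between the Q sources and
# the Q vertices, so the total number of equivalent contributions is (Q!)^Nf.
from itertools import permutations
from math import factorial

def count_dilute_contractions(Q: int, Nf: int) -> int:
    """Brute-force count: one bijection sources -> vertices for each flavor."""
    per_flavor = sum(1 for _ in permutations(range(Q)))  # = Q!
    return per_flavor ** Nf

for Q in (1, 2, 3):
    for Nf in (1, 2, 3):
        counted = count_dilute_contractions(Q, Nf)
        assert counted == factorial(Q) ** Nf             # matches (Q!)^Nf
        K = factorial(Q) ** Nf / factorial(Nf * Q)       # K_{Q,Nf}
        print(f"Q={Q}, Nf={Nf}: (Q!)^Nf = {counted}, K = {K:.3e}")
```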
With this, the integration over the gauge group orientation in the effective action (G1) can be carried out explicitly. Since we have the same integral for different topological charge indices i, we can do the integration for fixed i following [4]. The final result is then given by taking this result to the power Q. Thus, for the gauge group SU(N_c) we have to carry out the group integration over U_i, where U_i is an element of SU(N_c)/I_{N_c}, with the stability group of the instanton I_{N_c} given by all SU(N_c)-transformations that leave the instanton configuration unchanged. For N_c = 2 this is just the identity. dU_i is the corresponding Haar measure. Hence, this integration is quite complicated for arbitrary N_f and N_c. For the present purposes, we restrict ourselves to color-singlet interactions only, and use N_f = 2 as an example. In general, this group integration will yield color-singlet and non-singlet terms in (G11). For two flavors, the color-singlet part is extracted from (G13), where "(non-singlet)" refers to terms that lead to color-non-singlet effective interactions in (G1), and c_{N_c} is an N_c-dependent constant. Since the gauge group orientation integral is performed on both sides of (G5), this factor cancels out. Of course, if we were interested also in the color-non-singlet channels, there would be relative factors that do not cancel. Keeping this in mind, we can apply the identity for ω in (G10) and, plugging the result into (G1), we find the pre-exponential factor (G16) with the coupling κ_{Q,LO} given in (G6). We note that even though we explicitly used N_f = 2, the color-singlet channel is given by a flavor determinant for any N_f, since the general structure of the gauge group integration (G13) is a sum over permutations σ(i) of i = 1, ..., N_f, see e.g. [49]. The present result therefore holds for any number of flavors. This is not yet a proper effective action, since V^(Q)+_{eff,LO} is not in the exponent. However, so far we considered the generating functional in the background of a single dilute Q-instanton. For a single dilute Q-anti-instanton, one simply has to replace the right-handed projection operator with the left-handed one, P_R → P_L, in (G16) to get V^(Q)−_{eff,LO}. We now assume that the field configurations of topological charge Q are described by a dilute gas of dilute Q-instantons and anti-instantons, i.e. a double-dilute limit for the topological sector of QCD. This generalizes the dilute instanton gas in [4] to arbitrary topological charge. The complete Q-instanton contribution to the functional integral is then given by a simple statistical ensemble,

Z_Q = Σ_{ν₊,ν₋} (V^(Q)+_{eff,LO})^{ν₊} (V^(Q)−_{eff,LO})^{ν₋} / (ν₊! ν₋!) = exp( V^(Q)+_{eff,LO} + V^(Q)−_{eff,LO} ) ,

where ν₊ and ν₋ are the numbers of instantons and anti-instantons. Hence, the resulting anomalous contribution to the effective action follows directly. Thus, to leading order in the double-dilute limit, the effective interaction induced by instantons of topological charge Q is a 2N_fQ-quark interaction. However, this interaction is non-local at leading order. To get a local interaction, we need to go beyond leading order. This is done explicitly next for the special case of topological charge Q = 2. But first, we comment on the dilute gas of dilute instantons. It is conceivable that instantons of any topological charge contribute to the functional integral. Of course, in the semi-classical regime the contributions with higher topological charge are exponentially suppressed due to the factor exp(−8π²Q/g²) in the instanton density.
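Group integrations like (G13) reduce to moments of the Haar measure. As an illustration, limited to the simplest first moment and over U(N) rather than the coset SU(N_c)/I_{N_c} used in the text, the Monte Carlo sketch below checks the standard identity ∫dU U_{ij}U*_{kl} = δ_{ik}δ_{jl}/N:

```python
# Minimal Monte Carlo sketch of a Haar-measure moment: for Haar-random U in U(N),
# the average of U_{ij} * conj(U)_{kl} is delta_{ik} delta_{jl} / N. This is only
# an illustration of the simplest moment, not the full integral (G13).
import numpy as np
from scipy.stats import unitary_group

N, samples = 3, 20000
rng = np.random.default_rng(0)
acc = np.zeros((N, N, N, N), dtype=complex)
for _ in range(samples):
    U = unitary_group.rvs(N, random_state=rng)        # Haar-distributed U(N) matrix
    acc += np.einsum("ij,kl->ijkl", U, U.conj())
acc /= samples

exact = np.einsum("ik,jl->ijkl", np.eye(N), np.eye(N)) / N
print(np.abs(acc - exact).max())                      # small, ~1/sqrt(samples)
```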
This picture is therefore not in conflict with lattice results on the topological charge at large temperature [33][34][35]. Still, these contributions can be present, and the resulting anomalous contribution to the effective action of a dilute gas of dilute instantons and anti-instantons of all topological charges follows by summing the Q-instanton terms over all Q. While the effective interactions are certainly small in the dilute instanton gas, they might have relevant phenomenological implications at lower energies. The local interaction for Q = 2 We now repeat the same analysis for Q = 2, taking into account the results of Apps. E and F. As opposed to the leading-order analysis, this results in a local contribution to the effective action. To this end, we make an ansatz analogous to the one above. Following the arguments above, V^{(2)+}_{eff} gives rise to a correlation function, and again we choose the coupling κ₂ and the tensors ω such that this correlation function is identical to the generating functional for Q = 2 in (F9). This expression holds for any N_f. The overlap integral I₁ is given by (E7) and I_{N_f} for N_f ≥ 2 by (F8). From this we infer the effective coupling in (G24). The determination of ω and the integration over the gauge group are identical to our discussion above. The main difference here is that all propagators connect to the same point z, i.e. the correlation function is local. We therefore find the color-singlet channel of the effective generating functional, and with this, a dilute gas of dilute instantons and anti-instantons of topological charge Q = 2 gives rise to a local contribution to the effective action, with κ₂ given in (G24) and K_{2,N_f} = 2^{N_f}/(2N_f)!. Owing to the overlap between the constituent instantons, this interaction is local. Appendix H: A low-energy model Here, we discuss the details of the two-flavor linear sigma model (LSM) defined by the effective action (H1), where the classical and anomalous (quantum) contributions to the effective Lagrangian are defined in (11). However, for the sake of generality, we include one more anomalous quartic term, which is generated by 1-instantons but has been neglected in the main text for reasons that become clear below. With this we write the anomalous part of the effective Lagrangian; we set λ̄₃ = 0 in the main text. The meson field is built from (σ, a₀, η, π). The equations of motion, where φ = (σ, a₀, η, π), yield the vacuum expectation value φ̄ = (σ̄, 0, 0, 0), with σ̄ determined by (H5). Hence, for m² − χ₁ > 0, σ̄ is imaginary and the physical VEV is φ̄ = 0. For m² − χ₁ < 0, G_qu is spontaneously broken down to SU_V(2) × Z₂^A. For vanishing anomalous terms, i.e. ∆L_qu = 0, U_A(1) would also be spontaneously broken, resulting in four Goldstone bosons, π and η. In the presence of the anomalous terms, U_A(1) is broken explicitly, and the spontaneous breaking of only SU_A(2) results in the pions as the only Goldstone bosons. Due to isospin symmetry, there are only four distinct masses. In the symmetric phase, σ̄ = 0, only the quadratic terms contribute to the masses directly, and the Q = 1 term χ₁ induces a splitting of the chiral pairs (σ, π) and (η, a₀). The higher-order couplings can only influence these masses via loop corrections in the symmetric phase. Inserting the VEV from Eq. (H5) yields the masses in the broken phase. The pion is the only Goldstone boson in the general case. With this we can explore the influence of the anomalous terms in the symmetric and the broken regime.
To fix the masses in the vacuum, we use the following observables: f_π = σ̄₀ = 93 MeV, m_{σ,0} = 400 MeV, m_{η,0} = 820 MeV, m_{a₀,0} = 980 MeV. The η mass is taken from [31]. For the other masses, we chose values compatible with [46]. Note that we identify σ with f₀(500). Within the mean-field approximation, and in the absence of effects from topological charge Q > 1 and with λ̄₃ = 0, all parameters, including χ₁, are fixed by the vacuum masses. This then also fixes the amount of axial symmetry breaking above the chiral phase transition, as χ₁ is the only anomalous contribution to the masses in the symmetric phase. With nonvanishing χ₂, the vacuum mass spectrum can be fixed for different values of χ₂, and we can explore the influence of interactions induced by higher topological charge on the mass spectrum. For a given χ₁, the value of m² determines whether the symmetry is broken or not, so variations in m² can be related to variations in the temperature. To analyze how the mass spectrum changes as the symmetry is restored, we therefore assume that m² plays the role of temperature in the mean-field analysis. In order to explore the influence of the higher-order anomalous interactions, we fix the masses in the vacuum according to Eq. (H8) and study the mass spectrum as a function of m² for different values of the 2-instanton term χ₂. We then find the relations between the model parameters and the physical parameters given in (H9). Since we have five model parameters but use only four parameters to fix them, we choose χ₂ and λ̄₃ to be the free parameters for now. We note that in the absence of the 2-instanton term, χ₂ = 0, and with λ̄₃ = 0, the 1-instanton coupling is fixed by the η mass, χ₁ = m²_{η,0}/2. Conversely, if U(1)_A breaking is only due to 2-instanton effects and χ₁ = λ̄₃ = 0, one finds χ₂ = m²_{η,0}/(2f_π²). The system has two characteristic scales in m². The vacuum scale m²_vac is the scale in m² where the masses in the broken phase in Eq. (H7) assume their vacuum values (H8); it can be read off from the first equation in (H9). Furthermore, there is the critical scale m²_crit of chiral symmetry breaking, defined as the value of m² where the VEV σ̄ (H5) vanishes. Hence, for different values of χ₂ the characteristic scales of the system change. For a meaningful comparison of the masses for different χ₂, we therefore define the reduced temperature t and rewrite the masses in terms of t; t = 0 is the vacuum and t = 1 is where the phase transition occurs. The VEV of the σ as a function of t, and with it the masses, then take very simple forms. We see that m_σ(t) and m_π(t) are independent of χ₂; σ is the critical mode that becomes massless at the phase transition. The mass splitting between the chiral pairs (σ, π) and (η, a₀) in the symmetric phase is induced by χ₁(t > 1) = m²_{η,0}/2. This mass splitting vanishes in the limit t → ∞. For χ₂ = 0, the η mass is independent of t in the broken phase. An interesting observation is that for χ₂ > 0, m_η is a strictly decreasing function of t in the broken phase and strictly increasing in the symmetric phase; hence, it has a minimum at the chiral phase transition. This behavior can therefore be attributed to corrections related to topological charge two. Furthermore, in terms of the reduced temperature, the masses are independent of λ̄₃, so the 2-instanton-induced coupling χ₂ is the only relevant anomalous quartic interaction here.
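As a quick numerical illustration of the two limiting cases just quoted, a short sketch using only the relations and vacuum inputs stated above:

```python
# Sketch: evaluate the two limiting anomalous couplings quoted in the text,
#   chi_1 = m_eta0^2 / 2            (for chi_2 = lambda3 = 0)
#   chi_2 = m_eta0^2 / (2 f_pi^2)   (for chi_1 = lambda3 = 0)
# using the vacuum inputs f_pi = 93 MeV and m_eta0 = 820 MeV.
f_pi = 93.0      # MeV
m_eta0 = 820.0   # MeV

chi_1 = m_eta0**2 / 2.0               # dimension of mass^2
chi_2 = m_eta0**2 / (2.0 * f_pi**2)   # dimensionless quartic coupling

print(f"chi_1 = {chi_1:.3e} MeV^2  (= ({chi_1**0.5:.0f} MeV)^2)")
print(f"chi_2 = {chi_2:.2f}")
```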
An analysis of the vacuum stability of the effective potential implies that λ̄₃ ≤ m²_{σ,0}/(4f_π²) ≈ 4.62. The choice λ̄₃ = 0 we made in the main text is therefore innocuous. Most importantly, Fig. 1 is exactly the same for any value of λ̄₃. Our estimated values for the couplings χ₁ and χ₂ change, however: we find that the more negative λ̄₃ is, the smaller χ₁ and χ₂ become, but it has to become very large in magnitude to have a significant effect. By bosonizing the multi-quark interactions generated by Q-instantons, the fermionic couplings κ_Q can be related to the anomalous mesonic couplings χ_Q. For two flavors, κ₁ is a four-quark coupling and can readily be bosonized by means of a Hubbard-Stratonovich transformation. The 2-instanton term κ₂ is an 8-quark interaction for two flavors, so more elaborate path-integral bosonization techniques are necessary in general [22]. Here, we adopt a simplistic bosonization scheme motivated by low-energy models where mesons are coupled to quarks through Yukawa interactions, i.e. quark-meson models. On the equations of motion, the mesons are typically proportional to quark bilinears, and we make the simple ansatz based on (H3), Φ = (1/(2M²)) [ (ψ̄ψ + ψ̄γ₅ψ) + (ψ̄ τ⃗ ψ + ψ̄γ₅ τ⃗ ψ) · τ⃗ ]. (H16) Here M is a fundamental parameter of our effective theory with the dimension of a mass. Using this identity, the instanton-induced quark determinant can be rewritten in terms of Φ, and similarly for the anti-instanton term. The 1-instanton-induced and the 2-instanton-induced effective interactions then follow, and we identify the anomalous mesonic couplings χ₁ and χ₂ in terms of κ₁, κ₂ and M. By plugging this into the expressions for the meson masses above, the dependence on χ₁ and χ₂ is replaced by a dependence on M alone, provided that we know the fermionic couplings κ₁ and κ₂. This reduces the number of independent parameters to four. Given the four input parameters (H8), the effective action (H1) is uniquely determined at the mean-field level. [Figure 3 caption: the instanton density of Eq. (14), versus ρΛ_MS̄, for two massless quarks and three different temperatures at µ = 0.] Appendix I: The instanton density The instanton density in the vacuum is given by (14). It depends on a constant prefactor and on the running coupling g²(ρΛ_MS̄) at two-loop order, where Λ_MS̄ is the renormalization mass scale of QCD in the modified minimal subtraction scheme. This expression is valid for small x, where log(x⁻²) is positive. By asymptotic freedom, the coupling g²(ρΛ_MS̄) is small at small ρ, so instantons are suppressed by the exponential of the classical action, 8π²/g². Of necessity in a semi-classical computation, the exponential from the classical action dominates over the prefactor, ∼ g⁻¹², which arises from the Jacobian for the collective coordinates of the instanton [4]. Conversely, when ρ increases, so does the coupling g²(ρΛ_MS̄): the instanton density first increases, but eventually decreases, suppressed by the prefactor from the Jacobian. The instanton density n₁(ρΛ_MS̄) is illustrated in Fig. 3; as seen there, there is a natural maximum when ρΛ_MS̄ ∼ 0.50 in the vacuum. For a single instanton, at a temperature T and quark chemical potential µ, we approximate the change to the instanton density as n₁(ρ, T, µ) = n₁(ρΛ_MS̄) exp[ −(2π²/g²) ρ² m_D² − 12 A(πρT)(1 + …) ], (I3) where m_D is the leading-order Debye mass given in (I4), and A(x) = −(1/12) log(1 + x²/3) + 0.0129 (1 + 0.159 x^(−3/2))^(−8) [11,12]. The dominant term, ∼ ρ²m_D², is straightforward to understand. The topological charge is proportional to tr(E⃗ · B⃗), where E⃗ and B⃗ are the color electric and magnetic fields.
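The fit function A(x) above is given explicitly, so the thermal suppression factor can be evaluated directly. Below is a short sketch; the leading-order Debye mass of Eq. (I4) is not spelled out in the extracted text, so the standard one-loop form m_D² = (N_c/3 + N_f/6) g²T² + N_f g²µ²/(2π²) is assumed here, and the truncated "(1 + …)" correction is simply dropped.

```python
# Sketch of the thermal/dense suppression factor of the instanton density,
# exp(-(2*pi^2/g^2) * rho^2 * m_D^2 - 12*A(pi*rho*T)), using the fit
# function A(x) quoted in the text. The Debye mass below is the *assumed*
# standard leading-order form (not spelled out in the extracted text), and
# the truncated "(1 + ...)" correction factor is dropped.
import math

def A(x):
    """A(x) = -1/12 log(1 + x^2/3) + 0.0129 (1 + 0.159 x^(-3/2))^(-8)."""
    return -math.log(1.0 + x**2 / 3.0) / 12.0 \
           + 0.0129 * (1.0 + 0.159 * x**-1.5) ** -8

def mD2(g2, T, mu, Nc=3, Nf=2):
    """Assumed standard leading-order Debye mass squared."""
    return (Nc / 3.0 + Nf / 6.0) * g2 * T**2 + Nf * g2 * mu**2 / (2.0 * math.pi**2)

def suppression(rho, T, mu, g2):
    """n1(rho,T,mu)/n1(rho), dropping the truncated correction."""
    return math.exp(-(2.0 * math.pi**2 / g2) * rho**2 * mD2(g2, T, mu)
                    - 12.0 * A(math.pi * rho * T))

# Example: scan temperature at mu = 0 with rho = 1 (arbitrary sample coupling)
for T in (0.5, 1.0, 2.0):
    print(f"T = {T}: suppression = {suppression(1.0, T, 0.0, g2=4.0):.3e}")
```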
In any plasma, electrically charged particles screen static electric fields over distances ∼ 1/m_D. Since instantons must carry color electric fields, Debye screening alone is sufficient to suppress the instanton density. Needless to say, this argument only applies in a plasma where there is Debye screening, and not at low temperature. For a single instanton at T ≠ 0 and µ = 0, to one-loop order the instanton density can be computed analytically, either with puerile brute force [11] or cleverly [12]. The computation at µ ≠ 0 is, unexpectedly, rather more difficult [14,20]. At nonzero µ, then, we only include the leading contribution of quarks to the Debye mass. Numerical computations at T ≠ 0, though, show that for the instanton density, the difference between the complete result and that with just the leading term from the Debye mass is small, at most a few percent for all ρ and T. We comment that the instanton density to one-loop order can be computed at µ ≠ 0 numerically using the Gelfand-Yaglom method [6], as has been done for the computation of the one-loop determinant in an instanton field for quarks of nonzero mass [26]. Using the elementary ansatzes of Eqs. (14), (15) and (I3), we can calculate numerically how the density changes with temperature and chemical potential. Consider first T ≠ 0 and µ = 0. As illustrated in the left plot of Fig. 4, as the Debye mass increases, the instanton density decreases smoothly. To have some definite measure, we define T_I as the temperature where the integrated instanton density is 1/10th of its value at zero temperature. For three colors and two massless flavors, T_I^{2fl} ≈ 0.71 Λ_MS̄; for three massless flavors, T_I^{3fl} ≈ 0.74 Λ_MS̄. Using Λ_MS̄ ≈ 332 MeV [46], for two flavors T_I^{2fl} ≈ 236 MeV, and T_I^{3fl} ≈ 246 MeV for three. We stress that these numerical values are, at best, merely suggestive. Under our naive ansatz for a dilute instanton gas, the instanton density is very sensitive to the choice of Λ_MS̄; after all, merely on dimensional grounds the instanton density is ∼ (Λ_MS̄)⁴. As discussed above, a dilute instanton gas is only applicable when fractional dyons can be ignored, for T > T_χ. At nonzero temperature, to date the results from lattice QCD find that above temperatures of 300−400 MeV, the fall-off with temperature is a power law, whose exponent follows from the classical action for a single instanton and the running of the coupling g² with temperature. The overall prefactor measured in lattice QCD is approximately ten times larger than the one-loop result, but at high temperature perhaps this is ameliorated by the complete computation at two-loop order [18]. It is still an open question whether topologically non-trivial fluctuations become dilute below [32,35] or above [33] the appropriate transition temperature. This is presumably due to a combination of effects from fractional dyons and instantons with integral topological charge, either as a liquid or a gas. For our purposes, which are frankly phenomenological, the moral which we draw is that a dilute instanton gas is not a preposterous assumption. Consider next the case of zero temperature and nonzero quark chemical potential. As for temperature, the density of instantons is smoothly suppressed as µ increases. The integrated density of instantons, shown in the right plot of Fig. 4, is 1/10th that in vacuum when µ_I^{2fl} ≈ 2.44 Λ_MS̄ for two flavors, and µ_I^{3fl} ≈ 2.22 Λ_MS̄ for three flavors.
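The MeV values quoted next follow directly from Λ_MS̄ ≈ 332 MeV; a trivial arithmetic check using only numbers stated in the text:

```python
# Arithmetic check of the quoted characteristic scales, with
# Lambda_MSbar ~ 332 MeV as stated in the text.
L = 332.0  # MeV

print(f"T_I  (2 flavors): 0.71 * L = {0.71 * L:.0f} MeV  (text: ~236 MeV)")
print(f"T_I  (3 flavors): 0.74 * L = {0.74 * L:.0f} MeV  (text: ~246 MeV)")
print(f"mu_I (2 flavors): 2.44 * L = {2.44 * L:.0f} MeV  (text: ~810 MeV)")
print(f"mu_I (3 flavors): 2.22 * L = {2.22 * L:.0f} MeV  (text: ~737 MeV)")
```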
These correspond to µ_I^{2fl} ≈ 810 MeV for two flavors, and µ_I^{3fl} ≈ 737 MeV for three. Taking T_χ ≈ 156 MeV [47], this is approximately 1.5 πT_χ. While even the instanton density at one-loop order is incomplete at µ ≠ 0, we note that these are extremely high values of the quark chemical potential: they are almost into the perturbative regime, for µ > 1 GeV [50]. This gross disparity has a simple origin, and thus may persist in a more careful analysis. In a thermal bath, or in the Fermi sea of cold, dense quarks, instantons are suppressed primarily because of Debye screening. As can be seen from the expression for the Debye mass in Eq. (I4), the natural scale for the chemical potential is µ ≈ πT. Indeed, as the Euclidean energy of any fermion field is an odd multiple of πT, this balance between µ and πT is true of the propagator already at tree level. The weak dependence upon the quark chemical potential can also be understood in the limit of large N_c. As N_c → ∞ the coupling g² ∼ 1/N_c, so that if the number of quark flavors N_f is held fixed as N_c → ∞, any effects of quarks are suppressed by ∼ 1/N_c. In the plane of T and µ, large N_c then generates a "quarkyonic" regime [51]. Our naive estimate for a dilute instanton gas is simply another illustration of this. At present, numerical simulations of lattice QCD with classical computers can only provide results at nonzero temperature and µ ≤ T. Simulations of cold, dense quark matter may be possible with quantum computers, but will not be available for some time. This illustrates the virtue of using an effective model, such as a dilute gas of instantons.
16,433.2
2019-10-30T00:00:00.000
[ "Physics" ]
The Influence of the Problem Based Learning Model on Student Competence in Computer System Subjects, Reviewed from Class X Multimedia Achievement Motivation at Vocational School 10 Surabaya  The teaching and learning process uses problems from daily life as a means of improving thinking and problem-solving skills, as well as building knowledge of important concepts, drawing on student achievement motivation in computer system subjects across the competencies of computer system attitudes, computer system knowledge, and computer system skills. Teachers at SMKN 10 Surabaya mostly use the STAD cooperative model, and the competencies of students taught with it are low, whereas the researchers used the Problem Based Learning model and found that student competencies increased. The design of this study uses a 2 x 2 factorial analysis design with the Problem Based Learning model and the STAD cooperative model as independent variables, high and low achievement motivation as moderating variables, and competency in computer system attitudes, computer system knowledge, and computer system skills as dependent variables. The results of this study show that class X Multimedia students taught using the PBL learning model achieved significantly higher competency than those taught with STAD. Introduction Learning models consist of various components that are interconnected and form a system; the components of a learning model include objectives, material, methods, and evaluation. Learning models are usually arranged as a foothold for development based on principles or theories, so a learning model may draw on educational principles, sociological theories, psychology, psychiatry, systems analysis, or other theories. The researchers also aim for students to be able to solve problems, for example by having them build a number system program on number system material, which is one of the strategies in problem-based learning in computer system subjects. According to Heriyanto (2013), computer system subjects aim to help class X Multimedia students learn and understand the basic concepts of computer systems in terms of hardware and software as well as their supporting components. This computer system subject was designed for the 2013 Vocational School curriculum to strengthen student competencies in computer system subject knowledge (KP-SK), computer system subject attitudes (KS-SK), and computer system subject skills (KK-SK). In computer system subjects, the PBL (Problem Based Learning) model is needed as a means of authentic and meaningful problem solving, as well as to arouse student achievement motivation, which is the most important element of effective or successful teaching, both for high achievement motivation (MBT) and low achievement motivation (MBR). Vocational School 10 Surabaya needs the PBL learning model because the teachers at the school use the STAD cooperative learning model; the researchers therefore took the initiative to introduce the PBL model at Vocational School 10 Surabaya in computer system subjects. Methods This study is experimental research examining the Problem Based Learning (PBL) model and students' achievement motivation to improve student competencies in computer system lessons on number system material.
The research sample was divided into an experimental group consisting of class X MM 1 students and a control group of class X MM 2 students at SMKN 10 Surabaya in computer system subjects. Each group has students with high achievement motivation and students with low achievement motivation. The experimental class was taught using the PBL learning model and the control class using the STAD cooperative learning model in computer system subjects. The research design used in this study is the factorial design shown in Figure 1. Control variables are variables that are held constant so that the dependent variable is not affected by external factors other than those studied (Sugiyono, 2016, p. 63). The control variables here are the teachers who teach (Multimedia teachers), the students who are taught (Multimedia majors), the subject taught (computer system subjects), and the time allocation, which is equal for both groups. Pretest and posttest data are used only for the computer system knowledge competency; the computer system attitude competency uses only a posttest, not a pretest, so the measurement process is not too long. The pretest and posttest data are multiple-choice questions from the computer system knowledge competency test instrument, while the posttest data on computer system attitude competency come from observations with the teacher using the computer system attitude observation sheet, and the posttest data on computer system skills competency come from building a number system program assessed with the computer system skills performance test instrument. The control variables above are controlled or made constant, so that nothing else affects the dependent variables. The operational definition of a variable indicates how a variable is measured in research, so in this study the research variables are determined by the theoretical basis of each variable. There are two independent variables in this study, namely the learning model as the first independent variable and student achievement motivation as the second independent variable. The learning models in this study are the PBL learning model and the STAD cooperative learning model in computer system subjects. The second independent variable, achievement motivation, is divided into high achievement motivation and low achievement motivation in computer system subjects. The population in this study consists of vocational students of the Multimedia Department at SMK Negeri 10 Surabaya, located at Jalan Keputih Tegal Sukolilo, Surabaya, studying computer system subjects on number system material. According to Sugiyono (2013), a population is a generalization region consisting of objects or subjects with certain characteristics and qualities determined by researchers to be studied, from which conclusions are drawn; here it concerns computer system subjects on number system material. The target population in this research is class X students of the Multimedia Department of SMKN 10 Surabaya in computer system subjects on number system material.
According to Sugiyono (2013), a sample is a part of the number and characteristics possessed by the population. Based on the researchers' observations at SMK Negeri 10 Surabaya, class classification is not based on the level of student learning outcomes in computer system subjects on number system material but rather follows an alphabetical system. Therefore, in taking samples, the researchers used random sampling techniques: sampling of members of the population was carried out by draw, regardless of strata in the population. In this study, the SMK Negeri 10 Surabaya students who are members of the affordable population are classes X MM 1 and X MM 2. The 35 students of class X MM 1 were given lessons with the Problem Based Learning model in computer system subjects, while the 35 students of class X MM 2 were given lessons with the STAD cooperative learning model in computer system subjects on number system material. The sample size was determined using the following formula. The research was carried out at SMKN 10 Surabaya, Department of Multimedia, in the odd semester of 2019-2020, considering that at this school the researchers had never before undertaken the learning process with the application of PBL models and achievement motivation to student competencies, and following the schedule of computer system subject matter on number systems, namely: (1) general description of the number system, (2) types of numbers (decimal, binary, octal, hexadecimal), (3) number conversion, (4) binary-coded decimal (BCD) and binary-coded hexadecimal (BCH), and ASCII code. Data collection techniques are systematic and standard procedures for obtaining the required data; there is always a relationship between the data collection method and the research problem to be solved, and many research results are inaccurate and research problems remain unsolved because the data collection methods used are not appropriate to the research problems (Siregar, 2012, p. 39). In this study the data collection techniques used are: (1) validation; (2) observation; (3) student competency tests. A research instrument is a tool used for data collection in a study, and the measurement scale of the instrument determines the units obtained, as well as the type or level of the data, i.e. whether the data are of nominal, ordinal, interval, or ratio type (Siregar, 2012, p. 50). The research instruments used in this study are computer system knowledge competency tests for the knowledge competency, computer system observation sheets for the attitude competency, and computer system performance tests for the skills competency. Results and Discussions The data collected were obtained through the computer system attitude competency tests, computer system knowledge competency tests, and computer system skills competency tests used in this study. The results of data collection from classes taught using the Problem Based Learning model and the STAD cooperative model are initially raw scores; for the purposes of statistical tests on the research data, the raw scores are converted into standard scores. A description of the data is given to explain: (1) achievement motivation level data and (2) student competency data consisting of computer system attitude competencies, computer system knowledge competencies, and computer system skills competencies.
The student competency data itself consists of student competency data using the PBL learning model and student competency data using the STAD cooperative learning model. This competency data was obtained from (1) observations of students' attitudes over five meetings for the computer system attitude competency, (2) pretest scores to determine students' initial abilities and posttest scores for the computer system knowledge competency, and (3) the average of performance observation tests over five meetings for the computer system skills competency. Testing the hypothesis is the final step used to decide whether the temporary answers to the problem formulation stated in the research hypothesis are true or false; in other words, the statistical hypothesis test determines whether the null hypothesis is accepted or rejected. The statistical hypothesis test used is two-way ANOVA. In this study, statistical hypothesis tests were run separately for the computer system attitude competency, the computer system knowledge competency, and the computer system skills competency against the level of achievement motivation. For the competence of computer system attitudes on understanding number systems (decimal, binary, octal, hexadecimal), the research data show a hypothesis testing result of Fcount = 8.07. Using dbA = 1 and dbD = 31, the Ftable value was 4.45 at the 5% level and 5.20 at the 1% level. Based on this, the value of Fcount is greater than the value of Ftable both at the 5% level and at the 1% level, which is highly significant. The conclusion drawn is that there are significant differences in computer system attitude competency between students taught with the PBL model and students taught with the STAD cooperative model in computer system subjects at SMK Negeri 10 Surabaya; H1 is therefore accepted and H0 is rejected, meaning that the computer system attitude competency of students taught using the PBL learning model differs significantly from that of students taught using the STAD cooperative learning model in computer system subjects at SMK Negeri 10 Surabaya. Furthermore, to support the research hypothesis, further testing of the means was used. As shown in Table 1, the mean computer system attitude competency under the PBL learning model is 90.53 with a standard deviation of 5.639 for 32 students, while under the STAD cooperative learning model the mean test result is 71.44 with a standard deviation of 2.557 for 36 students. This shows that there is a significant difference in computer system attitude competency between students taught using the PBL learning model and students taught with the STAD cooperative learning model at SMK Negeri 10 Surabaya. In Table 2, the hypothesis testing results give Fcount = 8.25; using dbAB = 2 and dbD = 31, Ftable = 3.52 at the 5% level and 5.01 at the 1% level. Based on this, the Fcount value is greater than the Ftable value at both the 5% and 1% levels, which is highly significant.
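For readers who want to reproduce this kind of analysis, a minimal two-way ANOVA sketch is shown below. It uses synthetic data and the statsmodels library; the group labels and scores are illustrative only and are not the study's data.

```python
# Minimal two-way ANOVA sketch with synthetic data (illustrative only,
# not the study's data). Factors: learning model (PBL vs STAD) and
# achievement motivation (high vs low); response: competency score.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(42)
rows = []
for model in ("PBL", "STAD"):
    for motivation in ("high", "low"):
        base = 90 if model == "PBL" else 71          # rough group means
        shift = 3 if motivation == "high" else -3
        for score in rng.normal(base + shift, 5, size=17):
            rows.append({"model": model, "motivation": motivation,
                         "score": score})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction: main effects of model and motivation,
# plus the model x motivation interaction tested in the study.
fit = ols("score ~ C(model) * C(motivation)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))   # F statistics and p-values
```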
The conclusion that can be drawn is that there is a significant interaction between the learning model and achievement motivation with respect to computer system attitude competency; H1 is accepted and H0 is rejected, meaning that there is an interaction between students who use the PBL learning model with high achievement motivation (MBT) and students who use the STAD cooperative model with low achievement motivation (MBR) in computer system subjects at SMK Negeri 10 Surabaya, as shown in Table 2. Thus, in this study there is an interaction between the use of learning models and student achievement motivation with respect to computer system attitude competency. The interaction of the learning models and student achievement motivation with computer system attitude competency can be seen in Figure 2. Conclusion Achievement motivation is very influential on student competence. The teacher, as a facilitator, must be able to develop these abilities as potentials in order to maximize competency. The use of learning models, especially in the Department of Multimedia (MM), is absolutely necessary. An example is the Problem Based Learning model implemented in MM classrooms, which has proven able to have a positive impact on student competence. More learning models of this kind therefore need to be developed so that students can understand productive material both in theory and in practice.
3,222.8
2020-09-30T00:00:00.000
[ "Computer Science", "Education" ]
Experimental Study on the Effects of Carbonated Steel Slag Fine Aggregate on the Expansion Rate, Mechanical Properties and Carbonation Depth of Mortar Steel slag is the main by-product of the steel industry and can be used to produce steel slag fine aggregate (SSFA). SSFA can be used as a fine aggregate in mortar or concrete. However, SSFA contains f-CaO, which is the main reason for the expansion damage of mortar and concrete. In this study, the carbonation treatment of SSFA was adopted to reduce the f-CaO content; the influence of the carbonation time on the content of f-CaO in the SSFA was studied; and the effects of the carbonated SSFA replacement ratio on the expansion rate, mechanical properties and carbonation depth of mortar were investigated through tests. The results showed that as the carbonation time increased, the content of f-CaO in the SSFA gradually decreased. Compared to the mortar specimens with carbonated SSFA, the specimens with uncarbonated SSFA showed faster and more severe damage and a higher expansion rate. When the replacement ratio of carbonated SSFA was less than 45%, the carbonated SSFA had an inhibitory effect on the expansion development of the specimens. The compressive strengths of the specimens with a carbonated SSFA replacement ratio of 60% and 45% were 1.29% and 6.81% higher than those of the specimens with an uncarbonated SSFA replacement ratio of 60% and 45%, respectively. Carbonation treatment could improve the replacement ratio of SSFA while ensuring the compressive strength of specimens. Compared with mortar specimens with uncarbonated SSFA, the anti-carbonation performance of mortar specimens with carbonated SSFA was reduced. Introduction Steel slag is an inevitable by-product in the steel production process, accounting for about 25% of the total steel production [1], and it is mainly divided into converter slag, open hearth furnace slag and electric arc furnace slag. In China, only about 30% of the steel slag is effectively utilized, lagging behind developed countries [2,3], and the cumulative storage capacity of steel slag was 1.468 billion tons by 2020 [4]. Most steel slag is treated as industrial waste, and it not only occupies valuable land resources but also leaches out heavy metal ions (such as Zn, Pb, Cr, Ni, etc.), causing serious pollution to the surrounding environment [5][6][7]. Therefore, improving the utilization rate of steel slag is crucial. To date, some scholars have conducted extensive research on the comprehensive utilization of steel slag [8][9][10][11][12]. The most common method is to use steel slag as a building material, such as directly as a roadbed cushion filler, finely ground as a cementitious material or crushed and screened as an aggregate [13]. Compared with traditional aggregates, the utilization of steel slag as an aggregate is advantageous [14,15]. Faraone et al. [16] found that, with a sufficient water-cement ratio and cement-slag ratio and an appropriate particle size of steel slag, the mortar could exhibit good compressive strength when cured for 28 days. Pellegrino et al. [17] used a large amount of oxidizing electric arc furnace (EAF) slag instead of natural sand and natural gravel to pour concrete, and they discovered that using EAF slag as a coarse aggregate helped to improve the compressive strength, tensile strength and elastic modulus, but it might have a negative impact on the compressive strength when used as a fine aggregate. Devi et al.
[18] discussed the optimal dosage of steel slag as coarse and fine aggregates and showed that the mechanical properties of concrete with the addition of steel slag under the optimal dosage were better than those of ordinary concrete. However, the free calcium oxide (f-CaO) and free magnesium oxide (f-MgO) contained in steel slag undergo slow hydration to form calcium hydroxide (Ca(OH)2) and magnesium hydroxide (Mg(OH)2), with their volumes increasing by 1.98 times and 2.48 times, respectively, which may lead to problems related to poor volume stability, such as the cracking of concrete [19,20]. Therefore, caution should be applied when using steel slag as an aggregate [21,22]. Meanwhile, some studies have shown that the hydration expansion of f-MgO within a certain threshold can compensate for the shrinkage of steel slag products, which is beneficial to their durability [23,24]. Therefore, it is generally believed that f-CaO is the main reason for the poor stability of steel slag [25], and the question of how to eliminate or weaken the expansion effect caused by f-CaO is also the key point of research on the stability of steel slag. Meanwhile, the use of steel slag to capture CO2 and generate stable carbonates has become a research focus in recent years [26][27][28]. Wang et al. [29] summarized the basic principles and common methods of steel slag carbonation and analyzed the effects of the temperature, liquid-solid ratio, carbonation time, CO2 concentration and pressure and other factors on the carbonation effect of steel slag. Liu et al. [30] and Chen et al. [31] carbonated steel slag used as a supplementary cementitious material through direct carbonation and indirect carbonation, respectively, and both found that carbonation optimized the pore structure of the cement matrix and thus increased the compressive strengths of the mortars. Pang et al. [32] compared the basic characteristics of natural fine aggregate, steel slag aggregate and carbonated steel slag aggregate and found that the compressive strength of concrete with carbonated steel slag fine aggregate could be effectively improved. However, Yu et al. [33] showed different results, setting up four groups of mortar specimens with carbonated steel slag replacement ratios of 0%, 15%, 30% and 45% and finding that the compressive strengths of mortar specimens with carbonated steel slag at 28 days were always smaller than those of mortar specimens without steel slag aggregate. Thus, research on carbonated steel slag mainly focuses on its use as a cementitious material, and the conclusions about the mechanical properties of mortar specimens with carbonated steel slag fine aggregate are not unified. Moreover, due to the significant differences in the physical properties and chemical compositions of different types of steel slag [34,35], there is great uncertainty regarding the expansion and long-term performance of mortar specimens with carbonated steel slag fine aggregate. To fill this research gap, the influence of carbonated steel slag fine aggregate (SSFA) on the properties of mortar was investigated in this study. Firstly, the ethylene glycol-TG method was used to measure the content of f-CaO in SSFA, and the effect of the carbonation time on the content of f-CaO was studied. Then, the effects of different carbonated SSFA replacement ratios on the expansion performance, mechanical properties and carbonation performance of the mortar were compared and discussed.
Raw Materials The cement used was ordinary Portland cement with a strength grade of 42.5 R, and the chemical composition is shown in Table 1. The fine aggregates included natural sand and steel slag, among which the natural sand was well-graded natural river sand. Steel slag was taken from a steel plant and crushed by a jaw crusher. The SSFA with a particle size of 0.15~5.0 mm was obtained by a ball crusher. The mixing water was tap water. Carbonation of SSFA The prepared SSFA was laid flat on a plastic tray and placed into a concrete carbonation chamber for rapid carbonation, as shown in Figure 1. CO2 gas with purity of 99% was injected into the carbonation chamber. According to the Chinese Standard GB/T 50082-2009 [36], the conditions inside the carbonation chamber included a temperature of 20 °C, relative humidity of 70% and a CO2 concentration of 20%. To ensure complete carbonation, the SSFA was taken out from the chamber after 7 days. The carbonation effect of the SSFA was quickly determined by the color change of a phenolphthalein solution, and the results indicated that the carbonation of SSFA was complete.
Mortar Specimens Seven groups of mortar specimens were used for the expansion and compression tests, as displayed in Table 2. The water-cement ratio was 0.47, and the mass ratio of cement to sand was 1:2.25. Additionally, Table 3 shows the grading of the experimental sand, and the corresponding SSFA was used to replace the natural river sand according to the chosen replacement ratios. To improve the reliability of the experimental data, there were three mortar bars used in the expansion test and three cubes used in the compression test for each group, with a mortar bar size of 25 × 25 × 280 mm and a cube size of 40 × 40 × 40 mm. The finished mortar bars were placed into a standard curing room (90% relative humidity and 20 ± 2 °C temperature) with molds and demolded after being cured for 24 h, while the cubes were cured for 28 days after being demolded. Seven groups of specimens were also used for the carbonation test, as listed in Table 4. The chosen water-cement ratio was 0.50, and the mass ratio of cement to sand was 1:3. The fineness modulus of the natural sand was 3.06. Table 5 shows the continuous grading of the SSFA. There were three cubes for each group, with a size of 100 × 100 × 100 mm. After being poured and shaped, all specimens were demolded and placed into a standard curing room with 90% relative humidity and a temperature of 20 ± 2 °C for 28 days. Among all specimens tabulated in Tables 2 and 4, EM represents the groups used for the study of the expansion performance and mechanical properties of the specimens, while CR refers to the groups used to study the carbonation performance of the specimens. The middle number denotes the replacement ratio of SSFA in that group of specimens, and the letters C and UC indicate whether carbonated SSFA or uncarbonated SSFA was used. Determination of f-CaO Content in SSFA To investigate the effect of the carbonation time on the content of f-CaO in SSFA, the content of f-CaO was measured at different carbonation times according to Chinese Standard YB/T 4328-2012 [37]. Approximately 10 g of SSFA was sealed and stored every 2 h until the content of f-CaO was measured to be below 1.00%. The content of f-CaO in SSFA was determined by the ethylene glycol-TG method. First of all, an ethylene glycol calcium solution was obtained by reacting an ethylene glycol solution with f-CaO, and then an EDTA-2Na solution was used to titrate the ethylene glycol calcium solution, so that the total free calcium content (c1) in the SSFA could be obtained through Equation (1). Subsequently, based on the thermal decomposition characteristics of Ca(OH)2 at high temperature, the content of Ca(OH)2 (c2) in the SSFA was measured using a thermogravimetric analyzer. Finally, the difference between c1 and c2 gave the content of f-CaO: T_CaO = c(EDTA) × V × 56.08 / (1000 m) × 100%, (1) where T_CaO is the mass fraction of total free calcium (%); c(EDTA) is the concentration of the EDTA standard titration solution (mol/L); V is the volume of the EDTA standard titration solution (mL); and m is the mass of the steel slag sample (g).
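A small sketch of the titration arithmetic, assuming the normalization of Equation (1) as reconstructed above (the factor of 1000 converting mL·mol/L to mol is our assumption, and the input values are illustrative, not measured values from the paper):

```python
# Sketch of the ethylene glycol/EDTA titration arithmetic for total free
# calcium, per the reconstructed Equation (1). The 1/1000 factor (mL -> L)
# is our assumption; 56.08 g/mol is the molar mass of CaO. Inputs are
# illustrative, not measured values from the paper.
def total_free_calcium_pct(c_edta_mol_per_L, V_mL, m_g):
    """T_CaO in %: c(EDTA) [mol/L] * V [mL] * 56.08 [g/mol] / (1000 * m [g]) * 100."""
    return c_edta_mol_per_L * V_mL * 56.08 / (1000.0 * m_g) * 100.0

c1 = total_free_calcium_pct(0.02, 28.0, 1.0)  # total free calcium (illustrative)
c2 = 0.0                                      # Ca(OH)2 share from the TG step
f_CaO = c1 - c2                               # f-CaO content = c1 - c2
print(f"f-CaO = {f_CaO:.2f} %")
```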
Expansion Test of Mortar Specimens According to the Chinese standard JGJ 52-2006 [38], the volume stability of the mortar specimens was studied through the alkali activity test (rapid method) using crushed stones or pebbles. After demolding, the specimens were immersed in a curing tube and cured in a water bath at 80 °C for 24 h. The initial lengths (l0) of the specimens were measured, and then the specimens continued to be immersed in the curing tube, filled with 1 mol/L NaOH solution at a temperature of 80 °C. Starting from the day on which the initial lengths were measured, the changes in the specimens were observed at 3, 7, 14, 21, and 28 days, and the corresponding lengths (li) were measured. The expansion rates of the specimens were calculated as εi = (li − l0)/(l0 − ∆1 − ∆2) × 100%, (2) and the average value of the expansion rates of the three specimens was regarded as the result for each group, where εi is the expansion rate of a specimen at the i-th day (%); li is the length of a specimen at the i-th day (mm); l0 is the initial length of a specimen (mm); and ∆1 and ∆2 are the lengths of the measuring heads at the left and right ends of the specimen (mm). Compression Test of Mortar Specimens According to the Chinese standard GB/T 17671-2021 [39], the compressive strengths of the mortar specimens were determined. The specimen was placed in the pressure testing machine and uniformly loaded at a rate of 2.4 kN/s until failure. The average value of the compressive strength of the three specimens was regarded as the result for each group of specimens. Carbonation Test of Mortar Specimens Figure 2 shows the process of the carbonation test. To ensure the one-dimensional carbonation of the specimens, only one side of each specimen was retained as the CO2 erosion surface, while the other five surfaces were coated with epoxy resin. According to the Chinese standard GB/T 50082-2009 [36], the specimens were placed in batches in the carbonation chamber for the carbonation test, and the conditions in the chamber were consistent with the carbonation environment described in Section 2.2. After 7 days of carbonation, the specimens were taken out and cut along the midline using a rock cutting machine. A phenolphthalein solution with a concentration of 1% was sprayed onto the cutting surface and, after about 30 s, the carbonation depth along the length of the cutting surface was measured every 10 mm. There were a total of 10 measurement points on the cutting surface of each specimen, and the average value of the carbonation depths of the three specimens was calculated as the carbonation depth for each group of specimens.
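Before turning to the results, a minimal numerical sketch of the expansion-rate calculation of Equation (2), under the assumption stated above that the effective gauge length is l0 − ∆1 − ∆2; the sample readings are invented for illustration:

```python
# Sketch of the expansion-rate calculation per the reconstructed Eq. (2):
# eps_i = (l_i - l_0) / (l_0 - d1 - d2) * 100, with the measuring-head
# lengths d1, d2 subtracted from the gauge length. Readings are invented.
def expansion_rate_pct(l_i, l_0, d1, d2):
    return (l_i - l_0) / (l_0 - d1 - d2) * 100.0

l_0 = 280.0          # initial length (mm), nominal bar length
d1 = d2 = 10.0       # assumed measuring-head lengths (mm)
readings = {3: 280.3, 7: 280.7, 14: 281.2, 21: 281.9, 28: 282.4}  # mm, invented

for day, l_i in readings.items():
    print(f"day {day:2d}: expansion rate = "
          f"{expansion_rate_pct(l_i, l_0, d1, d2):.3f} %")

# Per the text, the group result is the mean over the three bars in a group.
```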
Effect of Carbonation Time on Content of f-CaO in SSFA The content of f-CaO in SSFA at different carbonation times is shown in Figure 3. It is obvious that as the carbonation time increases, the content of f-CaO gradually decreases. After 8 h of carbonation, the content of f-CaO drops to 0.93%, only 29.34% of the initial content, which indicates that the carbonation of steel slag can effectively reduce the f-CaO content. The final products of carbonation are mainly CaCO3 crystals with stable chemical properties, which can fill the gaps on the surface of the SSFA and consolidate the original skeleton of the SSFA, thereby helping to improve the strength and volume stability of the SSFA. From Figure 3, it can also be observed that the decrease rate of the f-CaO content with the carbonation time slows down. In the first two hours, the content of f-CaO reduces from 3.17% to 1.73%, with a decrease of 45.43%, and then the decrease in the content of f-CaO between adjacent carbonation times does not exceed 25%. This may be because the products of early carbonation are deposited on the surface of the steel slag, which, to some extent, hinders the diffusion of CO2 gas into the interior of the steel slag, causing a slowdown in the carbonation reaction [40].
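The quoted percentages can be checked directly from the reported f-CaO contents (a trivial arithmetic sketch):

```python
# Check of the quoted f-CaO figures: initial 3.17 %, 1.73 % after 2 h,
# 0.93 % after 8 h of carbonation.
initial, at_2h, at_8h = 3.17, 1.73, 0.93

print(f"decrease over first 2 h: {(initial - at_2h) / initial * 100:.2f} %")  # ~45.43 %
print(f"remaining after 8 h:     {at_8h / initial * 100:.2f} % of initial")   # ~29.34 %
```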
Expansion Rate of Mortar Specimens While recording the changes in the expansion rate of the specimens at different ages, the expansion phenomena can also be observed. As shown in Figure 4, there is a certain regularity in the surface damage of each group of specimens. The surfaces of the specimens first show a peeling phenomenon: the surface concrete falls off in powder form, brown explosion points appear, and the distributions of the peeling positions and the explosion points are scattered with no obvious law. Subsequently, several tiny cracks appear around the explosion points, and, as the experimental time increases, the cracks develop from the points to the surrounding areas, forming a network distribution. Finally, the width of the cracks gradually increases, and the specimens break along the cracks. However, the rate of surface damage development varies among different groups of specimens. At the beginning of the expansion test, there is no clear damage to any group of specimens. When the experiment lasts for 14 days, the specimens in both Groups EM60-C and EM60-UC (Figure 5e,f) show several brown explosion points and obvious network cracks on the surface, while the specimens in Group EM60-UC have more severe damage, with longer and wider cracks, and some cracks reach a width of 0.2 mm. Although the specimens in Group EM45-UC (Figure 5c) have one or two brown explosion points on the surface, the overall damage is not significant, while the other groups of specimens (Figure 5a,b,d) have no damage. When the experiment lasts for 21 days, multiple breaks occur on the surfaces of the specimens in Group EM60-UC, indicating the end of the expansion test. One specimen in Group EM60-C also breaks into several parts, while the network cracks on the surface of the remaining specimens continue to develop. The specimens in Groups EM45-C and EM45-UC show obvious peeling and cracking phenomena, while the other three groups of specimens still show little damage. When the experiment reaches 28 days, there is still no obvious damage on the surfaces of the specimens in Groups EM0 and EM30-C, while a small number of cracks appear on the surfaces of the specimens in both Groups EM30-UC and EM45-C. The network cracking of the specimens in Group EM45-UC and the remaining specimens in Group EM60-C becomes increasingly obvious and the cracks widen, but no break occurs.
It is not difficult to find that, compared to the carbonated SSFA specimens, the uncarbonated SSFA specimens show faster and more severe damage, demonstrating that the carbonation of SSFA can significantly improve the volume stability of the specimens. Moreover, the expansion damage of the specimens becomes increasingly severe with the increase in the SSFA replacement ratio, which means that the SSFA replacement ratio does indeed have a significant impact on the volume stability of the specimens.
The expansion rate varies with the age of the specimens, as shown in Figure 6. Comparing Figure 6a with Figure 6b, it can be seen that, under the same conditions, the expansion rate of the uncarbonated SSFA specimens is much higher than that of the carbonated SSFA specimens, illustrating that the carbonation treatment of SSFA is beneficial in improving the volume stability of the specimens. According to Section 3.1, the content of f-CaO in SSFA significantly decreases due to carbonation, and its hydration expansion effect is weakened. Moreover, the CaCO3 crystals generated by carbonation will fill the pores on the surface or wrap the SSFA, which has a certain hindering effect on the subsequent hydration process. Therefore, the expansion rate of the carbonated SSFA specimens is generally small. Figure 6a shows that the early expansion rate of the uncarbonated SSFA specimens fluctuates around 0.10%, and the difference is not obvious. As the age increases, the expansion rates of the specimens in Groups EM30-UC and EM45-UC change relatively smoothly. The expansion rate of the specimens in Group EM30-UC is nearly identical to that of the specimens in Group EM0, and the difference in the expansion rates between the two groups at the same age is within 0.100%. Moreover, the expansion rate of the specimens in Group EM60-UC is much higher than that of the others, with an expansion rate of 0.857% at the age of 14 days, which is 234.8% and 86.7% higher than that of the specimens in Groups EM30-UC and EM45-UC, respectively, and the expansion development is also faster, with breaks appearing first. Therefore, the expansion rate of the uncarbonated SSFA specimens increases with the replacement ratio. When the replacement ratio of SSFA is high, the development of the expansion rate of the SSFA specimens can be divided into several stages. In the early stage, the hydration reaction of the specimens is mainly that of a cement-based material, and the expansion development is slow, with little difference in the change in the expansion rate. After the basic hydration of the cementitious materials is completed, the SSFA in the specimens begins to slowly hydrate, and the expansion effect begins to appear. As time passes, the expansion effect generated by the hydration of SSFA gradually becomes apparent, and the expansion rate of the specimens continues to increase. When the accumulated expansion stress inside the specimens exceeds the tensile strength of the matrix, micro-cracks will appear around the steel slag particles. If the steel slag is located near the surfaces of the specimens, it is easy to see the peeling phenomenon and explosion point damage on the surfaces of the specimens.
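The 14-day figures quoted above imply approximate expansion rates for the other two uncarbonated groups; a small sketch of that back-calculation (our inference from the stated percentages, not values reported in the paper):

```python
# Back-calculation (our inference, not values reported in the paper):
# EM60-UC reaches 0.857 % at 14 days, stated to be 234.8 % and 86.7 %
# higher than EM30-UC and EM45-UC, respectively.
em60 = 0.857  # % at 14 days

em30 = em60 / (1 + 2.348)   # "234.8 % higher" => factor 3.348
em45 = em60 / (1 + 0.867)   # "86.7 % higher"  => factor 1.867

print(f"implied EM30-UC at 14 d: {em30:.3f} %")  # ~0.256 %
print(f"implied EM45-UC at 14 d: {em45:.3f} %")  # ~0.459 %
```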
Figure 6b shows that, for the carbonated SSFA specimens, the expansion rates on the third day are lower than those of the specimens without SSFA. Furthermore, during the experiment, the expansion rates of the specimens in Groups EM45-C and EM30-C are always lower than those of the specimens in Group EM0, illustrating that when the replacement ratio of carbonated SSFA is less than 45%, the carbonated SSFA has an inhibitory effect on the expansion development of the specimens. This is probably because carbonation treatment mainly involves a reaction with the f-CaO in the steel slag to generate CaCO3 crystals; when carbonated steel slag is used as a fine aggregate to replace some natural sand, it not only weakens the expansion effect of the steel slag itself but also improves the strength of the mortar, making the specimens less prone to expansion damage. In addition, the expansion rates of the specimens in Group EM60-C are nearly always larger than those of the specimens in Group EM0, with the expansion rate at the 28th day being 88.79% higher than that of EM0, demonstrating that the replacement ratio of carbonated SSFA has an important influence on the expansion rate of the specimens. Although carbonation accelerates the hydration of the f-CaO in the SSFA and helps to mitigate the problem of poor volume stability, the total amount of SSFA used increases, and the superimposed expansion effect is also significant, resulting in a much higher expansion rate.

Compressive Strength of Mortar Specimens

Figure 7 shows the compressive strength of the specimens. It is obvious that the compressive strength of the specimens with SSFA is higher than that of the ordinary specimens (30.55 MPa), i.e., the addition of SSFA is beneficial for the compressive strength of the specimens. This may be because, compared to natural sand with a smooth surface, steel slag, whose surface is quite rough, can bond better with the cement slurry. In addition, the hydration products of f-CaO in SSFA can fill the pores inside the mortar, improving the compactness of the mortar, which is manifested as an improvement in the compressive strength of the specimens.

When the replacement ratio of SSFA is the same, the compressive strengths of the specimens in Groups EM60-C and EM45-C are 1.29% and 6.81% higher than those of EM60-UC and EM45-UC, respectively, which indicates that the carbonation of SSFA has a positive effect on the compressive strength of the specimens. This may be related to the pre-filling of the pores with CaCO3 crystals, as described in Section 3.1, making the structure of the steel slag particles more compact.
For the specimens with uncarbonated SSFA, the compressive strength of the specimens in Group EM60-UC is the lowest; specifically, it is 21.58% and 3.36% lower than that of the specimens in Groups EM30-UC and EM45-UC, respectively. In other words, the compressive strength of the specimens decreases with the increase in the replacement ratio of SSFA. This is because, although uncarbonated SSFA can have a positive effect on the compressive strength of the specimens, the expansion effect generated by the hydration of SSFA gradually becomes apparent with the increase in the replacement ratio, manifesting as a decrease in the compressive strength. For the specimens with carbonated SSFA, the specimens in Group EM45-C have the highest compressive strength, which is 6.23% and 9.11% higher than that of the specimens in Groups EM30-C and EM60-C, respectively. It is evident that carbonation treatment allows a higher replacement ratio of SSFA while ensuring the compressive strength of the specimens.
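The pairwise strength comparisons quoted above can also be cross-checked for internal consistency. Since only the EM0 strength (30.55 MPa) is reported as an absolute value, the sketch below treats the EM60-UC strength as an arbitrary unit; the variable names and the derivation route are ours, and the script merely confirms that the quoted percentages imply one another.

```python
# Consistency check of the quoted compressive-strength relations.
# Absolute strengths of the SSFA groups are not reported, so the EM60-UC
# strength is treated as an arbitrary positive unit.

em60_uc = 1.0                     # hypothetical unit strength of EM60-UC
em30_uc = em60_uc / (1 - 0.2158)  # EM60-UC is 21.58% lower than EM30-UC
em45_uc = em60_uc / (1 - 0.0336)  # EM60-UC is 3.36% lower than EM45-UC
em60_c = 1.0129 * em60_uc         # EM60-C is 1.29% higher than EM60-UC
em45_c = 1.0681 * em45_uc         # EM45-C is 6.81% higher than EM45-UC

# The text states EM45-C is 9.11% higher than EM60-C; check the implied ratio.
print(f"EM45-C / EM60-C = {em45_c / em60_c:.3f}")  # ~1.091, matching 9.11%
```

The quoted figures are therefore mutually consistent to rounding precision.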
Carbonation Depth of Mortar Specimens

From Figure 8, it can be seen that, compared to the specimens in Group CR0, the carbonation depths of the specimens in Groups CR15-UC, CR30-UC and CR45-UC decrease by 15.98%, 27.10% and 37.04%, respectively, while the carbonation depths of the specimens in Groups CR15-C, CR30-C and CR45-C are reduced by 9.36%, 14.23% and 23.78%, respectively, indicating that the carbonation resistance of the specimens with SSFA is better than that of the ordinary specimens. In addition, as the replacement ratio of SSFA increases, the carbonation depth of the specimens with uncarbonated SSFA significantly decreases, showing that the carbonation resistance of the specimens improves with the increase in the replacement ratio when the replacement ratio of SSFA is less than 45%. The reasons for this phenomenon are that the f-CaO contained in SSFA slowly hydrates during the carbonation test, absorbing CO2 gas, and, at the same time, the Ca(OH)2 or CaCO3 produced by hydration can fill the surrounding pores, preventing CO2 gas from continuing to diffuse into the interiors of the specimens. Therefore, the carbonation resistance of specimens with SSFA is better, and the improvement in the carbonation resistance is more significant with the increase in the replacement ratio of SSFA.

Compared with the specimens with uncarbonated SSFA, the carbonation depth of the specimens with carbonated SSFA is slightly larger, but the difference in the carbonation depth between the two types of specimens with the same replacement ratio of SSFA does not exceed 18%. The reason for this small difference is that the content of f-CaO in the SSFA visibly drops after carbonation, and its capacity to absorb CO2 is weakened; hence, the anti-carbonation performance of the specimens with carbonated SSFA is reduced slightly.
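Because only reductions relative to Group CR0 are quoted, the gap between the carbonated and uncarbonated series can be expressed per unit CR0 depth, as in the sketch below. The normalisation and the reading of the "does not exceed 18%" bound as a fraction of the CR0 depth are ours; the text does not state the percentage basis explicitly.

```python
# Carbonation depths relative to Group CR0 (taken as 1.0), reconstructed from
# the quoted percentage reductions. Absolute depths are not reported.

reductions_uc = {15: 0.1598, 30: 0.2710, 45: 0.3704}  # uncarbonated SSFA
reductions_c = {15: 0.0936, 30: 0.1423, 45: 0.2378}   # carbonated SSFA

for ratio in (15, 30, 45):
    depth_uc = 1.0 - reductions_uc[ratio]  # e.g. CR15-UC depth relative to CR0
    depth_c = 1.0 - reductions_c[ratio]
    gap = depth_c - depth_uc               # extra depth of the carbonated series
    print(f"CR{ratio}: UC = {depth_uc:.3f}, C = {depth_c:.3f}, gap = {gap:.3f}")
# Gaps: 0.066, 0.129, 0.133 of the CR0 depth -- all below 0.18 under this reading.
```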
Conclusions

This paper first explored the influence of the carbonation time on the content of f-CaO in SSFA and then investigated the influence of carbonated SSFA on the expansion performance, mechanical properties and carbonation performance of mortar specimens. The following conclusions can be drawn.

(1) The carbonation treatment of steel slag can effectively reduce the f-CaO content. After 8 h of carbonation, the content of f-CaO in SSFA drops from 3.17% to 0.93%, only 29.34% of the initial content.

(2) Compared to the mortar specimens with carbonated SSFA, the specimens with uncarbonated SSFA show faster and more severe damage and a higher expansion rate. When the replacement ratio of carbonated SSFA is less than 45%, the carbonated SSFA has an inhibitory effect on the expansion development of the specimens. The carbonation treatment of SSFA can increase the usable replacement ratio of SSFA while maintaining the volume stability of the mortar specimens.

(3) The compressive strength of mortar specimens with uncarbonated SSFA decreases with the increase in the replacement ratio of SSFA. The compressive strengths of the specimens with carbonated SSFA replacement ratios of 60% and 45% are 1.29% and 6.81% higher than those of the specimens with uncarbonated SSFA at the same replacement ratios, which indicates that the carbonation of SSFA has a positive effect on the compressive strength of the specimens. Carbonation treatment can increase the replacement ratio of SSFA while ensuring the compressive strength of the specimens.
(4) Mortar specimens with SSFA have better carbonation resistance. When the replacement ratio of SSFA is less than 45%, the carbonation depth of the specimens significantly decreases with the increase in the replacement ratio. Compared with mortar specimens with uncarbonated SSFA, the carbonation depth of mortar specimens with carbonated SSFA is slightly larger, and their anti-carbonation performance is slightly reduced.

(5) Carbonation treatment is a beneficial method to improve the stability of SSFA and can provide guidance for the future application of SSFA in mortar. Further study of the effect of carbonated steel slag coarse aggregate on the mechanical and expansion properties of concrete is recommended.

Figure 3. The f-CaO content in SSFA with the carbonation time.

Figure 6. Change in expansion rate with age of specimens.

Table 2. Quantities of mortar specimens in expansion and compression tests.

Table 3. Grading of experimental sand.

Table 4. Groups of specimens in carbonation test.