Long Short-Term Memory Neural Networks for RNA Viruses Mutations Prediction
Viral evolution remains a major deterrent to the viability of antiviral drugs. The ability to anticipate this evolution will assist in the early detection of drug-resistant strains and may help make antiviral drugs the most effective plan. In recent years, a deep learning model called the seq2seq neural network has emerged and has been widely used in natural language processing. In this research, we borrow this approach for predicting next-generation sequences using the seq2seq LSTM neural network while considering these sequences as text data. We used one-hot vectors to represent the sequences as input to the model; consequently, the model preserves the basic positional information of each nucleotide in the sequences. Two RNA virus sequence datasets are used to evaluate the proposed model, which achieved encouraging results. The achieved results illustrate the potential of utilizing the LSTM neural network for DNA and RNA sequences in solving other sequencing issues in bioinformatics.
Introduction
Families of viruses are grouped based on their type of nucleic acid genetic material, DNA or RNA. DNA viruses usually contain double-stranded DNA (dsDNA) and infrequently single-stranded DNA (ssDNA). These viruses replicate using DNA-dependent DNA polymerase. RNA viruses typically have ssRNA but may also contain dsRNA. There are two groups of ssRNA viruses: positive-sense (ssRNA (+)) and negative-sense (ssRNA (−)). The genetic material of ssRNA (+) viruses is like mRNA and might be directly translated by the host cell. ssRNA (−) viruses carry RNA complementary to mRNA and must be converted to positive-sense RNA using RNA polymerase at some point. A special case of this type is the retroviruses, which replicate through DNA intermediates utilizing reverse transcriptase despite having RNA genomes [1]. The strands of DNA are composed of nucleotides. All nucleotides are made of a nitrogen-containing nucleobase, which is either guanine (G), adenine (A), thymine (T), or cytosine (C). Most DNA molecules consist of two strands wrapped around each other to form a double helix. DNA strands use this arrangement to generate RNA in a process known as transcription. In any case, unlike DNA, RNA is regularly found as a single strand. One sort of RNA is the messenger RNA (mRNA), which carries information to the ribosome, where the protein is synthesized. The mRNA sequence is what indicates the chain of amino acids that makes up the protein. Moreover, DNA and RNA are the main components of viruses. A few of the viruses are DNA-based, whereas others are RNA-based, such as Newcastle, HIV, and flu [2].
RNA viruses differ from DNA viruses in that they accumulate many mutations and, consequently, have higher versatility. These mutations cause continuous evolution that leads to resistance, so the virus becomes more destructive [3].
Influenza viruses belong to the negative-sense RNA viruses. Influenza A and B contain eight segments of viral RNA, while influenza C contains seven segments of viral RNA [4]. Several viruses with pandemic potential have developed over time.
The 2002 rise of the severe acute respiratory syndrome coronavirus (SARS-CoV), the widespread H1N1 flu in 2009, which resulted in the circulation of flu H5N1 and H5N7 strains, and the later rise of the Middle East respiratory syndrome coronavirus (MERS-CoV) outline the current risk of these viruses [1,4]. In spite of the main contrasts in their structure and epidemiology, these pandemic viruses share a number of vital properties. They are zoonotic enveloped RNA respiratory viruses that occasionally transmit between people in their original form but seem to be evolving to facilitate more productive human-to-human transmission [5].
Machine learning is one of the tools used for analyzing mutation data. Machine learning techniques offer assistance by predicting the impacts of nonsynonymous single nucleotide polymorphisms on protein stability, function, and drug resistance [6].
Learning the rules of depicting mutations that influence protein behavior and utilizing them to infer new important mutations that will be resistant to certain drugs is one of the purposes for which machine learning techniques are used [7]. Another purpose is to predict the potential secondary structure based on primary structure sequences [8][9][10].
Another trend is to predict single-nucleotide variants of RNA sequences. An RNA sequence is considered to be a series of four separate states and thus follows nucleotide substitutions during the evolution of the sequence. In this direction, it had been assumed that the different nucleotides in the sequence evolved symmetrically, which was justified by the neutral evolution of nucleotides. Later, other research rejected this assumption and identified the relevant neighbor-dependent substitution processes.
Finally, the prediction of the mutation process helps in developing drugs that can overcome these mutations [11,12]. Long short-term memory (LSTM) is a specific recurrent neural network (RNN) architecture designed for modeling time series, and it captures long-term dependencies more accurately than traditional RNNs [13]. The LSTM architecture is based on RNNs and was introduced to address the vanishing gradient problem of classical RNNs [14]. The seq2seq LSTM architecture is an encoder-decoder architecture, which consists of two LSTM networks: the encoder LSTM and the decoder LSTM. The LSTM cell takes an input in the form of the one-hot encoded version at position t, as well as two recurrent states, the hidden state h(t) and the cell state c(t), which are vectors with predetermined dimensions. These states and the input are regulated by trainable weights [15].
Each LSTM cell remembers values of the hidden state and cell state over time intervals, and the three gates modify the information received from the previous time step. Finally, the outputs of each gate are combined to update the hidden state and the cell state of the current cell [14]. LSTM networks deal with texts, and they create dictionaries of unique words for all text data without ordering them. During training, the data from the dictionaries are used to modify the LSTM on the basis of one word at a time. Every word is represented by a one-hot vector. We create as many input vectors as the number of words in one sequence; in this case, the sequence is one sentence, i.e., one set of words. In this study, a machine learning technique based on deep learning is proposed. It predicts the probabilities of nucleotide mutations that do not occur in the primary sequence. This paper presents a proof of concept by applying this technique to a set of successive generations of the influenza A virus (H1N1) from the USA and generations of SARS-CoV-2. The actual sequence and the predicted sequence are then compared in order to validate the ability of the machine learning technique to predict mutations in RNA sequence evolution and its prediction accuracy. In this method, a training phase is used in which the input of each iteration is a sequence of one generation of the virus, while the output is the sequence of the next generation of the virus. Each feature within the input is a nucleotide within the sequence corresponding to a feature within the output. The method presented in this paper predicts potential mutations in a sequence based on previous sequences.
This comparison results in a ratio that is calculated based on the number of nucleotides matched between the predicted and actual sequences versus the total number of nucleotides in the sequence.
The main contribution of this study is to predict the next DNA sequence using LSTM deep learning in a different manner: dividing the long DNA sequences into amino acid words and tokenizing them to create an amino acid dictionary of unique words. The results outperform the results of studies in the literature. The rest of the paper is organized as follows. Section 2 provides related work on the application of machine learning techniques to genetic problems, as well as the proposed technique to solve these problems. Methods, Results, and Discussion appear in Sections 3-5, and finally, a Conclusion is presented in Section 6.
Related Work
Prediction strategies have been practiced within the field of genetics for a long time, and one of the trends has been the description of genome sequence changes in the influenza virus after it has invaded humans from other animal hosts. Based on the oligonucleotide composition of a few hosts, a strategy has been proposed for predicting changes in the directivity sequence of the virus and controlling strains that are potentially dangerous when introduced to human populations from nonhuman sources adapted toward distinct directions [16].
Authors in [17] used another direction depending on the relationship of mutations among 241 H5N1 hemagglutinins of influenza A virus, determined according to their amino acid and RNA codon sequences, with these six independent features and the presence or absence of mutation as the dependent variable using logistic regression. Generally, logistic regression can capture a relationship, but this relationship cannot be captured when only a few mutations are contained in the regression. This means that all the mutations need to be pooled in the first hemagglutinin sequence, followed by merging all independents into a single hemagglutinin sequence for the regression analysis, rather than regressing each hemagglutinin sequence with its mutations.
In [18,19], the authors take another direction depending on randomness. An attempt is made to apply a traditional neural network to model the relationship between cause and mutation to predict potential sites of mutation, and then the possibility of mutation of amino acids is used to predict the amino acids that may develop in the expected positions. The results confirmed the possibility of using the relationship and the cause of the mutations with the neural network model to predict the positions of mutations and the possibility of using the mutated amino acids to predict the amino acids that may develop.
Authors in [11] rely on predicting host tropism using a machine learning algorithm (random forest), where computational models of 11 influenza proteins were generated to predict host tropism. Prediction models were trained on influenza protein sequences isolated from both bird and human samples. The model could determine host tropism for individual influenza proteins, using the features of the 11 proteins to build a model for predicting the influenza virus host.
In [12], the authors developed software for discriminating pandemic from non-pandemic influenza sequences at both the nucleotide and protein levels via the CBA method.
In [20], the possible point mutations of primary RNA sequence structure alignments are predicted. They predicted the genotype of each nucleotide in the RNA sequence and demonstrated that the nucleotides in the RNA sequence influence adjacent nucleotide mutations in the sequence, using a neural network technique in order to predict new strains, and then predicted the mutation patterns using rough set theory. The data used in this model are several aligned RNA isolates of time-series species of the Newcastle virus. Two datasets from two different sources were used for model validation.
This method results in nucleotide prediction accuracy in the new generation exceeding 75%.
In this paper, a seq2seq LSTM-RNN model is introduced for predicting the next-generation sequence, inspired by recent work in language modeling. This model, which uses the entire genome for training, predicts the next sequence generation with an accuracy that outperforms what had been previously reported.
Dataset.
This study presents a proof of concept by applying this technique to a set of successive generations of the Newcastle Disease Virus (NDV) and the influenza virus, as described in Table 1.
The first dataset, of NDV, consists of 83 DNA (RNA reverse transcription) sequences that were obtained from different birds in China over the course of 2005; samples were taken from ill or dead poultry. The data were collected and presented in [21]. The second dataset, of the influenza virus, consists of DNA (RNA reverse transcription) sequences of 4609 H1N1 influenza A viruses from 1935 to 2017. It was obtained from the Medline data bank [22].
Proposed Model Architecture. In this study, each amino acid in the virus sequence is predicted, and it was demonstrated that the amino acids in the sequence influence adjacent amino acid mutations in the sequence, using an LSTM deep neural network technique in order to predict new strains.
We proposed a model for virus mutation prediction. The aim of this approach was to optimize the prediction accuracy by preparing sequencing data in a new way. The proposed approach consists of four main phases. In the first phase, sequences of the datasets are preprocessed. In the second phase, once we have preprocessed sequences of data, they are transformed into a format that is suitable for training an LSTM network. In this case, a one-hot encoding of the integer values is used, where each value is represented by a binary vector that is all "0" values except the pointer to the word, which is set to 1. In the third phase, the input data are prepared to train the LSTM encoder. After that, it is the role of the decoder to take the output from the encoder as integers and transform it into sequences. Finally, the obtained results are evaluated. The overall process of the proposed method is illustrated in Figure 1.
Preprocessing Phase. As shown in Figure 2, to achieve better results with sequencing data, the data preprocessing phase is based on three steps: the first step is DNA translation, the second is tokenization, and the third is padding.
(1) DNA Sequencing. The DNA sequence is a series of consecutive letters without spaces; there are no words in the DNA sequence. We propose a method to translate sequences of DNA into words and then apply the representation technique of text data without losing any information about the position of each nucleotide in the sequence. Figure 3 gives an example of translating a DNA sequence into words. A window with a fixed size is slid through the given sequence with a fixed stride. Each segment is considered as a word and is added to the destination sequence. Finally, a series of words derived from the given DNA sequence is produced. The word size is 3 nucleotides.
The three-letter code of RNA may be used to represent each amino acid in the sequence.
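This word-splitting step can be illustrated with a short Python sketch. This is a minimal illustration rather than the authors' code; the function name is hypothetical, and the stride value is an assumption, since the paper fixes the word size at 3 nucleotides but does not state the stride explicitly.

```python
def dna_to_words(sequence, word_size=3, stride=3):
    """Slide a fixed-size window over a DNA string and emit one 'word' per step.

    word_size=3 follows the paper (codon-sized words); the stride value is an
    assumption made for this sketch, since the paper does not state it.
    """
    words = []
    for start in range(0, len(sequence) - word_size + 1, stride):
        words.append(sequence[start:start + word_size])
    return words

# Example: a toy fragment of a genome sequence.
print(dna_to_words("ACTCTATTGGCA"))  # ['ACT', 'CTA', 'TTG', 'GCA']
```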
(2) DNA Sequence Tokenization for the Amino Acid Dictionary. The next step is tokenizing the DNA sequences. Tokenization divides a sentence into the corresponding list of words. A dictionary with 64 different words is used, as shown in Figure 2. Each sequence in the dataset is built from the four nucleotide bases (A, C, T, G), and each word consists of three nucleotides, so we generate a dictionary of 64 (4³) unique words where the words are the keys and the corresponding integers are the values. This is extremely important since deep learning and machine learning algorithms work with numbers.
(3) DNA Sequence Padding. Next, the input will be padded. The reason behind padding the input and the output is that sentences can be of varying length; however, the LSTM expects input instances of the same length. In this manner, the sentences are changed into fixed-length vectors. One way to do this is by using padding. In padding, a certain length is defined for a sentence. In our case, the length of the longest sentence in the inputs and outputs will be used for padding the input and output sentences, respectively. Each word is then represented by a one-hot vector. A sequence of words is produced from the given DNA sequence.
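A minimal sketch of the tokenization, padding, and one-hot encoding steps is shown below, using the Keras utilities mentioned later in the paper; the exact parameters (padding direction, tokenizer settings) are assumptions of this sketch, not values from the study.

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical

# Each generation is a space-separated sentence of 3-nucleotide words.
sentences = ["ACT CTA TTG", "ACT CTG TTG"]

# Build the amino-acid-word dictionary (up to 64 unique 3-letter words).
tokenizer = Tokenizer(lower=False)
tokenizer.fit_on_texts(sentences)
encoded = tokenizer.texts_to_sequences(sentences)    # words -> integers

# Pad every sentence to the length of the longest one.
max_len = max(len(s) for s in encoded)
padded = pad_sequences(encoded, maxlen=max_len, padding="post")

# One-hot encode: each word becomes a binary vector with a single 1.
vocab_size = len(tokenizer.word_index) + 1            # +1 for the padding index
one_hot = to_categorical(padded, num_classes=vocab_size)
print(one_hot.shape)                                   # (n_sentences, max_len, vocab_size)
```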
Proposed Prediction Model.
Borrowing the idea from seq-to-seq learning [23], the proposed LSTM model is composed of connected LSTM-based encoder and decoder networks (Figure 4). The input to the encoder LSTM is the generation sequence (i). The final hidden states and cell states are concatenated and passed to the decoder, which employs them, beside the following generation sequence (i + 1) (during training), to create a next-generation sequence. Hidden states (h) and cell states (c) capture the context information that will be used to inform the decoder. The decoder consists of an LSTM like the encoder. Each cell receives the cell and hidden states from the previous cell, except for the first cell, which obtains them from the encoder (h and c). The cell (k) in the dense layer is used to predict a probability distribution of the word at position t of the next-generation sequence.
Hyperparameters used in the model are shown in Table 2; they were set based on a compromise between training time and accuracy on the validation set.
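A compact Keras sketch of this encoder-decoder layout is given below; the latent dimension, optimizer, and vocabulary size are placeholders rather than the values listed in Table 2, and the sketch should be read as one possible realization of the described architecture.

```python
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

vocab_size = 65      # 64 three-letter words plus a padding index (assumption)
latent_dim = 256     # placeholder; the study's value is given in Table 2

# Encoder: reads generation (i) and returns its final hidden and cell states.
encoder_inputs = Input(shape=(None, vocab_size))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: reads generation (i + 1) during training (teacher forcing),
# initialized with the encoder states, and predicts the word at each position.
decoder_inputs = Input(shape=(None, vocab_size))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=[state_h, state_c])
decoder_outputs = Dense(vocab_size, activation="softmax")(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy",
              metrics=["accuracy"])
```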
(1) The Model Training. As shown in Figure 5, the bidirectional sequence-to-sequence LSTM is made of an encoder and a decoder. An input sequence (ACT CTA TTG in Figure 5) is given as input to the encoder. Each LSTM cell takes information from the previous cell as input, in the form of a hidden state vector and a cell state vector (represented by arrows), and combines it with the one-hot encoded vector. The output of the encoder is the concatenation of the hidden and cell-state vectors. In the decoder, each cell receives as input the one-hot encoded version of the previous word of the next generation generated by the model, as well as the hidden state and cell-state vectors from the previous cell. It not only passes those two vectors, after updating them, to the next cell but also feeds the hidden state vector to the dense layer (output layer), which outputs a probability distribution over the next-generation word at that position. A SoftMax function is applied to obtain the probability distribution. To reduce overfitting, early stopping is used to end training when the validation loss does not decrease.
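Continuing the model sketch above, the training call with teacher forcing and early stopping could look like the following; the array shapes, batch size, and patience value are assumptions, and the zero-filled arrays merely stand in for the real one-hot generation pairs.

```python
import numpy as np
from tensorflow.keras.callbacks import EarlyStopping

# Dummy one-hot arrays standing in for the real generation pairs (assumed shapes).
n_pairs, max_len, vocab_size = 100, 20, 65
encoder_input_data = np.zeros((n_pairs, max_len, vocab_size))    # generation (i)
decoder_input_data = np.zeros((n_pairs, max_len, vocab_size))    # generation (i + 1)
decoder_target_data = np.zeros((n_pairs, max_len, vocab_size))   # targets, shifted by one

# Stop training when the validation loss stops decreasing.
early_stop = EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True)

model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=64, epochs=100, validation_split=0.2,
          callbacks=[early_stop])
```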
(2) The Model Evaluation. To use a trained LSTM model to simulate the evolution of a given generation sequence, the model is used as described previously, with a few small modifications. The functionality of the encoder remains the same. The sentence in the original language is passed through the encoder, and the hidden state and the cell state are the outputs from the encoder. To make predictions, the decoder output is passed through the dense layer. In the tokenization step, we converted words to integers, so the outputs from the decoder will also be integers. However, the output should be a sequence of words of the next generation, so the integers are converted back to words. A new dictionary is created for both inputs and outputs, where the keys are the integers and the corresponding values are the words. Finally, the words in the output are concatenated using a space, and the resulting string is returned. As shown in Figure 6, in Step 1, the hidden state and cell state of the encoder are used as input to the decoder. The decoder predicts a word, y1, which may or may not be true.
At Step 2, the decoder hidden state and cell state from Step 1, along with y1, are used as input to the decoder, which predicts y2. Then, the process continues.
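A sketch of this step-by-step decoding loop is given below. It assumes separate inference-time encoder and decoder models (a standard Keras seq2seq pattern) and a reverse dictionary mapping integers back to words; the function and variable names are hypothetical.

```python
import numpy as np

def predict_next_generation(input_seq, encoder_model, decoder_model,
                            index_word, max_len, vocab_size):
    """Greedy decoding: feed each predicted word back into the decoder."""
    # Step 0: the encoder's hidden and cell states seed the decoder.
    states = encoder_model.predict(input_seq)

    # Start from an all-zero "previous word" vector (an assumption of this sketch).
    target = np.zeros((1, 1, vocab_size))
    decoded_words = []
    for _ in range(max_len):
        output, h, c = decoder_model.predict([target] + states)
        word_id = int(np.argmax(output[0, -1, :]))
        decoded_words.append(index_word.get(word_id, ""))
        # Feed the predicted word (one-hot) and updated states into the next step.
        target = np.zeros((1, 1, vocab_size))
        target[0, 0, word_id] = 1.0
        states = [h, c]
    # Words are joined with spaces to form the predicted next-generation sentence.
    return " ".join(decoded_words)
```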
Results
The model was implemented using the Keras library with LSTM layers in TensorFlow 2.1, and the Biopython library in Python was used for reading the FASTA files and preprocessing the genome sequences. To evaluate the prediction performance for different test cases, two measures, accuracy and the loss function, are used, as shown in Figures 7-10 and Tables 3-6. The proposed model is applied to two datasets of different viruses, as described in Table 1. Each dataset is divided into training and testing portions. In the training phase, each segment of a sequence is considered as a single training entry. The required output corresponding to the input DNA segment is the next scaled DNA segment from the next generation of the training dataset. For each I/O sequence, the weights are continuously updated until reaching the highest possible accuracy, calculated as the number of correctly predicted nucleotides over the total number of nucleotides in the sequence. After that, the next entry is the DNA sequence produced in the current step; the last RNA segment is left for testing.
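The nucleotide-level accuracy described above can be computed as in the following sketch, a simple position-by-position comparison; handling sequences of unequal length or alignment is not addressed here.

```python
def nucleotide_accuracy(predicted, actual):
    """Fraction of positions at which the predicted and actual nucleotides match."""
    matches = sum(p == a for p, a in zip(predicted, actual))
    return matches / len(actual)

print(nucleotide_accuracy("ACTCTATTG", "ACTCTGTTG"))  # 8 of 9 positions match: ~0.889
```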
In the first experiment, the effectiveness of the proposed model is verified using the influenza virus dataset. The accuracy on this dataset is 98.9%. This accuracy is quite high compared with other approaches, such as finding good representations for sequences or feature selection, which were applied before. It is shown that the features extracted by the LSTM layers of the RNN are very useful for virus mutation prediction. The second experiment aims to compare the proposed model with another model in [16]. The proposed model is applied to the Newcastle Disease dataset, which is used in [16]. The resulting accuracy is 96.9%, which outperforms the 70% accuracy in [16].
As noticed, the results of the proposed model show better performance on the influenza virus dataset than on the Newcastle Disease dataset. This is because the number of samples in the dataset increased and, subsequently, the model learned better. On the other hand, the computational complexity increased.
Discussion
Accurate and fast prediction of RNA virus mutations can significantly improve vaccine development. In this research, we accomplished improvements by utilizing the LSTM recurrent neural network, a deep learning model with a high power of representing complicated issues. In addition, the ability to deal with a huge number of sequences, compared with the common approach of considering a small number of sequences, is a critical quality of data mining-based approaches. A one-hot vector is used to represent the sequences in order to preserve the positional information for each individual word or amino acid in the sequence. Using the H1N1 and Newcastle Disease datasets, the proposed model achieved high-performance prediction. It can be used as a reliable tool to support the study of these types of data, although its predictions should be used only as references. One limitation of this research is that we experimentally selected hyperparameters such as word size, area size, and network architecture configuration. Through the results of several experiments, we realized that these hyperparameters significantly influence the prediction performance of the proposed model. The computational and financial cost of this method is very low, while the speed, range, and accuracy are remarkably high. Also, these methods can be used as preprocessing for expensive and lengthy laboratory experiments. The only requirement of these methods is reliable and comprehensive sequencing data, which are becoming increasingly available in the public domain.
Conclusions
The recurrent neural network has shown remarkable performance in many research fields.
This research succeeded in manipulating the A, C, T, and G nucleotides of DNA data. By using a one-hot vector to represent DNA sequences and applying the LSTM recurrent neural network model, this work paves the way for a new horizon where the prediction of mutations, such as in the evolution of a virus, is possible. It can help to plan new drugs for potential drug-resistant strains of the virus early in a potential outbreak. Moreover, it can provide assistance in formulating a diagnosis for early detection of cancer and possibly for early initiation of treatment. This work examined the relationship between nucleotides in RNA, including the effect of each nucleotide in the genotypic sequence on other nucleotides. The bases for these relationships are investigated and visualized to predict which mutations will emerge over the next generations, trained on two datasets isolated from two distinct viruses.
This work proved the existence of a correlation between the mutations of nucleotides and successfully predicted the nucleotides in the next generation. The proposed model achieved significant performance improvements in all evaluations.
Data Availability
The previously reported virus datasets were used to support this study and are available at [Liu, Hualei, et al.]. These prior studies (and datasets) are cited at relevant places within the text as references [21,22].
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Biology",
"Computer Science"
] |
Cost-utility analysis of treating mild stage normal tension glaucoma by surgery in China: a decision-analytic Markov model
Background Many individuals suffer from normal tension glaucoma (NTG) in China. This study utilized Markov models to evaluate the cost-utility of applying multiple medications and surgery for mild-stage NTG when disease progression occurred at a mild stage. Methods A 10-year decision-analytic Markov model was developed for the cost-utility analysis of treating mild-stage NTG with surgery and increased application of medication. We hypothesized that all 100,000 samples, with a mean age of 64, were in the mild stage of NTG. Transitional probabilities from the mild to the moderate to the severe stage and the basic parameters acquired from the CNTGS were calculated. Incremental cost-utility ratios (ICUR) were calculated for treating all patients with NTG by probabilistic sensitivity analysis (PSA) and Monte Carlo simulation. One-way sensitivity analyses were conducted by adjusting the progression rate, the cost of medications or trabeculectomy, the cost of follow-up, and the surgical acceptance rate. Results The ICUR of treating mild-stage NTG with medication over 10 years was $12743.93 per quality-adjusted life year (QALY). The ICURs for treating mild-stage NTG patients with a 25% and 50% surgery rate alongside medication were $8798.93 and $4851.93 per QALY, respectively. In this model, the cost-utility of treating NTG was sensitive to the disease progression rate, the surgical treatment rate, and medication costs. Conclusions According to the results of the cost-utility analysis, it is a reasonable and advantageous strategy to administer medication and surgery for NTG in the mild stage of the disease. In the model, the greater the probability of patients undergoing surgery, the more valuable the strategy becomes. Supplementary Information The online version contains supplementary material available at 10.1186/s12962-024-00523-6.
Background
Glaucoma is the primary cause of irreversible blindness worldwide, affecting approximately 76 million people by 2020 and over 111 million by 2040 [1]. The prevalence of glaucoma varies between 2.3 and 3.6% in the Chinese population [2]. In China, primary glaucoma is estimated to affect 9.4 million people aged 40 years and above, with 5.2 million blind in at least one eye and 1.7 million blind bilaterally [3]. Primary open-angle glaucoma (POAG) is the most common type, prevalent in Africans and Caucasians [4]. However, several population-based studies demonstrated that the age-adjusted rate of Chinese POAG was similar to that of Western countries [5].
Normal tension glaucoma (NTG), which is generally defined as individuals with glaucomatous optic nerve cupping and field loss accompanied by normal intraocular pressure (IOP) [6], constitutes a significant proportion of POAG [7]. The Collaborative Normal-Tension Glaucoma Study (CNTGS), a large multicenter clinical trial, reported that a 30% IOP reduction could delay the visual field (VF) progression of NTG [8]. The Low-pressure Glaucoma Treatment Study (LoGTS) reported that VF loss was significantly less likely to occur in treated NTG [9]. Currently, IOP reduction is the predominant strategy for delaying the progression of NTG. In the United States, the total annual healthcare costs for one POAG patient range from $1570 to $2070 [10], and the direct medical costs for glaucoma exceeded $2.9 billion [11]. Glaucoma imposes a significant economic burden on families and society and reduces the patient's quality of life [12].
According to Tang's study [13], medications were assumed to be prescribed to patients with mild POAG in China. Patients with moderate or severe POAG were assumed to be treated by trabeculectomy. This is the current method of treatment. The average cost of initial treatment and follow-up for mild-stage POAG was $256, while for the moderate and severe stages it was $345 and $230, respectively. In our study, this treatment method was regarded as the traditional method.
According to the CNTGS study, a sustained 30% reduction from the baseline IOP is a preferable treatment approach for minimizing costs and disease development, and medication and surgery are the primary methods for decreasing the IOP by 30%. Patients are more willing to accept medication than surgery in the Chinese population. A substantial amount of medication is required to maintain a 30% reduction in baseline IOP. Nonetheless, medication adherence was insufficient to control the progression of NTG [14]. Although surgery costs much more in the first year, its therapeutic efficacy is satisfying and long-lasting [15].
In our study, the intervention consisted of administering several medications and undergoing surgery when NTG progresses at the mild stage, which was defined as the positive treatment method. To improve patients' quality of life and balance the use of healthcare resources, a Markov model for decision-analytic cost-utility analysis (CUA) was used to compare the positive treatment approach with the conventional treatment approach. Utility values can be used to calculate quality-adjusted life years (QALYs), which quantitatively represent the patient's quality of life. The Markov model is a common model for performing CUA for decision-making [13].
In the Chinese population, no cost-utility analysis has demonstrated whether treating NTG by decreasing IOP by 30% from baseline is reasonable. For developing health care policy and allocating health care resources for glaucoma management, consideration of the economic burden of NTG, its impact on quality of life, and the treatment strategy for NTG is crucial. The purpose of this study was to use the Markov model to conduct a CUA of positive treatment methods, particularly trabeculectomy, in China.
Methods
Because the probability of progression over 10 years was projected from the CNTGS results [16], a 10-year Markov decision model was developed in this study using Excel (Microsoft, 2019). We hypothesized that all 100,000 NTG samples, aged 64, were in the mild stage of the disease. In this model, every subject in the cohort was enrolled with progression risks, medical-related costs, and the benefits associated with each treatment, and their health status changed as the disease progressed. The model included three stages of NTG, ranging from mild to moderate to severe, and each stage was associated with mortality, as shown in Fig. 1. The cycle length of the model was set at one year, and Monte Carlo simulation, random sampling, and trials were performed. Cost and quality-adjusted life years (QALYs) were calculated for each patient as the disease progressed through this model. A half-cycle correction was applied to both costs and benefits [17]. In this study, 100,000 samples were run for each path in the Markov cycle tree for the probabilistic sensitivity analysis [13,18]. According to Lee et al., the surgical rates in POAG ranged from 28.4% to 34.9% [19]; therefore, the percentage of NTG treated by trabeculectomy instead of medication therapy was estimated to be between 25 and 50%. Considering the surgical failure rate and the fact that some postoperative patients require medication and maintenance, we ran the PSA with five times and ten times the surgery cost to improve the model's fault tolerance.
The incremental cost-utility ratio (ICUR) is calculated as ICUR = (Cost_positive − Cost_traditional) / (QALY_positive − QALY_traditional). The ICUR was the primary outcome of comparing the two treatments (the traditional and positive treatment methods, as indicated in the Introduction) used in this study. The ICUR was then compared against the country's willingness-to-pay threshold, set at 1 GDP per capita. The per capita gross domestic product (GDP) of China was estimated to be $12692.90 in 2023, according to China's statistical data. If the ICUR was less than the GDP per capita, the positive treatment approach was considered worthwhile. If the ICUR was less than three times the GDP per capita, the positive treatment option was evaluated for clinical application [20].
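A minimal Python sketch of the ICUR computation and the willingness-to-pay decision rule described above is shown below; the cost and QALY inputs are placeholders for illustration only and are not results of this study.

```python
GDP_PER_CAPITA = 12692.90  # China, 2023, as cited in the text

def icur(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-utility ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Placeholder inputs for illustration only.
ratio = icur(cost_new=3500.0, qaly_new=7.2, cost_old=2500.0, qaly_old=7.0)
if ratio < GDP_PER_CAPITA:
    verdict = "worthwhile (below 1x GDP per capita)"
elif ratio < 3 * GDP_PER_CAPITA:
    verdict = "considered for clinical application (below 3x GDP per capita)"
else:
    verdict = "not cost-effective at this threshold"
print(round(ratio, 2), verdict)
```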
Ethical statement
This article compiled published articles for data analysis and does not include any studies involving humans/animals/plants. Therefore, informed consent was waived after institutional review board (IRB) approval. This study was a secondary analysis of data from other studies (references 20, 24, 42) that are publicly available, and it did not require ethical approval.
Markov model: medical aspect
As POAG is a long-term, chronic disease, no study has directly reported the transition probability of each stage per year. However, according to the findings of the CNTGS [16], the 5-year progression rate for treated patients (aiming to lower IOP by 30% from the initial level) was 20%. At five years, the progression rate of NTG was 20% in the treated group and 60% in the observation group, similar to the 66% reported in Rei's study [21] for Japanese patients. Therefore, the transition probability for each stage per year can be calculated: multi-year probabilities are converted to rates via rate = −ln(1 − t-year probability) / t years, and the 1-year probability is then obtained as 1-year probability = 1 − e^(−rate × 1), where t > 1. On this basis, we determined that the 1-year progression rate was 4%. Then, we set the baseline MD score to be approximately −5.9 dB, and the mean MD slope of progression was 0.9 dB/y [22]. After that, the transition probability matrix from the mild to the moderate and from the moderate to the severe stage over 10 years can be obtained (Table 1). The final application of the multi-year probability-to-rate formula was to obtain the transition rate between stages per year; it was 8.5%/y for mild to moderate and 3.5%/y for moderate to severe. Mild, moderate, and severe stages were defined by Humphrey mean deviation (MD) values between −0.01 to −6 dB, −6.01 to −12 dB, and −12.01 to −20 dB, respectively [23]. This represented the transition probability under the treatment of lowering IOP by 30% from the initial level. The time horizon for modeling was set at 10 years. The frequency of continuous follow-up visits was recommended by the American Academy of Ophthalmology Preferred Practice Pattern [24]. When patients were in the mild stage, both the treatment and observation groups had two follow-up visits, including a visual field test (VF), optical coherence tomography (OCT), disc photography (DP), slit-lamp biomicroscopy, and noncontact tonometry, and three follow-up visits (including VF, OCT, and DP) when the disease progressed and in the remaining years.
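The probability-to-rate conversion described above, and a one-year transition matrix built from the resulting stage-specific rates, can be sketched as follows. The matrix layout (mild, moderate, severe, death) and the placeholder annual mortality probability are assumptions of this sketch, not parameters reported by the study.

```python
import math
import numpy as np

def multi_year_prob_to_rate(p_t, t_years):
    """rate = -ln(1 - p_t) / t"""
    return -math.log(1.0 - p_t) / t_years

def rate_to_one_year_prob(rate):
    """1-year probability = 1 - exp(-rate * 1)"""
    return 1.0 - math.exp(-rate)

# CNTGS: 20% progression over 5 years in the treated group.
annual_prog = rate_to_one_year_prob(multi_year_prob_to_rate(0.20, 5))
print(round(annual_prog, 3))  # about 0.044, i.e. ~4% per year

# One-year transition matrix for the states (mild, moderate, severe, death).
# The 8.5%/y and 3.5%/y rates follow the text; p_die is a placeholder value.
p_mm, p_ms, p_die = 0.085, 0.035, 0.01
transition = np.array([
    [1 - p_mm - p_die, p_mm,             0.0,       p_die],  # mild
    [0.0,              1 - p_ms - p_die, p_ms,      p_die],  # moderate
    [0.0,              0.0,              1 - p_die, p_die],  # severe
    [0.0,              0.0,              0.0,       1.0],    # death (absorbing)
])
cohort = np.array([100_000.0, 0.0, 0.0, 0.0])  # all start in the mild stage
for _ in range(10):                             # 10 one-year cycles
    cohort = cohort @ transition
print(cohort.round(0))
```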
To calculate quality-adjusted life years (QALYs), utilities for each glaucoma stage were estimated. The utility is a preference-based measure of the quality of life related to a health state. Following the Hodapp-Anderson-Parrish (HAP) classification criteria, stage utilities were presumed to be 0.80 for people with mild POAG, 0.75 for those with moderate POAG, and 0.71 for those with severe POAG [25].
The plan for the traditional treatment method was follow-up for mild NTG. Patients with moderate or severe NTG were presumed to be treated with trabeculectomy and postoperative medications for six weeks. Among them, 20% were assumed to fail the surgery and require long-term topical medical therapy; the mild-to-moderate rate was 14.9%/y and the moderate-to-severe rate was 5.6%/y, according to Tang's study [13]. According to the findings of Cheng's study, timolol and latanoprost were the two most effective IOP-lowering agents in NTG patients, assuming that patients would need dual therapy and some would require triple therapy to achieve the target IOP [26,27]. As per the Ocular Hypertension Treatment Study, the ratio of patients who were prescribed two (timolol and latanoprost) vs. three (Azopt, latanoprost, and Alphagan) medicines was 3:1 [18,28]. For the remaining years, if the disease progressed persistently, 1.5 times the normal dosage of medicine would be prescribed. For surgery, trabeculectomy, as the classic procedure [15], was chosen as the optimal method. Patients whose IOP was unstable after surgery would receive two (timolol and latanoprost) or three (Azopt, latanoprost, and Alphagan) medicines, assumed in a 3:1 ratio. All costs for medication, surgery, and examination were collected in Chinese yuan but converted into US dollars using the 2021 China Statistical Yearbook exchange rate of 7.04 yuan per dollar.
Table 2 Costs of treatment in clinical management of normal tension glaucoma (cost inputs in nominal US dollars as of 2023, with sources). *Annual cost of dual therapy was the sum of the mild costs of timolol and latanoprost. †Annual cost of triple therapy was the sum of the annual cost of dual therapy plus the mild cost of a third medication, which was taken as the average mild cost of Azopt, Trusopt, and Alphagan-P.
Markov model: economic aspect
This study only considered the direct costs from the payer's perspective. The input included diagnostic tests, medication, and trabeculectomy. The costs for the medications and surgeries (Table 1) were obtained from the tertiary hospital (Huzhou Eye Hospital). These costs are regulated by the Chinese Government and vary little among institutions within the same tier of the healthcare system. Annual medication consumption was estimated based on an analysis by Rylander [29], which included the number of drops per milliliter and common dosing patterns. All costs were given in US dollars using the average 2023 exchange rate (1 US dollar = 7.04 RMB) [30].
Following the National Institute for Health and Care Excellence (NICE) recommendations [31], all costs were discounted at a rate of 3.5% per year, and utility was discounted at the same rate. The major input parameters of the current Markov model are listed in Table 1, and all the costs for NTG patients are summarized in Table 2.
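A sketch of how annual costs and utilities can be accumulated with 3.5% discounting and a half-cycle correction, as described above, is shown below; the per-cycle values are placeholders, and discounting each cycle at its midpoint is one simple way of expressing the half-cycle correction assumed in this sketch.

```python
DISCOUNT = 0.035  # NICE-recommended annual discount rate used in the study

def discounted_total(values_per_cycle, rate=DISCOUNT, half_cycle=True):
    """Sum per-cycle values with annual discounting.

    With half_cycle=True, each cycle is discounted at its midpoint (cycle - 0.5),
    a simple expression of the half-cycle correction used for this sketch.
    """
    total = 0.0
    for cycle, value in enumerate(values_per_cycle, start=1):
        exponent = cycle - 0.5 if half_cycle else cycle
        total += value / (1.0 + rate) ** exponent
    return total

annual_costs = [307.0] * 10      # placeholder: a constant annual treatment cost
annual_utilities = [0.80] * 10   # placeholder: mild-stage utility each year
print(round(discounted_total(annual_costs), 2))      # discounted 10-year cost
print(round(discounted_total(annual_utilities), 3))  # discounted QALYs
```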
Because the costs of initial and consecutive treatment were acquired from Tang's study in 2019 [13], we recalculated the costs up to 2023 at a 3.5% annual increase rate.
Statistical analysis
We set the surgery rate at 50% and then performed one-way analyses for the Markov model by varying the vital inputs of the positive treatment method. Based on the complication or failure rate of surgery and medication, a ±20% change in cost was assumed. The probability of disease-state transition in response to positive treatment was unstable, with a calculated change rate of ±50% [21]. Probabilistic sensitivity analysis was also performed. As the standard deviations for the cost inputs, utility, and transition rates were missing, we assumed a standard deviation of 10% of the mean value. The presumed variation range and distributions for
Results
The ICUR of treating mild-stage NTG with medication over 10 years was $12743.93 per quality-adjusted life year (QALY). The ICURs for treating mild-stage NTG patients with a 25% and 50% surgery rate alongside medication were $8798.93 and $4851.93 per QALY, respectively. The results of the one-way sensitivity analyses at a 50% surgery rate with the medication strategy are shown in Table 3, and Fig. 2 displays the top three related factors for this model.
Discussion
In NTG, decreasing IOP was significantly associated with reduced progressive axonal loss and retinal ganglion cell damage [18]. In the natural disease state, the progression rate and visual field loss are low; consequently, many patients do not take this condition seriously enough to seek treatment. However, without appropriate long-term treatment, the disease progressed in 60% of patients compared to 20% of those who received treatment [22]. Consideration of the economic burden of NTG and its impact on quality of life is crucial for the development of health care policy and the allocation of health care resources for glaucoma management.
The purpose of this study was to use the Markov model to perform CUA of positive treatment methods, particularly trabeculectomy in China.
Our study showed an incremental cost-utility ratio (ICUR) of $12743.93 per QALY when comparing the positive treatment method with medication to the conventional treatment method. The ICURs for treating NTG patients with a 25% and 50% surgery rate alongside medication were $8798.93 and $4851.93 per QALY, respectively, indicating that increasing the rate of trabeculectomy therapy would be a very worthwhile strategy, consistent with the findings of a previous study [32]. Specifically, in Li's model [18], it was cost-effective to treat NTG patients with mild treatment (medication or surgery). Considering the gradual progression of NTG, medication is more expensive, whereas trabeculectomy ensures greater efficacy [33] while avoiding the need for long-term treatment, thereby reducing cost. Similar to Li's study, the cost-effectiveness decision in our models for NTG was sensitive to the progression of NTG, the costs of eye drops, trabeculectomy, and follow-up, along with the trabeculectomy rate [18]. At the same time, we set 5 and 10 times the cost of surgery in the model, and the results still indicated that surgery is a worthwhile strategy. This indicates that the positive treatment method, especially surgery in our model, has a very high fault tolerance in the event of surgical failure or subsequent treatment.
Surgery would be recommended for glaucoma patients; however, medication seems more convenient for patients. According to research, the number of medications, their prolonged use, and exposure to preservatives are risk factors for the development of ocular surface disease in glaucoma patients [34]. According to Wiafe's report [32], considering the lack of resources and competing opportunity costs, lifelong medical therapy is impractical. Additionally, medicines are mainly found in pharmacies in large cities, so patients in rural or remote areas cannot procure them easily. Medication adherence was unsatisfactory (about 20%) even in a developed country [14]. It was reported that the daily cost of an eye drug such as latanoprost was US $0.87 in the developed world [35] and possibly even more in developing countries. Surgery was the cost-effective treatment for mild-stage glaucoma patients [36]. The cost of surgery to reduce intraocular pressure has decreased over time. The cost is higher during the first three years, but when considering the number of years people live and the cost of drugs over those years, surgery is a relatively worthwhile treatment option in developing countries.
As this was a simulated study and, by analogy with many other cost-effectiveness analyses, there are several limitations to our model. Firstly, only direct costs were considered. Indirect costs, such as the patient's time spent attending follow-up appointments and lost productivity due to time off, were not considered. However, we can disregard this because our model set the age at 64. Additionally, patient compliance was not considered in this study, which would affect the medication cost. However, this had little effect on our model because our results indicated that surgery may be more effective.
Conclusions
This was the first study to provide economic evidence on how to treat Chinese NTG patients cost-effectively. A better understanding of risk factors and treatment options for glaucoma patients would be helpful in improving patients' health-related quality of life and optimizing resource allocation.
Fig. 1
Fig. 1 Influence diagram for glaucoma progression from mild to severe stages with death
Table 1
Estimates for utility, mortality, and other parameters. Recoverable entry: increased mortality risk for people with mild, moderate, or severe POAG (odds ratio 1.8; Tang et al. [23]).
Table 3
Basic result of ICUR and one-way sensitivity analysis for a 50% surgery rate. Tornado plot of the 10-year accumulated incremental cost-effectiveness from the one-way sensitivity analysis. Low-value scenario: surgery rate 25%; transition probability from mild to moderate NTG decreased by 50%; cost of eye drops decreased by 20%. High-value scenario: surgery rate 50%; transition probability from mild to moderate NTG increased by 50%; cost of eye drops increased by 20%.
Table 4
The results of the probabilistic sensitivity analysis of different treatment strategies. Table 4 presents the PSA results of running the model with 5 and 10 times the surgery cost: the ICURs with the surgery rate set at 25% were $10567.80 and $12780.13, respectively. When the surgery rate was set at 50%, the ICURs were $8391.67 and $12816.40, respectively.
"Medicine",
"Economics"
] |
A New Design of Dual-Axis Solar Tracking System with LDR sensors by Using the Wheatstone Bridge Circuit
— Nowadays, using photovoltaic (PV) cells is among the power generation methods that absorb solar energy and convert it into electrical energy. The sun moves from east to west during the day, and its radiation angle relative to the Earth changes in different seasons, so the output power of PV panels changes as well. The output power of PV panels increases when they are positioned perpendicular to the angle of the sun's rays. This study aims to design and implement a dual-axis solar tracker (DAST) to increase the output power of the PV panel. This simple system has high efficiency and adjusts the PV panel based on solar radiation by moving simultaneously on two axes. An analog controller is used for its control system. The DAST control system is a closed-loop system that uses the Wheatstone bridge circuit along with light-dependent resistors (LDRs). A small DAST was designed and built to validate the proposed system, and its performance was verified. Based on the experiments, the I-V and P-V characteristics were obtained. Finally, it was found that the output power of the PV panel using the solar tracker was higher than that of the fixed panel.
I. INTRODUCTION
The most efficient and fastest solar energy absorption technology is the photovoltaic converter. Solar cells convert direct sunlight to direct current through the photovoltaic effect. To maximize the output power of PV panels during the day, they must be kept in a position perpendicular to the solar radiation [1].
The geographical location and position of the sun are constantly changing. The output power of the PV panel depends on the amount and angle of solar radiation, the type and number of cells, the temperature of the cells, the loads, and the voltage (or battery). In general, fixed solar panels do not receive maximum amounts of solar energy continuously [2,3]. To solve this problem, solar tracking (ST) can be used to maximize the output power of the PV panel. The solar tracker receives the maximum amount of solar radiation during the day by positioning the PV panel perpendicular to the sun's rays [4]. Various methods have been widely used to track the sun's rays. In general, they can be classified in two ways: 1) types of open-loop tracking based on solar movement.
2) types of closed-loop tracking using sensor-based controllers.
In an open-loop system, solar tracking methods are modeled through mathematical models, so that a control algorithm or AI tool is used to obtain the maximum power tracking point [5,6]. The fuzzy logic controller (FLC) is also an open-loop tracking method. The fuzzy logic control is implemented in software to decide the timing of the ST system movements. As a result, the closest position for receiving direct sunlight is obtained from the database [7,8].
In a closed-loop system, a variety of active sensors such as LDRs are used to detect the sun's position [9][10][11]. Many solar tracker systems have been reported in the literature, and they differ according to the employed tracking methods, such as the sensor-based tracker method, the geometric and astronomical equations-based method, the artificial intelligence-based method, etc. Sensor-based solar trackers are the most proposed systems in the literature due to their simplicity and efficiency/price ratio. Sensor-based solar trackers widely use light sensors such as photoresistors (LDRs), photodiodes, solar cells, and pyranometers to instantaneously follow the sun's movement. The most commonly used sensors are LDRs in view of their simple circuit and very low price. For these reasons, many researchers have used this type of sensor in their systems.
ST systems are either single-axis or dual-axis. Using a single-axis structure, the ST system follows the sun in the sky from east to west [12]. The ST system with a dual-axis structure has higher accuracy and also follows the elevation angle of the sun [13][14][15]. This study proposes a DAST based on LDRs, which simultaneously adjusts the PV panel relative to the solar radiation on two axes. In the solar tracking system, a simple design of the DAST is provided by the Wheatstone bridge circuit along with LDRs. Using a Wheatstone bridge circuit, an output that is a function of the sensed parameter is obtained from a resistance sensor. To achieve higher sensitivity, the Wheatstone bridge circuit is used to measure two elements compared to each other [16,17].
Results of experiments have demonstrated the feasibility of the DAST system. The remainder of the paper is organized as follows: Section 2 describes the structure of the DAST system in terms of physical design and control circuit. The controlling method of the DAST system is proposed in Section 3. In Section 4, a scaled-down prototype of DAST system is built and tested. The main conclusions of this article are drawn in Section 5.
A. Mechanical Design
The increasing growth of the market of photovoltaic systems has led to an increasing interest in ST systems. The design of the structure of ST systems is important in terms of performance, strength, and cost. In this paper, the aim is to control the ST system in all directions (East-West-North-South) to keep the PV panel perpendicular to the sun's rays. The DAST system is designed and built as a scaled-down laboratory prototype. The design of this DAST system in larger dimensions (e.g., 10×) can be used to support single or multiple PV panels on a structure. The mechanical structure of the tracker should be flexible enough to support the weight of the panel, frame, actuators, shafts, gear mechanisms, and solar measurement devices [18,19].
In addition, destructive forces (wind loads) can affect the structure of the DAST system. Therefore, wind loading is considered important in designing the system. The proposed system is suitable for low wind speeds (about 20 m/s). If a larger tracker system is designed and a stronger driving system (electric jacks) is used, it will be more resistant to stronger winds. To this end, the arms are placed in different directions to withstand the wind force. The design of the DAST system is shown in Fig. 1; it consists of a fixed part and a moving part [20][21][22].
1) Static part
The static part of the DAST system includes the base frame structure, the lower gearing mechanism, the control panel, the control unit, the charge controller and battery, the vertical shaft, ball bearings, and micro switches (SWs). The base frame structure keeps all parts of the DAST system fixed and secured on the ground. The arms are placed in different directions to support the DAST system against wind forces, and all joints are integrally connected. The lower gearing mechanism (DC motor with gear mechanisms) is connected to the vertical shaft to bear the axle load, along with the gear mechanisms (buck converter) situated in a suitable box on the structure (Fig. 2). The control unit handles the controlling task and coordinates all parts of the DAST system as well as the PV panel's movement in all directions. The control panel is intended for switching between manual and automatic control of the DAST system. The charge controller is a device used to manage the energy flow in DAST systems; its functions include overcharge protection, deep-discharge protection, and system power management. The battery is a device used for storing solar charge in solar systems. The main axis of the horizontal rotation of the PV panel (vertical shaft) allows the rotation of the PV panel to track the angle of the solar radiation from the east to the west. The ball bearings facilitate the rotation of the PV panel on the structure. To limit the horizontal movement of the PV panel, three micro switches are placed in the horizontal actuator control circuit next to the vertical shaft; two of them, SW2 and SW3, are situated at the beginning and the end of the rotary motion of the PV panel, and the SW1 micro switch switches it off at night when the PV panel rotates back to its original position for the early hours of the day.
2) Moving part
The moving part is connected to the upper end of the vertical shaft and includes the PV panel, the upper gearing mechanism, the horizontal shaft, the PV panel holder, the sunlight measurement system (LDRs), and micro switches. The PV panel generates electric power using solar cell modules. The vertical rotational axis of the PV panel (horizontal shaft) allows the PV panel to rotate in order to track the angle of the solar radiation from the north to the south. The upper gearing mechanism (DC motor with gear mechanisms) is connected to the horizontal shaft to bear the axle load, along with the gear mechanisms situated in a suitable box on the moving part. The PV holder holds the panel on the shaft (Fig. 3). The LDRs are located in the four directions around the PV panel to measure sunlight [23][24][25]. Both the SW4 and SW5 micro switches are situated in the vertical actuator control circuit next to the horizontal shaft on the moving part, placed at the beginning and the end of the rotary motion of the PV panel (north-south).
B. Electronic Design
The DAST system operates in both manual and automatic modes using the control panel placed on the chassis. If the 3-status key (high-middle-low) is set to high, the DAST system is operated automatically by the control circuit and the LDR sensors. If it is set to the middle position, the system has no function. Setting the Sb key to the middle position means that the system is operated manually, and each push button is responsible for moving the DAST system in one of the four directions (Fig. 1(b)).
The control of the proposed DAST system is carried out by IC circuits and transistors. To track the angle of the sun's radiation, it is necessary to determine the position of the sun, and this requires LDR light-detection sensors. LDR resistance decreases as the light intensity increases. Since one sensor is not enough to track the position of the sun, four LDRs in the four directions of the PV panel are used to calibrate the tracker system (Fig. 1(a)).
III. CONTROL METHOD
The designed DAST system is a closed-loop system in which an output signal that is a function of the sensed parameter is obtained from the LDR resistance sensor using the Wheatstone bridge circuit and op-amps.
A. The Wheatstone Bridge
The Wheatstone bridge was first used to measure very low values of resistance. It is more precise compared to many other techniques and simply operates by dividing the voltage. One of the most important and well-known applications of the Wheatstone bridge circuit is measuring the variations of a sensor resistance [16,17]. The configuration in Fig. 4(b) is used to measure two elements of the same type to achieve higher sensitivity. This method is used to provide an output signal from resistance sensors such as light-dependent resistors (LDRs) [26,27]. The resistance R_x of a linear resistance sensor can be written as R_x = R_0(1 + kx) (1), where k is the conversion constant of the sensor, x depends on the measured quantity, and R_0 is the nominal resistance of the sensor element (x = 0). When the resistance in the other arms of the bridge is equal to R_0, the bridge output V_oB is V_oB = V_s·kx / (2(2 + kx)) (2), where V_s is the bridge supply voltage. In Equation (2), since the percentage change of the resistance element is very low (kx << 1), the nonlinearity is also low. In contrast, as kx increases, the nonlinearity also increases. The nonlinearity over the operating range can reach up to 8%. Given these points, this provides the advantage of a bridge-based measurement [26].
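The bridge relations in Equations (1) and (2) can be checked numerically with a short sketch; the supply voltage is a placeholder value, and the small-signal approximation shown alongside is only used here to illustrate how the nonlinearity grows with kx.

```python
def bridge_output(v_s, kx):
    """Single-sensing-element bridge output for R_x = R_0(1 + kx), as in Eq. (2)."""
    return v_s * kx / (2.0 * (2.0 + kx))

V_S = 5.0  # placeholder bridge supply voltage (volts)
for kx in (0.01, 0.1, 0.5):
    exact = bridge_output(V_S, kx)
    linear_approx = V_S * kx / 4.0  # small-signal approximation, valid for kx << 1
    print(kx, round(exact, 4), round(linear_approx, 4))
```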
B. Solar Tracking Control System Design
In the closed-loop DAST system, the intensity of the sunlight is considered as the reference input signal (Fig. 5). For solar tracking, the position of the sun needs to be detected by optical sensors. The proposed tracking system uses optical sensors to adjust the PV panel based on the angle of the sun. LDR sensors have an electrical resistance that varies with light intensity; their resistance decreases as the light intensity increases. If there is an imbalance in the voltages generated in the Wheatstone bridge branches by the LDR sensors, a voltage difference is created due to the difference between the angle of the solar radiation and the position of the PV panel (Fig. 4(b)).
Then, the voltage generated at the output of the Wheatstone bridge is transmitted to the op-amps and amplified. The output voltage of the op-amps, through the control circuits, activates the relay. The relay rotates the motor of the tracking system in the desired direction, and the PV panel rotates around its axis so that the tracking system automatically places the PV panel perpendicular to the direction of the sun's rays. Accordingly, the control system continuously monitors the solar radiation angle and the PV panel via the LDRs and sends a differential control signal to the relay of the tracker motor until the voltage difference in the bridge branches becomes less than a threshold value [8]. In many DAST systems, a comparator with a four-quadrant LDR arrangement has been used to determine the position of the sun [13,14]. In this paper, however, the LDR sensors are placed on the four sides of the PV panel at a distance of 1 cm from the panel surface (Fig. 1(b)) so that, when the angle of the sun changes, the corresponding LDR sensor falls into shadow. The proposed DAST system is also more sensitive and faster due to the use of the Wheatstone bridge circuit. Sensors LDR1 and LDR2 are used to calibrate the PV panel relative to the horizontal axis, and sensors LDR3 and LDR4 are employed to calibrate the PV panel relative to the vertical axis.
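The closed-loop behaviour described above can be summarised, for one axis, by the following simplified sketch: the panel is stepped until the bridge imbalance falls inside a dead-band threshold. The sensitivity, threshold, and step size are hypothetical values chosen only for illustration; the actual DAST system implements this logic with op-amps, ICs, and relays rather than in software.

```python
# Simplified one-axis closed-loop sketch: the bridge imbalance (VW - VE) is
# driven toward zero by stepping the panel, and the motion stops once the
# imbalance falls below a dead-band threshold. All numeric values are
# illustrative assumptions, not measurements from the prototype.

GAIN_V_PER_DEG = 0.02   # assumed bridge sensitivity: volts per degree of mis-pointing
THRESHOLD_V = 0.05      # assumed dead-band below which the panel holds position
STEP_DEG = 1.0          # assumed panel rotation per relay activation

def bridge_imbalance(sun_deg, panel_deg):
    """Modelled V_W - V_E: positive when the sun is west of the panel normal."""
    return GAIN_V_PER_DEG * (sun_deg - panel_deg)

def track_axis(sun_deg, panel_deg):
    """Step the panel until the bridge voltages balance; return the final angle."""
    while abs(bridge_imbalance(sun_deg, panel_deg)) > THRESHOLD_V:
        panel_deg += STEP_DEG if bridge_imbalance(sun_deg, panel_deg) > 0 else -STEP_DEG
    return panel_deg

if __name__ == "__main__":
    # The panel stops within the dead-band around the 73 deg sun angle.
    print(track_axis(sun_deg=73.0, panel_deg=40.0))
```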
The DAST system has two circuits for controlling the rotary motion of the PV panel about the horizontal and vertical axes, and each axis operates in two directions. Each hardware circuit consists of three parts: a Wheatstone bridge with LDR sensors, op-amps, and control circuits. To achieve maximum power from the PV panel, the two tracking processes proceed simultaneously to make the PV panel perpendicular to the sunlight. Fig. 6 shows the hardware control circuit of the DAST system.
1) Hardware Circuit of Horizontal Motion Control
The DAST moves automatically upon a change in the sun's radiation. The horizontal control hardware circuit is used to control the rotary motion of the PV panel towards the east (E) and west (W) (Fig. 6). The Wheatstone bridge of the horizontal rotational axis includes resistors R2 to R6 along with the optical sensors LDR1 and LDR2. The intensity of the sunlight is measured by the LDR sensor circuit. If there is an imbalance in the voltages generated in the bridge branches by the LDR sensors, the VE and VW voltages are taken as the voltages generated on the east and west sides, respectively. The voltage generated at the bridge output is then transmitted to the UE and UW op-amps. The op-amp output voltage is processed by the IC control circuits and transistors, which activate the RE and RW relays; the LEDs DE and DW turn on to confirm their activation. The relays rotate the M_EW tracker motor in the desired direction, and the PV panel rotates around its axis until it is perpendicular to the direction of the sun's rays. At this point, because the solar radiation on the LDRs is uniform and even, the PV panel stops at its position (Fig. 7). To prevent the PV panel from travelling out of range, the SW2 and SW3 micro switches are located at the end of the route in the east and west directions, respectively.
At night, when it is dark, a controlled route is provided to return the PV panel to its initial state before the sun rises (solar radiation from the east). The PV panel moves eastward and reaches its initial position for the early hours of the day when it hits the micro switch SW1 at the end of the route (Fig. 7).
2) Hardware Circuit of Vertical Motion Control
The vertical control hardware circuit is used to control the rotary motion of the PV panel towards the north (N) and south (S) (Fig. 6). The Wheatstone bridge of the vertical rotational axis includes resistors R8 to R11 along with the optical sensors LDR3 and LDR4. The intensity of the sunlight is measured by the LDR sensor circuit. If there is an imbalance in the voltages generated in the bridge branches by the LDR sensors, the VN and VS voltages are taken as the voltages generated on the north and south sides, respectively. The voltage generated at the bridge output is then transmitted to the UN and US op-amps. The op-amp output voltage is processed by the IC control circuits and transistors, which activate the RN and RS relays; the LEDs DN and DS then turn on to confirm their activation. The relays rotate the M_NS tracker motor in the desired direction, and the PV panel rotates around its axis until it is perpendicular to the direction of the sun's rays. At that point, because the solar radiation on the LDRs is uniform and even, the PV panel stops at its position (Fig. 8). To prevent the PV panel from travelling out of range, the SW4 and SW5 micro switches are located at the end of the route in the north and south directions, respectively.
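Taken together, the horizontal and vertical circuits implement a simple decision rule per axis. The sketch below summarises that rule in software form; the bridge voltages, the dead-band, and the assignment of SW4/SW5 to the north/south travel limits are hypothetical stand-ins for the analogue hardware described above.

```python
# Compact sketch of the two-axis relay decision logic described above.
# Bridge voltages (VE, VW, VN, VS) and micro-switch states are hypothetical
# inputs; in the real circuit this logic is realised with ICs, transistors and
# relays RE/RW/RN/RS. The mapping of SW4/SW5 to north/south limits is assumed.

DEADBAND = 0.05  # assumed volts

def axis_command(v_pos, v_neg, limit_pos, limit_neg):
    """Return '+', '-' or 'stop' for one axis, respecting end-of-travel switches."""
    diff = v_pos - v_neg
    if abs(diff) <= DEADBAND:
        return "stop"
    if diff > 0:
        return "stop" if limit_pos else "+"
    return "stop" if limit_neg else "-"

def dast_commands(ve, vw, vn, vs, sw2=False, sw3=False, sw4=False, sw5=False):
    """Map the four bridge voltages to east/west and north/south motor commands."""
    ew = axis_command(vw, ve, limit_pos=sw3, limit_neg=sw2)   # '+' = west, '-' = east
    ns = axis_command(vn, vs, limit_pos=sw4, limit_neg=sw5)   # '+' = north, '-' = south
    return {"east_west": {"+": "west", "-": "east", "stop": "stop"}[ew],
            "north_south": {"+": "north", "-": "south", "stop": "stop"}[ns]}

if __name__ == "__main__":
    # LDR1 shaded -> VW > VE, so the panel should move west; N/S balanced -> stop.
    print(dast_commands(ve=1.10, vw=1.45, vn=1.20, vs=1.22))
```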
A. Performance of the DAST system
To investigate the performance of the DAST system and demonstrate its automatic control, a laboratory sample was built and tested. The test was conducted on 4 June 2020 in Karaj, Iran. The latitude and longitude coordinates of Karaj, Iran are 35.5° North and 51° East, respectively. At this time, the solar radiation angle is about 73°. To maximize power, the PV panel should be placed perpendicular to the solar radiation angle. In Fig. 9, the PV panel is positioned at different angles depending on the angle of the solar radiation.
The voltages measured for the DAST system on a sunny day are given in Table 1, together with the corresponding LDR resistances (Fig. 9, Eq. (2)). The intensity of solar radiation is about 100,000 lux in direct sunlight and about 65,000 lux in the shade.
The sun moves from east to west during the day [2,3]. As the sun moves, the DAST system simultaneously moves in all directions and positions the PV panel in the direction of the solar radiation. The performance of the DAST system is examined in four modes:
1. motion of the PV panel to the west,
2. motion of the PV panel to the north,
3. motion of the PV panel to the south,
4. motion of the PV panel to the east.
As the sun radiates on the surface of the light sensors, their ohmic resistance decreases; the higher the radiation intensity, the lower the sensor resistance [16,17]. As the sun radiates on the surface of the sensors (LDRs) in the different modes of the DAST system, their ohmic resistance changes accordingly (Table 1).
In the first mode, as the sun moves towards the west (Fig. 9(a)), the angle of the solar radiation on the surface of the PV panel changes. As a result, LDR1 is gradually placed in the shade and its ohmic resistance increases. According to Fig. 6, due to the increase in the ohmic resistance of LDR1, the VW and VE voltages of the Wheatstone bridge become unbalanced and the VW voltage becomes higher than the VE voltage (Table 1). As a result, the PV panel starts moving in the west direction, and when the VW and VE voltages equalize, the PV panel stops. In Fig. 9(a), the PV panel is not perpendicular to the solar radiation: since the radiation on the surfaces of the LDR1 and LDR2 sensors is not uniform (VE < VW), the green LED is on and the PV panel rotates westward. In Fig. 9(b), after the movement of the PV panel, the DAST system stops once the panel stands perpendicular to the solar radiation angle (VW = VE).
In the second mode, in the early hours of the day, as the sun rises, the angle at which the solar radiation reaches the earth gradually increases. As in the first mode, the angle of the solar radiation on the surface of the PV panel changes, so LDR4 is gradually placed in the shade and its ohmic resistance increases. According to Fig. 6, due to the increase in the ohmic resistance of LDR4, the VN and VS voltages of the Wheatstone bridge become unbalanced and the VN voltage becomes higher than the VS voltage (Table 1). As a result, the PV panel starts moving in the north direction, and when the VN and VS voltages equalize, the PV panel stops. In Fig. 9(c), after this movement of the PV panel, the DAST system stops once the panel stands perpendicular to the solar radiation angle (VN = VS).
In the third mode, as the sun moves westward in the afternoon, the angle at which the solar radiation reaches the earth gradually decreases. Due to the change in the angle of the solar radiation, LDR3 is gradually placed in the shade and its ohmic resistance increases. According to Fig. 6, due to the increase in the ohmic resistance of LDR3, the VN and VS voltages of the Wheatstone bridge become unbalanced and the VS voltage becomes higher than the VN voltage (Table 1). As a result, the PV panel starts moving in the south direction, and when the VN and VS voltages equalize, the PV panel stops (Fig. 9(c)).
In the fourth mode, after sunset, when there is no solar radiation, the ohmic resistance of LDR1 and LDR2 increases sharply. As a result, the VE and VW voltages decrease (Table 1). The DAST system then moves the PV panel to the east and returns it to its start-of-day position. In Fig. 9(d), since there is no radiation on the surfaces of the LDR1 and LDR2 sensors, the red LED is on and the PV panel rotates eastward and then stops.
B. I-V and P-V characteristics
The output power of the PV panel depends on the type and number of cells in the PV panel, the amount and angle of solar radiation, the cell temperature, and the load voltage (or battery).
The I-V and P-V characteristics of the PV panel are shown in Figs. 10 and 11. In the proposed system configuration, nine cells connected in parallel were selected. The maximum power of the PV panel with respect to the solar irradiation at a temperature of 32 °C is 1.28 Wp (power = volts × amps, so 4.1 V × 0.312 A = 1.279 W) [20,28].
C. Output power of the PV panel
Fig. 12 shows the output power of the PV panel measured simultaneously in two modes, fixed solar panel and solar tracker, during the day. Measurements are made every 30 minutes from 7:00 to 19:00. In the first mode, the PV panel is fixed facing south during the day, and the sun radiates on the earth's surface at an angle of 73°. At 13:00, the PV panel is perpendicular to the angle of the sun's rays, so its output power reaches its maximum value. In the morning and evening, because the PV panel is not at the optimum angle, its output power decreases, and as a result a significant amount of energy is lost during the day.
In the second mode, the PV panel is moved by the DAST system and adapts to the angle of the sun's radiation at any given moment so as to face the same direction. In this mode, it can be observed that whenever the angle of the sun changes and the intensity of the sun's radiation increases, the PV panel is moved by the DAST system and is optimally positioned. At this point, the output power of the PV panel reaches its peak value and remains near this value for a long time. The maximum generated energy is about 6.14 Wh/Wp for the fixed panel and about 9.10 Wh/Wp for the DAST system. The results reveal that the DAST system produces about 48.2% more energy than the fixed panel [2,25].
V. CONCLUSION
The solar tracking system detects the astronomical position of the sun during the day and increases the output power of the PV panel by placing it in a suitable position relative to the angle of the sun's rays. Many solar tracking systems have been developed so far that either cannot move on two axes or are based on geometric and astronomical equations and artificial intelligence, which are expensive. This study presented a new DAST based on LDRs, which adjusts the PV panel relative to the angle of the sun's rays by moving simultaneously on two axes. The DAST is a very simple and cost-effective control system that utilizes the Wheatstone bridge circuit and LDRs. With this controller, it is possible to control PV panels on the metal structure both individually and in an integrated manner. Therefore, the experimental findings of this solar tracking system can help develop solar energy applications. | 6,071.8 | 2021-02-19T00:00:00.000 | [
"Physics",
"Engineering"
] |
Role of Stochastic Petri Net (SPN) in Process Discovery for Modelling and Analysis
Process mining is used to extract vital process-related information from the event data recorded in an event log. There are three main basic types of process mining, defined in terms of their input and output: process discovery, conformance checking, and enhancement. Process discovery is one of the most challenging process mining activities based on the event log. The performance of business processes or systems plays a vital role in modelling, analysis, and prediction. Recently, memoryless models such as the exponentially distributed stochastic Petri net (SPN) have gained much attention in research and industry. This paper uses the time perspective for modelling and analysis and uses the stochastic Petri net to check the performance, evolution, stability, and reliability of the model. To assess the effect of time delay in firing transitions, the stochastic reward net (SRN) model is used. The SRN model can also be used for checking the reliability of the model, whereas the generalized stochastic Petri net (GSPN) is used for evaluating and checking the performance of the model. SPN is used to analyze the probability of state transitions and the stability from one state to another. In process mining, logs are used by linking the log sequence with the state; in this way modelling can be done and its relation with the stability of the model can be established.
Introduction
Process mining is an analytical method for finding, monitoring, and improving actual processes by extracting information from event logs that are freely available in modern information systems. It provides targeted facts based on event logs that help in doing research, analysis, and improving the existing business processes.
Presently, process mining has come to be regarded as an important technology for business processes and has been applied successfully in many organizations. It is process-centric (rather than data-centric), truly intelligent (learning from historical data), and based on facts (event data rather than theory). Through process discovery, process mining allows the automatic discovery of a process model from event logs, which provides insight into the process and enables various types of model-based analysis. Discovered process models can be enriched with additional information for prediction purposes. In particular, capturing activities and waiting times in the business process is required for understanding the process's efficiency. These enriched models can then be used as a source for prediction algorithms, for example to predict the time remaining until a process instance is completed. To provide an explanation of the time forecast, a configuration is chosen that allows a good balance between "overfitting" and "underfitting". It is worth noting that overfitting and underfitting are orthogonal to non-fitting: the model is not valid if observed events do not occur according to that model. Estimating the remaining run time of a business process and its activities is an important administrative function that allows for improved resource allocation. It also improves the quality of responses when clients inquire about the status and expected completion of a given business process. The three main types of process mining are process discovery, conformance checking, and enhancement. In this research work, attention is placed on process discovery. In process discovery, a process model is produced from event logs without using any a priori information. If the event logs contain information about resources, a resource-related model, for example a social network, can be discovered, which shows how different people in an organization work together. The process model can also be used for analyzing cost, resource utilization, process performance, and automation. Models are further used for redesigning a process and for planning and control, in order to support decision making in a process.
There are two types of process model: (i) formal models, used for discussion and documentation, and (ii) informal models, used for analysis or enactment. In this research, informal modelling is used. Certain errors can occur during modelling: (i) the model may describe an idealized version of reality, (ii) human behavior may not be captured properly, and (iii) the model may be at the wrong abstraction level.
For information systems, analytical evaluation has become an integral part of the overall process design. Many diverse model specification techniques have been proposed, for example, Petri net, BPMN, UML function diagrams, BPEL, and EPC. The Petri net is widely used in business processes, either as the primary modelling language or as a basis for validation. The Petri net was first introduced in 1962 by Carl Adam Petri. It can be described as a graphical method for the formal definition of the logical interaction between components or the flow of activities in a complex system. PN is particularly well suited for modelling concurrency and conflict, sequencing, conditional branching and looping, synchronization, limited resource allocation, and mutual exclusion. It enables the study of the logical properties of the modelled system, in which each component can be seen as an individual finite state machine whose possible interactions are made explicit, following the ideas behind the Petri net given by Professor Petri. Research on Petri nets is categorized into the application and theory of Petri nets (ATPN) and Petri nets and performance modelling (PNPM), the latter including the stochastic Petri net. The original Petri net did not have a notion of time and was therefore used only for studying logical properties. In order to introduce time durations into Petri nets, event data are linked with transitions, which is important for availability, reliability, and performance quantification. This is the main reason why the Petri net was extended to associate time with event occurrences, giving rise to the stochastic Petri net.
The stochastic Petri net was introduced around 1980. It has a graphical representation that is used for modelling discrete events, and the bipartite graph of transitions and places in an SPN describes the event mechanism. It is also worth noting that the firing time of a transition is treated as a random variable and is exponentially distributed. In a stochastic Petri net, a firing rate is linked with each transition and may be marking-dependent, so that the reachability graph corresponds to a continuous-time Markov chain. If the history of a given process is known, for example an event log with time information, it is possible to extract stochastic performance data and insert it into the model. This research paper is therefore based on the generalized stochastic Petri net (GSPN), which does not restrict the distributions in a particular way. GSPN is very useful in cases where some events happen in an extremely small time: the model handles this situation by introducing immediate transitions with zero firing time, while the other transitions are timed transitions whose firing times are exponentially distributed. In this research, the stochastic reward net is used in order to check survivability and reliability. SRN is basically an extension of GSPN and was introduced in 1989. SRN makes extensive use of marking-dependent firing rates and probabilities and assigns priorities to transitions. Therefore, SPN has an advantage in this sense: it analyzes the probability of state transitions and the stability from one state to another. For a process with logs, the log sequence is based on the results of the transition executions and not on the complete state results. Hence, this paper links the log sequence with the state for analysis and for assessing the stability of the model. The remainder of the paper is organized as follows: Section 2 discusses related work, Section 3 presents preliminary definitions, and Section 4 discusses the generalized distributed transition stochastic Petri net model with an example. Section 5 elaborates on the stochastic reward net and the survivability and reliability model. The conclusion is presented in Section 6.
Related Work
Some related literature on Petri net models and stochastic performance for feature information is presented in this literature review. Hu et al. [1] proposed a technique that is primarily based on an exponentially distributed SPN model for workflow logs and depends on the transition firing rate. Anastasiou et al. [2] proposed unique strategies, whereby they focused on location data in a generalized stochastic Petri net model for modelling customer flows. In their work, they used hyper-Erlang distributions for the transition intervals, which capture waiting and run times, and used GSPN to replace the corresponding transitions with subnets exhibiting the same features as the hyper-Erlang distribution. They considered all transitions independently, which did not cause problems for the sequences, but similarities within the method, especially among many similar transitions, were not considered in their technique.
According to Leclercq et al. [3], there have been attempts at eliciting non-Markovian stochastic Petri nets. They looked at a way of recovering a model from normally distributed data, focusing entirely on an expectation-maximization algorithm prepared for convergence. Compared to the method used in this study, they are unable to handle missing data and different performance measures. The reconstruction of the model parameters of stochastic structures was also investigated in a study by Buchholz et al. [4], where they dealt with the problem of adjusting the model parameters of an underlying stochastic system. Contrary to this study, the distributions of the transition system were specified in advance, although the main purpose was to make GDT_SPN transition distributions comparable; for example, the incomplete statistical estimates of distributions by Wombacher and Iacob [5] do not account for initial process times. Rozinat et al. [6] examined how to acquire data for simulation models in an attempt to discover data dependencies; decisions, mean durations, and standard deviations are mostly taken into consideration for discovering an optimal overall alignment between model and log by means of manual replays, whereas before that these were not taken into consideration. The technique proposed in this paper deals with noise in a far better way by building on the notion of alignments, which is able to pick out the best path through the model for a noisy trace. According to van der Aalst [7], the available process mining techniques take noise and probabilities into consideration when creating the control flow, and the importance of business process modelling is recognized.
The stochastic Petri net is the best way to represent such behaviour, and it is a main research task of process mining in business modelling. Rogge-Solti et al. [8] proposed an algorithm in their research work for process discovery from individual executions and also used different raw event data for discovering various classes of SPN. In their study, they used alignments implemented as a plug-in in a process mining framework.
Preliminaries
Here, concepts and techniques used throughout the paper are presented. Our main focus is on event logs, PN, SPN, GSPN, SRN, and log sequences.
Event Log.
An event log is a set of cases L ⊆ C such that each event occurs at most once in the whole log; i.e., for any c_1, c_2 ∈ L with c_1 ≠ c_2, the event sets of c_1 and c_2 are disjoint. If the event log contains timestamps, the ordering of events within a trace must respect these timestamps.
State.
A state is a particular condition of a system at a specific point in time. Here, a state is represented as a multiset of pending obligations.
Sequence.
The most natural way to present the traces in an event log is as sequences. To explain the operational semantics of a PN, how transitions change the state of the system, and how performance is modelled in terms of sequences, some useful operators on sequences are introduced. If A is a set, then A* denotes the set of all finite sequences over A. A finite sequence over A of length n is a mapping σ: {1, 2, ..., n} → A. Such a sequence is written in string form as σ = ⟨a_1, a_2, ..., a_n⟩, where a_i = σ(i) for 1 ≤ i ≤ n, and |σ| denotes the length of the sequence, i.e., |σ| = n. σ ⊕ a′ = ⟨a_1, a_2, ..., a_n, a′⟩ denotes the sequence σ extended with the element a′. Similarly, σ_1 ⊕ σ_2 concatenates the sequences σ_1 and σ_2, yielding a sequence of length |σ_1| + |σ_2|. A PN is a graphical structure of nodes consisting of places and transitions. An arc from a place to a transition is called an input arc, an arc from a transition to a place is called an output arc, and a positive weight is attached to each arc. Places connected to a transition by input arcs are called its input places, and analogously for output places. Each place may hold zero or more tokens. A transition is enabled if each of its input places holds at least as many tokens as the weight of the corresponding input arc. An enabled transition can fire: firing removes from each input place a number of tokens equal to the input-arc weight and deposits in each output place a number of tokens equal to the output-arc weight. Firing transitions transforms one marking into another; within the reachability set, M_0 denotes the initial marking, and a firing sequence is a sequence of transition firings, each followed by the resulting marking, always starting from the initial marking. Formally, a PN is a 5-tuple PN = (P, T, F, W, M_0). Figure 1 describes a Petri net model together with its marking vector, which counts the number of tokens in each place,

M = (n_1, n_2, ..., n_p),   (1)

where p is the number of places. A transition is enabled when its input places contain the required number of tokens. Figure 2 shows, as an example, that an enabled transition can fire by removing a certain number of tokens from its input places and depositing tokens in its output places.
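As an illustration of the token game just described, the following minimal Python sketch represents places, weighted arcs, the enabling condition, and the firing rule; the example net is hypothetical and is not the net of Figure 1.

```python
# Minimal sketch of the place/transition token game described above:
# a transition is enabled when every input place holds at least as many tokens
# as its input-arc weight; firing removes and deposits tokens accordingly.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = {}                 # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        """inputs/outputs map place -> arc weight."""
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= w for p, w in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, w in inputs.items():
            self.marking[p] -= w
        for p, w in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + w

if __name__ == "__main__":
    net = PetriNet({"p1": 1, "p2": 0})
    net.add_transition("t1", inputs={"p1": 1}, outputs={"p2": 1})
    net.fire("t1")
    print(net.marking)   # {'p1': 0, 'p2': 1}
```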
In the real world, Petri nets can be used in many situations involving sequencing, concurrency, synchronization, and conflict. Figure 3 presents concurrency/parallelism in a stochastic Petri net model, Figure 4 presents independence, and Figure 5 shows synchronization in the stochastic Petri net model. Reachability analysis of the stochastic Petri net is presented in Figure 6. In computer networking, Petri nets are used to describe communication protocols. The original definition of the PN does not include a concept of time, which is needed for the performance evaluation of dynamic systems; it is therefore important to associate a time delay with each transition in the Petri net model. This concept of a time delay in transition firing gives rise to the SPN.
Stochastic Petri Net.
In SPN, a data perspective is used because the flow of information between tasks can be described by it, and each transition has an exponentially distributed firing time. An SPN can be described as a workflow net if (i) there is exactly one start place, (ii) there is exactly one end place, and (iii) every node lies on a path from the start to the end. In a generalized stochastic Petri net (GSPN), transitions are either timed (firing time represented by a rectangular box) or immediate (zero firing time, represented by a black bar). Immediate transitions always have priority over timed transitions for firing, and a probability mass function is normally used to break ties between enabled immediate transitions. In a GSPN, a marking in which at least one immediate transition is enabled is called vanishing; otherwise it is tangible. Inhibitor arcs, which connect places to transitions, are also introduced in the GSPN. A small hollow circle at the arrowhead indicates an inhibitor arc. If the input place of an inhibitor arc contains at least as many tokens as the arc multiplicity, the transition connected by the inhibitor arc cannot fire.
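The firing policy of a GSPN described above can be illustrated with the following small sketch, in which enabled immediate transitions are selected first according to their probability weights, and otherwise the enabled timed transitions race with exponentially distributed delays. The transition names, weights, and rates are hypothetical examples.

```python
# Small sketch of the GSPN firing policy described above: enabled immediate
# transitions have priority and are chosen by probability weight; otherwise the
# enabled timed transitions race, each sampling an exponential firing delay.

import random

def choose_next(enabled_immediate, enabled_timed):
    """enabled_immediate: {name: weight}; enabled_timed: {name: rate (1/mean)}."""
    if enabled_immediate:
        names = list(enabled_immediate)
        weights = [enabled_immediate[n] for n in names]
        return random.choices(names, weights=weights, k=1)[0], 0.0  # zero firing time
    # Race between timed transitions: the smallest sampled exponential delay wins.
    samples = {name: random.expovariate(rate) for name, rate in enabled_timed.items()}
    winner = min(samples, key=samples.get)
    return winner, samples[winner]

if __name__ == "__main__":
    random.seed(1)
    print(choose_next({}, {"t_work": 2.0, "t_fail": 0.1}))
    print(choose_next({"t_choice_a": 3, "t_choice_b": 1}, {"t_work": 2.0}))
```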
It has been shown that a GSPN is related to a continuous-time Markov chain (CTMC) under the condition that only a limited number of transitions can fire in a limited time with nonzero probability. When a stochastic Petri net is used for performance evaluation in computer networks, places denote packets or cells in a buffer, or active users or flows in the system, whereas their arrivals and departures are represented by transitions.
Generalized Distributed Transition Stochastic Petri Net
It is based on a seven-tuple (P, T, 𝒫, W, F, M_0, D), where P is the set of places, extending the basic Petri net (P, T, F, M_0), and where:
(i) T is the set of transitions, equal to T_i ∪ T_t, consisting of immediate and timed transitions;
(ii) 𝒫: T → N_0 assigns a priority to each transition, where ∀t ∈ T_i: 𝒫(t) ≥ 1 and ∀t ∈ T_t: 𝒫(t) = 0;
(iii) W: T_i → R+ assigns a probability weight to each immediate transition;
(iv) D: T_t → 𝔻 assigns an arbitrary probability distribution to each timed transition, reflecting the duration of the corresponding activity.
Figure 7 presents the generalized distributed transition stochastic Petri net model for two parallel branches; the model shows a conflict between transitions T_c and T_d.
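As an illustration only, the seven-tuple above can be represented as a simple data structure that checks the stated constraints on priorities, weights, and duration distributions. The class and field names below are ours and are not part of the GDT_SPN definition.

```python
# Sketch of the GDT_SPN seven-tuple as a data structure, enforcing the
# constraints listed above (immediate transitions have priority >= 1 and a
# weight, timed transitions have priority 0 and a duration distribution).
# Class and field names are illustrative, not taken from the paper.

from dataclasses import dataclass, field
from typing import Callable, Dict, Set, Tuple

@dataclass
class GDTSPN:
    places: Set[str]
    immediate: Set[str]                                   # T_i
    timed: Set[str]                                       # T_t
    arcs: Set[Tuple[str, str]]                            # F
    priority: Dict[str, int]                              # P: T -> N0
    weight: Dict[str, float]                              # W: T_i -> R+
    duration: Dict[str, Callable[[], float]]              # D: T_t -> sampling function
    initial_marking: Dict[str, int] = field(default_factory=dict)  # M0

    def validate(self):
        assert all(self.priority[t] >= 1 for t in self.immediate), "immediate priority >= 1"
        assert all(self.priority[t] == 0 for t in self.timed), "timed priority == 0"
        assert set(self.weight) == self.immediate and set(self.duration) == self.timed

if __name__ == "__main__":
    import random
    net = GDTSPN(places={"p0", "p1"},
                 immediate={"t_i"}, timed={"t_a"},
                 arcs={("p0", "t_i"), ("t_i", "p1"), ("p1", "t_a"), ("t_a", "p0")},
                 priority={"t_i": 1, "t_a": 0},
                 weight={"t_i": 1.0},
                 duration={"t_a": lambda: random.expovariate(1.5)},
                 initial_marking={"p0": 1})
    net.validate()
    print("valid GDT_SPN with", len(net.places), "places")
```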
Case-Based Alignment.
The case-based alignment method is much more powerful than a naïve replay of the log on the model because it guarantees a globally optimal alignment that penalizes the asynchronous parts of the replay. Consider two execution traces tr_1 and tr_2 of the above model; it is assumed that immediate transitions are invisible, whereas the timed transitions are visible. In this study, invisible transitions in the model alignment are denoted by τ. Trace tr_2 does not fit the model, so an optimal alignment between model and log is computed using the method proposed by Adriansyah et al. [7], which yields a sequence of replay moves over the trace and the model. These moves can be log moves, synchronous moves, or model moves. Table 1 shows the execution traces of the model, with matching subscripts for each event and transition in the net. Table 2 gives a perfect alignment for tr_1 consisting of synchronous or invisible model moves, and Table 3 presents multiple alignments for trace tr_2. Table 4 shows the event logs based on the optimal alignment of model and log. The symbol ≫ indicates no progress on the corresponding side. Cost-based alignment penalizes unnecessary moves, and high-cost moves are excluded from the optimal alignments.
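To make the idea of cost-based replay concrete, the following simplified sketch aligns a trace against a single sequential model path by dynamic programming, with synchronous moves and invisible (τ) model moves costing 0 and other moves costing 1. This is only a toy version under the assumption of a single model path; the alignment technique cited above searches over all paths of the model. The example traces are hypothetical.

```python
# Simplified sketch of cost-based alignment: align a log trace against ONE
# model path via dynamic programming. Cost 0 for synchronous moves and for
# model moves on invisible (tau) transitions, cost 1 for log moves and for
# visible model moves. Real alignment searches all model paths.

def align(trace, model_path, tau="tau"):
    n, m = len(trace), len(model_path)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            c = cost[i][j]
            if i < n:                                   # log move (event only)
                cost[i + 1][j] = min(cost[i + 1][j], c + 1)
            if j < m:                                   # model move (transition only)
                move_cost = 0 if model_path[j] == tau else 1
                cost[i][j + 1] = min(cost[i][j + 1], c + move_cost)
            if i < n and j < m and trace[i] == model_path[j]:
                cost[i + 1][j + 1] = min(cost[i + 1][j + 1], c)  # synchronous move
    return cost[n][m]

if __name__ == "__main__":
    model = ["A", "tau", "B", "C"]
    print(align(["A", "B", "C"], model))       # 0: fitting trace (tau is free)
    print(align(["A", "C"], model))            # 1: one model move on visible B
    print(align(["A", "X", "B", "C"], model))  # 1: one log move on X
```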
Stochastic Reward Net
Basically, the stochastic reward net (SRN) is an extension of the GSPN. In a stochastic reward net, a reward rate is linked with each tangible marking. Therefore, there are many ways of measuring performance. It also offers several features that make specification convenient, as follows: (i) Every transition can have an enabling function (also referred to as a guard), so that the transition is enabled only if its marking-dependent guard function evaluates to true. This specification improves on a purely graphical representation and makes SRNs easier to understand.
(ii) Marking-dependent arc multiplicities are permitted. This characteristic can be used when the number of tokens to be transferred depends on the current marking.
(iii) Marking-dependent firing rates are allowed. This specification enables the firing rate of a transition to be specified as a function of the number of tokens in any place of the Petri net. (iv) Transitions can be given different priorities, and a transition is enabled only if no other higher-priority transition is enabled.
(v) In addition to the traditional output measures of GSPN, such as the throughput of a transition and the mean number of tokens in a place, more complex reward functions can now be defined.
Availability assessment approaches are primarily based on modelling and measurement methods. A model-based approach to system availability is more effective and less expensive for analysis and comparison than a measurement-based approach. Model-based evaluation can rely on discrete-event simulation or on analytical modelling. Analytical modelling can be categorized into four main parts: (1) non-state-space models (e.g., reliability graphs), (2) state-space models, (3) hierarchical models, and (4) fixed-point iterative models.
The hierarchical models, fixed-point iterative models, and non-state-space models provide a quick overview of the basic system metrics (reliability, availability, and MTTF) given a proper specification of the system architecture. State-space models, however, capture complex functionality and system performance. This approach can also handle failure and repair dependencies and complex interactions between system components. To avoid largeness problems at the state-space level, non-state-space models can be used for some parts of the system and state-space models for the other parts.
Availability a(t) is the probability that the system is in a correct (operational) state at the instant t of operation, irrespective of what happened during the interval (0, t).
The instantaneous availability a(t) is related to the system reliability by

a(t) = r(t) + ∫_0^t r(t − x) w(x) dx,   (2)

where r(t) is the instantaneous reliability at time t, defined as

r(t) = 1 − ∫_0^t f(x) dx,   (3)

f(x) is the probability density function of the random variable X representing the system lifetime (time to failure), and w(x) is the density of the renewal (repair) process in the interval (0, t). The term w(x)dx represents the probability that a renewal cycle is completed in the time interval (x, x + dx), and r(t − x) is the probability that the system works properly during the remaining interval (x, t). Hence r(t − x)w(x)dx is the probability that a fault has occurred, the repair or renewal has been completed at time x, and the system then resumes functioning with no further faults up to t. The availability a(t) coincides with the reliability r(t) in the case where the system is not repairable.
For long running times, we have the steady-state availability (SSA), which is the limiting value of a(t) as t → ∞:

SSA = lim_{t→∞} a(t) = μ / (λ + μ) = MTTF / (MTTF + MTTR),   (4)

where λ is the failure rate of the system and μ is the repair rate, determined as the average number of repairs per unit of maintenance time. MTTF (mean time to failure) represents the expected time for which a system functions correctly before its first failure. Mean time to repair (MTTR) is the expected time needed to repair the system. When times to failure and times to repair are exponentially distributed, MTTF and MTTR are the arithmetic inverses of the failure and repair rates of the system:

MTTF = 1/λ,  MTTR = 1/μ.   (5)

Stochastic reward nets are an appropriate modelling tool for the hardware and software of industrial processes. In Figure 8, we present an SRN availability model that specifies the system operations using transitions, arcs, and places, which are its main parts. The stochastic reward net availability framework is based on three stages: (1) requirements specification, (2) SRN-based system modelling, and (3) system analysis.
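As a small numerical illustration of the steady-state availability relation above, the following sketch converts assumed MTTF and MTTR figures into failure and repair rates and evaluates the SSA; the numbers are illustrative and not taken from any system in this paper.

```python
# Short numeric sketch of the steady-state availability relation above:
# SSA = mu / (lambda + mu) = MTTF / (MTTF + MTTR). The MTTF/MTTR figures
# below are illustrative assumptions, not values from the paper.

def steady_state_availability(mttf_hours, mttr_hours):
    lam = 1.0 / mttf_hours       # failure rate
    mu = 1.0 / mttr_hours        # repair rate
    return mu / (lam + mu)       # identical to mttf / (mttf + mttr)

if __name__ == "__main__":
    mttf, mttr = 1000.0, 4.0     # assumed: fail every ~1000 h, repaired in ~4 h
    ssa = steady_state_availability(mttf, mttr)
    print(f"SSA = {ssa:.5f}  (downtime ~ {(1 - ssa) * 8760:.1f} h/year)")
```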
Survivability and Reliability Model for Performance Checking
The survivability and reliability model can be used to check the transient behaviour of the system after the occurrence of a failure, attack, or disaster, among others. A generalization over the recovery time expresses how much effect can be exerted during the recovery period. Survivability can be assessed using the "system average interruption duration index", abbreviated as the SAIDI model. In the predictive model of SAIDI, E(X_i) corresponds to the availability part of the model, while the recovery-related terms below capture survivability; in short, SAIDI is a combination of the availability and survivability models. Here, φ_i denotes the number of failures of section i; after a failure of section i, we have (i) X_i: the time up to full recovery, (ii) D_i(X_i): the energy demanded up to full recovery, and (iii) M_i(X_i): the energy not supplied up to full recovery. These quantities can also be used to predict the recovery metrics; after coupling with the availability model, the generalized SAIDI takes the form

SAIDI = (total customer interruption duration) / (total number of customers).
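A tiny worked example of the SAIDI definition above is given below, computed from a hypothetical list of interruption events (customers affected and outage duration); the data are purely illustrative.

```python
# Tiny worked sketch of the SAIDI definition above: total customer
# interruption duration divided by the total number of customers served.
# The interruption records and customer count are hypothetical.

def saidi(interruptions, total_customers):
    """interruptions: list of (customers_affected, outage_duration_minutes)."""
    customer_minutes = sum(n * d for n, d in interruptions)
    return customer_minutes / total_customers

if __name__ == "__main__":
    events = [(120, 90.0), (45, 30.0), (300, 15.0)]   # assumed outage events
    print(f"SAIDI = {saidi(events, total_customers=5000):.2f} minutes/customer")
```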
Conclusion
In conclusion, it is observed that the performance of a system or business process plays an important role in modelling, analysis, and prediction. Nowadays, memoryless models such as the exponentially distributed stochastic Petri net (SPN) have gained much attention in research and industry. This paper is based on the time perspective for modelling and analysis and uses the stochastic Petri net to check the performance, evolution, stability, and reliability of the model. To understand the effect of time delay in firing transitions, we use the stochastic reward net (SRN) model. The stochastic reward net (SRN) model can also be used to check the reliability of the model. The generalized stochastic Petri net (GSPN) is used for evaluating and checking the performance of the model. The stochastic Petri net is used to analyze the probability of state transitions and the stability from one state to another, whereas, in process mining, logs are used by linking the log sequence with the state, which enables modelling to be done and relates it to the stability of the model. The generalized distributed transition stochastic Petri net model is used for checking the performance of the model. Case-based alignment is used to check the optimal alignment between the traces. The SAIDI model is used to check the survivability and reliability of the generalized stochastic Petri net model. This paper presents mathematical and theoretical work showing how stochastic Petri nets can be used to assess the performance of the discovered process model and its analysis. Further work can be done on its practical application.
Data Availability
The data used to support the findings of the study are included within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest. | 6,102.6 | 2021-06-30T00:00:00.000 | [
"Computer Science"
] |
Plant extracellular vesicles: Trojan horses of cross‐kingdom warfare
Abstract Plants communicate with their interacting microorganisms through the exchange of functional molecules. This communication is critical for plant immunity, for pathogen virulence, and for establishing and maintaining symbioses. Extracellular vesicles (EVs) are lipid bilayer‐enclosed spheres that are released by both the host and the microbe into the extracellular environment. Emerging evidence has shown that EVs play a prominent role in plant–microbe interactions by safely transporting functional molecules, such as proteins and RNAs to interacting organisms. Recent studies revealed that plant EVs deliver fungal gene‐targeting small RNAs into fungal pathogens to suppress infection via cross‐kingdom RNA interference (RNAi). In this review, we focus on the recent advances in our understanding of plant EVs and their role in plant–microbe interactions.
Specifically, we discuss plant EV biogenesis, the role of plant EVs in plant-microbe interactions, the mechanisms of EV sRNA loading, and the potential applications of these discoveries for preventing plant disease.
| HETEROGENEITY OF PLANT EVs
In mammalian systems, EVs are divided into multiple classes based on their distinct biogenesis pathways and specific protein markers, including exosomes, microvesicles, and apoptotic bodies. 7 Exosomes originate from multivesicular bodies (MVBs), which then fuse with the plasma membrane, to release their intraluminal vesicles (ILVs) into the extracellular space, forming exosomes. 3 Restricted by the size of ILVs, the diameter of exosomes ranges from 30 to 100 nm. 7 Microvesicles originate from the direct outward budding of the plasma membrane. The size of microvesicles ranges from 50 to 1000 nm but can be even larger in some cancer cells. 8 An additional class, apoptotic bodies, arise from blebbing of the apoptotic cell membrane and can be over 1 μm in diameter. 9 In plants, EVs were first observed in the 1960s in chemically fixed carrot cells using electron microscopy. In these initial observations, different sizes of EVs were detected. 10,11 Now, EVs have been discovered in extracellular fluids of leaves, roots, fruits, and imbibing seeds. 6,[12][13][14] In recent years, an increasing number of studies have implied that, like animal cells, there exists a heterogeneous population of EVs in plants ( Figure 1A).
In mammalian systems, tetraspanin proteins, such as CD9, CD63, CD37, CD81, or CD82, are enriched in the membranes of exosomes and often used as exosome biomarkers. 15 The model plant, Arabidopsis thaliana, has 17 TETRASPANIN (TET)-like genes. Despite their limited amino acid sequence similarity with animal tetraspanin proteins, they share conserved structural hallmarks, including four transmembrane domains (TM1-TM4), a small extracellular loop (ECL1), an intracellular loop (ICL), and a large extracellular loop (ECL2). 16 Two of the Arabidopsis TETs, TET8 and TET9, are specifically induced upon infection by fungal pathogen Botrytis cinerea. 6 Meanwhile, both co-localize with the Arabidopsis MVB-marker Rab5-like GTPase ARA6 inside the cell, and also co-localize in EVs which are enriched at fungal infection sites. 6 These TET8-positive EVs were highly enriched in the fraction collected at ultracentrifugation speeds of 100,000 g from leaf apoplastic fluid. 6 In density gradient ultracentrifugation, TET8-positive EVs are enriched in the fractions at the density of 1.12-1.19 g/ml, which is consistent with the density of exosomes in animal systems. 17,18 Meanwhile, the plant EV-enriched sRNAs and cargo proteins are also present in the same fraction as TET8. 18 These results suggest that TET8-positive EVs can be considered bona fide plant exosomes, and transport sRNAs.
Tetraspanins play a crucial role in EV formation and function in mammalian cells. 15 In plants, the tet8 single mutant or the tet8 tet9 double mutant plants displayed decreased secretion of EVs and sRNAs, and enhanced susceptibility to B. cinerea infection. 6,18 Further investigation revealed that the amount of an EV-enriched lipid, glycosylinositolphosphoceramides (GIPCs), was dramatically decreased by over fourfold in tet8 total leaf extracts. 19 These results indicate that TET8 mediates the production of EVs in association with GIPCs.
PENETRATION 1 (PEN1)-positive EVs represent another class of plant EVs. PEN1 is a plasma membrane-associated plant-specific syntaxin. 20 It was first identified by mutational analysis in Arabidopsis to screen mutants that were disabled in non-host penetration resistance against barley powdery mildew, Blumeria graminis f. sp. hordei. 21 The secretion of PEN1 depends on an ADP ribosylation factor-GTP exchange factor (ARF-GEF), GNOM, which mediates recycling endosome trafficking rather than the MVB pathway. 22 PEN1-positive EVs are enriched at a lower ultracentrifugation speed (40,000 g) than TET8-EVs from Arabidopsis leaf apoplastic fluid. 14 Furthermore, PEN1 does not co-localize with the MVB marker ARA6 in plant cells, and the PEN1-positive EVs are enriched in the gradient fraction of 1.029-1.056 g/ml. 14 Additionally, when TET8-GFP and mCherry-PEN1 were coexpressed in Arabidopsis, distinct GFP-labeled and mCherry-labeled EVs were observed in isolated EVs. 18 These results indicate that TET8- and PEN1-positive EVs represent distinct classes of plant EVs, and likely possess different biogenesis pathways (Figure 1A).
EXPO, an exocyst-positive organelle, is another source of plant EVs (Figure 1A). It is associated with Exo70E2, a homolog of the yeast and animal exocyst protein Exo70, in Arabidopsis and tobacco (Nicotiana tabacum) suspension cells. 23 EXPO does not co-localize with any known organelle markers, including markers of the Golgi apparatus, the trans-Golgi network/early endosome, or MVBs in plants. 23 Immunogold labeling of sections cut from high-pressure frozen samples of wild-type Arabidopsis cells and transgenic BY-2 cells expressing Exo70E2-GFP reveals that EXPOs are spherical double-membrane structures. After fusion with the plasma membrane, EXPO releases single-membrane-bound vesicles to the extracellular space. 23 Several arabinogalactan glycosyltransferases (GATLs) involved in arabinogalactan O-glycosylation have been found in the EXPO, indicating that GATLs could be co-secreted to the apoplast via the EXPO. 24 The mechanism of EXPO biogenesis, and whether the cargoes of EXPO are functional, remain unclear.
EVs are also present in fruits, such as grapes and coconuts. 25,26 Exosome-like nanoparticles have been isolated from grape juice using differential centrifugation and sucrose gradient methods. The size of grape EVs is between 50 and 300 nm in diameter, similar to the size of exosomes. They may function in the activation of intestinal stem cell proliferation and remodeling of intestinal stem cells in response to pathological triggers. 25 Additionally, exosome-like nanoparticles were observed and detected in coconut water by scanning electron microscopy (SEM), fluorescence microscopy, and dynamic light scattering (DLS), although their biological function is unclear. 26 Olive (Olea europaea) pollen grains release nanovesicles during in vitro pollen germination and pollen tube growth, named pollensomes. 27 Electron microscopy analysis has revealed that these pollensomes represent a heterogeneous population of round-shaped nanovesicles. Pollensome sizes range from 28 to 60 nm in diameter, with densities ranging from 1.24 to 1.29 g/ml in a sucrose gradient. Pollensomes may provide an alternative way of protein secretion during the processes of pollen germination and pollen tube growth, which are key steps for successful fertilization in plants. 27
| ISOLATION OF EVs IN PLANTS
In order to elucidate the various functions and distinct subclasses of plant EVs, it is critical to develop efficient and effective EV isolation protocols for use in plant systems. Methods for plant EV isolation are based generally on established mammalian EV separation protocols. Differential ultracentrifugation is the most conventional EV isolation method. 28 In this method, large and medium vesicles and membrane structures are eliminated by successive centrifugations (2000 g and 10,000 g) at increasing speeds, which prevents the artificial creation of small vesicles from large ones by direct high-speed centrifugation. 7 Small vesicles are then sedimented by ultracentrifugation at 100,000 g. 7 However, this final ultracentrifugation step only allows for the enrichment of small-sized EVs and cannot distinguish between different subclasses of EVs or protein aggregates of similar size. A more specific method, density gradient ultracentrifugation, enables further separation of membrane-enclosed vesicles from aggregates of proteins, and the separation of similarly sized EVs with different densities. 29 In plants, two gradients, sucrose and iodixanol, have been used to further separate different EVs. 14,18 Immunoaffinity isolation is the most precise method to isolate specific classes of EVs. It takes advantage of EV surface protein markers such as tetraspanin proteins, CD63, CD9, and CD81. [30][31][32][33] In this method, EV samples isolated by ultracentrifugation are incubated with beads coated with antibodies for EV surface proteins. After washing the beads, only the antibody-specific binding EVs can be isolated. 30,32 This method can further prevent cytoplasmic protein or RNA from contaminating isolated EVs. In plants, a native antibody that can specifically recognize the ECL2 domain of TET8 has been generated to specifically isolate TET8-positive EVs. 18 The EV-localized sRNA and protein cargos are clearly detectable in immunoaffinity purified TET8-positive EVs. 18 To remove the contaminating RNA and protein molecules that non-specifically attach to the EV surface or co-sediment with EVs, nuclease and protease treatments of EVs are widely performed. 29 Both sRNA and protein cargos contained within EVs are protected from nuclease and protease digestion, unless Triton X-100 is added to rupture the EVs, demonstrating that plant EVs can indeed protect nucleic acid and protein cargos for transportation. 6,18 This work has established the initial framework for researching plant EVs.
FUNCTION OF EVs IN PLANT-MICROBE INTERACTIONS
As safe vehicles to deliver functional and regulatory components (such as nucleic acids, lipids, and proteins) to other cells or interacting organisms, EVs play prominent roles in communication between interacting organisms. In animal systems, several parasites have been shown to release EVs into host cells to manipulate host immune responses. 5,[34][35][36] Leishmania, the causative agent of tropical and sub-tropical infections termed the leishmaniases, can release exosomes into macrophages. 35 The incubation of macrophages with Leishmania exosomes selectively induced secretion of interleukin-8, which may facilitate the pathogen infection. 35 The gastrointestinal nematode, or helminth, Heligmosomoides polygyrus utilizes exosomes to deliver miRNAs into mouse host cells to suppress inflammation and innate immune responses during infection. 36 Emerging evidence also indicates that plant EVs are critical for communication with their interacting microbes. Specifically, EVs are important for antimicrobial defense. Infection with the fungal pathogen B. cinerea or the bacterial pathogen Pseudomonas syringae pv tomato DC3000 stimulates the secretion of plant EVs, which indicates the important role of EVs in plant-pathogen interactions. 6,14 Recently, protein cargos involved in antimicrobial defense, including the glucosinolate transporters PEN3 and NRT1 as well as the myrosinase EPITHIOSPECIFIER MODIFIER1, 26 have been identified inside Arabidopsis EVs. 14 PEN3 is involved in immunity against the powdery mildew fungus Golovinomyces orontii and P. syringae pv tomato DC3000 bacteria, and the plant glucosinolate-myrosinase defensive system is activated only under tissue damage caused by pathogens, insects, or other herbivores. [37][38][39] This suggests that plant EVs may function as concentrated packets of antimicrobial molecules and compounds. The plant EV proteome was also enriched in various immunity-related membrane trafficking proteins, such as PEN1 (Syntaxin-121), Syntaxin-122, and Syntaxin-132,
further supporting the conclusion that plant EVs are also involved in protein transport during immune signaling. 14
| RNA-BINDING PROTEINS CONTRIBUTE TO SELECTIVE LOADING AND STABILIZATION OF sRNAs IN EVs
Based on plant EV sRNA profiling analysis, a specific group of plant sRNAs were detected in EVs, 6 which suggests that a regulatory process for selective loading of sRNAs into EVs exists in plants. Using mass spectrometry (MS) analysis, He et al. identified a group of RNA-binding proteins (RBPs), including Argonaute protein 1 (AGO1), DEAD-box ATP-dependent RNA helicase 11 (RH11), RH37, RH52, Annexin1 (ANN1), and ANN2, in Arabidopsis EVs isolated at 100,000 g. 18 These RBPs co-localize with TET8-positive EVs, and could be detected by Western blot analysis in these TET8-positive exosomes even after trypsin digestion. 18 Among these RBPs, AGO1 and the RNA helicase proteins can specifically bind EV-enriched sRNAs in both total RNA extracts and the EV fraction. In immunocapture-purified TET8-positive exosomes, only the AGO1-, RH11-, and RH37-bound sRNAs, but not AGO2- or AGO4-bound sRNAs, were detected, suggesting that AGO1 and RH11/37 contribute to selective sRNA loading into exosomes (Figure 1B). Annexins bind to sRNAs non-specifically and are not involved in the selective loading process. Moreover, the sRNA levels are reduced in EVs isolated from the ago1 mutant and from the double mutants rh11rh37 and ann1ann2, suggesting that all of these RBPs stabilize the sRNAs in EVs. Furthermore, rh11rh37 and ann1ann2 mutants are more susceptible to B. cinerea in comparison to wild-type plants. The expression of fungal virulence-related genes that are targeted by plant-secreted sRNAs was de-repressed in B. cinerea collected from rh11rh37 and ann1ann2 mutants. 18 These results show that EV-associated RBPs contribute to plant immunity by selective loading and stabilization of sRNAs in plant EVs.
EVs are also involved in arbuscular mycorrhizal symbioses and have been observed at the interface between plants and symbiotic arbuscular mycorrhizal fungi. 13 During the formation and maturation of the arbuscular mycorrhiza, plant MVBs have been observed fusing with the host-derived periarbuscular membrane (PAM) in areas where the plant and fungus interact. 13 Whether these EVs contain RNA cargos, especially sRNAs, remains to be established.
| CROSS-KINGDOM RNAi
Of the many emerging roles of EVs in plant systems, perhaps the most intriguing one is their critical role in cross-kingdom RNA interference (RNAi). Cross-kingdom RNAi is the transport of sRNAs between interacting organisms, which target and silence genes in the counter party. This communication mechanism was first discovered in the fungal pathogen B. cinerea, which can deliver sRNAs into multiple plant hosts, including Arabidopsis and tomato. Once inside plant cells, these fungal sRNAs hijack the plant RNAi machinery protein AGO1 to silence host immune response genes. 40 Soon after this initial discovery, cross-kingdom RNAi was demonstrated to be bidirectional: plants also send sRNAs into B. cinerea in order to target and silence key fungal virulence-related genes. 6,41,42 Since the discovery and characterization of cross-kingdom RNAi, this phenomenon has been observed in a variety of interacting organisms. In addition to fungal pathogens, the oomycete pathogen Hyaloperonospora arabidopsidis also transports sRNAs into its plant hosts and utilizes host AGO1 to silence plant genes. 43 The parasitic plant, Cuscuta campestris, transports miRNAs into host plants to silence defense response genes. 44 Cross-kingdom RNAi can also function in symbiotic interactions. The plant bacterial symbiont Rhizobium, although it has no conventional RNAi machinery, can generate sRNA-like RNAs from transfer RNA (tRNA) degradation. These tRNA-derived sRNAs are delivered into soybean cells and are loaded into soybean AGO1 to silence soybean genes, which helps to establish the plant-bacteria symbiosis. 45 Outside of plant interaction systems, cross-kingdom RNAi has been observed in animal-pathogen or parasite interactions. For example, the gastrointestinal nematode H. polygyrus sends sRNAs into mammalian gut cells in order to target and silence immunity and inflammation-related genes. 36 The fungal pathogen of mosquito, Beauveria bassiana, transfers a miRNA to the host cells and hijacks mosquito AGO1 to silence the host immunity gene Toll receptor ligand Spätzle 4. 46 In most cases, the precise mechanisms underlying cross-kingdom RNAi transport remain unclear. However, discoveries in plant-fungal and mammal-parasite interactions both suggest that EVs are a major mechanism of interspecies RNA transport. 6,36 In 2014, Amy Buck's research group discovered that H. polygyrus packages sRNAs into EVs, specifically exosomes, to deliver sRNAs into mouse intestinal epithelial cells. 36 Following this initial finding, a growing number of papers indicate that other parasites utilize the same strategy for sRNA delivery. 47,48 In plant systems, Cai et al. discovered that the host plant Arabidopsis packages fungal gene-targeting sRNAs in TET8-positive EVs for delivery into the pathogen B. cinerea. 6 Specifically, Cai et al. found a specific set of plant sRNAs localized in the TET8-associated EVs. 6 To confirm the EV localization of these sRNAs, a series of verifications, including nuclease treatment of purified EVs, high-speed density gradient ultracentrifugation, and EV immunoaffinity isolation with a TET8 antibody, was performed. 18 This work clearly demonstrates that sRNAs are located within TET8-positive EVs. Furthermore, these TET8-positive EVs can be efficiently taken up by B. cinerea fungal cells. 6
After being taken up by fungal cells, EV-delivered sRNAs are released to suppress critical fungal target genes, including vacuolar protein sorting 51 (Vps51), a large subunit of the dynactin complex (DCTN1), and a suppressor of actin-like phosphoinositide phosphatase (SAC1), which coordinate vesicle trafficking and play important roles in B. cinerea pathogenicity. 27 In PEN1-labeled EVs, a group of "tiny RNAs," which are 10-17 nucleotides in length and derived mainly from the positive strand of mRNA transcripts, has also been found. However, the biological function of these tiny RNAs is still unclear. 49 A similar phenomenon has been discovered between plants and the oomycete pathogen Phytophthora capsici. 50 Upon infection by P. capsici, Arabidopsis delivers secondary phasiRNAs from PPR gene clusters into the pathogen, likely using EVs, to silence target genes in P. capsici. 50 Intriguingly, ingested plant EVs can shape the mammalian gut microbiome through cross-kingdom RNAi. Specifically, miRNAs encapsulated in ginger EVs can be taken up by gut microbiota after ingestion, where they target and silence microbial genes, influencing microbiome community composition. 51 However, because bacteria do not have conventional RNAi machinery, it is still not clear how host sRNAs manipulate the expression of bacterial genes. The recent discovery of plant AGO1 protein being secreted together with sRNAs in EVs led us to hypothesize that host AGO proteins may transport their associated host sRNAs and function with them in silencing bacterial genes. Taken together, these studies suggest that in plants, as well as in animals, EVs are a major mechanism of sRNA transport.
APPLICATIONS
The critical role of EVs in cross-kingdom RNAi can be leveraged into innovative plant protection strategies. In one strategy, host-induced gene silencing (HIGS), plants are genetically engineered to express pathogen/pest gene-targeting double-stranded RNAs, which are processed into sRNAs. These sRNAs are then transported into the pest/pathogen, where they silence key virulence-related genes to suppress infection. 52,53 This strategy has been successfully used to control both fungal pathogens and insect pests in plants. However, a key drawback of HIGS approaches is that they rely on the generation of transgenic plants, which is still technically challenging in many crop species. Additionally, the cost of overcoming the regulatory hurdles necessary for bringing a GMO product to market further limits the feasibility of the HIGS approach.
An alternative to HIGS, spray-induced gene silencing (SIGS), involves the direct application of pathogen gene-targeting RNAs onto plant material, circumventing the need for genetic engineering. The recent discovery of fungal RNA uptake makes it possible to use SIGS to control fungal diseases in crops. 41,54,55 SIGS approaches have been successfully used to prevent fungal infections in both monocot and dicot plants, as well as in postharvest materials. 41,54,56 Because RNA is already present in most food, these RNAs are likely safe for human consumption. Furthermore, SIGS is an eco-friendly alternative to traditional fungicides, as RNAs degrade within 2 days of soil application. 57 Unfortunately, this rapid degradation is a major hurdle that must be overcome before widespread SIGS application. One strategy for enhancing RNA stability is to package the RNAs within nanoparticles, such as clay nanosheets, which can stabilize RNAs on plant tissue for up to 30 days. 58 In clinical contexts, lipid nanoparticles, which complex with RNAs to form liposomes, can package and stabilize therapeutic RNA treatments in the bloodstream. 59 The recently developed mRNA vaccine against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus that causes COVID-19, is also encapsulated in lipid nanoparticles to facilitate delivery into human cells. 60,61 This strategy may work particularly well in plant-fungal systems, as liposome-encapsulated RNAs mimic the natural delivery of plant sRNAs in EVs to fungal pathogens.
CONCLUSION
Though decades of research have been performed on EVs in animal systems, plant researchers are just beginning to scratch the surface of the multitude of complex roles EVs play in plant systems. Though first observed in the 1960s, plant EVs garnered little research attention until recently. Indeed, reports of EVs in organisms with thick cell walls were previously largely overlooked because a mechanism for EVs to cross cell walls was unknown. Recent studies in fungi, however, clearly indicate that cell walls are viscoelastic and dynamic in nature and can stretch to accommodate the passage of large molecules, including EVs. 62 Current methods for EV isolation in plants have largely been adapted from existing EV isolation protocols developed in animals. Using these isolation methods, it has been discovered that, as in animals, plants possess a variety of EV subclasses derived from different biogenesis pathways. These EVs are critical for the transport of proteins and small RNAs from plants to their microbial partners and pathogens. Further research into the contents of EVs and their roles in specific plant-pathogen communication mechanisms, such as cross-kingdom RNAi, will play a crucial role in the development of novel plant protection techniques. Beyond the obvious agricultural applications of plant EV research,
increasing evidence indicates that plant EVs could also be used in medical applications. As has already been demonstrated, dietary EVs can impact microbiome composition. 51 This suggests that it may be possible to package therapeutic sRNAs or medications in plant EVs for transport to target cells in animal systems.
The field of plant EVs is still in its infancy. Early advances have indicated the crucial role of EVs in plant-microbe interactions, especially in cross-kingdom RNAi. We expect new breakthroughs as this field of research matures. Beyond the basic goal of better understanding both plant physiology and plant-microbe interactions, delving deeper into the world of plant EVs can provide novel solutions to problems in both the agricultural and medical sectors, through innovative crop protection strategies and therapeutic delivery systems. | 4,968.6 | 2021-06-08T00:00:00.000 | [
"Biology"
] |
The Global System for Mobile Communications (GSM) for Wireless Home Security with Arduino and Web CAM
— This project presents the Global System for Mobile Communications (GSM) for wireless home security with Arduino and a webcam. The first aim of this study is to expand the use of Arduino and GSM as tools for a home security system. The second is to develop a relatively inexpensive and easy-to-use home security system. The third is to develop a security system built around the concept of self-monitoring. The fourth is to make it easier for users to stay aware of their home's condition simply by receiving SMS. The methodology used in developing this project is the Engineering Design Process model, which generally has nine phases; each phase helps the researcher ensure that the developed product achieves the set objectives. The researchers analyzed all the data and conclude that 70 percent of respondents agree that the designed system can reduce theft and improve home security features. Respondents also agreed that this system could be applied in real situations. In addition, all respondents agreed that the system is safe to use, with 86 percent agreeing and 14 percent strongly agreeing. The final results illustrate that the developed system can provide benefits and advantages to users.
I. INTRODUCTION
Today, information and communication technology (ICT) is growing and evolving rapidly. Other technologies therefore also continue to evolve so that the systems developed can keep up with the latest technology. Home security systems cannot be fully controlled by humans alone, but the advent of the latest technologies and microcontrollers, especially the Internet of Things (IoT), has given a new face to home monitoring and security systems [1]. The Global System for Mobile Communications (GSM) Wireless Home Security with Arduino and Web Cam is a home security system developed to reduce theft cases and increase home security features. This system uses a GSM module and an Arduino board as its main components. GSM, or the Global System for Mobile Communications, can send a short message service (SMS) alert to users in the event of an incident at their home. The webcam also helps the user monitor the situation at home. Together, these outputs make the system one of the more effective techniques for reducing the risk of house-breaking and distinguish it from existing home security systems [2], [3]. Choosing components relevant to the purpose of the study is important in ensuring that the developed system benefits users and, in turn, can be used as a home security system.
The main motivation for developing this project is the frequent problem that most existing home security systems use alarms only, so users cannot find out what has happened in their home until told by neighbors or the authorities. In addition, this system focuses on a simple, uncomplicated circuit, so it is easy to install and easy for users to operate. The cost of developing this project is also reasonable, making it affordable compared with existing security systems. Nevertheless, the use of Arduino and GSM software, as well as other electronic components, requires users to have some skill in order to operate them [4].
II. LITERATURE REVIEW
Studies should be carried out to collect, obtain, and analyze the information needed to develop a quality system. This study is very important in implementing the project; the development process cannot be carried out correctly if the information and knowledge needed to develop the project are inadequate. Therefore, all the necessary information needed in project development must be identified and predefined. Various methods can be used to gather the necessary information, such as reading materials including journals, websites, and related books. Additionally, home security systems on the market were examined, covering their advantages and disadvantages. In addition, researchers can also obtain the desired information by interviewing users in Malaysia. All data received go through an evaluation process and are analyzed in the process of developing this system [5], [6].
A. Home Security System
Historically, home security systems date back to Marie Van Brittan Brown and her husband, Albert Brown, of New York. In 1966, they introduced an electronic security system to reduce crime rates at the time. With the knowledge available in the field of electronic engineering, they created a closed-circuit television (CCTV) system to monitor the residence [7]. Technology is changing fast; every year, many new home security system designs are developed. Designers strive to produce security systems using the latest technologies such as Arduino, Raspberry Pi, and sensors. Sensor-based home security systems and webcams require advanced technologies and methods that connect wirelessly and ensure real-time operation and threat indication [8]. This can be seen in the development of the modern home, where the idea of a good life has changed; at the same time, home security follows current technological trends using digital, wireless, and Internet of Things solutions [9], [10]. Two types of home security systems are widely used: web-based systems (WBS) and phone-based systems (PBS). Phone-based control systems operate over Global System for Mobile Communications networks, while web-based systems operate via the internet or a wireless router. Fig. 1 depicts a web-based system diagram. Web servers and home guards play an essential role in web-based home security systems. The web server hosts an interface website through which recognized users control the system. Typically, real-time monitoring of home conditions is done through a web browser on a personal digital assistant (PDA), mobile phone, or laptop used to access the internet. The laptop is connected to the internet via a local area network cable or a wireless local area network (WLAN), while PDAs connect to the internet via WLAN [11]. Mobile phones are connected to internet services provided by mobile service companies. When a sensor detects any movement, a warning message is sent to the user through the website, and the user can verify the alarm by viewing the home through devices such as cameras installed in the house. However, web-based systems require costly devices such as laptops or PDAs, and monthly internet service makes them expensive to operate [12], [13]. Fig. 2 illustrates a telephone-based system in which users activate the home alarm when leaving home. When a person passes through the sensor range, for example at a door sensor, the alarm is signaled through the public switched telephone network (PSTN), which then calls the alarm monitoring company involved or the homeowner directly [15], [16]. (Fig. 2: PSTN-based system [14].)
III. DESIGN AND DEVELOPMENT
The Arduino IDE software was used to write the program code for this home security system. The pins used to connect the components to the Arduino board are defined in the program written with this software [17]; the researcher assigned the pins in accordance with the design of the system. The program code follows the project objectives: when movement is detected within the sensor area, the alarm is turned on. Next, the GSM module sends a short message to alert the user [18]; a minimal sketch of such a program is shown below. Moreover, Fig. 3 depicts the project circuit built from Arduino board components such as the UNO, the GSM module, and the sensor. The design process is one of the essential elements to consider in ensuring that the design is in line with the system being developed so that the components used work well together. The researchers investigated the appropriate placement of the PIR sensor and decided to mount it at the end of the house wall so that the sensor would work properly [19], [20]. The prototype design built by the researchers is shown in the accompanying diagram.
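The sketch below illustrates the kind of program described above: a PIR sensor is polled, and when motion is detected an SMS alert is sent through a GSM modem using standard AT commands. It is a minimal sketch written for this article, not the authors' actual code; the pin numbers, the SIM900-style modem driven over SoftwareSerial, the baud rate, and the phone number are all illustrative assumptions.

```cpp
#include <SoftwareSerial.h>

const int PIR_PIN = 2;                 // PIR sensor output pin (assumed)
const int GSM_RX_PIN = 7;              // Arduino pin wired to the modem TX (assumed)
const int GSM_TX_PIN = 8;              // Arduino pin wired to the modem RX (assumed)

SoftwareSerial gsm(GSM_RX_PIN, GSM_TX_PIN);  // software serial port to the GSM modem

// Send one SMS through a SIM900-style modem using standard AT commands.
void sendSms(const char *number, const char *text) {
  gsm.println("AT+CMGF=1");            // select SMS text mode
  delay(500);
  gsm.print("AT+CMGS=\"");
  gsm.print(number);
  gsm.println("\"");
  delay(500);
  gsm.print(text);
  gsm.write(26);                       // Ctrl+Z terminates and sends the message
  delay(3000);                         // give the modem time to transmit
}

void setup() {
  pinMode(PIR_PIN, INPUT);
  gsm.begin(9600);                     // typical baud rate for SIM900 modules (assumed)
  delay(2000);                         // allow the modem to register on the network
}

void loop() {
  if (digitalRead(PIR_PIN) == HIGH) {  // motion detected by the PIR sensor
    sendSms("+60123456789", "Alert: movement detected at home.");  // hypothetical number
    delay(60000);                      // simple cool-down so the user is not flooded with SMS
  }
}
```

In a real deployment, the alarm output and the webcam trigger described in the paper would be driven from the same condition that sends the SMS.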
IV. RESULT AND DISCUSSION
The system was developed to improve home security features and to reduce the risk of house-breaking. After briefing the respondents, the researcher sought their feedback concerning the effectiveness of this home security system in order to determine whether the developed system has achieved its objective. The results give the percentage of respondents who agree on the significance of this system; based on the data gathered, most respondents agree on its efficacy in improving home security features. The data are clear: 63 percent of respondents rated the corresponding statement 5 (strongly agree), while the remaining 37 percent rated it 4 (agree). In conclusion, the researcher can reasonably infer from these high ratings that the research objective has been accomplished.
V. CONCLUSION
As technology advances, its use in everyday life has become commonplace, and various technologies have been applied in our daily lives to ensure our well-being. Therefore, this system was developed using today's technologies, such as sensors, GSM modules, and mobile phones, to decrease Malaysia's house-breaking rates. The system allows users to monitor home security throughout their absence. In conclusion, the developed home security system certainly provides benefits and advantages to consumers. It also enhances home security features at a lower cost than existing security systems on the market, and the house-breaking rate could also be reduced with this system.
She has experience as a Test Engineer in a multinational company. Her research projects have been carried out in collaboration with a multinational company and government agencies, which contributes to a network that leads to new ideas and concrete research projects. The automation projects she has developed, focused on Sensor Monitoring, Embedded Systems, Software, IoT, and Wireless Communication, have been successfully adopted by industry to date. A total of more than a million Ringgit has been generated as income for the University, mainly from research grants, the commercialization of innovative research products, and services as a principal consultant. She has expertise in the agriculture sector, with a new invention adopting high technology to improve crop production, and is sincerely dedicated to green projects on the recycling and reuse of waste. She has won several international and national awards and has developed confidence and interest in research and teaching to enhance creative Innovation in Engineering, Science & Technology.
Hafizul Fahri Hanafi is a senior lecturer in the Computing Department, Faculty of Arts, Computing and Creative Industry at Sultan Idris Education University, Malaysia. He has over 17 years of experience in academia and has been an active researcher in many research activities and research grants in recent years. He has contributed substantially to research, particularly on augmented reality in education and human-computer interaction. He has also been involved in government activities that contribute to advancing digital learning in rural areas. His development expertise supports enhanced use of information technology, particularly for the current demands of mobile applications and development. Furthermore, his expertise has also been translated into academic articles published in high-tier journals.
Fatikah Anis Zakaria is currently pursuing Master of Information Technology at Sultan Idris Education University, Malaysia. She received her Bachelor's degree in Education Computer Aided Design Technology from Sultan Idris Education University, Malaysia in 2018. She has a background in technology and holds keen interests in the area of teaching and engineering to assure progressive Innovation in Engineering, Science & Technology. | 2,788.8 | 2021-02-20T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Closed form wave solutions of two nonlinear evolution equations
Abstract The exploration of closed form wave solutions of nonlinear evolution equations (NLEEs) is an important research area in the physical sciences and engineering. In this article, we investigate closed form wave solutions of two nonlinear equations, namely, the time regularized long wave equation and the (2 + 1)-dimensional nonlinear Schrodinger equation, by the modified simple equation method. These equations play a significant role in nonlinear science. The solutions are obtained in explicit form in the variables of the considered equations. The derived solutions are expressed in terms of exponential and trigonometric functions, including solitary and periodic solutions. It is shown that the method is effective and an essential mathematical tool for constructing closed form wave solutions of NLEEs in mathematical physics.
PUBLIC INTEREST STATEMENT
Nonlinear evolution equations (NLEEs) frequently arise in formulating fundamental laws of nature and in a wide variety of problems naturally arising from solid-state physics, plasma physics, ocean and atmospheric waves, meteorology, etc. Closed form solutions to NLEEs play a significant role in nonlinear science, especially in nonlinear physical science, since they can provide much physical information and more insight into the physical aspects of the problem. Therefore, numerous techniques have been developed by several groups of mathematicians and physicists to examine closed form solutions to NLEEs. In this article, we use the modified simple equation method to extract fresh and more general exact traveling wave solutions to the time regularized long wave equation and the (2 + 1)-dimensional nonlinear Schrodinger equation. Thus, we obtain closed form wave solutions of these two equations, some of which are new. We expect that the new exact traveling wave solutions will be helpful in illuminating the associated phenomena.
The modified simple equation (MSE) method is a recently developed, straightforward, and effective method that is becoming increasingly popular. The objective of this article is to apply the MSE method to construct closed form soliton solutions to the TRLW equation and the (2 + 1)-dimensional NLSE. The rest of the article is organized as follows: in Section 2, the MSE method is described; in Section 3, the method is applied to the NLEEs indicated above; in Section 4, results and physical explanations are discussed; and in Section 5, conclusions are provided.
The modified simple equation (MSE) method
In order to describe the MSE method, let us consider a nonlinear evolution equation in two independent variables x and t of the form
$$P\left(u, u_t, u_x, u_{tt}, u_{xt}, u_{xx}, \ldots\right) = 0, \tag{2.1}$$
where u = u(x, t) is an unknown function and P is a polynomial of u(x, t) and its partial derivatives, wherein the highest order derivatives and nonlinear terms are involved, and the subscripts denote partial derivatives. The important steps of this method are presented in the following. Step 1: Introducing a compound variable ξ, we combine the real variables x and t:
$$u(x, t) = u(\xi), \qquad \xi = x - ct, \tag{2.2}$$
where c is the speed of the solitary wave.
The wave transformation (2.2) allows us to reduce Equation (2.1) to an ODE for u = u(ξ) of the form
$$Q\left(u, u', u'', \ldots\right) = 0, \tag{2.3}$$
where Q is a polynomial in u(ξ) and its derivatives, and the prime stands for the derivative with respect to ξ.
Step 2: Assume that the solution of (2.3) can be written in the form
$$u(\xi) = \sum_{i=0}^{N} a_i \left(\frac{\psi'(\xi)}{\psi(\xi)}\right)^{i}, \tag{2.4}$$
where the a_i (i = 0, 1, 2, 3, …, N) are arbitrary constants to be determined such that a_N ≠ 0, and ψ(ξ) is an unknown function to be evaluated later, such that ψ'(ξ) ≠ 0. The characteristic feature and uniqueness of this method is that ψ(ξ) is neither a known function nor a solution of any predefined differential or algebraic equation, whereas in the sine-cosine method, Exp-function method, tanh-function method, (G′/G)-expansion method, Jacobi elliptic function method, etc., the solutions are introduced in terms of known functions. Therefore, it is not possible to speculate in advance what kind of solutions one may obtain through this method, and thus it may be possible to obtain some fresh solutions by this method.
Step 3: We determine the positive integer N arising in (2.4) by balancing the highest order linear and nonlinear terms appearing in (2.3).
Step 4: Compute the necessary derivatives u′, u″, … and insert Equation (2.4) into (2.3), taking account of the function ψ(ξ). This procedure yields a polynomial in 1/ψ(ξ). Equating the coefficients of the same powers of this polynomial to zero yields a system of algebraic and differential equations that can be solved to obtain the a_i (i = 0, 1, 2, 3, …, N), ψ(ξ), and the values of the other required parameters.
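As a brief illustration of Step 4 (a worked sketch added here for clarity, using the lowest-order case rather than material from the original derivation), take N = 1 in (2.4):
$$u = a_0 + a_1\,\frac{\psi'}{\psi}, \qquad
u' = a_1\left(\frac{\psi''}{\psi} - \frac{(\psi')^{2}}{\psi^{2}}\right), \qquad
u'' = a_1\left(\frac{\psi'''}{\psi} - \frac{3\,\psi'\psi''}{\psi^{2}} + \frac{2\,(\psi')^{3}}{\psi^{3}}\right).$$
Substituting such expressions into (2.3) and collecting terms produces the polynomial in 1/ψ(ξ) mentioned above; setting the coefficient of each power of 1/ψ to zero gives the algebraic and differential equations for a_0, a_1, and ψ.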
This completes the determination of solutions to the Equation (2.1).
Formulations of the solutions
In this section, the MSE method is applied to derive closed form solitary wave solutions to the TRLW equation and the (2 + 1)-dimensional NLSE.
The TRLW equation
In this sub-section, the MSE method is implemented to examine the closed form solutions to the TRLW equation, which is one of the alternative forms of the KdV equation (Islam et al., 2015), in which u, t, and x denote the amplitude, time, and spatial coordinate, respectively, and α is a nonzero constant. The first term is the evolution term, the third term is the nonlinear term, and the fourth term is the dispersion term. The nonlinear term uu_x accounts for steepening of the wave, while the dispersion term u_xtt spreads the wave out. Nonlinearity tends to localize the wave while dispersion spreads it out; solitons are the result of an intricate balance between dispersion and nonlinearity.
The traveling wave transformation ξ = x − ct, u(x, t) = u(ξ), where c is the wave speed, converts Equation (3.1) into an ODE of the form (3.2). Integrating (3.2) with respect to ξ once and setting the constant of integration to zero reduces it to a second-order ODE. Balancing u² and u″ yields N = 2.
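The balancing step quoted above can be made explicit as follows (a short sketch based on the standard homogeneous balance rule, not text from the original): if u in the ansatz (2.4) is assigned degree N, then
$$\deg\left(u''\right) = N + 2, \qquad \deg\left(u^{2}\right) = 2N, \qquad N + 2 = 2N \;\Longrightarrow\; N = 2,$$
in agreement with the value of N used for the TRLW equation.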
In (x, t) variables, the above closed form wave solution can be written accordingly. Again, if we choose c_1 = 1 and c_2 = −1∕ , the closed form solution (3.14) turns into the corresponding form; therefore, the closed form wave solution of the TRLW equation in (x, t) variables follows. Case 2: When a_0 = −2(1+c) and a_2 = −12c², solving Equations (3.6) and (3.7), we get a_1 = ±12c√ and c = c. Then, by setting the values of a_0, a_1, and a_2 in (3.4), we obtain (3.14). Converting the solution into hyperbolic form by using the exponential function identity, the closed form solution (3.17) becomes (3.18). Since c_1 and c_2 are constants of integration, setting c_1 = 1 and c_2 = 1∕ into (3.18) provides the following.
The above closed form soliton solution of the TRLW equation in (x, t) variables becomes:
Again, if we set c_1 = 1 and c_2 = −1∕ , the solution (3.18) takes the corresponding form. Thus, in (x, t) variables, the closed form traveling wave solution of the TRLW equation becomes:
The nonlinear Schrodinger equation
In this sub-section, we derive the closed form solitary wave solutions of the (2 + 1)-dimensional NLSE (Jawad et al., 2014) by means of the method described in Section 2:
$$i q_t + a q_{xx} - b q_{yy} + c\,|q|^{2} q = 0, \tag{3.21}$$
where q = q(x, y, t) is a complex-valued function, i = √−1, and a, b, and c are non-zero real parameters, wherein a and b are dissipation coefficients and c is the self-phase modulation coefficient. The first term represents the evolution term, the second and third terms represent dissipation, while the fourth term represents nonlinearity; the balance between these linear terms and the nonlinear term forms the solitons. The mathematical model of the nonlinear Schrodinger Equation (3.21) arises as an approximate model.
(3.17) u( ) = − 2(1 + c) 1 − 6c 1 e c 1 e + c 2 + 6c 2 1 e 2 (c 1 e + c 2 ) 2 ,
Taking the homogeneous balance between the highest-order derivative term u″ and the highest-order nonlinear term u³ yields n = 1.
Since c_1 and c_2 are constants of integration, one might arbitrarily pick their values. Therefore, if we pick c_1 = 1 and c_2 = 1/θ in Equation (3.33), we attain an exponential form wave solution, and simplifying this exponential solution, we derive the corresponding closed form solitary solution of the nonlinear Schrodinger equation. On the other hand, if we pick c_1 = −1 and c_2 = 1/θ, from Equation (3.37) we obtain q(x, y, t) accordingly. Case 2: When a_0 = 0 and a_1 = ± √ −2 1 , and ω = 2(aα − bβ), then substituting these values of a_0, a_1, and ω into (3.25) does not satisfy one of the algebraic Equations (3.28); hence, this solution must be rejected.
Discussion and physical explanations
In this section, we discuss the obtained solutions of the TRLW equation and the (2 + 1)-dimensional NLSE. Using the MSE method, we obtained the solitary wave solutions (3.13)-(3.20) of the TRLW equation. These solutions are generally closed form traveling wave
solutions, which include periodic wave solutions, soliton solutions, kink-shaped wave solutions, bell-shaped soliton solutions, and singular solutions. When the center position of the solitary wave is imaginary, singular solitons can be connected to solitary waves. Since this type of solution has a spike-like nature, it can probably provide an explanation for the formation of rogue waves. Kink-type soliton solutions are important for transferring signals and information in optical fibers. Periodic traveling waves also play an important role in various physical phenomena, including reaction-diffusion-advection systems, impulsive systems, self-reinforcing systems, etc. The mathematical modeling of many intricate physical events, for instance in physics, mathematical physics, computer science, and many other areas, involves periodic traveling wave solutions. Among the above solutions, it is observed that (3.13) and (3.14) provide periodic wave solutions, whereas (3.15) and (3.16) give soliton solutions. Solutions (3.18)-(3.19) and solution (3.20) show the nature of a bell-shaped soliton and a singular soliton, respectively, while (3.17) represents a kink-shaped solution. The periodic wave solutions (3.13) and (3.14) are shown in Figures 1 and 2 for c_1 = 1, c_2 = 2, and a_0 = 0, with the remaining parameters set to 1, within the interval −5 ≤ x, t ≤ 5. Solution (3.17) shows the shape of a kink-type solution in Figure 3 for c_1 = 1 and c_2 = 2, with the remaining parameters set to 1, within the interval −5 ≤ x, t ≤ 5. The bell-shaped soliton solution (3.19), for c_1 = 1 and c_2 = 1∕ , with the remaining parameters set to 1, within the interval −5 ≤ x, t ≤ 5, corresponds to Figure 4. The singular soliton solution (3.20), for c_1 = 1 and c_2 = −1∕ , with the other parameters equal to 1, within the interval −10 ≤ x, t ≤ 10, is represented by Figure 5. From the solutions of the (2 + 1)-dimensional nonlinear Schrodinger equation, it is observed that solutions (3.34)-(3.39) are singular periodic solutions, whereas solution (3.33) represents a periodic wave solution. Solution (3.33) is represented in Figure 6; it shows the periodic solution with k = 1, a = 2, b = 1, c_1 = 1, and c_2 = 2, and the remaining parameters set to 1, within the interval −3 ≤ t ≤ 3. The singular periodic solution (3.39), for k = 1, a = 2, b = 1, and c = 1, with the remaining parameters set to 1, within the intervals −10 ≤ x ≤ 10 and −5 ≤ t ≤ 5, is given in Figure 7. The figures of the other solutions are similar to the singular periodic type and are omitted for simplicity.
Conclusion
In this article, the modified simple equation method has been successfully implemented to establish closed form solitary wave solutions of the TRLW equation and the (2 + 1)-dimensional NLSE. The solutions were verified by inserting them back into the original equations and were found to be correct. Here, we obtained the values of the coefficients a_0, a_1, etc. without using any symbolic computation software such as Maple or Mathematica. The method used is much simpler in comparison with other methods because it is straightforward and its calculation procedure is very concise. Therefore, the applied method is quite efficient, practically well suited, and could be effectively used to solve various NLEEs which regularly arise in science, engineering, and other technical arenas. | 2,998.8 | 2017-01-01T00:00:00.000 | [
"Mathematics",
"Physics"
] |
Spin Polarization and Magnetic Properties of VGaON and VGaONInGa in GaN: GGA+U Approach
The electronic structure of a defect center containing the gallium vacancy and a substitutional oxygen atom at a nitrogen site (VGaON) in zinc blende and wurtzite GaN was analyzed within the GGA+U approach. The +U term was applied to d(Ga), p(N), p(O), and d(In). Neutral VGaON is in the stable high spin state with spin S = 1. The defect structure depends strongly on the geometry of the defect and the charge state. Two spin structures, which arise from two different configurations of VGaON, with ON either along the c-axis or in one of three equivalent tetrahedral positions in the wurtzite structure, were analyzed. Weak ferromagnetic coupling between centers was found. The strength of the magnetic coupling increases for the complex containing VGaON with an additional substitutional indium atom at a gallium site that is a second neighbor to the vacancy (VGaONInGa). Magnetic coupling between VGaONInGa centers is ferromagnetic due to strong spin polarization of the p electrons of the nearest and more distant nitrogen atoms.
III-nitride materials such as GaN and InGaN have found applications in advanced solid-state lighting technologies [1][2][3] and optoelectronic devices including diodes and solar cells [4][5][6]. Moreover, ferromagnetism (FM) in GaN or InN without doping by transition metal atoms was recently observed [7][8][9][10][11]. This FM was ascribed to the formation of native defects, such as cation vacancies or their complexes. For example, in Ref. [9], analysis of the characteristics of hysteresis curves in irradiated GaN showed that the coercive field increases with the concentration of gallium vacancies (V_Ga). Stable FM was also discovered in n-type GaN, and it may be due to the presence of unintentional donors, such as oxygen.
Positron annihilation spectroscopy measurements of V_Ga and its complexes containing the gallium vacancy and a substitutional oxygen atom at a nitrogen site (V_GaO_N) have found them to be the dominant defects in as-grown n-GaN [12][13][14][15][16][17][18][19][20][21]. Both defects were intensively studied in experiments for their optical properties [12,13,16,17,19,21], indicating these defects as possible sources of the green (GL) and yellow (YL) luminescence in GaN. In Ref. [12], Son et al. detect spin-polarized V_GaO_N by electron paramagnetic resonance and suggest that the YL and GL bands can be explained by the 0/− and −/−2 V_GaO_N optical transition levels. Moreover, in Ref. [13], two electronic structures were observed, which arise from two different configurations of V_GaO_N in wurtzite (w) GaN, one with O_N along the c-axis (axial configuration, referred to below as a) and the other with O_N in one of three equivalent tetrahedral positions (basal configuration, referred to below as b).
The increasing localization of the defect wave function has opposite effects on the stability of the local magnetic moment and on the collective magnetization: in the former case, the increased localization stabilizes the high-spin (HS) state, while the coupling through the overlap of the wave functions of neighboring defects is decreased [27,30,31,35]. For both V_Ga and the V_GaO_N complexes, strong localization can lead to stable local spin moments [26,27,35], but it does not automatically guarantee a stable interaction between them [31,35]. Partial delocalization of defect-induced bands may reduce the stability of the HS state of the defect but may also be responsible for long-range magnetic interactions. This stabilization can be due to the p-d exchange interaction of an impurity such as Mn [36] or to the spin polarization of p electrons in low-In-content InGaN.
Both the In concentration and the microscopic In distribution strongly influence the electronic structure and physical properties of InGaN [37]. The localization of the valence band maximum (VBM) states and their domination of the light emission of InGaN with low In content were observed [37]. Similar to GaN, V_Ga complexes were also suggested to be important non-radiative defects in InGaN quantum wells [38,39]. Strong effects on the electronic structure of the V_GaO_N-hydrogen complex were found in p-type InGaN at high In content [33].
To check the hypothesis that the magnetic coupling between V_GaO_N complexes can be more stable in InGaN, in the present paper we study the V_GaO_N structure in zb (zinc blende)- and w-GaN and InGaN using the GGA+U approach. After presenting the details of the calculations in Sect. 1, the justification of the chosen approach is given. Next, we present the results of calculations of the formation energies of defects (Sect.
Details of Calculations
Calculations based on density-functional theory were performed using ultrasoft pseudopotentials [40] and the Perdew-Burke-Ernzerhof GGA exchange-correlation potential [41], including the +U term, as implemented in the QUANTUM-ESPRESSO code [42] along the theoretical framework developed in Ref. [43]. We employed ultrasoft atomic pseudopotentials and chose the 3d, 4s, and 4p orbitals for Ga; 4d, 5s, and 5p for In; and 2s and 2p for N and O as valence orbitals. The plane wave basis with a kinetic energy cutoff (E_cut) of 40 Ry provided a convergent description of the analyzed properties. The Brillouin zone summations were performed using the Monkhorst-Pack scheme with a 2 × 2 × 2 k-point mesh [44]. The Methfessel-Paxton smearing method with a smearing width of 0.136 eV was employed for obtaining partial occupancies. The zb 216- and 512-atom and w 128- and 192-atom supercells were considered, and ionic positions were optimized until the forces acting on the ions were smaller than 0.02 eV/Å. Spin-orbit coupling was neglected. Formation energies were calculated according to Ref. [45] for N-rich conditions [30]. The +U corrections were imposed on d(Ga) and p(N) [30,31]. The band gaps E_gap calculated within LDA/GGA are underestimated and amount only to about 1.4 and 1.6 eV for the zb- and w-crystals, respectively (see Fig. 1 and Refs. [30,31]). Moreover, both LDA and GGA give too high calculated values [46]. Inclusion of a large U(Ga) = 10 eV term solves the problem of the correct position of d(Ga) [46][47][48], but E_gap is still only about 2.5 eV (see Fig. 1). Our previous works showed that increasing U(N) from 0 to 5 eV opens the band gap of w-GaN from 1.6 to 3.0 eV [30,31], but with d(Ga) centered about 13 eV below the VBM, similar to the LDA/GGA results [30,46]. Here, we systematically analyze how the U(Ga) and U(N) terms affect the electronic structure in both the zb- and w-crystals. The on-site +U parameter, varied from 0 to 10 eV, was applied separately to d(Ga) and p(N), and then together to both d(Ga) and p(N) orbitals, with U(N) = 5 eV. The energy E_gap calculated as a function of U is shown in Fig. 1.
We found that U(Ga) = 3.0 eV along with U(N) = 5 eV reproduces the experimental E_gap of 3.2 and 3.4 eV for zb- and w-GaN, respectively [49] (Fig. 1, (3, 3′)), and the binding energy of the Ga 3d level, centered about 15.5 eV below the VBM, in agreement with Ref. [50]. These values are also in agreement with HyF results [51]. Such an underestimation of the band gap and band structure follows from the sublinear dependence of the LDA/GGA total energy on the occupation [43]. Moreover, the sensitivity of E_gap to both U(Ga) and U(N) is explained by the orbital compositions of both the VBM and the conduction band minimum (CBM).
GGA+U calculations give the lattice constants a_zb = 4.57 Å and (a_w = 3.19 Å, c_w = 5.2 Å) for zb- and w-GaN, respectively. These values are very close to the experimental data of a-
GGA+U vs HSE Results
It was noted above that the HyF [16-18, 21, 22, 26-28, 32, 33] calculated V_Ga structure is in agreement with GGA+U results for U(N) = 5 eV [30,31]. Here, in order to verify the agreement further, we perform calculations with the Heyd, Scuseria, and Ernzerhof functional, based on the PBE functional, where the parameter α is the fraction of exchange replaced by Hartree-Fock exchange [54]. Calculations were done for isolated V_GaO_N and for V_GaO_N-V_GaO_N (3NNs axial configuration in the notation of Sect. 2.4) in w-GaN. The parameter α = 0.25 was set to reproduce a band gap of ~3.0 eV. The results indicate that the energies of spin polarization (ΔE_PM−FM) (defined in Sect. 2.1) agree to within 0.05 eV or less, and the energies of magnetization (ΔE_AFM−FM) (defined in Sect. 2.2) agree to within 0.005 eV or less. This shows good agreement between the two approaches for calculations of magnetic properties.
We note that the problem of choosing the α parameter to obtain accurate defect levels is still an open issue [55,56], as is the choice of the U parameter in the GGA+U approach [30].
Formation Energy of Defects
The formation energy of charged V_GaO_N was calculated. One geometry of a-V_GaO_N was considered for cubic GaN, and two configurations, a- and b-V_GaO_N, were analyzed for w-GaN. Because the aim was to understand the influence of In doping on the spin-polarized properties, a number of configurations of the complex were chosen: In_Ga as a second nearest neighbor to V_Ga, forming O_N-In_Ga (referred to below as o) or N-In_Ga (referred to below as n) chains, where these O_N and N atoms occupy the nearest possible positions among the V_Ga neighbors. Hence, two geometries, referred to as a-o- and a-n-V_GaO_NIn_Ga, were considered for the zb crystal, and four configurations, referred to as a-o-, a-n-, b-o-, and b-n-V_GaO_NIn_Ga, were analyzed for w-GaN. In this work, the formation enthalpy of InGaN was calculated, and the binding energy E_bind, defined as the difference in the total energies of compounds that contain V_GaO_NIn_Ga or (V_GaO_N and In_Ga), was taken into consideration.
The formation energy E_form of a defect was calculated by the formula taken from Ref. [45]; a sketch of the expression is collected after the next paragraph. The first two terms on the right-hand side are the total energies of the supercell with and without the complex, respectively. n_i is the number of atoms of species i, with the +(−) sign corresponding to the removal (addition) of atoms. E_VBM is the energy of the VBM of bulk GaN, and ε_F is the Fermi energy referenced to this E_VBM. The energy E_VBM is determined from the total energy difference between the pure crystal with and without a hole at the VBM in the dilute limit, following the algorithm from Ref. [45]. The μ_i are the variable chemical potentials of the atoms in the solid, which in general are different from the chemical potentials μ_i(bulk) of the ground state of the elements (Ga bulk, In bulk, and N_2, O_2). The chemical potentials of the components in the standard phase are given by the total energies per atom of the elemental solids: μ(Ga bulk) = E_tot(Ga bulk), μ(In bulk) = E_tot(In bulk), while μ(N bulk) = E_tot(N_2)/2 and μ(O bulk) = E_tot(O_2)/2 (ΔH_f(NO) was neglected). Under N-rich conditions, μ(Ga) = E_tot(Ga bulk) + ΔH_f(GaN) and μ(In) = E_tot(In bulk) + ΔH_f(InN) are taken, where ΔH_f is the enthalpy of formation per formula unit, which is negative for stable compounds. ΔH_f at T = 0 K is obtained by considering the reaction to form or decompose crystalline GaN and InN from their components, and it depends on the cohesive energies E_coh of Ga, In, N, and O. The obtained results for E_coh of Ga, N, and O were shown in Refs. [30,57]. The calculated E_coh(In), ΔH_f(zb-GaN), and ΔH_f(InN) are 2.56 (2.5 [58]), −1.24 (−1.27), and −0.36 (−0.32 [59]) eV, respectively (experimental values in parentheses).
The last term, E_correct, includes two corrections. The first one, ΔE_PA, is the potential alignment correction of the VBM: the VBM in the ideal supercell and in the supercell with a (charged) defect differ by an electrostatic potential shift, which is obtained by comparing the potential at two reference points far from the defect in the respective supercells with (P[D^q]) and without (P[0]) the defect. The second correction is an image charge correction expressed in the second-order Makov-Payne form, where α_M is the lattice-dependent Madelung constant, which for the hexagonal structure is 3.5, W is the supercell volume, and ε is the static dielectric constant. E_MP was calculated to be 0.2 and 0.4 eV for the charged defects (q = −1, −2). The results of the calculations are presented in Sect. 2.2.
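For reference, the quantities described in the two preceding paragraphs can be collected into explicit expressions. The following is a sketch assembled from the definitions given above and from the standard forms of the Ref. [45] formation-energy formula and the second-order Makov-Payne correction; it is not copied from the original paper, and it uses the sign convention stated in the text (n_i > 0 for removed atoms):
$$E_{\mathrm{form}}\left[D^{q}\right] = E_{\mathrm{tot}}\left[D^{q}\right] - E_{\mathrm{tot}}[\mathrm{bulk}] + \sum_i n_i\,\mu_i + q\left(E_{\mathrm{VBM}} + \varepsilon_F\right) + E_{\mathrm{correct}},$$
$$E_{\mathrm{correct}} = \Delta E_{\mathrm{PA}} + E_{\mathrm{MP}}, \qquad
\Delta E_{\mathrm{PA}} = q\left(P[D^{q}] - P[0]\right), \qquad
E_{\mathrm{MP}} = \frac{q^{2}\,\alpha_M}{2\,\varepsilon\,W^{1/3}}.$$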
Results
This section summarizes the obtained results for the defect structure and formation energy and discusses magnetic interaction between defects.
Electronic Structure and Spin Polarization of V Ga O N
The defect states of V_GaO_N stem from the interaction between the vacancy orbitals and the O-impurity states. The local atomic configuration of this defect has C_3v point symmetry. In both the zb- and the w-structures, the vacancy is tetrahedrally surrounded by three N atoms and one O atom, and the respective defect states are localized on the resulting broken sp³ bonds (Fig. 2), which split into a singlet a_g and a higher-lying quasi-triplet "t_2". The energies of a_g and "t_2", given below relative to the VBM, depend on the crystal structure, the geometry of the defect, and the charge state. a_g is a resonance state with the valence bands. "t_2" of V_GaO_N is located in the band gap and splits into a doublet e_2 and a singlet a_1 with a splitting energy of about 0.2, 0.5, and 0.7 eV for zb-GaN and for the a- and b-geometries of V_GaO_N in the w-crystal, respectively (see Fig. 3, the green lines). The energy splitting contains the contributions of a weak perturbation by the w-symmetry in w-GaN and the U-induced so-called quasi-Jahn-Teller (JT) effect [30].
In the case of non-vanishing spin polarization, the exchange coupling splits e_2 into spin-up e_2↑ and spin-down e_2↓ states by the exchange splitting energy defined as Δε_ex = ε(e_2↓) − ε(e_2↑), where ε is the energy of the defect level, and a_1 into a_1↑ and a_1↓ (Fig. 3, the blue lines). Δε_ex, in general, depends on the symmetry of the defect, the charge state, and U. The e_2↓ and a_1↓ states of neutral V_GaO_N in zb-GaN are localized in the band gap at 2.6 and 1.9 eV above the VBM, respectively. The e_2↑ and a_1↑ states are resonances with the VBM (Fig. 3a). From this point of view, V_GaO_N is a deep acceptor containing two holes.
As presented above in Section 2.3, in the w-crystal the defect can exist in two different geometries, tagged a- and b-V_GaO_N. According to the GGA+U calculations, the single-electron level representations of the electronic structures are different for a- and b-V_GaO_N (Fig. 3 b, c, left panels). The structure of a-V_GaO_N in w-GaN is similar to that in zb-GaN: e_2↓ and a_1↓ lie 2.45 and 1.65 eV above the VBM, and e_2↑ and a_1↑ are hybridized with the valence bands (Fig. 3b). Introducing the O atom into the basal plane of the defect leads to a strong symmetry perturbation: e_2↓ is split by 0.5 eV into two singlet states, a_2↓(1) and a_2↓(2). Thus, "t_2↓" in this case is a composite band containing the a_1↓, a_2↓(1), and a_2↓(2) levels located about 1.4, 2.3, and 2.8 eV above the VBM, respectively (Fig. 3c).
The energies of the V_GaO_N levels strongly depend on the charge state q. Single-electron energy levels of V_GaO_N^q for q = 0, −1, and −2 in their respective charge states are shown in Fig. 3. The physics behind the calculated electronic structure of the charged defects is determined by the following counteracting effects [30]: (i) the intracenter Coulomb repulsion, which is dominant in the non-spin-polarized calculations; without spin polarization, the energy of e_2 and a_1 increases by ~0.5-0.6 eV as q changes from 0 to −2 (the levels are shown in green in Fig. 3); (ii) the exchange splitting; for example, in zb-GaN, as q changes from 0 to −2, Δε_ex decreases from 2.6 to 0 eV; (iii) the U-induced potential, which is attractive (repulsive) for occupied (unoccupied) orbitals [43]; this effect is clearly seen for e_2↓, which decreases as q changes from 0 to −1 (Fig. 3a, b). These results are in agreement with the HyF calculations of Ref. [17], where negative-U_eff behavior of V_GaO_N was observed.
The "t_2" defect state is occupied by 4, 5, and 6 electrons for q = 0, −1, and −2, respectively, as shown in Table 1. Generally, ΔE_PM−FM assumes its maximal value when "t_2" is occupied by 4 electrons with spin S = 1, and it vanishes when "t_2" is fully occupied. Moreover, the energy of the antiferromagnetic (AFM) state of a single defect was also considered in the analysis [23]. Every considered geometry of neutral V_GaO_N is a magnetic center in the HS state with a local magnetic moment (Table 1). Finally, V_GaO_N^−2 is a non-spin-polarized center (in this case ΔE_PM−FM = 0). According to our calculations, the b-configuration stabilizes the HS state. The geometry of the defect also affects the localization of the V_GaO_N states. The effect is displayed in the plots of the density of spin polarization, Fig. 2a, e-f. Figure 2 indicates that the V_GaO_N states are dominated in all cases by the localized and spin-polarized contribution of the three sp³ orbitals of the nearest N neighbors, because the O atom is more electronegative than the N atom and two electrons with opposite spins are located on the sp³ oxygen orbital. In contrast to the GGA method, in which the electrons occupying, for example, "t_2↓" are spread over the four p orbitals of the nearest neighbors [31,60], the GGA+U calculations showed that partial occupancy is avoided [31,60]. Moreover, in the case of b-V_GaO_N, one can observe an anisotropy of the three N sp³ orbitals (Fig. 2d, f). The U-induced symmetry breaking of the e_2 level, i.e., the quasi-Jahn-Teller effect, was observed in b-V_GaO_N. In a-V_GaO_N, the contributions of the three N neighbors to the vacancy wave function are almost equal, whereas for b-V_GaO_N the wave function is dominated by the two basal N ions located in the (x, y) plane, and the contribution of the remaining N ion is strongly reduced; see Fig. 2d-f.
Formation Energy and Transition Levels
The calculated E_form of V_GaO_N, assuming ε_F at the VBM, is given in Table 2. E_form is the same in all configurations; the same trend was also observed in Ref. [32]. The obtained values of 1.64, 3.54, and 5.73 eV in w-GaN are close to the HSE results (1.9, -, and 5.3 eV [17]) and the HyF (B97-2 functional) results (1.5, 4.0, and 6.2 eV) [32].
The change in the defect charge state is determined by the transition level ε(q_1/q_2), defined as the Fermi energy relative to the VBM at which the formation energies of the q_1 and q_2 charge states are equal. We find ε(0/−) to be 1.84, 1.9, and 2.1 eV for the defect in zb-GaN and for the a- and b-geometries in w-GaN, respectively, and ε(−/−2) to be 2.1, 2.2, and 2.4 eV for the same cases, which is consistent with the V_GaO_N energies shown in Fig. 3. A comparison of the calculated energies with the results for other exchange functionals is shown in Table 3; Table 3 contains data only for a-V_GaO_N. Comparable values of ε(0/−) and ε(−/−2) were obtained with GGA+U and with different HyF exchange-correlation functionals. For example, our results are close to those obtained with the HSE06 approach (with 20% exact exchange) [17,18].
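The definition quoted above can be written out explicitly (a short derivation that follows directly from that definition, not an equation reproduced from the paper). Writing E_form(q; ε_F) = E_form(q; ε_F = 0) + qε_F and equating the formation energies of the two charge states gives
$$\varepsilon(q_1/q_2) = \frac{E_{\mathrm{form}}\left(q_1;\,\varepsilon_F = 0\right) - E_{\mathrm{form}}\left(q_2;\,\varepsilon_F = 0\right)}{q_2 - q_1},$$
so that, for example, ε(0/−) is the Fermi level at which the neutral and singly negative charge states of V_GaO_N have equal formation energy.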
Spin Polarization of V Ga O N In Ga
In Sect. 1.3 the geometrical configurations of V_GaO_NIn_Ga were discussed. Although our calculations show that the binding energy of such a complex is low, ~0.1-0.3 eV, the formation energy is 1.9 eV, which is a little higher than in the case of V_GaO_N and considerably lower than in the case of V_Ga [30,34]. Because the formation energy for the different geometries (a- or b-) is similar, both V_GaO_N and V_GaO_NIn_Ga can exist in nonequivalent atomic configurations.
In this section, the results for calculated energy of spin polarization, local magnetic moment, spin density, and density of charge of V Ga O N In Ga are presented. Moreover, the effect of addition of In impurity on the spin-polarized properties is analyzed by comparing these results with similar results for V Ga O N . Next, in order to get a clearer picture of the influence of In on electronic structure, the complex V Ga O N nIn Ga , where n = 2, 3, and 4, is investigated in zb-GaN.
The defect states of V_GaO_NIn_Ga stem from the interaction between the vacancy orbitals and the O- and In-impurity states. A strong effect on the electronic structure was observed for the wurtzite crystal (see Fig. 3b, c, right panels). As with the O atom in the basal plane, in this case too, for both a- and b-V_GaO_NIn_Ga, "t_2" is split into three singlet states, a_1↓, a_2↓(1), and a_2↓(2), due to the strong tetragonal perturbation generated by the In atom in the crystal structure. a_2↓(2) of b-n-V_GaO_NIn_Ga is higher in energy and is a resonance state with the conduction bands (Fig. 3c).
The calculated ΔE_PM−FM values for the different configurations of V_GaO_NIn_Ga are given in Table 1. According to our calculations, the inclusion of In_Ga stabilizes the HS state of the defect by 0.1-0.16 eV in comparison to V_GaO_N (see Table 1). For example, in zb-GaN, the differences in the spin polarization energies of V_GaO_N and V_GaO_NIn_Ga are 0.08 and 0.14 eV for q = 0 and −1, respectively. The same energy gain takes place in the w-structure; for example, the differences between ΔE_PM−FM(b-n-V_GaO_NIn_Ga) and ΔE_PM−FM(b-V_GaO_N) are 0.16 and 0.08 eV for q = 0 and −1, respectively. Generally, the increase of ΔE_PM−FM is in agreement with the changes in the electronic structure: shifting the single-electron levels (a_2↓(2) in particular) up toward the CBM leads to an increase in the exchange splitting energy and therefore in ΔE_PM−FM. The above results obtained for V_GaO_NIn_Ga are typical: the stability of the HS state predicted for V_GaO_NIn_Ga is 10-12% higher than that predicted for V_GaO_N.
A further increase of ΔE_PM−FM was observed with rising In content: ΔE_PM−FM is 1.7, 1.78, 1.88, 1.9, and 1.8 eV for n = 0, 1, 2, 3, and 4, respectively. For n = 4, ΔE_PM−FM is smaller than for n = 2 and n = 3.
The spin (Fig. 2b, c, h-j) and electron (Fig. 2k, l) contour densities of V_GaO_NIn_Ga are shown in Fig. 2. As in the case of V_GaO_N, the V_GaO_NIn_Ga states are localized mainly on the p orbitals of the three N atoms (Fig. 2b, c, h-j). The spin density of V_GaO_NIn_Ga is more delocalized, since it comprises long-range tails that involve the p orbitals of distant N ions or even the p orbital of oxygen (see V_GaO_N2In_Ga, Fig. 2c). These orbitals constitute the VBM of GaN (Fig. 4). The V_GaO_NIn_Ga spin-up states are resonances degenerate with the valence bands, i.e., both a_1↑ and e_2↑ hybridize with the upper part of the VBM, which also forces a partial delocalization of their wave functions. It is evident from Fig. 2 that enhancing the correlation effects by the addition of an In atom leads to a decrease in the localization of the spin density on the broken bonds and to an increase in its axial anisotropy.
The contour (Fig. 2k, l) is plotted in the (100) plane and shows that a large contribution to the electron density comes from the nitrogen atoms and strong ionic bonds resulting from sp 3 hybridization. The spherical symmetry around anions is observed, which indicates that the bonds in GaN:V Ga O N In Ga are dominated by the ionic component.
By calculating the contributions of individual atoms, projected onto the relevant atomic orbitals, to the total DOS (Fig. 4a), one finds that the main contribution comes from the defect states p(N) of the N nearest neighbors of V_Ga (Fig. 4b), in agreement with Fig. 2. Both the p(N) and p(O) orbitals also build the VBM of GaN (see Fig. 4b-d). The contribution of the d(In) orbitals to the spin density is non-negligible due to the substantial contribution of d(In) to the VBM and the defect states (see Figs. 2 and 4f).
Magnetic Interaction of Defect Pairs
In the present section, we study V_GaO_N-V_GaO_N and V_GaO_NIn_Ga-V_GaO_NIn_Ga defect pairs and analyze the impact of crystal distortion on their properties by comparing the results of magnetic interaction calculations. The magnetic coupling between vacancy complexes is discussed as a possible origin of the experimentally observed FM in GaN. The electronic structure of a defect pair is determined by three factors: (i) the distance between the vacancy complexes, (ii) the relative orientations of the complexes with respect to each other and to the crystal axes, and (iii) the charge state. V_GaO_N-V_GaO_N and V_GaO_NIn_Ga-V_GaO_NIn_Ga configurations were considered in which the defects are third nearest neighbors (3NNs) and fourth nearest neighbors (4NNs), with respective spatial separations of about 5.2 and ~6.5 Å (the distance between the V_Ga sites in the relaxed structure). In w-GaN, the defects can be located either in the same (x, y) basal plane perpendicular to the c-axis, which is referred to as the xy-case, or they can be oriented along the c-axis, which is denoted here as the c-case. Finally, we mention that the defect pair in such configurations has eight nearest neighbor atoms (two O and six N atoms). Because the goal of this work was to analyze the FM in GaN, we do not consider here 1NN configurations with seven nearest neighbor atoms. When the complex is spin polarized, the orientation of the two spins can be ferromagnetic (FM), ferrimagnetic (FiM, when the spins of the two complexes are different because the defects are in different charge states), or antiferromagnetic (AFM). In the latter case, the total spin vanishes but the spin polarization energy is finite. The energy of magnetization ΔE_AFM(FiM)-FM is defined as the difference between the total energies of the AFM (FiM) and FM states and is positive when the FM coupling is stable. In summary, two and four configurations were considered, and the results are collected in Table 4, where we only briefly highlight them. The obtained values are a little larger than those obtained with the HSE06 approach in Ref. [26] due to the shorter spatial separation between defects in our work (Table 4).
The dominant mechanism of the magnetic interaction between V_Ga-O_N complexes is determined by the competition between the bonding-antibonding (BA) splitting and the spin-up-spin-down exchange splitting (Δe_ex).
Table 4. Energy of magnetic coupling ΔE_AFM(FiM)-FM together with the total magnetic moment (in μ_B) of the complex in the charge state q (the sum of q_1 and q_2), obtained for zb- and w-GaN. For simplicity, both ΔE_AFM-FM and ΔE_FiM-FM are denoted ΔE_AFM-FM; the actual spin configurations and values of the magnetic moment are given in the columns "μ".
The spin densities of the defect pairs comprise long-range tails that involve p orbitals of distant N ions (Fig. 5). Moreover, the N neighbors of the complex are not equivalent, as shown by their dissimilar contributions to the spin density. Indeed, the contribution of the axial N (or O) ions is almost vanishing (Fig. 5a, c, e, g). The non-negligible spin density on the In orbitals demonstrates the important contribution of the In atom to the magnetic interaction between defects.
Crystal Structure of zb- and w-GaN and Relaxation
All the above results were obtained for atomically relaxed structures. The increase of FM stability with growing In content can be explained by distortion effects in the crystal structure due to the atomic displacements during relaxation. As can be seen in Figs. 2 and 5, significant and complex perturbations of the GaN crystal structure occur when the vacancy is introduced; this can be attributed to the fact that the radius of In is larger than that of Ga and the radius of N is larger than that of O. The interatomic distances between the neighbors of a complex defect are determined mainly by the lattice constants of the host crystal (which, for the low In content considered here, were kept fixed) and by the atomic relaxations, i.e., the displacements of the neighbors. In the studied compounds, the outward relaxation of the nearest neighbors elongates bonds by about 5-12%. From Fig. 6, it follows that the distortion (change in bond length) for V_Ga-O_N-In_Ga is larger than that for V_Ga-O_N. We note, however, that the structural distortions are more complex: although the displacements of the second and third neighbors are an order of magnitude smaller, the effect of atomic relaxations around defects, involving not only the nearest but also more distant neighbors, cannot be neglected. The states of the defect complexes are determined by the overlap of the N and O dangling bonds, given by the N-N (N-O) distances. In the ideal structure (after relaxation without a defect), the N-N distance is 3.18 Å. In V_Ga-O_N, the N-N and N-O dangling-bond distances are 3.55 and 3.48 Å, respectively. In V_Ga-O_N-In_Ga, the three N-N distances are 3.58, 3.52, and 3.48 Å, and the N-O distance is 3.42 Å, i.e., on average shorter. This implies that the defect states are more localized and that the energy of spin polarization of such defects is higher.
Summary and Conclusions
In summary, the spin states of the V_Ga-O_N and V_Ga-O_N-In_Ga complexes in both zb- and w-GaN, and the magnetic coupling between them, were studied within GGA+U calculations. The U(Ga) = 3 eV and U(N) = 5 eV terms were imposed on the d(Ga) and p(N) orbitals, leading to the correct band gap of GaN.
Charge states q from 0 to −2 of V_Ga-O_N were considered. In both crystal structures, the high-spin configuration with S = 1 is stable for neutral V_Ga-O_N. The wave functions of V_Ga-O_N have a multi-orbital character, being composed of three p(N) orbitals and one p(O) orbital of the vacancy neighbors, but the main contribution to the spin density comes from the sp3 N orbitals.
Two different electronic structures, which arise from the two possible geometries of V_Ga-O_N in w-GaN, with O_N located either along the c-axis or in one of the three equivalent tetrahedral positions, were analyzed. The latter geometry yields a more stable HS state and a more delocalized defect state.
Introducing In_Ga as a second neighbor of V_Ga, on the one hand, modifies the electronic structure and, on the other hand, gives rise to a more delocalized defect wave function as the crystal structure is perturbed, and it contributes to a long-tailed spin density distribution. The magnetic moments originate mainly from the sp3 N orbitals, and the contribution of the p orbitals of distant N, d(Ga), and d(In) states is about 20%.
Various relative orientations of the defects and several charge states (q = 0 and q = −2) were considered, and the consequences for the observed FM in GaN were pointed out. Using the mean-field relation T_c = 2zS(S + 1)J/3k_B with z = 6 neighbors and S = 1, the calculated coupling constants J(V_Ga-O_N-In_Ga) = 0.01 eV and J(V_Ga-O_N) = 0.0045 eV correspond to Curie temperatures of about 920 K and 415 K, respectively, i.e., above room temperature. These values have limited reliability, since the distribution of defects is random and the coupling depends on the distance between defects.
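As a quick consistency check of these numbers (a sketch only, assuming z = 6, S = 1, and the Boltzmann constant k_B ≈ 8.617 × 10^-5 eV/K):

```latex
T_c = \frac{2 z S (S+1) J}{3 k_B} = \frac{2\cdot 6\cdot 1\cdot 2}{3}\,\frac{J}{k_B} = 8\,\frac{J}{k_B},
\qquad
T_c(J = 0.01\ \mathrm{eV}) \approx \frac{0.08}{8.617\times 10^{-5}}\ \mathrm{K} \approx 930\ \mathrm{K},
\qquad
T_c(J = 0.0045\ \mathrm{eV}) \approx 420\ \mathrm{K},
```

consistent with the ~920 K and ~415 K values quoted in the text.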
Comparing the obtained results with experiments, we note that, according to our calculations, the observed collective ferromagnetism in GaN systems [7-11] can originate from the magnetic interaction between V_Ga-O_N defects, and in Ga-rich InGaN alloys we predict even stronger FM. | 7,875 | 2019-02-08T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Application Research of Vision Sensor in Material Sorting Automation Control System
This paper presents an automatic material sorting system based on a PLC and vision sensors. The control system involves various sensor technologies, such as photoelectric and vision sensors, together with mechanical technology, electrical and electronic technology, frequency converter technology, motor drive technology, pneumatic control technology, and human-machine interface technology, and integrates these technologies on a training platform. Through practice on this platform, students can familiarize themselves with the detection, transmission, and processing flow of an automatic production line and with the control process of the system, and become familiar with mechatronics technology. The system first detects metal with an inductive sensor and colour with a photoelectric sensor, then uses the machine vision system to check whether the machining is qualified and to obtain the target data, and finally controls the robot to perform the sorting action and complete the storage operation of the stereo warehouse.
Introduction
The goal of this research is to design an automatic material sorting control system based on a PLC and a vision sensor for training. The system was designed and developed with a company in our city to meet the needs of comprehensive training for students in the school's electromechanical program. It can improve students' practical ability and their ability to adapt quickly to work. At the same time, the research on this topic also helps improve design ability and teaching level.
The training platform of the automatic material sorting system studied here involves various sensor technologies, such as photoelectric and vision sensor technology, as well as microelectronic technology, mechanical technology, electrical and electronic technology, inverter technology, motor drive technology, and pneumatic control technology. It integrates these technologies on a single training platform, so that through practice students can familiarize themselves with the detection, transmission, and processing of an automatic production line and with the control process of the system, and can master mechatronics technology. The training platform is installed on aluminum-alloy guide rails and can simulate various control links in actual production, such as material feeding, conveying, and sorting, so that practitioners gain real automation production experience and the gap between school teaching and actual enterprise production is narrowed [1].
System design
This design adopts a modular method to analyze and design each control unit of the material sorting system. A Mitsubishi FX3U-series PLC is used as the controller. The system mainly consists of a feeding unit, a conveying and detection unit, a handling robot unit, and a sorting storage unit. The PLC controls the feeding cylinder to push the material onto the conveyor belt, and the belt, driven through the frequency converter, transfers the material to its end. During transport, an inductive sensor detects whether the material is metal, a photoelectric sensor then detects its colour, and a vision sensor finally checks whether the material pattern has been machined, so that the target data for sorting are obtained. The handling robot then carries out the handling work: unqualified parts are transported directly to the scrap area, while qualified parts are carried to the transport platform. Finally, the loading platform completes the automatic loading of the sorted parts into the three-dimensional warehouse: white materials are stored in sequence in the third warehouse, black materials in the second warehouse, and metal materials in the first warehouse.
Sorting platform composition
The structure of the automatic material sorting system is shown in Figure 1. It is mainly composed of seven unit workstations: the PLC control terminal, feeding unit, transmission unit, detection unit, human-machine interface unit, handling robot unit, and classified storage unit, forming a flexible detection and sorting system. Each unit can work independently and is a complete mechatronic system. The handling unit uses a robot to grab the material, the transmission unit uses an inverter and a three-phase asynchronous motor, the classified storage unit adopts servo, stepping, and stacking-cylinder control, and the feeding unit uses a push cylinder [2].
System layout
Power unit: three-phase AC mains switch (with leakage and short-circuit protection) and single-phase power sockets for module power supply and for external devices; the power connections between modules are made with terminals. Button unit: buttons and indicators with various functions, including the mode switch, emergency stop button, start and stop buttons, and the robot and sorting mechanism zero buttons. The electrical circuit unit consists of six modules. Transmission and detection unit: a belt driven by an AC motor, with various sensors and vision sensor modules installed around its periphery. Handling robot unit: a robot controlled by a stepper motor and cylinders. Classified storage unit: a transport platform and a stereo warehouse.
Figure 2. Overall layout of the system.
System functions
The system workflow is shown in Figure 3. First, the origin reset of the robot and the transport platform is performed. The selector switch on the button panel is set to manual mode; on the touch screen, the robot is jogged forward, the X axis moves right and the Y axis moves up, and after the robot has moved a certain distance it is jogged in reverse so that the robot, the X axis, and the Y axis all return to the origin. Once the origin indicator lights up, automatic operation can be started. The selector switch is then set to automatic mode; when the start button is pressed and the feeding device detects material, the PLC drives the push cylinder to feed and the conveyor belt runs forward. When the photoelectric sensor at the end of the conveyor detects the material, it sends a signal to the PLC. The PLC drives the robot arm to open the gripper, extend, and lower to grip the material; the lift arm then retracts, the arm rotates to the right limit, the arm extends, and the gripper lowers again. The material is placed on the transport platform and is classified according to the sensor signals. The X axis and Y axis move simultaneously to the silo assigned to that material, the stack cylinder pushes it into the silo, and the platform returns to its original position. At the same time, the robot returns to its original position and waits for the next cycle. The sensors identify the material according to characteristics such as material type, color, and processed pattern. The PLC controls the corresponding solenoid valve to actuate the cylinder so that the material is sorted: qualified material is sent to its silo and unqualified material is sent to the defective area [3].
Vision sensor design based on material sorting automation system
Machine vision is a rapidly evolving branch of artificial intelligence in which a machine takes the place of the human eye for measurement and judgment. The machine vision sensor converts the captured target into an image signal through an image-capturing device (based on a CMOS or CCD sensor) and transmits it to a dedicated image-processing system to obtain information about the target's shape. Brightness, color, and other information are converted into digital signals; the image system performs various operations on these signals to extract the features of the target and then controls the field devices according to the result of the discrimination [4].
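As a rough illustration of this capture-process-decide pipeline (a minimal sketch, not the actual Omron controller logic; the file names, template image, and threshold below are hypothetical), pattern checking could be prototyped with OpenCV template matching:

```python
import cv2

MATCH_THRESHOLD = 0.8  # hypothetical pass/fail threshold on normalized correlation


def pattern_is_qualified(image_path: str, template_path: str) -> bool:
    """Return True if the reference pattern is found on the workpiece image."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)        # captured workpiece image
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)  # reference "qualified" pattern
    if image is None or template is None:
        raise FileNotFoundError("image or template could not be loaded")
    # Slide the template over the image and score every position (normalized cross-correlation).
    scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, _ = cv2.minMaxLoc(scores)
    return best_score >= MATCH_THRESHOLD


if __name__ == "__main__":
    ok = pattern_is_qualified("workpiece.png", "pattern_template.png")
    print("qualified" if ok else "defective")  # in the real system this result goes back to the PLC
```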
Hardware Design
The appearance of the coaxial light source is shown in Figure 4. In vision sensors, the choice of light source and illumination scheme plays a crucial role and goes beyond simply illuminating the object: a suitable combination of light source and lighting scheme highlights the characteristics of the object, making it possible to separate the region to be inspected from unimportant regions and to increase the contrast between them. Transmitted or reflected light is typically used in vision applications. For reflected light, the relative position of the light source and the optical lens, the texture of the object surface, and the geometry of the object should be fully considered [5]. This system inspects the pattern on the top of a circular workpiece, so a coaxial light source is used to obtain better results: it eliminates shadows caused by surface unevenness, thereby reducing interference, and its beam-splitter design reduces glare. The choice of the industrial lens is also critical, because the lens resolution directly affects image quality. When purchasing a lens, the relevant parameters must first be understood: resolution, focal length, aperture size, sharpness, depth of field, effective image field, interface form, etc. This system uses a C-mount lens for an Omron 2/3-inch camera, model 3Z4S-LESV-1214H.
Sensor Controller.
The measurement process is shown in Figure 5. The sensor controller includes an image capture card and a processor; the capture card transfers the image to memory, where it is then computed and analyzed. This system uses the OMRON FH-L550 sensor controller. When the FH receives a measurement trigger signal from an external device such as a PLC, it captures an image from the registered camera and executes the processing items registered in the measurement flow, in order.
Implementation of automated sorting control algorithm
Following the above analysis, the automatic sorting algorithm is studied according to the order sorting time. Because the material must leave the sorting area for manual packing, time is reserved for the staff to act, and the process is divided into two stages. Each shelf (the "virtual domain" of one order) occupies a distance d on the belt, with d = vt, where v is the moving speed of the conveyor belt and t is the time interval the sorter needs to sort one material; d therefore defines the length of a shelf. There are k − 1 orders in front of the k-th order, so the distance L_k between the "virtual domain" of order k and the entry point O is determined by the number of shelves ahead of it. When the "virtual domain" of order k reaches the sorting entrance, the distance L_i of the i-th stage to the entry point O can also be determined. The material of variety i corresponds to sorter number i, whose position f_i from the sorting entry point is fixed. The total distance a shelf travels to reach its sorter therefore combines L_k, L_i, and f_i, and since the belt speed is v, this distance can be converted into the total time required for the shelf to reach its sorter. Thus, the time required for each shelf to reach its designated sorter can be calculated, and a time list can be generated at the same time as the orders. The storage racks are then arranged on the conveyor belt according to this table. At the same time, the time parameters for each sorter action are imported into the PLC and interlocked with the control conditions [6]; when the belt passes the corresponding position at the corresponding time, the sorter acts and the material is pushed off the belt. Orders are thereby sorted continuously, achieving automatic sorting and meeting the requirements of the above assumptions.
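A minimal sketch of this timing calculation is given below (illustrative only; it assumes the total travel distance is simply the queue offset of order k plus the fixed sorter position f_i, and it omits the stage offset L_i mentioned above, since the original equations are not reproduced here; all names are made up, not taken from the PLC program):

```python
def arrival_times(belt_speed_v, shelf_length_d, orders, sorter_positions):
    """
    Compute when each queued order's material reaches its designated sorter.

    belt_speed_v     : conveyor speed in m/s
    shelf_length_d   : length of one shelf ("virtual domain"), d = v * t, in m
    orders           : list of variety indices i, in queue order (order k is orders[k])
    sorter_positions : dict {variety i: fixed distance f_i of sorter i from entry point O}
    """
    times = []
    for k, variety in enumerate(orders):
        l_k = k * shelf_length_d                     # distance of order k's virtual domain from O
        f_i = sorter_positions[variety]              # fixed position of the matching sorter
        total_distance = l_k + f_i                   # assumed total travel distance
        times.append(total_distance / belt_speed_v)  # convert distance to time
    return times


# Example: belt at 0.2 m/s, 0.5 m shelves, three queued orders of varieties 1, 3, 2
print(arrival_times(0.2, 0.5, [1, 3, 2], {1: 1.0, 2: 2.0, 3: 3.0}))
```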
Programming
The control requirements of the system are to first detect the material type and colour of the workpiece and then to inspect its machined pattern. The main flow of the program is as follows: when the feeding unit detects material, feeding starts, the push cylinder acts and pushes the material onto the conveyor belt, and the inverter-controlled motor starts the belt; material-type detection is carried out first, followed by colour detection, and finally visual inspection determines whether the workpiece is qualified. Qualified workpieces are placed in turn in the three-dimensional warehouse, while surplus and unqualified ones are placed in the reject area. The program is written with the GX Works2 programming software: a new project is created and the ladder diagram is entered according to the designed logic. The program is then compiled; if there are no syntax errors it can be debugged, and any problems found are corrected. Finally, the program is downloaded to the PLC, and the PLC components, intermediate relays, and parameters are monitored.
Interface design
After the screen design is saved, the user can use the compile function to check whether the screen layout is correct. If the compilation result contains no errors, the offline simulation function can be executed; if online simulation is required, the corresponding work button can be used after connecting the device. The rest of the man-machine interface is designed in the same way, and finally the PLC completes the control of the automatic sorting system. In the touch screen configuration, the stop button and the other controls are mapped to the corresponding PLC addresses, as shown in Table 1.
System components and functions
The automatic material sorting system for training is shown in Figure 7. The system mainly consists of a feeding unit, a conveying and detecting unit, a handling robot unit, a sorting storage unit, and a human-machine interface unit, together with the corresponding pneumatic circuits and electrical control devices. When the system runs, once the fiber sensor in the supply storage tank detects material, the push cylinder quickly pushes it onto the conveyor belt, the AC motor drives the belt forward, and the material is sent to the sensor detection unit. This unit consists of various sensors and vision sensors installed near the conveyor belt; it can judge the material type and colour, whether the processed image is qualified, and whether the position is in place. Finally, the detected workpieces are sorted: those that fail visual inspection, and materials of any type exceeding the number of available warehouse positions, are transported by the robot to the defective placement area for disposal, while the remaining workpieces are placed on the loading platform in turn at different positions. White plastic parts are placed in warehouses 1-3 on the third level and black plastic parts in warehouses 4-6 on the second level [7].
Material Inspection Processing Procedure
When the photoelectric sensor in the storage unit of the feeding unit detects material, the cylinder pushes it out, and the inverter controls the AC motor to drive the belt; several detection sensors and the vision lens are installed above and on both sides of the belt to form the system detection module. The material first passes the inductive sensor, which is responsible for detecting whether it is (silver) metal and is connected to the X15 input of the PLC: when silver metal material arrives, the X15 contact turns on, and the material is sent to the end of the belt. Black and white colour detection is realized by a reflective photoelectric sensor connected to the X16 input of the PLC: when black material arrives, the X16 contact stays off; when white material arrives, the X16 contact turns on. The processed pattern detected by this system is "Rob": if the pattern is qualified, X30 turns on, and if not, X31 turns on. When the material reaches the end of the conveyor, the X17 contact turns on and the PLC then controls the robot to carry the material.
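To make this signal logic concrete, here is a simplified sketch (not the actual ladder program; the function and bin names are illustrative and follow the contact meanings above and the warehouse assignment described earlier):

```python
def classify(x15_metal: bool, x16_white: bool, x30_pattern_ok: bool) -> str:
    """
    Map the detection signals described above to a sorting decision.
    X15 on -> metal material
    X16 on -> white material, off -> black material
    X30 on -> processed pattern qualified (otherwise X31 would be on)
    """
    if not x30_pattern_ok:
        return "defective area"
    if x15_metal:
        return "warehouse 1 (metal)"
    if x16_white:
        return "warehouse 3 (white plastic)"
    return "warehouse 2 (black plastic)"


# Example: a white, non-metal, qualified workpiece
print(classify(x15_metal=False, x16_white=True, x30_pattern_ok=True))
```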
Online testing
After the system is assembled, the program is downloaded to the PLC, the interface designed for the touch screen is downloaded to the touch screen, and the connection line between the PLC and the touch screen is connected. Manual operation test: set the mode selector switch to manual, open the manual control page on the touch screen, and test the functions of each part in turn, including the pulse counts of the robot and the XY axes. For example, when the push cylinder is commanded to extend, observe whether it actually extends and whether the magnetic switch state changes. Automatic operation test: set the switch to automatic mode and press the start button so that the system runs automatically. Press the robot zero-return button and observe whether the robot returns to the origin and whether the origin indicator lights up; also observe the zero return of the sorting and conveying platform. Feeding unit test: when there is material in the storage tube, check whether the cylinder feeds according to the program control requirements. The conveying and detection test mainly checks whether the conveyor belt runs and stops as required and whether the signals fed back by each sensor are accurate, and it adjusts the light source, focal length, and contrast of the photoelectric sensor amplifier and the vision sensor. The handling test mainly checks whether the robot can move into position to grab the material, place it on the transport platform, and send waste to the waste area, and whether the positions of these movements are accurate.
Conclusion
The system adopts a modular structural design, so modules and programs can be modified according to the control requirements. The design meets the basic requirements and has completed trial operation as training equipment, running stably and accurately. The system can control material sorting automatically, effectively save energy, improve the management level, save manpower and material resources, complete teaching and training tasks well, and provide a useful reference for actual production. | 4,307.8 | 2020-04-15T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Pouchitis Is Associated with Paneth Cell Dysfunction and Ameliorated by Exogenous Lysozyme in a Rat Model Undergoing Ileal Pouch Anal Anastomosis
Background: Pouchitis is a common complication of restorative proctocolectomy and ileal pouch anal anastomosis (IPAA) for ulcerative colitis (UC), significantly affecting the postoperative quality of life. Paneth cells play an important role in the maintenance of gut homeostasis. This study aimed to investigate the role of Paneth cells in the pathogenesis of pouchitis. Method: Endoscopic biopsies from the pouch body and terminal ileum of UC patients undergoing IPAA with or without pouchitis were obtained to analyze Paneth cell function. Acute pouchitis was induced with 5% dextran sulfate sodium (DSS) for seven consecutive days in a rat model of IPAA. Paneth cell morphology was examined by immunofluorescence and electron microscopy. The effect of exogenous lysozyme supplementation on pouchitis was also investigated. The fecal microbiota profile after DSS and lysozyme treatment was determined by 16S rRNA ITS2 sequence analysis. Result: Abnormal mucosal lysozyme expression was observed in patients with pouchitis. The rat model of pouchitis showed increased pouch inflammation, increased CD3+ and CD45+ T cell infiltration, and decreased tight junction proteins, including ZO-1 and Occludin. There was a significant deficiency of Paneth cell-derived lysozyme granules in the rat model of pouchitis. Supplementation with exogenous lysozyme significantly ameliorated pouchitis, lowering the levels of inflammatory cytokines such as TNF-α and IL-6 in the pouch tissue. 16S rRNA analysis revealed a higher Lachnospiraceae level after lysozyme treatment. Conclusions: Paneth cell dysfunction is prominent in patients and rat models of pouchitis and may be one of its causes. The decrease in Lachnospiraceae, a characteristic of dysbiosis in pouchitis, could be reversed by lysozyme treatment. Lysozyme supplementation shows promise as a novel treatment strategy for pouchitis.
Introduction
Pouchitis is the most common postoperative complication of ileal pouch anal anastomosis (IPAA) for ulcerative colitis (UC), interfering with pouch function and the patient's quality of life. Idiopathic pouch inflammation is still etiologically undetermined. Recent studies suggest that dysbiosis with abnormal immune responses is a key factor in pouchitis [1-3]. However, the extensive use of antibiotics and probiotics cannot completely cure pouch inflammation, and 10-15% of cases can still evolve into recurrent or chronic antibiotic-refractory pouchitis [4,5]. Therefore, new therapeutic strategies for pouchitis are required.
The Paneth cell is a secretory lineage lying in the crypts of the small intestinal epithelium [6]. Its main secretory components are α-defensins, lysozyme, and phospholipase A2, which are released into the intestinal lumen to inhibit the excessive expansion of the microbiota and maintain gut homeostasis [7]. Lysozyme is a β-1,4-N-acetylmuramoylhydrolase that enzymatically processes the glycan skeleton of bacterial cell walls [8]. The constant secretion of lysozyme contributes to the normal colonization of the commensal microbiota and to host defense against enteric pathogens [9-11]. Paneth cell dysfunction has been implicated in the impairment of the mucosal barrier and in susceptibility to intestinal disorders such as irritable bowel syndrome (IBS) [11], necrotizing enterocolitis (NEC) [12], and Crohn's disease (CD) [13], all characterized by microbiota dysbiosis.
However, the role of Paneth cells in pouch physiology and pouchitis has not been adequately investigated. We hypothesized that abnormal Paneth cell function, characterized by abnormal lysozyme secretion, triggers microbiota dysbiosis and aggravates pouchitis. This study aimed to explore the role of Paneth cell dysfunction and the subsequent dysbiosis in the pathogenesis of pouchitis.
Ethical Considerations
This study was approved by the Ethics Committee of Jinling Hospital (2021NZKY-011-01). Written informed consent was obtained from all patients. The experimental animal protocols were approved by the Animal Ethics Committee of Jinling Hospital.
Patient Cohorts
Patients with an ileal pouch were prospectively recruited between 1 September and 31 December 2021 from the Inflammatory Bowel Disease Center of Jinling Hospital. The inclusion criteria were as follows: (1) post-colectomy diagnosis of UC; (2) availability of pouch endoscopy; and (3) ileostomy closure within 10 months. The exclusion criteria were the use of any antibiotics, antifungal agents, bacterial probiotics, or fungal probiotic therapy within 4 weeks before stool sampling.
The patients' clinical information was collected during outpatient visits. Mucosal biopsies of the pouch and the terminal ileum were obtained during pouch endoscopy. A normal pouch was defined based on clinical, endoscopic, or histological criteria. Pouch disease activity was scored with the Pouchitis Disease Activity Index (PDAI), and pouchitis was defined as PDAI ≥ 7 [14].
Animal Models
Male Sprague-Dawley rats weighing 350-380 g were housed in a specific pathogen-free animal facility at room temperature with a 12 h light/dark cycle and provided with a standard rat chow diet and water. Animal care and experiments were conducted according to international guidelines on animal research and ethics. The animals were fasted for 24 h before surgery, and IPAA was performed according to established protocols [15].
After surgery, the animals had a 30-day recovery period. The rats were then randomly divided into three groups: a control group (no intervention), a DSS group, and a DSS + Lyso group (n = 5 per group). Rats in the DSS group were administered 5% DSS (relative molecular mass: 36,000-50,000; MP Pharmaceuticals, Thessaloniki, Greece) for 7 consecutive days. Rats in the DSS + Lyso group were additionally given oral gavage with 100 mg/kg lysozyme (L6876; Sigma-Aldrich, Burlington, MA, USA) dissolved in 5% bicarbonate buffer, starting 7 days before DSS administration and continuing throughout the experiment.
Sample Collections
Weight changes and fecal scores were recorded daily. The fecal score was assigned according to a previous study [16]: briefly, a lack of stool was scored as 1, diarrhea as 2, a blob of stool as 3, textured stool as 4, and normal stool as 5. Fecal samples were collected prior to animal sacrifice. For tissue harvesting, the rats were sacrificed under anesthesia on the 7th day after DSS treatment. The pouch and pre-pouch tissues were harvested and washed with ice-cold PBS for further analysis. Part of the tissue was fixed in 10% neutral formalin for histological assessment, and the rest was immediately snap-frozen in liquid nitrogen and stored at −80 °C for subsequent experiments.
Histological Assessment
The tissues were embedded in paraffin and stained with hematoxylin and eosin (H & E). The pouch specimens were assessed as previously described [17]. Erosion was scored as follows: 0, negative; 1, focal erosion; 2, erosion in several regions; 3, extensive erosion. Ulceration was scored as follows: 0, none; 1, focal ulceration of the mucosa in half of the superficial regions; 2, total mucosal ulceration at multiple foci; and 3, extensive mucosal ulceration extending to the muscularis mucosa or beyond. Intra-epithelial inflammation was evaluated by counting the number of lymphocytes per 100 epithelial cells at the tips of the villi. Villous atrophy was scored as follows: 0, none; 1, mild; 2, moderate; or 3, severe with villous flattening. Edema of the lamina propria was scored as 0, none, or 1, positive.
Quantitative Real-Time Reverse-Transcription PCR
Total RNA was extracted from the pouch tissues using an RNA Extraction Kit (Sigma-Aldrich). The RNA was denatured in the presence of an oligo dT primer and then reverse transcribed with a cDNA Reverse Transcription Kit (P111/P112; Vazyme Biotech Co., Ltd., Nanjing, China). Lysozyme, IL-17, IL-6, IL-10, IFN-γ, and TNF-α mRNA were detected using the SYBR Green qPCR kit (R223-01; Vazyme Biotech Co., Ltd.). The expression of each mRNA was normalized to GAPDH, and the data were calculated using the 2^-ΔCt method. The primers are listed in Supplementary Table S1.
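For reference, the 2^-ΔCt normalization described here can be written in a few lines (a generic sketch; the Ct values in the example are illustrative, not measured data):

```python
def relative_expression(ct_target: float, ct_gapdh: float) -> float:
    """2^-dCt: expression of the target gene normalized to GAPDH in the same sample."""
    delta_ct = ct_target - ct_gapdh
    return 2.0 ** (-delta_ct)


# Example: hypothetical lysozyme Ct = 26.0 and GAPDH Ct = 18.0 in the same cDNA sample
print(relative_expression(26.0, 18.0))  # ~0.0039 relative to GAPDH
```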
Electron Microscopy
For electron microscopy (EM), the pouch tissue was immersed in 3% glutaraldehyde fixative in 0.1 mol/L KH2PO4 at pH 7.4. The samples were then rinsed in 0.01 M PBS (pH 7.4), post-fixed in PBS containing 1% osmic acid for 2 h, and rinsed in phosphate buffer. Dehydration was performed in a graded ethanol series, followed by infiltration with the embedding agent at 37 °C overnight and polymerization at 60 °C for 48 h. A Leica ultramicrotome (Leica UC7, Leica Microsystems, Wetzlar, Germany) was used to cut ultrathin sections (60-80 nm thick), which were double-stained with uranyl acetate and lead citrate to locate Paneth cells. The tissue sections were examined using a Hitachi electron microscope (HT7700, Hitachi, Tokyo, Japan).
Microbiota DNA Extraction and High Throughput Sequencing
Technical support was provided by Shanghai Genesky Biotechnology Company (Shanghai, China). Fecal genomic DNA was extracted from the fecal samples using the QIAamp DNA Stool Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's protocol. Bacterial genomic DNA was used as a template to amplify the V3-V4 hypervariable region of the 16S rRNA gene using the forward primer (5'-CCTACGGGNGGCWGCAG-3') and the reverse primer (5'-GACTACHVGGGTATCTAATCC-3'). Thirty nanograms of genomic DNA and the corresponding fusion primers were loaded into the PCR, and the corresponding PCR parameters were set for amplification. The PCR products were purified using Agencourt AMPure XP magnetic beads (Beckman Coulter, Brea, CA, USA), dissolved in Elution Buffer, and labeled to complete library construction. An Agilent 2100 BioAnalyzer (Agilent Technologies, Santa Clara, CA, USA) was used to determine the size range and concentration of the library fragments. Qualified libraries were sequenced on an appropriate platform (HiSeq/MiSeq) according to the size of the inserted fragments. The tags were stitched into reads based on the overlap between reads. Based on the OTU and species annotation results, the species complexity of the samples and the differences in species between groups were analyzed. OTU representative sequences were taxonomically classified using the Ribosomal Database Project (RDP) Classifier v.2.2, with a minimum confidence threshold of 0.6, trained on the Greengenes database v201305 using QIIME v1.8.0.
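As an illustration of the "species complexity" metrics reported later (a generic sketch based on the standard Shannon and Simpson definitions, not the study's actual pipeline; the OTU counts below are made up):

```python
import math


def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over OTU relative abundances."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)


def simpson(counts):
    """Simpson diversity 1 - sum(p_i^2)."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)


otu_counts = [120, 80, 40, 10, 5]  # hypothetical OTU table for one sample
print(shannon(otu_counts), simpson(otu_counts))
```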
Statistical Analysis
Continuous variables conforming to a normal distribution were reported as mean ± SD, nonparametric continuous variables were reported as median (interquartile range, IQR), and categorical variables were reported as frequency and percentage. The Student's t-test was used for intergroup comparisons of continuous variables conforming to a normal distribution; otherwise, the Mann-Whitney test was used. A two-sided p value < 0.05 was considered statistically significant for all analyses. SPSS software (SPSS 22.0) was used for all analyses.
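A minimal sketch of this decision rule is shown below (illustrative data; the normality check via Shapiro-Wilk is an assumption on my part, since the text does not specify how normality was assessed, and the analysis was actually run in SPSS rather than Python):

```python
from scipy import stats


def compare_groups(a, b, alpha=0.05):
    """Student's t-test if both groups look normally distributed, otherwise Mann-Whitney U."""
    _, p_a = stats.shapiro(a)
    _, p_b = stats.shapiro(b)
    if p_a > alpha and p_b > alpha:
        return stats.ttest_ind(a, b)  # two-sided by default
    return stats.mannwhitneyu(a, b, alternative="two-sided")


control = [1.0, 0.9, 1.1, 1.2, 0.95]   # hypothetical relative expression values
dss = [0.4, 0.5, 0.3, 0.6, 0.45]
print(compare_groups(control, dss))
```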
Abnormal Mucosal Lysozyme Expression in Patients with Pouchitis
To explore the role of Paneth cells in pouchitis, twelve patients were enrolled (five with a normal pouch and seven with pouchitis). The patient characteristics are summarized in Table 1. Seven patients (58.33%) were male and three (25.00%) had a 3-stage IPAA. The median disease duration before IPAA was 48 months. The median age of the pouch was 12 months. The mean PDAI was 2.60 in patients with normal pouches and 11.00 in patients with pouchitis. None of the patients had a family history of IBD or had undergone surgery for colorectal carcinoma.
Mucosal inflammation was revealed by H & E staining in the tissues with pouchitis, as described by Shen et al. [18] (Supplementary Figure S1). Immunofluorescence of the biopsy tissue showed decreased lysozyme-positive cells in the inflamed pouch tissue, consistent with the results of a previous study [19] (Figure 1A). Although the pouch tissue had a higher lysozyme mRNA level than the terminal ileum, pouchitis (PS) tissue had a lower lysozyme level compared to the normal pouch (NP) (1.35 ± 0.29 vs. 1.00 ± 0.00, p = 0.03; 0.78 ± 0.45 vs. 1.35 ± 0.29, p = 0.04, respectively; Figure 1B).
Paneth Cell Dysfunction in a Rat Model of Pouchitis
To explore whether there is functional impairment of Paneth cells in experimental pouchitis, we used a rat model of DSS-induced pouchitis. Immunofluorescence staining revealed lower lysozyme expression in the DSS group than in the control group (Figure 2A). The integrated optical density of cell fluorescence was calculated to estimate the amount of lysozyme in the pouch tissue, indicating that Paneth cell-derived lysozyme was significantly reduced in the pouches of the DSS group compared with the control group (74.81 ± 16.06 vs. 147.94 ± 26.67, p = 0.015; Figure 2B). Compared with the pre-pouch tissues, the pouch tissues presented a lower level of lysozyme expression in the DSS group (Figure 2A,B). There were no significant differences in lysozyme expression between the control and DSS groups in the pre-pouch tissue, and no significant differences between the pre-pouch and pouch tissues in the control group (Figure 2A,B).
Electron microscopy showed malformations of the lysozyme secretory granules and cavitating granules in the Paneth cells (Figure 2C). The proportion of normal granules in five fields per group was calculated, indicating a lower rate of normal secretory granules in the DSS group compared with the control group (37.20 ± 7.36% vs. 88.00 ± 5.74%, p < 0.001) (Figure 2D). In the pouch of the DSS group, lysozyme was decreased compared with the control group, as confirmed by Western blotting and qPCR (0.05 ± 0.02 vs. 0.21 ± 0.09, p = 0.05; 0.35 ± 0.30 vs. 1.0 ± 0.0, p = 0.002, respectively; Figure 2E-G).
Oral Lysozyme Ameliorated Pouch Inflammation in a Rat Model of DSS-Induced Pouchitis
We then examined whether exogenous lysozyme could ameliorate pouch inflammation. The experimental scheme is shown in Figure 3A, and the representative gross anatomy of the pouch is shown in Figure 3B,C. The dynamic changes in body weight, shown in Figure 3D, indicated a higher body weight after lysozyme treatment compared with DSS treatment alone. Compared with the control group, a higher histological score and a lower fecal score were observed after DSS administration, both of which were improved by lysozyme supplementation under DSS treatment (Figure 3E,F). In the DSS group, H & E staining (Figure 3G) showed that the villi of the pouch epithelium were blunted and the mucosa was irregularly arranged with more inflammatory cell infiltration, indicating that inflammation was present in the pouch body but not in the pre-pouch ileum. After lysozyme supplementation under DSS treatment, pouch inflammation was significantly reduced compared with the DSS group (Figure 3F,G).
Inflammatory Response and Epithelial Barrier after Lysozyme Treatment
Immunofluorescence showed that more CD3+ and CD45+ T cells infiltrated the intestinal epithelium after DSS treatment than in the control group, mostly in the villi (Figure 4A-C). The DSS + lysozyme group showed less CD45+ infiltration than the DSS group (Figure 4C), but no significant differences in CD3+ infiltration were observed between the DSS group and the DSS + lysozyme group (Figure 4B). The expression of the tight junction proteins Occludin and ZO-1 was marked by immunofluorescence in each group (Figure 4D). Quantification showed decreased Occludin and ZO-1 expression after DSS treatment compared with the control group, while their expression was increased after lysozyme treatment (Figure 4E-G).
The levels of inflammatory cytokines were also examined. The expression of tumor necrosis factor (TNF)-α, interleukin (IL)-6, interferon (IFN)-γ, and interleukin (IL)-17 mRNA in the pouch was significantly elevated in the DSS group compared with the controls (Figure 4H-K). After lysozyme treatment, the expression of TNF-α and IL-6 was significantly reduced. IL-10 expression differed significantly between the DSS and control groups, as well as between the DSS + lysozyme and DSS groups (Figure 4L). Specific data for the between-group comparisons of all experiments are provided in the Supplementary Materials.
Gut Microbiota Profile after DSS and Lysozyme Treatment
Sequencing of 16S ribosomal RNA gene tags from the fecal samples was performed to examine alterations in the microbiota profile. The 16S rRNA amplicon did not show any changes in α-diversity among the groups, whereas increased β-diversity was observed in the DSS group compared with the control group (Figure 5A,B). In addition, lysozyme supplementation significantly reduced the bacterial diversity (p = 0.030).
Bacteroidetes and Firmicutes were dominant in both cohorts, with a minor prevalence of Fusobacteria and Proteobacteria (Figure 5). At the phylum level, DSS treatment significantly increased the abundance of Proteobacteria (p = 0.002) and Elusimicrobia (p = 0.010) compared with the control group, both of which decreased after lysozyme supplementation under DSS treatment (p < 0.05), whereas the abundance of Firmicutes was significantly increased in the DSS + lysozyme group compared with the DSS group (p = 0.001).
Oral lysozyme supplementation contributed to a significant increase in Lachnospiraceae (p = 0.030), represented by Dorea, Blautia, and Roseburia (p = 0.002, p = 0.016, and p = 0.018, respectively), which have been shown to be beneficial for intestinal homeostasis. More details of the microbial changes in each group are shown in Supplementary Table S2.
Discussion
Although the microbiota profile after IPAA has been extensively studied [20-22], the pathogenesis of pouchitis is not completely understood. Current observations suggest that the multifaceted interplay between dysbiosis and abnormal mucosal immune activation is crucial to pouchitis [23,24]. Whether the alterations in microorganisms in the unusually constructed pouch canal are the cause or the consequence of pouchitis requires further clarification. Paneth cell abnormalities are characteristic of intestinal disorders [25] and are used as predictors of the early recurrence of CD [26]. Decreased antimicrobial peptide secretion by Paneth cells can weaken the mucus barrier, subsequently affecting the intestinal microecology [9,27].
Although Paneth cell dysfunction has been observed in the pathology of ileitis, aberrant lysozyme production in pouchitis has not been elucidated. Previous studies have suggested that the ileal pouch has a different expression of antimicrobial peptides [19,28]. This study provides evidence of Paneth cell dysfunction and the downregulation of lysozyme in patients and in a relevant animal model of pouchitis for the first time. Owing to their active secretory function, Paneth cells are susceptible to endoplasmic reticulum stress (ERS), which activates the unfolded protein response (UPR) [29]. However, if ERS in Paneth cells is extensive, the signaling typically shifts to a pro-apoptotic state. Notably, Shen et al. reported that increased crypt apoptosis is a feature of autoimmune-associated pouchitis [18]. Our study suggested that the aggravation of pouch inflammation was correlated with lower lysozyme levels in the Paneth cells. Exogenous lysozyme treatment decreased proinflammatory cytokine levels and reversed the mucosal barrier impairment. Thus, Paneth cell dysfunction might be a new mechanism of pouchitis, in line with other studies on lysozyme supplementation for treating intestinal inflammation [8,30].
Dysbiosis was also observed in DSS-induced pouchitis. Studies have consistently suggested a reduction in Lachnospiraceae in pouchitis, which is consistent with our findings [22,31]. This also indicates that a deficiency of enteric lysozyme can reduce the number of Lachnospiraceae, representative producers of the short-chain fatty acids (SCFAs) acetate and butyrate, which can facilitate the development of probiotic bacterial consortia that restore in vivo microbiome functions [32,33]. Further studies focusing on the role of Lachnospiraceae in the pathogenesis of pouchitis are warranted. Moreover, several studies have shown the effectiveness of fecal microbiota transplantation (FMT) and probiotics in the treatment of pouchitis [34,35], which further highlights the role of microbes in the pathogenesis of pouchitis.
This study has several limitations. First, the sample size is small, and additional prospective studies with a more detailed classification are needed to elucidate the role of Paneth cells in pouchitis. Second, the limitations of the rat model prevented us from examining the role of Paneth cells in pouchitis from a genetic perspective; other, genetically modified models may allow a more detailed investigation of Paneth cell dysfunction than simple drug interventions. Third, we did not examine the changes in the microbiome after lysozyme treatment in normal pouches, which may have introduced ambiguity into the current study. Therefore, future studies investigating the role of Paneth cells in pouchitis should include multi-dimensional design schemes.
Overall, an interaction between the host and microbes with an altered inflammatory immune response exists in the pouch environment. The decreased secretion of lysozyme by Paneth cells might be one of the pathological characteristics of pouchitis, which also leads to a lower abundance of Lachnospiraceae that can be restored by oral lysozyme supplementation. Exploring this new etiological perspective may be beneficial for patients who do not respond well to traditional antibiotic therapies.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/microorganisms11122832/s1. Figure S1: (A) H & E staining of biopsy tissues (bar = 100 µm); (B) histopathological score of each group; Table S1: list of primers; Table S2: details of the microbial changes in each group.
Figure 1. Abnormal lysozyme expression in clinical samples of pouchitis. (A) Immunofluorescence assay of lysozyme in pouch tissues (bar = 100 µm); sections stained for the indicated markers (green: lysozyme) and counterstained with DAPI (blue). (B) Relative expression of lysozyme in biopsy tissues (TI: terminal ileum; NP: normal pouch; PS: pouch with pouchitis). Data are shown as the mean ± SD with 5 samples for TI, 5 samples for NP, and 7 samples for PS. The asterisk indicates a statistically significant difference (* p < 0.05, ns: not significant).
Figure 2. Paneth cell dysfunction in a rat model of pouchitis. (A) Immunofluorescence assay of lysozyme in the intestinal tissue of each group (bar = 100 µm); (B) integrated optical density of Paneth cell fluorescence; (C) proportion of normal particles in each section; (D) electron microscopy revealing the normal and contracted appearance of Paneth cells (bar = 10 µm); (E) Western blot of the lysozyme protein in different groups; (F) greyscale analysis of lysozyme in each group; (G) relative expression of the lysozyme level in each group. Data shown are representative of 3 independent experiments; mean ± SD derived from 6 rats per group. The marks indicate a statistically significant difference (* p < 0.05, ** p < 0.01, *** p < 0.001, ns: not significant).
Figure 3. Exogenous lysozyme ameliorates pouchitis in a rat model of IPAA. (A) Schematic diagram of the experiment; (B,C) pouch configuration and gross pathology; (D) change in body weight; (E) fecal score; (F) histological score; (G) H & E staining of pouch tissue in each group (bar = 20 µm). Data shown are representative of 3 independent experiments; mean ± SD derived from 6 rats per group. The asterisk indicates a statistically significant difference (* p < 0.05, ** p < 0.01, ns: not significant).
Figure 5. Bacterial biodiversity and composition after DSS and lysozyme treatment in the rat model of pouchitis. (A) α-diversity; (B) Shannon diversity; (C) Simpson diversity; (D) relative abundance of the bacterial composition at the phylum level; (E) relative abundance of the bacterial composition at the family level; (F) relative abundance of the bacterial composition at the genus level (n = 4 in each group). | 5,899.6 | 2023-11-22T00:00:00.000 | [
"Medicine",
"Biology"
] |
Coal dust alters β-naphthoflavone-induced aryl hydrocarbon receptor nuclear translocation in alveolar type II cells
Background Many polycyclic aromatic hydrocarbons (PAHs) can cause DNA adducts and initiate carcinogenesis. Mixed exposures to coal dust (CD) and PAHs are common in occupational settings. In the CD and PAH-exposed lung, CD increases apoptosis and causes alveolar type II (AT-II) cell hyperplasia but reduces CYP1A1 induction. Inflammation, but not apoptosis, appears etiologically associated with reduced CYP1A1 induction in this mixed exposure model. Many AT-II cells in the CD-exposed lungs have no detectable CYP1A1 induction after PAH exposure. Although AT-II cells are a small subfraction of lung cells, they are believed to be a potential progenitor cell for some lung cancers. Because CYP1A1 is induced via ligand-mediated nuclear translocation of the aryl hydrocarbon receptor (AhR), we investigated the effect of CD on PAH-induced nuclear translocation of AhR in AT-II cells isolated from in vivo-exposed rats. Rats received CD or vehicle (saline) by intratracheal (IT) instillation. Three days before sacrifice, half of the rats in each group started daily intraperitoneal injections of the PAH, β-naphthoflavone (BNF). Results Fourteen days after IT CD exposure and 1 day after the last intraperitoneal BNF injection, AhR immunofluorescence indicated that proportional AhR nuclear expression and the percentage of cells with nuclear AhR were significantly increased in rats receiving IT saline and BNF injections compared to vehicle controls. However, in CD-exposed rats, BNF did not significantly alter the nuclear localization or cytosolic expression of AhR compared to rats receiving CD and oil. Conclusion Our findings suggest that during particle and PAH mixed exposures, CD alters the BNF-induced nuclear translocation of AhR in AT-II cells. This provides an explanation for the modification of CYP1A1 induction in these cells. Thus, this study suggests that mechanisms for reduced PAH-induced CYP1A1 activity in the CD exposed lung include not only the effects of inflammation on the lung as a whole, but also reduced PAH-associated nuclear translocation of AhR in an expanded population of AT-II cells.
Background
Studying mixed exposure to polycyclic aromatic hydrocarbons (PAHs) and foreign particles, such as coal dust (CD) is extremely important in occupational health. Exposure to CD is associated with lung scarring and inflammation [1]. However, the impact of CD exposure on lung cancer in miners is difficult to interpret because most of them are smokers and consequently have exposure to carcinogenic PAHs in the cigarette smoke, which would normally induce cytochrome P4501A1 (CYP1A1) activity. Increased pulmonary CYP1A1 activity is associated with increased DNA adducts from PAH exposure and elevated risk of lung cancer [2]. Therefore, if inhalation of coal dust changes PAH-induced CYP1A1 activity, this could modify lung cancer risk. Establishing whether coal dust is a modifier of PAH metabolism and carcinogenesis is important for designing and interpreting epidemiology studies of lung cancer in coal miners.
CD does change PAH-induced CYP1A1 activity in a rodent model. In models of mixed exposure to respirable CD and PAHs, CD suppressed PAH-mediated cytochrome P4501A1 (CYP1A1) induction in rats [3]. Using the model PAH β-naphthoflavone (BNF), our laboratory demonstrated decreased BNF-induced CYP1A1 expression in the rat lung 2 weeks after CD exposure [3,4]. The decrease in PAH-induced CYP1A1 expression in the CD-exposed lung was demonstrated using western blots of microsomes isolated from whole lung digests [3,4]. In addition, decreased CYP1A1 induction was demonstrated in alveolar septa in histologic sections of lung using immunofluorescence [3,4]. In histologic sections, the alveoli near the bronchioloalveolar junction demonstrated the greatest decrease in PAH-induced CYP1A1 expression [3]. Because alveolar type II (AT-II) cells strongly express cytokeratins 8 and 18 [5], AT-II cell hyperplasia was demonstrated by morphometric measurement of cytokeratin 8/18 immunofluorescence. In addition, co-localization of CYP1A1 and cytokeratin 8/18 expression in alveoli allowed detection of CYP1A1 expression in intact AT-II cells within the lung [3,4] and demonstrated decreased PAH-induced CYP1A1 in AT-II cells. Thus, the CD-exposed lung contains an expanded population of AT-II cells that are refractory to PAH-induced CYP1A1 induction.
CYP1A1 is induced through activation of the aryl hydrocarbon receptor (AhR). Mature AhR is usually located in the cytoplasm complexed with chaperones including heat shock protein 90, AhR-interacting protein [6], and P23 [7]. Prototypical ligands for AhR are the substrates for CYP1A1 and include PAHs, such as benzo(a)pyrene [8], BNF and 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) [9][10][11]. After ligand binding, AhR is released from the associated cytoplasmic proteins and the AhR nuclear localization signal is exposed [11]. This causes translocation of AhR to the nucleus, where it dimerizes with the AhR nuclear translocator (Arnt), forming a heterodimer complex containing the bound substrate. The AhR/Arnt heterodimer with bound substrate binds to the xenobiotic responsive element on the CYP1A1 enhancer, overcoming the repressive effect of the nucleosome and initiating the CYP1A1 transcriptional process [12]. Therefore, the PAH-induced translocation of the AhR-substrate complex to the nucleus is a key mechanism by which CYP1A1 induction is initiated. Accordingly, investigating the translocation of AhR to the nucleus is extremely important in exploring the mechanism of CD-mediated suppression of CYP1A1 induction [3]. Because AT-II cells are a specific cell type demonstrating CD-mediated suppression of PAH-induced CYP1A1 [3,4], in this study we investigated the hypothesis that AT-II cells from CD-exposed rats have decreased AhR translocation to the nucleus following in vivo PAH exposure.
Animals
Male viral antigen negative, Sprague-Dawley (Hla:(SD)CVF) rats (220-270 g BW at exposure time) were purchased from Hilltop Labs (Scottdale, PA). Rats were kept in an AAALAC-approved barrier animal facility at NIOSH, where food and water were supplied ad libitum. Rats were housed in ventilated shoebox cages on autoclaved hardwood (Beta-Chip; Northeastern Products, Inc., Warrensburg, NY) and cellulose bedding (ALPHA-dri; Shepherd, Watertown, TN) in HEPA-filtered, ventilated cage racks (Thoren Caging System, Inc., Hazleton, PA). The rats were acclimatized for one week before starting the experiment. The experimental protocol was reviewed and approved by the Institutional Animal Care and Use Committee.
Experimental Design
The experimental design was based upon 1) the lung burden of 10-25 mg CD/g lung wet weight identified in coal miners with coal workers pneumoconiosis [13,14], and 2) a previous study showing that within this approximate range in the rat, AT-II cell hyperplasia was greatest at 40 mg/rat which is ~26.7 mg/g lung wet weight (the approximate lung wet weight for rats is 1.5 g). Sixteen male rats were randomized into 4 equal groups using a research randomizer program http://www.randomizer.org. Rats were intratracheally instilled with sterile saline or CD particles (CD; 40 mg/rat, ~160 mg/kg BW) in saline. Eleven days later, rats were injected intraperitoneally with BNF (50 mg/kg BW) suspended in filtered corn oil to induce CYP1A1. The BNF was injected daily for 3 days. Corn oil was injected as the control. Twenty-four hours after the last BNF or corn oil injection, rats were sacrificed and AT-II cells were isolated by digesting lung tissue with elastase as described later.
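As a minimal illustration of this randomization step (hypothetical labels and seed; the authors used the online research randomizer), the same kind of assignment can be reproduced with Python's standard library:

import random

rats = list(range(1, 17))                      # 16 rats, numbered 1-16
groups = ["saline+oil", "saline+BNF", "CD+oil", "CD+BNF"]

random.seed(42)                                # any fixed seed makes the assignment reproducible
random.shuffle(rats)
assignment = {g: rats[i * 4:(i + 1) * 4] for i, g in enumerate(groups)}
print(assignment)                              # four equal groups of four rats each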
CD characterization
Respirable CD was prepared from coal acquired at the Pittsburgh coal seam. The CD was characterized and fractionated to produce respirable CD particles as previously described [3,15]. These particles were less than 5 microns in diameter, had a mass aerodynamic diameter of 3.4 microns and a surface area of 7.4 m^2/g. Numerically, silica particles comprised 2.3% of the total particle number. The particles contained 0.34% total iron [3,15]. The CD particles were additionally characterized for PAH content (see below).
Coal Dust PAH Analysis Gas Chromatography and Mass Spectrometry
Methylene chloride, hexane and DMSO extracts of 100 mg CD were analyzed using gas chromatography with mass spectrometry (GC/MS). Analysis was performed using an HP 5890 Series II GC with an HP 5972 Series MS as the detector. The GC/MS conditions are described in Table 1. All samples were run with blanks and in both scanning and selected ion monitoring (SIM) modes. The ions selected for monitoring were chosen based on a PAH cocktail QTM mixture from Supelco (#47930-U). The ions are the most discriminating ions for each PAH compound, and usually the most abundant.
HPLC DCM Sample Preparation
In addition, the methylene chloride extract of CD was analyzed using an HPLC clean-up method. The column used for this method was a Jordi Gel DVB 500 Å, 500 mm × 10 mm column supplied by Alltech (Cat # 100567). After contaminants were eluted off the column, the PAHs were eluted and collected using a six-port Valco valve. This fraction was then concentrated. The flow rate for the HPLC method was 1.5 ml/min, and fractions were collected based on retention time and a fluorescence detector with an excitation wavelength of 254 nm and an emission wavelength of 400 nm. The HPLC system used was a GBC HPLC system consisting of the pump GBC LC1150, photodiode array GBC #LC5000, autosampler GBC #LC1650, inline degasser GBC #LC1460, and a Shimadzu fluorescence detector #RF-551. The software used to control the system and collect data was WinChrome v.1.32. For each run, 200 μl of sample was injected onto the column.
Intratracheal Instillation
CD suspensions were made up daily from heat-sterilized samples using nonpyrogenic sterile 0.9% saline (Abbott Laboratories, North Chicago, IL). Suspensions were vortexed directly after preparation and shaken well before instillation. The CD particles were suspended in sterile saline at a concentration of 133.3 mg/ml as previously described [3]. Fourteen days before necropsy, rats received a single intratracheal (IT) instillation of either 0.3 ml of this suspension (40 mg/rat) or 0.3 ml of saline (vehicle), as previously described [16][17][18]. Because of the black color of CD, its distribution to both left and right lung was verified at necropsy.
Preparation of BNF
To prepare the BNF (Sigma-Aldrich Co., St. Louis, MO) suspension, the corn oil (Mazola) vehicle was sterilized by filtering through a non-pyrogenic Acrodisc (Pall Gelman Sciences, Ann Arbor, MI) 25 mm syringe filter (0.2 μm pore size). Solutions of 5% BNF (Sigma, St. Louis, MO) in sterilized corn oil (50 mg/ml) were prepared 24 h before injection. The suspension was vortexed and then sonicated.
Necropsy of Rats
Rats were euthanized by IP injection of sodium pentobarbital (Sleepaway ® , Fort Dodge Animal Health, Fort Dodge, IA). The abdomen was opened by incision along the midline and the lungs and attached organs, including tracheobronchial lymph node, thymus, heart, aorta, and esophagus, were removed.
Isolation of AT-II Cells
To assure that both normal and hypertrophied AT-II cells were isolated in this study, AT-II cells were isolated using a well-established method based upon cell attachment, rather than cell size [18]. In brief, this procedure is based upon the fact that elastase digestion of the lung predominantly yields cells with Fc receptors and AT-II cells, which do not have Fc receptors. The cells with Fc receptors are removed during initial plating on IgG-coated dishes [18]. This procedure produces a high yield of a purified AT-II cell population that is independent of cell size but is not considered quantitative.
Evaluation of AT-II Cell Purity
The purity of AT-II cells was determined by staining the lamellar bodies with the lipophilic fluorescent Phosphine 3R dye (Roboz Surgical Instrument, Washington, DC) as previously described [19]. Briefly, the technique involves mixing 9 parts of cell suspension (1 × 10 7 cell/ml) with 1 part of 0.02% phosphine 3R dye (prepared by 10 mg dye dissolved in 10 ml PBS) followed by equilibration for 2 minutes. Forty μl of this mixture were spread over a slide and viewed under a fluorescent microscope using a FITC filter cube with 477 nm absorption and 512 nm emission. Six images were captured from each slide for counting AT-II cells. The percentage of AT-II cells from the total number of cells was calculated. In addition, since AT-II cell hypertrophy is a consequence of particle exposure [3,4], the cells were examined to assure that hypertrophied AT-II cells were recovered.
Fresh AT-II cell suspension (containing 1 million cells) was used for electron microscopy while the rest was fixed in freshly prepared 2% paraformaldehyde. Later on the same day, cytospin slides were prepared from the fixed cells using 50,000 cells/slide. Cytospin slides were stored in the refrigerator until stained.
Detection of AT-II Cell Viability
The percentage viability of AT-II cells was determined by trypan blue exclusion as previously described [20].
Electron Microscopy of AT-II Cells
Freshly prepared AT-II suspensions (containing 10^6 cells) were used for electron microscopic identification of AT-II cells as previously described [19]. Briefly, the cell pellets were preserved in Karnovsky's fixative, then postfixed at 4°C in 2% osmium tetroxide for 1 h, mordanted in tannic acid, and stained overnight at 4°C in 0.5% aqueous uranyl acetate. The samples were then dehydrated in alcohol and embedded in Epon. The sections were placed on 200-mesh grids and stained for 5 min with Reynold's lead citrate and for 20 min with 3% uranyl acetate. Photographs were taken on a Jeol 1220 transmission electron microscope at 80 kV. AT-II cells were identified by the presence of lamellar bodies.
Quantitative Immunofluorescence for AhR in the Nuclei and Cytoplasm of Isolated AT-II Cells
Quantitative indirect immunofluorescence with co-localization allows the measurement of the amount of one fluorochrome that is within the same spatial location as a second fluorochrome using digital images [21]. In contrast to studies in isolated cells [22], in preliminary studies AhR could not be demonstrated in intact lung tissue sections. Therefore, in this study, we isolated AT-II cells and used digital photomicroscopy, a specific nuclear stain (Cytox Green), and morphometric measurement to measure nuclear co-localization of AhR.
Immunofluorescent staining for AhR
Slides were stained using indirect immunofluorescence to detect and quantify AhR to determine its nuclear and cytoplasmic localization in isolated AT-II cells exposed to CD and then to PAH. Indirect immunofluorescence was conducted as previously described [22], with minor modification. Briefly, the area containing the cells on the cytospin slides was encircled by a hydrophobic marker. This allowed circumscribed application of antibodies to the marked area. Slides were then immersed in 2% paraformaldehyde to assure fixation before washing. Then slides were washed in distilled water followed by rinsing in PBS for 5 minutes. To avoid non-specific binding, slides were blocked by application of a few drops of freshly prepared filtered 1% bovine serum albumin (BSA) (Sigma Aldrich Co.).
A polyclonal rabbit anti-rat AhR antibody (BioMol Research Laboratories, Inc., Plymouth, PA) was diluted 1:20 with 1% BSA and 100 μl of the solution was applied on each cytospin slide for 1 h. For the negative control, the primary antibody was replaced by normal rabbit serum. Then 100 μl of an Alexa Fluor ® 594 donkey anti-rabbit antibody (Molecular Probes, Eugene, OR) diluted 1:20 with PBS was applied on each cytospin slide for 1 h in the dark. Therefore, the antigen sites for AhR fluoresced red under a fluorescence microscope. The slides were then washed using distilled water and covered with anti-fade gel mount (Biomeda Corp., Foster City CA) before applying a cover slip.
Staining of AT-II Cell Nuclei
In order to identify nuclear AhR, the nucleus was stained with a green color using Cytox Green stain according to the manufacturer's direction (Molecular Probes, Inc., Eugene, OR).
Digital Photomicroscopy
For all immunofluorescence studies, five images were captured by a researcher blinded to the exposure status. The slides were first examined with an Olympus fluorescence photomicroscope (Olympus AX70; Olympus American Inc., Lake Success, NY) using two filters: (i) a green cube (460-500 nm excitation) and (ii) a red cube (532.5-587.5 nm excitation). Criteria for specific staining were: (i) absence of staining in the negative controls, (ii) absence of background staining, and (iii) distinct cellular localization of AhR. Then, photomicrographs were acquired for quantitative morphometric analysis using a 40× objective and a Quantix cooled digital camera (Quantix Photometrics; Tucson, AZ) with QED camera plugin software (QED Imaging, Inc., Pittsburgh, PA). The digital camera settings for contrast, brightness and gamma were held constant during the capture time of all images.
Morphometric measurement of immunofluorescence
Morphometric measurements were made using commercial morphometry software (Metamorph; Universal Imaging Corp., Downingtown, PA). Using this software, the area of fluorescence for AhR in the cytoplasm and in the nucleus of AT-II cells was measured and expressed as μm^2.
The most important parameters deduced by the aid of morphometric analysis included the following:

1) The proportional nuclear AhR expression is the area of AhR expression in the nucleus relative to the total AT-II cell nuclear area (areas where Cytox green is present). This measurement is calculated from the following formula:

P = (R / 100) × T / G

where:
P is the proportional AhR localization in the nucleus (Cytox green);
R is the percentage of AhR colocalized within the Cytox green nuclear matrix;
T is the total AhR area (in the nucleus and cytosol);
G is the total nuclear area (Cytox green).
2) The proportional cytosolic AhR expression is the area of AhR expression in the cytoplasm relative to the total nuclear area (areas where Cytox green is present). This measurement is calculated from the following formula:

L = (M / 100) × T / G

where:
L is the proportional AhR localization in the cytoplasm;
M is the percentage of AhR area not colocalized with the Cytox green nuclear marker;
T is the total AhR area (in the nucleus and cytosol);
G is the total nuclear area (Cytox green) determined by the Metamorph software.
This measurement is important because it determines the amount of AhR remaining in the cytoplasm after induction by the CYP1A1 inducer, BNF.
3) The percentage of AT-II cells with AhR localized in the nucleus was manually calculated from distinct images. The AT-II cells with AhR localized in their nucleus and the total number of AT-II cells were counted in 5 images per rat. The cells expressing AhR in the nucleus have a yellow color due to colocalization of the green and red fluorochromes.
4) The total area of AhR expression per AT-II cell. The total expression area (in nucleus and cytoplasm) of AhR was measured in square micrometers and divided by the number of AT-II cells. A brief numerical sketch of these four parameters is given below.
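For concreteness, the following minimal sketch (Python; all values are hypothetical example measurements, not data from this study) computes the four parameters defined above from the kind of quantities produced by the morphometry software:

# Hypothetical measurements for one image; areas in square micrometers
T = 120.0            # total AhR area (nucleus + cytosol)
G = 300.0            # total nuclear area (Cytox green)
R = 40.0             # % of AhR area colocalized with the nuclear marker
M = 100.0 - R        # % of AhR area outside the nuclear marker

P_nuclear   = (R / 100.0) * T / G   # 1) proportional nuclear AhR expression (P)
P_cytosolic = (M / 100.0) * T / G   # 2) proportional cytosolic AhR expression (L)

n_cells       = 25                  # AT-II cells counted in the image
n_nuclear_ahr = 7                   # cells with yellow (nuclear) AhR signal
pct_nuclear   = 100.0 * n_nuclear_ahr / n_cells   # 3) % of AT-II cells with nuclear AhR
area_per_cell = T / n_cells                       # 4) total AhR area per AT-II cell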
Statistical Analyses
All analyses were performed with SAS/STAT software, Version 8.2, using the Proc Mixed procedure. Dependent variables were analyzed with a two-factor mixed-model analysis of variance (CD/saline by oil/BNF). All pairwise comparisons, where appropriate, were performed with Fisher's LSD. All results were considered statistically significant at P < 0.05.
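The analyses were run in SAS; purely as an illustrative, hypothetical analogue (not the authors' code), an equivalent two-factor design with interaction could be set up in Python with statsmodels, where the column names and response values below are assumptions:

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "particle": ["saline", "saline", "CD", "CD"] * 4,   # IT instillation: vehicle vs. coal dust
    "inducer":  ["oil", "BNF"] * 8,                     # intraperitoneal injection: vehicle vs. BNF
})
df["response"] = [0.10, 0.35, 0.12, 0.15] * 4 + rng.normal(0, 0.02, size=16)  # hypothetical measurements

# Two-factor ANOVA with the particle x inducer interaction
model = ols("response ~ C(particle) * C(inducer)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))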
Coal Dust PAH Analysis Results
Several PAHs were identified in the CD, including phenanthrene, naphthalene, pyrene, fluoranthene and fluorene (Table 1). The concentrations of PAHs identified in 100 mg CD are given in Table 1. From that table, the amount of each PAH was calculated per 40 mg CD, which is the dose used for IT instillation. The total amount of PAHs calculated per CD-exposed rat was 2.436 μg. Since BNF-treated rats received 50 mg/kg BW of BNF, the dose of BNF was 12.5 mg (12,500 μg) per injection for rats with an average body weight of 250 g. Therefore, a single BNF dose was more than 5,000 times the total PAHs quantified in CD.
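A short check of the dose comparison above, using only values stated in the text:

pah_per_rat_ug = 2.436                  # total PAHs in the 40 mg CD instilled per rat (ug)
bnf_dose_ug    = 50 * 0.25 * 1000       # 50 mg/kg BW x 0.25 kg average rat, converted to ug
ratio = bnf_dose_ug / pah_per_rat_ug    # roughly a 5,000-fold excess of BNF over CD-associated PAHs
print(bnf_dose_ug, round(ratio))        # 12500.0, 5131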
The Number and Purity of Isolated AT-II Cells
AT-II cells stained with phosphine 3R contained bright green lamellar bodies that are absent in other pulmonary cell types ( Figure 1A and 1B). Hypertrophied alveolar type II cells were recovered from the coal dust-exposed but not control rats using this isolation procedure ( Figure 1A).
Using transmission electron microscopy, lamellar bodies were also visualized within AT-II cells (Figure 1C). The number of AT-II cells obtained by isolation and counted by a Coulter multisizer ranged from 12.8 to 19.9 million/rat and was numerically higher but not significantly increased by exposure (Figure 2A). The purity ranged from 76.5 to 80.1% (Figure 2B) and was unaffected by exposure. Because the isolation method does not recover all alveolar type II cells from the rat lung, no attempt was made to quantify the numbers of enlarged type II cells.
General Morphometric Findings
The fluorochrome used for detection of AhR was Alexa Fluor ® 594 (Molecular Probes, Eugene, OR), which fluoresced red at the sites of expression. Cytox green produced green nuclear fluorescence. Therefore, nuclear expression of AhR produced yellow fluorescence (Figure 3).
Localization of AhR in the AT-II Cell Nucleus
BNF caused an increase in proportional AhR nuclear localization in rats without CD exposure (P = 0.027). In rats exposed to CD, the proportional AhR nuclear localization was not significantly affected by BNF compared to exposure to CD alone ( Figures 3C, D and Figure 4A).
Localization of AhR in AT-II Cell Cytosol
The proportional AhR expression in AT-II cell cytosol showed non-significant reductions in rats receiving BNF compared to rats without BNF exposure in both the saline and CD groups ( Figure 4B).
Percentage of AT-II Cells with nuclear localization of AhR
In rats that received IT saline instead of CD, the percentage of AT-II cells with AhR localized in the nucleus was significantly increased by BNF (P = 0.0008). However, in rats that received IT CD, the percentage of AT-II cells with AhR localized in the nucleus was not significantly affected by BNF ( Figure 4C).
Proportional AhR Expression in Cytosol versus Nucleus
By morphometric analysis, the proportional area of AhR in the cytoplasm was greater than the proportional area in the nucleus in all treatment groups (Figure 4D) (P < 0.05).
Total area of AhR expression per AT-II cell
In rats that received IT saline, the total expression area (in nucleus and cytoplasm) of AhR (measured as square micrometer per AT-II cell) was significantly increased (P < 0.036) by BNF. In rats that received IT CD, this area was not significantly changed by BNF exposure ( Figure 4E).
Discussion
Previous data from our laboratory showed that exposure to foreign particles, such as CD and silica, suppresses the PAH-mediated induction of CYP1A1 in rat lung [3,4,23]. By histopathologic assessment, particle exposure produces AT-II cell hyperplasia and hypertrophy in the lungs of rats exposed to either respirable silica or respirable coal dust [3,4,23]. Previous studies indicated that AT-II cell hypertrophy and hyperplasia were related processes and a response to alveolar type I cell injury in the particle-exposed lung [24]. Using immunofluorescent co-localization of CYP1A1 and the AT-II cell markers cytokeratins 8/18 in CD-exposed rats, studies from our laboratory showed that these hyperplastic and hypertrophic cells expressed decreased amounts of PAH-induced CYP1A1 protein. This suggested that CD exposure resulted in the appearance of a new population of AT-II cells with diminished capacity for CYP1A1 induction [3,4]. Since CYP1A1 is induced by ligand activation of the AhR, which is localized in the cytoplasm in the inactive state, we isolated AT-II cells from in vivo CD-exposed rats to localize and quantify AhR in the cytoplasm and nucleus after in vivo treatment with the specific CYP1A1 inducer, BNF.
AhR translocation is essential for CYP1A1 induction by PAHs. After being stimulated by an inducer, the AhR is translocated to the nucleus and binds to another protein called the AhR nuclear translocator (ARNT), forming a heterodimeric protein complex [25]. This complex binds to the xenobiotic responsive element (XRE) located in the enhancer region of the CYP1A1 gene, producing conformational changes in chromatin structure and initiating CYP1A1 transcription [26].
Morphometric analysis of immunofluorescence staining in this study showed that the proportional AhR expression in AT-II cell nuclei, an indicator of AhR localization in the nucleus, was significantly increased in BNF-exposed rats compared with vehicle control rats. Moreover, the total area of AhR expression per AT-II cell showed a significant increase in BNF-exposed rats compared to vehicle controls. This finding is consistent with many studies involving BNF as a specific potent inducer for CYP1A1 through activation of AhR and served as a positive control for the immunofluorescent procedure used in our study [27][28][29][30][31][32].
Conversely, in CD-exposed rats, the localization of AhR in AT-II cell nuclei, measured as the proportional AhR expression in the cell nucleus, was not significantly affected by BNF. In addition, in CD-exposed rats, the percentage of AT-II cells with nuclear AhR expression was not significantly affected by BNF. These findings suggest that the mechanisms of CD-induced inhibition of CYP1A1 induction may involve failure of the CD-exposed cells to translocate AhR after PAH exposure. One possible explanation for the alteration of AhR localization in the AT-II cell nucleus of CD-exposed rats after activation by BNF is AhR proteolysis. Since CD contains small amounts of PAHs, and ligand-activated AhR is rapidly proteolyzed, resulting in an immediate reduction in the amount of AhR [33][34][35][36], it is possible that the prior exposure to low levels of PAH decreased the AhR pool in the cell. Gu and co-workers suggested that this ligand-dependent degradation of AhR serves to moderate the response of the cell to environmental changes, thereby protecting the cell from prolonged exposure to excessive concentrations of agonists as an adaptive mechanism [33]. However, the amount of PAHs quantified in CD was several thousand times lower than the dose of BNF injected. Furthermore, the BNF used in this study was injected daily for three days to repeatedly activate AhR. Thus, increased AhR proteolysis due to prior PAH activation in the CD-exposed rats seems unlikely.

Figure 3. Representative immunofluorescent images showing the expression of AhR in AT-II cells in rats exposed to saline or CD with and without BNF.
The decreased AhR nuclear translocation in alveolar type II cells after CD exposure may well be a general response of alveolar type II cells to inflammation associated with lung-deposited particles, since CYP1A1 induction and its dependent activity (EROD) were inhibited by exposure to another type of occupational dust, crystalline silica, which does not contain PAH [23]. Histopathological and bronchoalveolar lavage examinations showed that both CD and silica were associated with pulmonary inflammation [3,23,[37][38][39]. Inflammation can downregulate CYP-dependent activity in the liver [40][41][42]. In the liver, a number of different CYPs are suppressed by inflammation, and reduced CYP activity involves both transcriptional and post-transcriptional mechanisms [43]. Because exposure to CD is associated with pulmonary inflammation [3], the mechanism of suppressed PAH-induced CYP1A1 activity in the particle-exposed lung is likely to involve proinflammatory mediators known to influence CYP1A1 activity. In particular, in vitro, NF-κB suppresses PAH-induced CYP1A1 activity by preventing acetylation of the CYP1A1 promoter and thereby reducing transcription [8,44,45]. In addition, nitric oxide binds to the catalytic heme moiety of CYP1A1, leading to post-transcriptional downregulation of CYP1A1 activity [46]. Indeed, our previous studies showed that PAH-induced CYP1A1 activity was inversely related to CD-induced pulmonary inflammation [3].
However, in both the silica- and coal dust-exposed rat lungs, alveolar type II cells proliferate following pulmonary inflammation and are often refractory to CYP1A1 induction [3,23]. A previous in vitro study with cells of apparent alveolar type II cell origin indicated that proliferating cells had less constitutive EROD activity than confluent (quiescent) cultures [47]. In a keratinocyte cell line, differentiated cultures had more AhR mRNA and more TCDD-inducible CYP1A1 than did proliferating cultures [48]. Similarly, in vivo rat skin cells that remain refractory to CYP1A1 induction are predominantly basal cells, the progenitor population of the skin [49]. In this study, we investigated the hypothesis that, like proliferating keratinocytes, alveolar type II cells from CD-exposed rats have decreased AhR translocation to the nucleus following in vivo PAH exposure. Our findings are consistent with these previous findings for other proliferating cells and indicate that the alveolar type II cells of the CD-exposed rat lungs have decreased capacity for BNF-induced AhR nuclear translocation. Thus, suppression of PAH-induced CYP1A1 activity in the CD-exposed lung is associated with both inflammation and the expansion of an alveolar type II cell population with reduced capacity for BNF-induced AhR nuclear translocation.
Since CYP1A1 induction is dependent upon AhR nuclear translocation [9,50], the alveolar type II cells of the particle-exposed lung would be expected to produce fewer CYP1A1-dependent metabolites of PAH carcinogens. It is these PAH metabolites which are believed to cause the DNA adducts which may initiate lung cancer and AT-II cells are potential progenitor cells for lung cancer [51,52]. Thus, our findings indicate CD exposure prior to PAH exposure decreases the inducibility of PAH metabolism in the lung, at least in part because there is diminished capacity for AhR nuclear translocation in the type II cells. However, if DNA adducts have already caused DNA mutations in alveolar type II cells, expansion of initiated cells is integral to the process of cancer promotion [53,54]. Studies are still needed to determine the effect of CD upon the lung that has been previously exposed to PAH carcinogens. It may well be that CD is actually a complex modifier of PAH carcinogenesis with both the negative modification of PAH-induced metabolism indicated in this and previous studies [3,4,23] and the capacity for promotion through expansion of initiated cells.
Taken together, our results demonstrate that exposure of rats to CD modifies nuclear translocation of AhR in AT-II cells after subsequent BNF exposure. This provides an explanation for at least some of the diminished CYP1A1 induction observed in the particle-exposed lung upon subsequent BNF exposure.

Figure 4. Morphometric quantification of AhR in the nucleus and cytoplasm of isolated AT-II cells. (A) The proportional AhR in the nucleus was significantly increased (letter a; p < 0.05) in the saline plus BNF group compared to the saline plus oil group, but no increase in nuclear AhR was observed after BNF in co-treated rats. (B) No significant change in the proportional AhR expression in the cytoplasm of rats receiving BNF compared to those receiving oil was observed in either the IT saline or CD groups. (C) The percentage of AT-II cells with AhR localized in the nucleus was significantly increased (letter a; p < 0.05) in BNF-exposed rats without CD exposure but not in BNF and CD exposed rats. (D) The proportional AhR expression in AT-II cell cytosol was significantly higher (P < 0.05) than that in the nucleus in all treatment groups (letters e, f, g, and h, respectively). (E) The total area of AhR expression per AT-II cell was significantly increased (letter a; p < 0.05) in rats receiving saline plus BNF compared to those receiving saline plus oil, but BNF caused no changes in rats receiving CD. Results are mean + SE, n = 4 in all groups except the saline plus oil group (n = 3).
"Biology",
"Medicine"
] |
GASOLINE: a Greedy And Stochastic algorithm for Optimal Local multiple alignment of Interaction NEtworks
The analysis of the structure and dynamics of biological networks plays a central role in understanding the intrinsic complexity of biological systems. Biological networks have been considered a suitable formalism to extend evolutionary and comparative biology. In this paper we present GASOLINE, an algorithm for multiple local network alignment based on statistical iterative sampling combined with a greedy strategy. GASOLINE overcomes the limits of current approaches by producing biologically significant alignments within a feasible running time, even for very large input instances. The method has been extensively tested on a database of real and synthetic biological networks. A comprehensive comparison with state-of-the-art algorithms clearly shows that GASOLINE yields the best results in terms of both reliability of alignments and running time on real biological networks, and comparable results in terms of quality of alignments on synthetic networks. GASOLINE has been developed in Java, and is available, along with all the computed alignments, at the following URL: http://ferrolab.dmi.unict.it/gasoline/gasoline.html.
Computational complexity of GASOLINE
In this section we analyze the time complexity of GASOLINE. To simplify the analysis, we suppose that the size of the complexes returned at the end of each execution of GASOLINE is W, and that there are N input networks, each with the same number of nodes, n, and edges, m. We also define the following variables:
• γ_s: the number of Gibbs sampling iterations in the bootstrap phase;
• γ_e: the number of Gibbs sampling iterations in each extension step of the iterative phase;
• k: the average degree of a node (k = m/n);
• γ_i: the number of iterations of the iterative phase;
• γ_x: the number of executions of GASOLINE.
Time complexity will be expressed as a function of n, N and W. Throughout the analysis, we will assume that the generation of random numbers and the computation of the orthology score between two proteins are done in constant time O(1).
First, let's analyze the initial bootstrap phase, whose goal is to find an optimal alignment of protein seeds. The generation of the initial alignment requires time O(N). The computation of the transition probabilities for all the proteins of the selected network at each iteration of Gibbs sampling costs O(nN), while the computation of the alignment score requires O(N). Therefore, the time complexity of the initial phase is O(N + γ_s(nN + N)). Since, in practice, N << n, we can write this as O(γ_s nN). Now, let's consider the iterative phase, which removes and adds nodes to the current local alignment iteratively.
Suppose, for simplicity, that the alignment grows in the following way: at the beginning the size of the aligned complexes increases from 1 up to W; then, in the following extension and removal steps, it switches from W to W−1 and vice versa.
The last assumption fits quite well the behavior of GASOLINE in the context of real biological networks, since our algorithm yields an alignment of complexes of a certain size and then tries to adjust it by replacing bad parts according to a goodness score.
Let's start with the extension step, which can be divided into three phases: 1. The computation of seeds' adjacent nodes; 2. The execution of Gibbs sampling; 3. The extension of seeds.
Let's suppose that networks are represented through adjacency lists. Under this assumption, the adjacent nodes of a given node can be found in O(k) time. As regards Gibbs sampling, the generation of the initial alignment requires O(N) time.
The computation of the transition probabilities and the alignment score depends on the size of the seeds. Let L be the current size of the aligned complexes. The transition probability of a protein is computed as the product of two components: an orthology similarity score and a topology similarity score.
Computing the orthology score for a protein of the selected network at each iteration of Gibbs sampling costs O(N ) as in the bootstrap phase.
In order to compute topology scores efficiently, topology vectors are built before starting Gibbs sampling for all seed-adjacent nodes of all aligning networks. The construction of the topology vector of a single protein can be done in O(L) time, assuming that adjacency lists are implemented using hash tables with buckets, thus providing constant-time (average) access to an element of the list. So, the overall cost of building topology vectors is N × O(kL^2), supposing that the total number of a seed's adjacent nodes is O(kL). Under these assumptions, the topology score of a protein can be computed in O(NL) time and the transition probability for all the proteins of the selected network requires O(nNL) time. Finally, the computation of the alignment score requires O(NL) time.
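As an informal illustration of why a topology vector can be built in O(L) time with hash-based adjacency lists, consider the sketch below. It assumes, as one plausible reading of the description above, that the topology vector of a candidate protein simply records which of the L proteins of the current aligned seed it is adjacent to; the function and variable names are hypothetical:

def topology_vector(candidate, seed, adjacency):
    """Entry j is 1 if the candidate protein is adjacent to the j-th seed protein (L = len(seed))."""
    neighbours = adjacency[candidate]                     # hash-based adjacency set, O(1) average access
    return [1 if s in neighbours else 0 for s in seed]    # L constant-time membership tests -> O(L)

# Toy example
adjacency = {"p1": {"p2", "p3"}, "p2": {"p1"}, "p3": {"p1"}}
print(topology_vector("p2", ["p1", "p3"], adjacency))     # [1, 0]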
Summing up these contributions gives the overall cost of the extension step. Assuming N << n and k << n, the expression can be simplified, and since in the worst case L = O(n), a worst-case bound for the extension step follows. The removal step simply consists in computing the minimum value of a function (the Goodness score) over all the L sets of aligned proteins in the current alignment. For each set, the Goodness score can be evaluated in O(NL), which is the time required to compute the internal degree of all the proteins within the set. So, the cost of the removal step is O(NL^2). Assuming that the extension step is performed W − 1 times at the beginning and γ_i − 1 times later on, and that the removal step is executed γ_i times, the overall cost of the iterative phase can be obtained; we can assume, without loss of generality, that W = O(γ_i) and W << n. Finally, all preprocessing steps can be performed in linear time, by considering the degree and the number of orthologous proteins for the proteins of all networks. The post-processing phase consists in filtering highly overlapping complexes and can be done in constant time.
By combining the bootstrap and iterative-phase costs and considering the preprocessing operations, the overall cost of γ_x executions of GASOLINE can be derived. From the results of the analysis, it follows that the running time of GASOLINE is polynomial in n. In fact, γ_x is at most equal to n, since at each execution of the algorithm different protein seeds are considered. Moreover, in all applications, γ_s and γ_e are in the range 200-400 and can be considered constant. Therefore, the final complexity is O(n^2 W). We can distinguish three cases:

1. If the networks are very similar, then the average size W of the complexes found in each execution is high, so W = O(n) and the algorithm requires O(n^3) time. This is the worst case.

2. If the networks are very distantly related, then W = O(√n) and the running time is O(n^2.5); this is the average case.

3. If W is independent of the size of the networks, we can suppose it constant, W = O(1). Therefore, the running time will be O(n^2), which is the best case of our algorithm.
"Computer Science"
] |
Limited Evaluation Cooperative Co-evolutionary Differential Evolution for Large-scale Neuroevolution
Many real-world control and classification tasks involve a large number of features. When artificial neural networks (ANNs) are used for modeling these tasks, the network architectures tend to be large. Neuroevolution is an effective approach for optimizing ANNs; however, there are two bottlenecks that make their application challenging in the case of high-dimensional networks using direct encoding. First, classic evolutionary algorithms tend not to scale well for searching large parameter spaces; second, the network evaluation over a large number of training instances is in general time-consuming. In this work, we propose an approach called the Limited Evaluation Cooperative Co-evolutionary Differential Evolution algorithm (LECCDE) to optimize high-dimensional ANNs. The proposed method aims to optimize the pre-synaptic weights of each post-synaptic neuron in different subpopulations using a Cooperative Co-evolutionary Differential Evolution algorithm, and employs a limited evaluation scheme where fitness evaluation is performed on a relatively small number of training instances based on fitness inheritance. We test LECCDE on three datasets of various sizes, and our results show that cooperative co-evolution significantly improves the test error compared to standard Differential Evolution, while the limited evaluation scheme facilitates a significant reduction in computing time.
INTRODUCTION
Scaling artificial neural networks (ANNs) up to solve large, complex problems has achieved great success in various machine learning problems. The backpropagation and stochastic gradient descent algorithms are conventional methods for training ANNs [15]. An alternative approach, Neuroevolution (NE) [6], employs evolutionary algorithms to optimize the topology and/or weights of ANNs. NE algorithms do not require gradient information, and perform remarkably well in optimizing ANNs based on direct interaction with their environment, specifically in cases where good decision instances are noisy or not known for supervised learning [6,34].
There are mainly two types of NE approaches: direct and indirect encoding [6]. Direct encoding aims to evolve the network parameters directly, representing them within the genotype of the individuals, whereas indirect encoding aims to evolve specifications that define the developmental process of an ANN, represented within the genotype. Indirect encoding methods can help improve the scalability of the evolutionary process for large networks, since they can reduce the parameter size. On the other hand, NE with direct encoding presents a challenging opportunity for stimulating research in large-scale optimization, and also contributes to understanding the evolutionary dynamics of ANNs by suggesting successful evolutionary strategies to evolve ANNs.
The task of evolving direct-encoded large networks is challenging due to 1) the scalability of the evolutionary methods to perform the optimization process efficiently on high-dimensional search spaces, and 2) the time required for evaluating the individuals on a large number of training instances. Cooperative Co-evolution (CC) is an effective approach for optimizing large-scale problems [24], and Limited Evaluation (LE) is an advantageous method for reducing the number of instances used in fitness evaluations [21]. In this work, we propose a Limited Evaluation Cooperative Co-evolutionary Differential Evolution (LECCDE) algorithm that employs the CC and LE approaches to perform accelerated evolution in optimizing high-dimensional ANNs with direct encoding.
With respect to the previous works, the work presented in this paper contributes as follows: 1) it considers the post-synaptic neurons as the building blocks of an ANN, and performs the subcomponent decomposition of the CC scheme by assigning the pre-synaptic weights of each post-synaptic neuron to a subpopulation; 2) it demonstrates the effectiveness of the CC in optimizing large-scale ANNs, and compares with the standard Differential Evolution (DE) optimization; 3) it shows that the LE scheme enhanced with the CC achieves better accuracy results than standard DE for evolving large networks, while reducing the time required for the fitness evaluation.
Three datasets were chosen to evaluate the performance of the proposed algorithm on supervised learning tasks. We used fully connected feed-forward ANNs with one hidden layer, with a total number of parameters in the order of thousands. We refer to these ANNs as "large-scale" in the sense of NE with direct encoding, and to distinguish them from the specialized networks used in Deep Learning (DL) approaches [14].
The rest of the paper is organized as follows: in Section 2, we provide the background knowledge and a brief literature review on the topics of DE, CC, and NE; in Section 3, we discuss the proposed algorithm in detail; in Section 4, we present the experimental setup; in Section 5, we provide the numerical results; and finally, in Section 6, we discuss the conclusions.
RELATED WORK
In this section, we provide a brief overview of the background and related work.
Differential Evolution
The DE algorithm is a powerful yet simple population-based search algorithm for continuous optimization [32]. A candidate solution set consists of NP individuals represented as D-dimensional real-valued vectors x_i ∈ R^D, where the integer i ∈ [1, NP]. An initial population of individuals is randomly sampled from the domain range of each dimension, [x_{i,j}^min, x_{i,j}^max], where x_{i,j} is the jth dimension of the ith individual.
In each generation g, an individual x_i^g, ∀i ∈ (1, 2, · · · , NP), called the target vector, is selected. The mutation and crossover operators are applied to generate a trial vector u_i^g. The trial vector is evaluated and replaces the target vector, through the selection operator, if the fitness value of the trial vector is greater than or equal to that of the target vector.
The mutation operator generates a mutant vector v_i^g by perturbing a randomly selected vector using the scaled difference of two other randomly selected vectors. The magnitude of the perturbation is controlled by the parameter called the scale factor (F). This strategy is referred to as the "rand/1" strategy, and is given by the following equation:

v_i^g = x_{r1}^g + F · (x_{r2}^g − x_{r3}^g)     (1)

where r1, r2, and r3 are mutually exclusive integers different from i, selected randomly from the range [1, NP]. There are various alternative mutation strategies proposed in the literature [4,22].
The crossover operator recombines the target vector with the mutant vector, controlled by the parameter called the crossover rate (CR). The binomial (uniform) and exponential crossover operators are the two most commonly used crossover operators [25]. The binomial crossover operator is provided in Equation (2):

u_{i,j}^g = v_{i,j}^g  if rand(0,1) ≤ CR or j = randi(1,D);  u_{i,j}^g = x_{i,j}^g otherwise     (2)

where the integer j ∈ [1, D] refers to the jth dimension of the vectors, and the functions rand() and randi() uniformly sample real and integer values within the specified ranges, respectively. The selection operator compares the fitness values of the target and trial vectors, and replaces the target vector with the trial vector in the next generation if a better fitness value is achieved by the trial vector. This is referred to as a synchronous update, since the replacements are performed at the end of the generation, when the process for all individuals is complete. The asynchronous version of the update performs the replacement immediately within the same generation, which allows a newly replaced trial vector to be used by other individuals within the same generation.
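A minimal NumPy sketch of the rand/1/bin operators described above (the function name is hypothetical; F and CR correspond to Equations (1) and (2)):

import numpy as np

def de_rand1_bin(pop, i, F=0.1, CR=0.3, rng=None):
    """Build a trial vector for the i-th target vector using rand/1 mutation and binomial crossover."""
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    candidates = [r for r in range(NP) if r != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)  # mutually exclusive, all different from i
    mutant = pop[r1] + F * (pop[r2] - pop[r3])                  # Equation (1): rand/1 mutation
    jrand = rng.integers(D)                                     # guarantees at least one gene from the mutant
    mask = rng.random(D) <= CR
    mask[jrand] = True
    return np.where(mask, mutant, pop[i])                       # Equation (2): binomial crossover

# Example: one trial vector for a random population of 20 individuals in 10 dimensions
pop = np.random.default_rng(0).uniform(-1, 1, size=(20, 10))
trial = de_rand1_bin(pop, i=0)

Forcing index jrand ensures the trial vector differs from the target vector in at least one dimension, as in the standard binomial scheme.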
The settings of the parameters in DE play an influential role in the behavior of the algorithm, balancing the trade-off between exploration and exploitation [3,22]. A recent survey by Karafotias et al. reviewed the approaches for parameter tuning and control in evolutionary algorithms [11].
Cooperative Co-evolution
As the dimensionality of a problem increases, the performance of evolutionary algorithms tends to decrease [17,18]. CC schemes were proposed for scaling evolutionary algorithms to higher dimensions using a divide-and-conquer strategy. In CC, the subcomponents of a large-scale problem are decomposed and assigned to a number of subpopulations that are evolved separately [24]. Cooperation in co-evolution arises during the fitness evaluation, where the subcomponents are merged together to assign a global fitness score to a candidate solution.
The three aspects that play a key role in CC are problem decomposition, subcomponent evolution, and subcomponent co-adaptation [37]. The maximum number of subpopulations can be generated by splitting a D-dimensional problem into D subgroups, assigning each subcomponent (dimension) to one subpopulation. Alternatively, the number of subcomponents in each subpopulation can be chosen arbitrarily to make the evolutionary optimization process manageable by reducing the dimensionality per subgroup. However, an arbitrary assignment of subcomponents may not be effective for solving non-separable problems. Ideally, the problem should be decomposed in a way that the interdependency between the subcomponents in different subpopulations should be minimized.
The existing knowledge about the problem domain can be beneficial in the problem decomposition process. If the interdependencies of the subcomponents are known, the problem can be decomposed based on this knowledge. This also relates to the separability property of the problem. If the problem is separable, then the problem can be decomposed into its separable subcomponents. If there is no/uncertain knowledge of the problem domain, then automated methods can be used to identify the interactions of the subcomponents [23,33].
The subcomponent evolution can be performed by using various kinds of evolutionary algorithms [18], including the DE [28].
Neuroevolution
ANNs are computational models that are inspired by the central nervous system [5]. NE is a field that aims to optimize ANNs by using evolutionary computing methods [6]. The approaches suggested in NE can be grouped as direct and indirect encoding methods. One of the first examples of the direct encoding approaches evolved the connection weights of fixed topology networks by representing them within the genotype of the individual in the population [7,35].
Neuroevolution of Augmenting Topologies (NEAT) has been proposed to evolve both the topology and the weights of the networks starting from minimal networks and incrementally grow larger networks through the evolutionary process [31]. NEAT uses a global innovation counter to keep track of the history of changes, and to align the networks to generate more meaningful offspring as a result of the crossover operator.
Some of the works incorporate the CC scheme within Neuroevolution. The Symbiotic Adaptive Neuroevolution (SANE) evolves two separate populations, one for neurons and another for the network "blueprints". The evolved network blueprints are used to determine which combinations of the neurons to use from the neuron population to generate a network [20]. The Enforced SubPopulations (ESP) initiates a subpopulation for each neuron, and the genotype of these neurons encode the weights for incoming, outgoing and bias connections [9]; Cooperative Synapse Neuroevolution (CoSyNE) initiates a subpopulation for each connection [8].
The indirect NE methods can help scale evolutionary approaches for evolving large networks. Kitano [12] suggested a grammatical graph encoding method, based on graph rewriting rules represented as individuals' genotypes, to evolve the connectivity matrix of ANNs. Koutnik et al. proposed using lossy compression techniques to reduce the high-dimensional parameters of the networks by transforming their parameters to the frequency domain using transformation functions such as the Fourier Transform and the Discrete Cosine Transform; in this case the evolutionary process is performed on a few significant coefficients in the frequency domain [13]. Gruau suggested a developmental method that evolves tree-structured programs to specify the instructions to grow ANNs based on cell division and differentiation [10]. Stanley et al. proposed a Hypercube-Based Encoding method that uses Compositional Pattern Producing Networks (CPPNs) to assign the connection weights between neurons as a function of their locations [29,30].
The ANN architectures used in DL are often engineered for certain tasks in computer vision and signal processing [14]. In this case the connection weights are typically trained using the backpropagation. On the other hand, there are hyper-parameters for specifying the architecture and learning algorithms that play a role in the performance of network; thus, deep NE approaches have been suggested for optimizing the hyper-parameters of the deep neural networks efficiently [19,26]. Some recent work focuses on scalable evolutionary approaches for optimizing the connection weights of the networks. Salimans et al., used Evolution Strategies (ES) to optimize the connection weights of the Convolutional Neural Networks (CNNs) for reinforcement learning in MuJoCo and Atari environments [27]. The CNNs are a specific type of large ANN topologies that are specifically designed for image processing/recognition tasks in DL. Zhang et al., compared the ES proposed by Salimans et al. with the stochastic gradient descent for training CNNs on a large handwritten digit dataset, MNIST, and showed that the ES can achieve the state-of-the-art accuracy results [38].
Another scalability challenge for the NE is the fitness evaluation that can be computationally expensive, especially when there are large numbers of training instances to evaluate. Morse and Stanley proposed an approach called Limited Evaluation Evolutionary Algorithm (LEEA), inspired by the batch training in the stochastic gradient descent algorithm. The LEEA performs fitness evaluations over a small number of training instances (batches), and uses accumulated fitness values that are inherited from the parent generation to the offspring generation between batches [21]. We adopt the LEEA approach in our algorithm, and discuss the approach in more detail in Section 3.
THE PROPOSED ALGORITHM
The implementation details of the LECCDE algorithm are given in Algorithm 1. The algorithm is composed of the CC and LE schemes to decompose a large-scale continuous optimization task, and speed up the fitness evaluation process.
The CC scheme in LECCDE uses a heuristic to decompose the parameters of a high-dimensional ANN: the post-synaptic neurons are assumed to be the building blocks of the ANN, and are decomposed into subpopulations and evolved separately. Thus, the algorithm initiates SP subpopulations, one for each post-synaptic neuron, where each subpopulation consists of NP individuals. Each individual represents the pre-synaptic connection weights of that neuron (see Appx. A).
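A minimal sketch (NumPy, hypothetical names) of this decomposition for the fully connected, one-hidden-layer networks used here: one subpopulation per post-synaptic neuron, where each individual encodes that neuron's pre-synaptic weights plus a bias term. Including the bias is an assumption of the sketch, though it is consistent with the parameter counts reported in Section 4:

import numpy as np

def init_subpopulations(n_inputs, n_hidden, n_outputs, NP, rng=None):
    """Return SP = n_hidden + n_outputs subpopulations; each is an (NP x d) array whose rows
    are individuals holding one post-synaptic neuron's pre-synaptic weights and bias."""
    rng = rng or np.random.default_rng()
    dims = [n_inputs + 1] * n_hidden + [n_hidden + 1] * n_outputs
    return [rng.uniform(-1.0, 1.0, size=(NP, d)) for d in dims]

# Example: a WBC-like network (30 inputs, 50 hidden neurons, 2 outputs), 20 individuals per subpopulation
subpops = init_subpopulations(30, 50, 2, NP=20)
print(len(subpops), subpops[0].shape)   # 52 subpopulations; each hidden-neuron subpopulation is 20 x 31

With these sizes the total parameter count is (30 + 1) × 50 + (50 + 1) × 2 = 1652, matching the figure reported for the WBC dataset in Section 4.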
From the SP subpopulations, each containing NP individuals, there are NP^SP ANNs that can be constructed. To find the average fitness of each individual, all possible network combinations would need to be evaluated. Since this number is quite large, we instead randomly sample, trial × NP times, an individual from each subpopulation, construct a global network, evaluate it, and add the fitness value of the network to the fitness values of each individual that was part of the network [8]. At the end of this procedure, the fitness value of each individual is normalized to find the average fitness value, dividing by the number of times each individual was selected. The fitness of the individuals that were not selected during the sampling process is set to 0. The individual with the maximum fitness from each subpopulation is then selected to construct the global ANN solution X. Finally, the performance of the global solution on the validation instances is found by evaluating X on the validation set.
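The sampling-based fitness assignment can be sketched as follows (the helper `evaluate` is hypothetical and should return the classification accuracy of the assembled network on the training data):

import numpy as np

def assign_fitness(subpops, NP, trial, evaluate, rng=None):
    """Sample trial*NP random networks (one individual per subpopulation), accumulate each
    participant's network fitness, and return averages; unsampled individuals keep fitness 0."""
    rng = rng or np.random.default_rng()
    SP = len(subpops)
    rows = np.arange(SP)
    fitness = np.zeros((SP, NP))
    counts = np.zeros((SP, NP))
    for _ in range(trial * NP):
        picks = rng.integers(NP, size=SP)                    # one random individual per subpopulation
        network = [subpops[i][picks[i]] for i in range(SP)]  # assemble a global candidate ANN
        score = evaluate(network)
        fitness[rows, picks] += score
        counts[rows, picks] += 1
    return np.divide(fitness, counts, out=np.zeros_like(fitness), where=counts > 0)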
The main loop of the algorithm iterates over all the batches. A batch is a small subset of the training instances used in the LE scheme [21]. In particular, TrainingSize / BatchSize batches are generated by randomly assigning each training instance to a batch.
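The batch generation described above can be sketched as follows (hypothetical names; this simple version drops the remainder instances that do not fill a whole batch):

import numpy as np

def make_batches(n_training, batch_size, rng=None):
    """Randomly assign training instance indices to TrainingSize/BatchSize batches."""
    rng = rng or np.random.default_rng()
    indices = rng.permutation(n_training)
    n_batches = n_training // batch_size
    return [indices[b * batch_size:(b + 1) * batch_size] for b in range(n_batches)]

# Example: roughly the ESR training split (~3220 instances) with a batch size of 500
batches = make_batches(3220, 500)
print(len(batches), len(batches[0]))   # 6 batches of 500 instances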
The fitness score of the target vector on the current batch is found by replacing the target vector within its corresponding part of the global solution X and evaluating X on the current batch. The fitness of the trial vector is computed in a similar fashion, by first replacing its corresponding part within the global solution, and then evaluating the global solution on the current batch. Since the mutant vector is composed of three randomly selected individuals {x_{i,r1}, x_{i,r2}, x_{i,r3}}, the fitness value of the mutant vector is computed as their average, Ft_v = (Ft_{i,r1} + Ft_{i,r2} + Ft_{i,r3}) / 3. The fitness value of the trial vector is found using the sexual reproduction rule (see Appx. A.2).
The selection operator copies the trial vector and its fitness to a temporary set if its fitness value is greater than or equal to the fitness value of the target vector; otherwise, the target vector and its fitness are copied. After the computations are completed for all individuals in the subpopulation, the subpopulation is updated simultaneously by copying back the individuals and their fitnesses from the temporary sets.
After each subpopulation update, the individual with the highest fitness value in the subpopulation is copied back to the corresponding part of the global solution X. The global solution is evaluated on the validation set, and the one that performed best is stored and provided as the final result of the algorithm.
EXPERIMENTAL SETUP
Our experimental setup is designed to focus on the following questions: (1) Do the ANNs that are evolved using the Cooperative Co-evolutionary DE algorithm with our subpopulation assignment heuristic achieve a better classification accuracy than the ANNs that are evolved by the standard DE algorithm? (2) Does the LE scheme applied to DE reduce the runtime of the algorithm without decreasing the classification accuracy of the evolved ANNs? To answer these questions, we compare the results of the ANNs optimized by four algorithms, DE, LEDE, CCDE, and LECCDE, on three datasets of various sizes. The details of the implementation of LECCDE are given in Algorithm 1. CCDE and LEDE are implemented in a similar way, but without the batch loop and the subpopulations, respectively. In standard DE, neither batch training nor subpopulations are used. The LE algorithms require two evaluations per generation (the target and trial vectors are evaluated on the current batch), while the algorithms without LE require one evaluation per generation. Regardless of this fact, the algorithms were run for the same number of function evaluations (FEs) for each dataset. For all experiments, we used the "rand/1/bin" ("rand/1" mutation with binomial crossover) strategy with the parameters F and CR empirically fixed to 0.1 and 0.3, respectively. We used 20 individuals for the population size, except for one experiment performed with a larger population size of 100 individuals (see below). We set the trial parameter to 5.
The three datasets used in the test process are listed in Table 1. These datasets were obtained from the Center for Machine Learning and Intelligent Systems dataset repository [16]. They were chosen based on their number of features and instances, to show the relative performance of the algorithms with respect to the size of the dataset used. The Wisconsin breast cancer (WBC) dataset consists of 30 features, 2 classes, and 569 instances; the epileptic seizure recognition (ESR) dataset consists of 178 features, 2 classes, and 4600 instances; and the human activity recognition (HAR) dataset consists of 561 features, 6 classes, and 7144 instances [2]. The instances in each dataset were split into three groups (training, validation, and test) with ratios of 70%, 15%, and 15%, respectively. The fitness evaluations and selection process were performed on the training instances. The network that performs best on the validation set is provided as the output of the algorithm, and evaluated on the test set. The fitness evaluation is based on the classification accuracy of the ANNs, which is calculated as the number of correctly classified instances divided by the total number of instances. For all datasets, we used fixed-topology, fully connected feed-forward ANNs with one hidden layer to perform the classification task (see Appx. A). The number of neurons in the hidden layer was kept constant at 50 for all ANNs evolved for all datasets. Based on the architecture of the ANNs and the number of features in the datasets, the total numbers of parameters evolved are 1652, 9052, and 28406 for the WBC, ESR, and HAR, respectively.
We used a batch size of 100 instances for the WBC, 500 for the ESR, and 500 for the HAR. The decay value (see Appx. A.2) is set to 0.2, as suggested by Morse and Stanley [21]. The maximum number of FEs was set to 50000 for the WBC, 300000 for the ESR, and 500000 for the HAR, based on the number of their parameters.
NUMERICAL RESULTS
In this section, we present our experimental results. Each algorithm, with the specified settings, was run for 20 independent runs, and the median and the variance of the training, validation, and test accuracy were collected. All accuracy results are shown with a precision of two digits. Table 2 shows the results obtained on the WBC dataset. In this case we could not observe a significant difference between the results of the ANNs evolved by the four algorithms. On the test data, the CCDE appears to perform better than the others. On the other hand, we observe a difference in the runtime of the algorithms (t = 322 sec in our computing environment). The algorithms that employ LE and CC are less computationally expensive and run faster. For example, the runtime of DE is more than twice that of LECCDE. This difference is less significant for the other algorithms, due to the size of the dataset. Even though all the algorithms were run for 50000 FEs on this dataset, the algorithms with LE performed evaluations on batches that are four times smaller than the whole set of training instances. However, since CCDE is run on the whole dataset, it appears that the CC improved its runtime, possibly due to the computations on reduced-sized vectors within each subpopulation. Table 3 presents the results obtained on the ESR dataset. Based on the test data, CCDE appears to show better performance than the rest of the algorithms, while LECCDE follows it very closely. We observe the best running time with LECCDE (t = 2970 sec). Table 4 shows the results obtained on the HAR dataset. On this dataset, we observe a significant accuracy improvement when the CC scheme is used. The performance of CCDE and LECCDE is approximately 15-20% better than that of the algorithms that do not use the CC. Also, CCDE appears to be slightly better than LECCDE. On the other hand, we observe a significant runtime improvement when the LE scheme is used. The algorithms with the LE scheme run approximately four times faster than the algorithms that do not use the LE (t = 6530 sec). Also, LECCDE appears to produce the smallest variance on the training accuracy.
Finally, in Table 5, we report an additional experiment on the population size. In this case, we used a population size of 100 on the ESR dataset. When the population size increases (compared to Table 3), the accuracy results decrease. This may be due to the number of FEs needed for the convergence of the algorithm: in other words, when the population increases, the number of FEs needed for convergence may increase. Moreover, we observe that CCDE and LECCDE perform significantly better than DE and LEDE. This may suggest that the CC increased the convergence speed. With respect to the running time of the algorithms, we observe a pattern similar to that observed in Table 3 (t = 2640 sec). Overall, CCDE appears to perform better than LECCDE because it has complete information for evaluating the individuals, since it uses the entire set of training instances. However, CCDE comes with a larger runtime trade-off than LECCDE, which can make a difference on large datasets (e.g. on the HAR dataset the LECCDE runs on average four times faster). Also, increasing the number of evaluations or the batch size can improve the performance of the LECCDE. For comparison, we performed two additional experiments with LECCDE, with the same settings used to produce the results in Table 4, except for the number of FEs and the batch size. In the first experiment, we used 900000 FEs and observed that the ANNs optimized by the LECCDE achieve on average 95.78, 94.31, and 93.28 on the training, validation, and test sets, respectively. This is almost 1% higher than the performance observed in Table 4. On the other hand, the runtime of the algorithm is now 1.6 × t, which is still 1.8 times faster than the runtime of CCDE. In the second experiment, we used a batch size of 1000 and observed that the algorithm performs on average 96.60, 93.84, and 93.38 on the training, validation, and test sets, with a runtime of 1.42 × t. These two additional experiments show an interesting trade-off between the batch size and the number of evaluations. Although the two additional experiments have similar runtimes, the second experiment appears to produce better results. Figure 1 shows the overall comparison of the runtime of LEDE, CCDE, and DE relative to LECCDE on the three datasets. The x-axis shows the dataset, and the y-axis shows the increase in the runtime of the algorithm. The LEDE is relatively stable across the experimented datasets. On the other hand, the runtime of the CCDE and DE increases when the number of instances increases. This is because the algorithms with LE perform the same number of function evaluations on a smaller number of instances, which produces a clear advantage in terms of total runtime. Figure 2 shows the accuracy trend of the ANNs on the training, validation, and test instances during one example run of the optimization process performed by LECCDE (only the range [0.8, 1] is shown on the y-axis, for the sake of clarity). The data collected from this specific run show that the accuracy on the training data is almost always the highest. The accuracy on the test data closely follows the validation accuracy, and it is even higher for some of the generations. Figure 3 shows the change of the validation accuracy during the evolutionary process of the four algorithms on a single run (only the range [0.6, 1] is shown on the y-axis).
Firstly, the lines that represent the results of the LEDE and LECCDE are shorter than those of the other algorithms because they consume the same number of FEs within half the number of generations, since they perform two FEs (trial and target vectors) per generation. We observe that LEDE improves on DE in terms of validation accuracy and convergence speed; however, it suffers from a lack of diversity within the population (for a population size of 20), which prevents it from finding better solutions after about 80000 FEs are consumed. On the other hand, CCDE appears not to suffer from the early convergence problem observed in the LEDE, while LECCDE appears to improve the speed of CCDE.
To summarize, our empirical analysis suggests positive answers to the questions posed in Section 4: (1) most significantly on the large dataset (Table 4), or with a large population size (Table 5), the ANNs that are evolved using the CC scheme with our heuristic achieve a better classification accuracy than the ANNs that are evolved by the standard DE algorithm; and (2) all experiments on the three datasets (most significantly on the largest dataset, Table 4) show that the LE scheme applied to DE reduces the runtime of the algorithm considerably, without causing a degradation of the classification accuracy of the evolved ANNs. To further assess the scalability of the proposed algorithm, we performed an additional experiment on the MNIST dataset [15]. We used the same ANN architecture that was used in the previous experiments. The numerical results (not shown here for brevity) are provided in the extended version of the paper available online (https://arxiv.org/abs/1804.07234). For the same number of function evaluations, the computing time required for the LEDE and LECCDE is about 25 times less than the computing time required for the DE and CCDE. The LECCDE performs 8% better than LEDE. Overall, our preliminary results on MNIST show that the LECCDE achieves 90.80% classification accuracy on the test data, on average, which is about 4% lower than the backpropagation algorithm on the same ANN architecture. This may suggest that better parameter tuning may be needed for the LECCDE to obtain results that are comparable to the state-of-the-art.
CONCLUSIONS
In this work, we proposed the LECCDE algorithm that employs the LE and CC schemes to improve the accuracy and the runtime of the standard DE algorithm for large-scale NE with direct encoding.
We performed experiments on four datasets, including a preliminary test on the MNIST dataset. Our results show that the CC scheme improves the performance of DE on the tested classification tasks. Moreover, we used the LE scheme to further improve the scalability of the method. Our results show that the LE scheme reduces the runtime of the algorithms, without affecting the performance. This reduction is due to the fact that the evaluation is performed on a small number of instances.
We used a heuristic in the CC scheme that decomposes the problem at the level of post-synaptic neurons. Thus, we evolve all the pre-synaptic weights of each post-synaptic neuron in a different subpopulation. This decomposition approach aims to reduce the parameter size per subpopulation. For large datasets, on the other hand, the number of parameters per subpopulation may still be large. Although this heuristic worked well, there may also be other decomposition heuristics that are more effective. Alternatively, automatic methods can also be used for this purpose.
Another possibility for improving the results is to perform a sensitivity analysis. In this work, we did not experiment with the strategy and the parameter settings of the DE algorithm. Self-adaptive parameter control approaches can also be investigated to improve the results, since these approaches can adjust the balance between exploration and exploitation during the search process [4,36].
The methods proposed here can evolve only ANNs with fixed topologies; it would be useful to extend them to network topology optimization as well.
A NEUROEVOLUTION
A.1 Direct Encoding and Network Computation
An example of a feed-forward network (FFN) is shown in Figure 4a, where each node represents a neuron, each edge represents a connection between two nodes, and the direction of each edge represents the direction of the information flow. A FFN consists of a number of input (i_1, i_2, i_3, b_1), hidden (h_1, h_2, b_2), and output (o_1) neurons (b_1 and b_2 are bias neurons kept constant at 1) structured into input, hidden, and output layers, respectively (see Figure 4a). Inspired by biological neural networks, the connections between the neurons are often called synapses. A neuron that is at the starting point of a directional edge is called a pre-synaptic neuron, and the neuron that is at the end point (arrow) of a directional edge is called a post-synaptic neuron. Figure 4b shows the genotype representation of the network given in Figure 4a. Each synaptic weight is mapped directly to a gene in the genotype. The genotype is divided into subcomponents consisting of the pre-synaptic weights of each post-synaptic neuron.
Figure 4: (a) A fully-connected feed-forward ANN with one hidden layer, and (b) the representation of its genotype.
The activation of each neuron is updated using Equation (3), where a_i is the activation of a post-synaptic neuron, a_j is the activation of the j-th pre-synaptic neuron, w_{i,j} is the connection weight between them, b_i is the bias of the post-synaptic neuron, and ψ is the activation function given in (4) [5].
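The sketch below ties the genotype layout of Figure 4b to the activation update of Equation (3): genes are grouped per post-synaptic neuron (its pre-synaptic weights plus a bias weight), and each activation is ψ applied to the weighted sum of the pre-synaptic activations. Using a sigmoid for ψ is an assumption here, since Equation (4) is not reproduced in this excerpt.

```python
import numpy as np

def sigmoid(x):
    # assumed form of the activation function psi of Eq. (4)
    return 1.0 / (1.0 + np.exp(-x))

def decode_genotype(genes, n_in, n_hidden, n_out):
    """Split a flat genotype into per-post-synaptic-neuron weight vectors,
    i.e. the subcomponents used by the CC decomposition heuristic."""
    per_hidden = n_in + 1                 # pre-synaptic weights + bias
    per_out = n_hidden + 1
    hidden = genes[:n_hidden * per_hidden].reshape(n_hidden, per_hidden)
    output = genes[n_hidden * per_hidden:].reshape(n_out, per_out)
    return hidden, output

def forward(genes, x, n_hidden=50, n_out=2):
    """Eq. (3) per layer: a_i = psi(sum_j w_ij * a_j + b_i)."""
    hidden_w, out_w = decode_genotype(np.asarray(genes), x.size, n_hidden, n_out)
    h = sigmoid(hidden_w[:, :-1] @ x + hidden_w[:, -1])
    return sigmoid(out_w[:, :-1] @ h + out_w[:, -1])
```

With 50 hidden neurons this layout reproduces the parameter counts quoted in the experiments, e.g. (30 + 1) × 50 + (50 + 1) × 2 = 1652 for the WBC dataset.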
A.2 Limited Evaluation
When the evaluation is performed episodically on a small subset of the whole training instances (batches), it is required to keep track of the individuals that performed well on the previous episodes. The LE scheme aims to adjust the fitness of the offspring by taking into account the success of its parents by fitness inheritance. The sexual and asexual reproduction rules are provided in Equations (5) and (6) respectively [21].
where f′ is the adjusted fitness of the offspring, f_parent is the fitness of its parent(s), f is the actual fitness of the offspring on the current batch of training instances, and decay is a constant for adjusting the weight of the previous fitness evaluations. In the sexual reproduction rule, two parents contribute to f_parent.
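Equations (5) and (6) themselves are not reproduced in this excerpt; the sketch below only illustrates the general fitness-inheritance idea described in the text, with the particular way the decayed parental fitness and the current-batch fitness are combined being an assumption rather than the paper's exact rule.

```python
def inherited_fitness(f_batch, parent_fitnesses, decay=0.2):
    """Adjusted offspring fitness f': the fitness measured on the current batch,
    combined with an inherited component whose weight is controlled by 'decay'.
    With two parent fitnesses this plays the role of the sexual rule (Eq. (5)),
    with one that of the asexual rule (Eq. (6)).  Illustrative combination only."""
    f_parent = sum(parent_fitnesses) / len(parent_fitnesses)
    return decay * f_parent + (1.0 - decay) * f_batch
```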
B EXTENDED EXPERIMENTS AND RESULTS
This section presents our preliminary results of the experiments performed on the MNIST dataset using the DE, LEDE, CCDE, and LECCDE. The MNIST dataset consists of 60000 samples of 28 by 28 grayscale images of handwritten digits between 0 and 9. Thus, the sizes of the input and output are 784 and 10 when each image pixel and its class label are considered as an input and output, respectively. We used the same architecture of the artificial neural networks that was used for the experiments performed on the other datasets (feed-forward artificial neural networks with one hidden layer consisting of 50 neurons). Thus, the total number of parameters of the networks optimized for MNIST is 47710. The parameters of the Differential Evolution algorithm are also initialized using the same settings used for the other experiments, except for the batch size, the number of individuals in each subpopulation, and the maximum number of function evaluations. Since MNIST is larger than the other tested datasets, we used a batch size of 1000, a population size of 60, and a maximum number of evaluations of 2.16e+6. Table 6 shows the training, validation, and test accuracy results of the ANNs trained on the MNIST dataset. Each variant of the algorithm was executed for the same number of function evaluations. The total time required for computing every other algorithm is shown in relation to the computing time required for the LECCDE, where t = 6.6e+5 seconds (approximately 19 hours) on a single-core Intel Xeon E5 3.5GHz computer. Due to time constraints, we were able to perform 3 independent runs for the LEDE and LECCDE, and a single partial run for the DE and CCDE. Thus, for DE and CCDE we report their accuracy at 12% of their total allocated computing time (the total computing times of DE and CCDE are estimated based on their current execution progress). We observe a significant advantage in using the LE scheme on MNIST from the computing time point of view: indeed, the DE and CCDE implementations require a computing time that is 25 times larger than the computing time required by the corresponding algorithms that make use of the LE scheme. Figure 5 illustrates the change of the validation accuracy of the ANNs evolved using the LECCDE during an evolutionary run. The speed of the accuracy improvements slows down around the 88%-90% level. The best validation accuracy achieved during this evolutionary run was 91.62%.
"Computer Science"
] |
From magnetoelectric response to optical activity
We apply a microscopic theory of polarization and magnetization to crystalline insulators at zero temperature and consider the orbital electronic contribution of the linear response to spatially varying, time-dependent electromagnetic fields. The charge and current density expectation values generally depend on both the microscopic polarization and magnetization fields, and on the microscopic free charge and current densities. But contributions from the latter vanish in linear response for the class of insulators we consider. Thus we need only consider the former, which can be decomposed into "site" polarization and magnetization fields, from which "site multipole moments" can be constructed. Macroscopic polarization and magnetization fields follow, and we identify the relevant contributions to them; for electromagnetic fields varying little over a lattice constant these are the electric and magnetic dipole moments per unit volume, and the electric quadrupole moment per unit volume. A description of optical activity and related magneto-optical phenomena follows from the response of these macroscopic quantities to the electromagnetic field and, while in this paper we work within the independent particle and frozen-ion approximations, both optical rotary dispersion and circular dichroism can be described with this strategy. Earlier expressions describing the magnetoelectric effect are recovered as the zero frequency limit of our more general equations. Since our site quantities are introduced with the use of Wannier functions, the site multipole moments and their macroscopic analogs are generally gauge dependent. However, the resulting macroscopic charge and current densities, together with the optical effects to which they lead, are gauge invariant, as would be physically expected.
I. INTRODUCTION
In a material that is optically active the plane of polarization of light rotates as the light propagates through the medium; the rotation is associated with a difference in the phase velocities of right- and left-handed circularly polarized light. The frequency dependence of the rotation is called optical rotary dispersion, and the associated difference in absorption of light of the different circular polarizations is called circular dichroism.
The study of optical activity has a long history. Pasteur was the first to associate it with structural dissymmetry [1], and as early as 1928 its first quantum mechanical description was given by Rosenfeld [2]. This phenomenon is most often observed in liquid solutions. The usual solvent, water, is not itself optically active, but the solution is optically active if the symmetry group characterizing the structure of the solute molecules contains no improper rotations. Early theoretical treatments involved models of solute molecules based on at least two coupled oscillators at different sites in each molecule [3], and it was natural to associate optical activity with the variation of the electromagnetic field across the molecule. However, an alternate approach [4] is to consider the electric and magnetic multipole moments of each molecule as a whole, and to describe their response to the electromagnetic field and its derivatives at a nominal center of the molecule. Optical activity is then typically associated with the response of the electric dipole moment to both the electric and the magnetic fields; at the level of a bulk description, it appears through a conductivity tensor such as σ^{il}(q, ω) that depends on q as well as ω. If time-reversal symmetry holds before the medium is subjected to the electromagnetic field, then the optical activity has been called natural [7]. If time-reversal symmetry is broken, then there are generally additional contributions to σ^{ilj}(ω), and as well a rotation of the plane of polarization of light can result from an asymmetric component of σ^{il}(ω), since σ^{il}(ω) ≠ σ^{li}(ω) in general. This latter phenomenon can be thought of as an "internal" Faraday effect.
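The displayed relations (1) and (2) are not reproduced in this extraction; the structure implied by the surrounding discussion is a bulk constitutive relation between the induced current density and the Maxwell electric field, expanded to first order in the wave vector (a sketch with sign and Fourier conventions assumed):

```latex
J^{i}(\mathbf{q},\omega)=\sigma^{il}(\mathbf{q},\omega)\,E_{l}(\mathbf{q},\omega),
\qquad
\sigma^{il}(\mathbf{q},\omega)=\sigma^{il}(\omega)+\sigma^{ilj}(\omega)\,q_{j}+\cdots
```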
Yet such a general treatment of linear optical properties of media based on σ^{il}(q, ω) has its drawbacks. First, when using the minimal coupling Hamiltonian and directly calculating the expectation value of the electronic current density operator, "artificial divergences" can arise when the number of bands involved in the calculation is necessarily truncated; sum rules must be employed before such a truncation is performed to avoid these [9,10]. Second, although one can attribute different constituents of σ^{ilj}(ω) to the purported response of different multipole moments [7], those multipole moments, and the physical insight they carry, do not directly arise in the calculation. And third, the bulk relation (1) and its expansion (2) give little direction on how to even approximately treat the subtleties that would arise if one considered a finite system and had to be concerned with effects at interfaces.
A strategy that is more physical is certainly available for crystalline systems in the "molecular crystal limit." In this limit we imagine molecules, here with no improper rotations in their symmetry group, positioned at lattice sites with a lattice constant sufficiently large that electrons can be considered essentially "bound" to one molecule or another, but still much less than the wavelength of light. Adopting the approach of molecular physics [11], multipole moments can be associated with each molecule, and from these one can introduce macroscopic fields P^i_mol(x, t), M^i_mol(x, t), and Q^{ij}_mol(x, t), describing respectively the electric dipole, the magnetic dipole, and the electric quadrupole moment per unit volume of the "molecular crystal." The macroscopic charge and current densities are then given by (3), with the charge density of the form −∇ · P_mol(x, t), where the polarization and magnetization fields are given by (4), with "..." indicating contributions from higher-order multipole moments. Neglecting local field corrections, from the response tensors associated with the multipole moments of the molecules themselves one can then identify bulk linear response tensors χ̊^{il}_E(ω), γ̊^{ijl}(ω), β̊^{il}_P(ω), β̊^{il}_M(ω), and χ̊^{ijl}_Q(ω) that relate the multipole moments to the macroscopic electric and magnetic fields, as in (5), which contains terms such as γ̊^{ijl}(ω)F_{jl}(x, ω) and β̊^{il}_P(ω)B_l(x, ω). Here the superscript (0) identifies the contribution to a net quantity from the unperturbed system, "..." indicates contributions that are higher order in the macroscopic electric and magnetic fields and their derivatives, including the linear response of M_mol to B, F_{jl}(x, ω) is the symmetrized (spatial) derivative of the macroscopic electric field evaluated at x, and the circle accents identify that these linear response tensors are valid in the molecular crystal limit.
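The displayed equations (3) and (4) are likewise missing from this extraction. In the conventions usually adopted for such multipole expansions (Gaussian units and a particular normalization of the quadrupole density are assumed here; the prefactors in the original may differ), they take the form:

```latex
\varrho_{\mathrm{mol}}(\mathbf{x},t)=-\nabla\cdot\mathbf{P}(\mathbf{x},t),
\qquad
\mathbf{J}_{\mathrm{mol}}(\mathbf{x},t)=\frac{\partial\mathbf{P}(\mathbf{x},t)}{\partial t}
  +c\,\nabla\times\mathbf{M}(\mathbf{x},t),

P^{i}(\mathbf{x},t)=P^{i}_{\mathrm{mol}}(\mathbf{x},t)
  -\frac{\partial}{\partial x^{j}}Q^{ij}_{\mathrm{mol}}(\mathbf{x},t)+\dots,
\qquad
M^{i}(\mathbf{x},t)=M^{i}_{\mathrm{mol}}(\mathbf{x},t)+\dots
```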
Using (5,4) in (3), transforming to wave-vector space, and comparing with (2), we can construct σ^{ilj}_mol(ω) in terms of γ̊^{ijl}(ω), β̊^{il}_P(ω), β̊^{il}_M(ω), and χ̊^{ijl}_Q(ω). Such a calculation based on molecular response, done in terms of the multipole Hamiltonian familiar in molecular physics [12], does not suffer from the artificial divergences mentioned above; thus the resulting expression for σ^{ilj}_mol(ω) is well behaved. In addition, if time-reversal symmetry is broken before the molecules are subjected to the electromagnetic field, then χ̊^{il}_E(ω) ≠ χ̊^{li}_E(ω), which gives σ^{il}_mol(ω) ≠ σ^{li}_mol(ω), leading to another source of the rotation of the plane of polarization of light as it propagates through the molecular crystal. Here the multipole moments of the molecules explicitly appear, and with the underlying macroscopic fields P^i_mol(x, t), M^i_mol(x, t), and Q^{ij}_mol(x, t) in hand one could begin to consider electrodynamics in the presence of interfaces.
But now what of more realistic models of crystalline materials, wherein the molecular crystal limit is not satisfied? Although there are no centers with which particular electrons are associated, in the "modern theory of polarization and magnetization" one can still define electric and magnetic dipole moments [13-15], albeit indirectly, through the response of the electronic charge and current densities to external electromagnetic fields. This approach is generally focused on the limit of static fields applied to insulators; the inclusion of higher-order moments in this framework is work in progress [16,17], and its generalization to optical fields is not obvious.
We recently introduced [18,19] a general approach to calculating both the static and the optical perturbative response of a medium based on the introduction of microscopic polarization and magnetization fields, p(x, t) and m(x, t). The usual macroscopic fields P (x, t) and M (x, t) are defined as the spatial averages of the corresponding microscopic fields. In general there are also microscopic free charge and current densities, the spatial averages of which are identified as the macroscopic free charge and current densities. However, at zero temperature, for the class of insulating crystals to which we restrict ourselves in this paper -which includes ordinary insulators [20] and Z 2 topological insulators -those free charge and current densities vanish in linear response, and the full microscopic response, to first order in the electromagnetic field, is captured by p(x, t) and m(x, t).
With the introduction of Wannier functions, the microscopic fields p(x, t) and m(x, t) can be decomposed into constituents associated with each lattice site, and these "site" contributions can be expanded in terms of a series of "site multipole moments." The spatial average of these microscopic fields then leads to an expansion of the macroscopic polarization and magnetization fields in the form (4), even if the molecular crystal limit does not hold. Further, the response of the multipole moments associated with each lattice site can be calculated in terms of the electromagnetic field and its derivatives evaluated at that site, and leads naturally to a description of the linear response that follows the form (5), again even though the molecular crystal limit does not hold. As well, the artificial divergences that can plague standard minimal coupling calculations are absent.
In this approach the site contributions to the electronic component of the microscopic polarization and magnetization fields, and thus to their multipole moments, depend on a modified form of the Wannier functions resulting from a generalized Peierls substitution [18]. There is also a well-known "gauge freedom" in choosing the original Wannier functions from which the modified functions are constructed, for they can be altered by adjusting the k-dependent unitary transformation relating them to the Bloch energy eigenstates. In general this leads to a "gauge dependence" of the site multipole moments, both initially and in their response to the electromagnetic field. And while exponentially localized Wannier functions (ELWFs) would of course be a natural choice for the original Wannier functions, we show that whatever choice is made the resulting electronic charge and current densities predicted are gauge invariant [21], as would be physically expected; the expressions we extract for σ il (ω) and σ ilj (ω) are thus gauge invariant. Even within the independent particle and frozen-ion approximations, which we adopt in this work, we believe this is the first derivation of σ ilj (ω) for an insulator that is valid not only at frequencies below the band gap, but also at frequencies above the band gap where absorption can occur. Thus, the σ ilj (ω) we present provides a description for both the optical rotary dispersion and the circular dichroism of crystalline insulators. At frequencies below the band gap we find agreement with an earlier calculation of σ ilj (ω) [7] that focused on that limit.
The special case of static and uniform electric and magnetic fields is particularly interesting. In that limit the tensor describing the modification of the polarization due to the electric field becomes symmetric, even in the absence of time-reversal symmetry in the unperturbed crystal. But in the absence of both time-reversal and spatial inversion symmetry, a magnetic field can still induce a polarization and an electric field can still induce a magnetization. This phenomenon is called the magnetoelectric effect [22]; in an earlier work [19] we used our approach to derive the so-called orbital magnetoelectric polarizability (OMP) tensor that describes the magnetoelectric effect in the limit of fixed ion cores and with the neglect of spin contributions, and found agreement with earlier studies based on the "modern theory of polarization and magnetization" [23,24]. Optical activity can be understood as arising from the generalization of the magnetoelectric effect to finite frequencies, where the electromagnetic field is necessarily not uniform; time-reversal symmetry then need not be broken for the phenomenon to occur. And as our calculation is based on a microscopic identification of polarization and magnetization fields, we can identify a finite frequency generalization of the Chern-Simons contribution to the OMP tensor; this contribution is isotropic and thus does not lead to an induced electronic charge-current density in the bulk, which makes it inaccessible to approaches based on the bulk charge-current density response alone. Finally, since our calculation is based on the identification of site quantities, we can easily compare the general response of a crystal to that of a crystal in the molecular crystal limit mentioned above. In this paper we identify expressions for the response tensors χ^{il}_E(ω), γ^{ijl}(ω), β^{il}_P(ω), β^{il}_M(ω), and χ^{ijl}_Q(ω) in both cases, indicating the response tensors that are valid in the molecular crystal limit by a circle accent, as we have above. In particular, while the OMP tensor is identified with β^{il}_P(0) = β^{li}_M(0), the relation β̊^{il}_P(ω) = β̊^{li}_M(−ω) continues to hold for finite frequencies in the molecular crystal limit, but it fails for a crystal more generally. Thus our approach is well positioned to explore the boundary between molecular physics and condensed matter physics in their descriptions of optical activity, and indeed of other optical phenomena.
The structure of this paper is as follows. In Section II we present the basic expressions for the microscopic polarization and magnetization fields, identify the site multipole moments, and present their relation to the macroscopic response functions; some of the details are relegated to Appendices A and B. The linear response of a crystalline insulator, within the independent particle approximation, is presented in Section III. Here for simplicity we neglect the spin degree of freedom and treat the ion cores as fixed. The response of the site multipole moments is detailed in Section IV, where we also consider some of the symmetries of the response tensors. In Section V we construct the linear response of the macroscopic charge and current densities, and identify σ^{il}(ω) and σ^{ilj}(ω); their constituent tensors are listed in Appendix C, and in Appendices D and E we confirm that the response is gauge invariant and thus that σ^{il}(ω) and σ^{ilj}(ω) are as well. We also consider the special case of frequencies below the band gap of the insulator and confirm, using a result presented in Appendix F, that we have agreement with earlier work for σ^{ilj}(ω) [7]. In Section VI we consider the molecular crystal limit and show that in this limit our general crystalline expressions reduce to what would be expected. We discuss and conclude in Section VII.
II. MULTIPOLE MOMENTS
In earlier work [18,19] we showed how the (total) microscopic charge and current densities can be written in terms of microscopic polarization and magnetization fields, as in (6), where in this work ρ_ion(x) is the charge density associated with the fixed ion cores, and ρ(x, t) and j(x, t) are the expectation values of the microscopic electronic charge and current density operators, respectively [25]. These operators are obtained from the minimal coupling Hamiltonian via Noether's theorem and involve the electron field operators and their adjoint, which we take to be the dynamical degrees of freedom of the crystalline system; they evolve under the minimal coupling Hamiltonian, which results in the (assumed classical) electromagnetic field entering (6), and thus in both p(x, t) and m(x, t) generally having a nontrivial dependence on time [18,19]. These microscopic fields can generally be decomposed as a sum of constituent fields (7), one associated with each Bravais lattice vector R characterizing the structure of the unperturbed crystalline system. Each "site" polarization p_R(x, t) is related to a portion ρ_R(x, t) of the (total) charge density that is associated with the lattice site R, and each "site" magnetization m_R(x, t) is related to a portion j_R(x, t) + j̃_R(x, t) of the electronic current density that is associated with the lattice site R, as in (8), where the "relators" s^i(x; y, R) and α^{ib}(x; y, R) have been introduced and discussed previously [18]; they are presented in Appendix A. In general the microscopic "free" charge and current densities, ρ_F(x, t) and j_F(x, t), are also relevant. However, in this paper we assume the crystal to be in its zero temperature ground state before the electromagnetic field is applied and so, for the class of insulators considered here and specified below, both the unperturbed free charge and current densities, and their linear response to the electric and magnetic fields, vanish [18]. This is as would be expected physically, and we can henceforth neglect those fields. The macroscopic polarization and magnetization fields, P(x, t) and M(x, t), can be identified as spatial averages of the microscopic fields (7), as discussed in Appendix B. Anticipating the integration over each site contribution (8) associated with such spatial averaging, we perform a formal expansion of each site contribution in terms of Dirac δ functions and their derivatives about that site, as we detail in Appendix A. The expansions are characterized by their dependence on a parameter u and, explicitly retaining the terms that are at most linear in that parameter, we find (9), where (10) is the electric dipole moment, (11) is the electric quadrupole moment, and (12) is the magnetic dipole moment, each associated with lattice site R; here ε^{iab} is the Levi-Civita symbol. Terms that are higher order in u, indicated by "..." in the expansions (9), involve the electric octupole moment, the magnetic quadrupole moment, and higher-order moments. For the sort of systems considered here, within the independent particle approximation one can physically expect the response of the moments μ^i_R(t), q^{ij}_R(t), and ν^i_R(t) to the microscopic electric and magnetic fields to depend on those fields in the neighborhood of R. The approximation of neglecting "local field corrections", which we adopt here, involves taking those fields to simply be the macroscopic fields E(x, t) and B(x, t) that are the spatial averages of the microscopic electric and magnetic fields; we call these macroscopic fields the "Maxwell fields" (see Appendix B).
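The definitions (10)-(12) are not reproduced here; in terms of the site charge and current densities introduced above they have the standard moment form (the prefactors, in particular the 1/2c in the magnetic dipole moment and any symmetrization factor in the quadrupole moment, follow common Gaussian-unit conventions and should be checked against the original):

```latex
\mu^{i}_{\mathbf{R}}(t)=\int\left(x^{i}-R^{i}\right)\rho_{\mathbf{R}}(\mathbf{x},t)\,d\mathbf{x},
\qquad
q^{ij}_{\mathbf{R}}(t)=\int\left(x^{i}-R^{i}\right)\left(x^{j}-R^{j}\right)\rho_{\mathbf{R}}(\mathbf{x},t)\,d\mathbf{x},

\nu^{i}_{\mathbf{R}}(t)=\frac{1}{2c}\,\epsilon^{iab}\int\left(x_{a}-R_{a}\right)
  \left[\,j_{\mathbf{R}}(\mathbf{x},t)+\tilde{j}_{\mathbf{R}}(\mathbf{x},t)\,\right]_{b}d\mathbf{x}
```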
With this approximation, we show in Section IV that the linear response of each site moment (10,11,12) can be related to the Maxwell fields evaluated at that site, E(R, t) and B(R, t), and their spatial derivatives there. Then, implementing the usual Fourier series analysis, we find that the relevant terms are those given in (14). We have chosen to introduce a unit cell volume Ω_uc here because, with the neglect of local field corrections, the response tensors χ^{il}_E(ω), γ^{ijl}(ω), β^{il}_P(ω), β^{il}_M(ω), and χ^{ijl}_Q(ω) appearing there reduce to those of (5) in the molecular crystal limit. We show in Appendix B that macroscopic multipole moments, analogous to those appearing in (5), can be constructed from the corresponding site multipole moments (14) (see (B5) and (B8)), as in (15), where the unperturbed contributions simply acquire a factor, as they are in fact independent of R. Further, the macroscopic charge and current densities are given by (16), with the macroscopic polarization and magnetization fields (17), even far from the molecular crystal limit. Notably, in the systems we consider here, the unperturbed contributions to (15) vanish when implemented in (16). Hence, the lowest-order charge and current densities arise from the linear response tensors χ^{il}_E(ω), γ^{ijl}(ω), β^{il}_P(ω), β^{il}_M(ω), and χ^{ijl}_Q(ω). In the next two sections we turn to the calculation of these response tensors.
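The displayed relations (14) are missing from the extraction; from the pairings of moments, fields, and response tensors discussed in Section IV, their structure is as follows (only the linear-response terms explicitly retained in the text are shown; this is a sketch of the form, not a verbatim reproduction):

```latex
\mu^{i}_{\mathbf{R}}(\omega)=\mu^{i(0)}_{\mathbf{R}}
  +\Omega_{uc}\!\left[\chi^{il}_{E}(\omega)E_{l}(\mathbf{R},\omega)
  +\gamma^{ijl}(\omega)F_{jl}(\mathbf{R},\omega)
  +\beta^{il}_{P}(\omega)B_{l}(\mathbf{R},\omega)\right]+\dots,

\nu^{i}_{\mathbf{R}}(\omega)=\nu^{i(0)}_{\mathbf{R}}
  +\Omega_{uc}\,\beta^{il}_{M}(\omega)E_{l}(\mathbf{R},\omega)+\dots,
\qquad
q^{ij}_{\mathbf{R}}(\omega)=q^{ij(0)}_{\mathbf{R}}
  +\Omega_{uc}\,\chi^{ijl}_{Q}(\omega)E_{l}(\mathbf{R},\omega)+\dots
```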
III. LINEAR RESPONSE
The charge and current densities associated with each lattice site that were mentioned above can be written as in (18), where ρ^{ion}_R(x) is the static contribution to the charge density associated with lattice site R due to the appropriate ion core(s), and where the ρ_{βR″;αR′}(x, R; t), j_{βR″;αR′}(x, R; t), and j̃_{βR″;αR′}(x, R; t) are generalized (electronic) "site quantity matrix elements" that have been presented earlier [18]. These quantities can be reasonably expected to vanish unless x is "close" to R, guaranteeing that ρ_R(x, t), j_R(x, t), and j̃_R(x, t) have that property as well. To avoid possible confusion we note that the total microscopic charge and current densities are obtained by summing these site contributions; p_R(x, t) depends on the net charge density that is associated with R, while m_R(x, t) is sensitive to only a portion of the current density that is associated with R.
It is clear from (18) that the single-particle density matrix, η αR ;βR (t), is central in the identification of electronic "site" quantities and in describing their dynamics [18]. This object captures the electronic transition amplitude from a particular Wannier orbital of type α associated with lattice site R to a Wannier orbital of type β associated with R , at time t (see Eq. (33,36) of Ref. [18]).
A. Dynamical and compositional contributions to the multipole moments
The site quantities of primary interest are the Cartesian components of the lowest-order multipole moments (10,11,12) that are associated with lattice site R. Indicating such a site quantity generically by Λ_R(t), it is clear that, upon inserting the relevant term(s) of (18) in the desired site multipole moment expression (10,11,12), Λ_R(t) is generally of the form (19), where Λ_{βR″;αR′}(R; t) is a general (electronic) site quantity matrix element and Λ^{ion}_R involves ρ^{ion}_R(x).
We begin by expanding all objects in powers of the electromagnetic field, as in (20), etc. Again, the superscript (0) denotes the contribution to the quantity that is independent of the Maxwell fields; this is the value the object would take in the unperturbed system. The superscript (1) denotes the linear response of the quantity to the Maxwell fields [26]. Here "..." represents terms that are higher than first order in the Maxwell fields and will later be neglected. Also, for n ≠ 0, ρ^{ion(n)}_R(x) = 0 and consequently Λ^{ion(n)}_R = 0, as the ion cores are assumed fixed; thus, in describing the electronic response, the net response of the system is captured. From (19) it is clear that there are two (electronic) contributions to the linear response of a general site quantity to the Maxwell fields, given in (21) and (22). We have called [19] the first term on the right-hand side, (21), a "dynamical" contribution to the linear response, because it arises from modifications to the unperturbed single-particle density matrix due to the Maxwell fields, and the other term, (22), a "compositional" contribution, because it arises from the way in which the site quantity matrix elements themselves depend on the Maxwell fields. As we will show, (22) only describes first-order modifications of single-site properties as a result of the electromagnetic field. Moreover, we will generally decompose the linear response of a site quantity (20) as a sum of the contributions from the Maxwell electric field, its symmetrized derivative, the Maxwell magnetic field, and higher-order derivatives of these fields, as in (23). In general each of the constituents on the right-hand side of (23) is composed of a dynamical contribution and a compositional contribution, as in (24). However, for each site multipole moment that is considered, only a limited number of the constituents in (23) are retained; this is detailed in Section IV.
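Schematically, the split into the dynamical and compositional terms referred to as (21) and (22) follows from the form of (19) described above (index placement and priming follow the matrix-element notation used in the text; this is a sketch of the structure only, with the sums running over orbital types and lattice sites):

```latex
\Lambda^{(1)}_{\mathbf{R}}(t)=
\underbrace{\sum \Lambda^{(0)}_{\beta\mathbf{R}'';\alpha\mathbf{R}'}(\mathbf{R})\,
  \eta^{(1)}_{\alpha\mathbf{R}';\beta\mathbf{R}''}(t)}_{\text{dynamical}}
+\underbrace{\sum \Lambda^{(1)}_{\beta\mathbf{R}'';\alpha\mathbf{R}'}(\mathbf{R};t)\,
  \eta^{(0)}_{\alpha\mathbf{R}';\beta\mathbf{R}''}}_{\text{compositional}}
```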
In the remainder of this section we determine the evolution of η (1) αR ;βR (t) from the initial η (0) αR ;βR , and in the following section we combine those results with the Λ (0) βR ;αR (R, t) and Λ (1) βR ;αR (R, t) appropriately, to find the linear response of the site multipole moments.
B. Evolution of the single-particle density matrix In the independent particle approximation, the equations of motion governing the evolution of the (electronic) single-particle density matrix elements take the form [18] i where The quantitiesH µR1;νR2 (R a , t) can be understood as generalized "hopping" matrix elements and are as previously defined [19]. With the neglect of local field corrections they involve the Maxwell fields in the neighborhood of the lattice sites appearing, including lattice site R a . This lattice site can be arbitrarily chosen [19], and we discuss its choice below. The Maxwell field B(x, t) also enters in the quantities ∆(R , . . . , R ; t), which are proportional to the magnetic flux through the surface generated by connecting the points (R , . . . , R ) with straight lines, when the usual choice of straight-line paths for the relators is adopted (see Appendix A). In this work, this choice is always made.
An expansion of the hopping matrix elements in powers of the electromagnetic field [19] gives (25), where W_αR(x) ≡ ⟨x|αR⟩ is the ELWF identified by its type α and the lattice site R with which it is associated, and H_0(x, p(x)) is the differential operator that governs the dynamics of the electron field operators in the unperturbed infinite crystal; in H_0 we allow for a static and periodic magnetic field described by a vector potential satisfying A_static(x) = A_static(x + R) for any lattice vector R [27]. The eigenfunctions of H_0 are Bloch functions, each a plane wave e^{ik·x} multiplying a cell-periodic function ⟨x|nk⟩; they are identified by a band index n and an index k identifying the associated crystal momentum k, and we denote the corresponding eigenvalues by E_{nk}. These energy eigenfunctions can be used to construct ELWFs [28-32], where the vectors |αk⟩ are related to the vectors |nk⟩ by a (unitary) "multiband gauge transformation" (27). Generally, for an insulating crystal in its zero temperature ground state there is a filling factor f_n associated with each |nk⟩ that is either 0 or 1. In this paper we restrict ourselves to the class of insulators characterized by the property that the sets of occupied and unoccupied cell-periodic functions ⟨x|nk⟩ can be used separately to construct sets of ELWFs; this class contains both ordinary insulators and Z_2 topological insulators [29,32]. Thus we can associate an analogous filling factor f_α with each |αk⟩ that is also either 0 or 1, depending on the occupancy of the |nk⟩ used in the construction of that particular |αk⟩, and so U_{nα}(k) ≠ 0 only if f_n = f_α. That is, (27) is a unitary transformation between elements of the (un)occupied subspaces of the electronic Hilbert space alone. Associated with the set of vectors {|nk⟩} is a non-Abelian Berry connection, and with the set of vectors {|αk⟩} is another, in terms of which we have defined the Hermitian matrix W_a(k); in general we adopt the shorthand ∂_a ≡ ∂/∂k_a. For the class of insulators we consider, W^a_{mn}(k) is nonzero only if f_m = f_n. In what follows, the k dependence of the above introduced objects is usually kept implicit.
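The construction referred to above is the standard one relating ELWFs, Bloch states, and the multiband gauge transformation; a sketch, with normalization conventions and the precise index ordering of W assumed rather than taken from the original:

```latex
|\alpha\mathbf{R}\rangle=\frac{\Omega_{uc}}{(2\pi)^{3}}\int_{\mathrm{BZ}}d\mathbf{k}\,
  e^{-i\mathbf{k}\cdot\mathbf{R}}\,|\alpha\mathbf{k}\rangle,
\qquad
|\alpha\mathbf{k}\rangle=\sum_{n}U_{n\alpha}(\mathbf{k})\,|n\mathbf{k}\rangle,

\xi^{a}_{mn}(\mathbf{k})=i\,(m\mathbf{k}|\partial_{a}n\mathbf{k}),
\qquad
\mathcal{W}_{a}(\mathbf{k})=i\,U^{\dagger}(\mathbf{k})\,\partial_{a}U(\mathbf{k})
  \quad\text{(up to the convention adopted in the original)}
```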
At zeroth order in the Maxwell fields, the unperturbed expression (25) can be implemented into (24); with this, the elements of the unperturbed single-particle density matrix for the zero temperature ground state are identified to be Following the same procedure, but now retaining only the terms that are first order in the Maxwell fields, the linear response of the single-particle density matrix is identified as [19] η (1) (recall (13) Here H Ra (x, ω) involves the electromagnetic field via the scalar quantity Ω 0 Ra (x, ω) and the vector quantity Ω Ra (x, ω). Very generally, Ω 0 y (x, ω) involves a line in-tegral involving E(z, ω) from y to x, while Ω y (x, ω) involves a more complicated line integral involving B(z, ω) from y to x [18], which also appears in (31). For |x − y| on the order of a lattice constant, an expansion [19] gives and similarly for |x − y| and |x − z| on the order of a lattice constant we have (see Appendix A). With the approximations (33,34) we find we can write [19] H (1) Thus R a acts as a natural point about which to expand the electromagnetic field, and a natural choice of R a for use in (31) would be a site "close" to R or R . Still leaving that choice open, we implement (36) in (31) to identify the contributions to η (1) αR ;βR (ω) from the electric field, the symmetrized derivative of the electric field, and the magnetic field. We write this decomposition as which will allow for the identification of the dynamical contributions to the constituents of (23). Notably we will neglect contributions related to the spatial variation of the magnetic field, since we identify any such terms as higher-order modifications (see Appendix A).
C. Linear response of the single-particle density matrix By implementing the first and second terms of (36) in (31) via (32), and noting (35) is independent of E l (x, ω) and F jl (x, ω), the linear response of the single-particle density matrix to the Maxwell electric field and its symmetrized derivative are found to be and where we have introduced The linear response to the Maxwell magnetic field involves the third term of (36), and using (35) it is found to be where we have introduced In the limit of uniform dc Maxwell fields, both (39) and the final term of (41) vanish trivially, and B ab mn (k, ω = 0) reduces to the previously defined B ab mn (k); the above expressions are thus consistent with past results [19]. Also, (38,39,41) are written as single Brillouin zone integrals. In past work [19] we showed explicitly how this reduction to a single k-integral arises when implementing (31) to find the ω = 0 component of (38). However, this reduction emerges more generally as a consequence of expressing the variation of the electromagnetic field over the unit cell through the expansion, following from (33,34,35), in powers of the length of the unit cell divided by the wavelength of light. Upon implementing the resulting expressions in (31), via (32,36), and using previously introduced identities [19], the reduction to a single k-integral occurs.
IV. THE RESPONSE TENSORS
The linear response of the single-particle density matrix (37) allows for the identification of the dynamical contributions (21) to the linear response of the site multipole moments (10,11,12) to the Maxwell fields and their derivatives. For the compositional contributions (22), we implement (30) and (13) to write which can also be decomposed into contributions due to the Maxwell fields and their derivatives. The decomposition of the net linear response is given by (23). However, for a given site multipole moment of interest, µ i R (ω), , we consider only those constituents of the associated Λ (1) αR ;αR (R; ω) and of η (1) αR ;βR (ω) that lead to the explicitly included first-order terms in (14); these are µ We have justified the retention of only these terms in Appendix A. From (10,11,12) follow the relevant site quantity matrix elements associated with the site multipole moments of interest in terms of the site quantity matrix elements appearing in (18). These have been presented earlier [18], and we now use them to determine the desired response tensors.
We are finally in a position to set R a . When considering the dynamical contribution to the linear response of a particular multipole moment associated with lattice site R to a particular Maxwell field or its derivative, we always choose R a = R in the constituent of η (1) αR ;βR (ω) being implemented. For instance, when considering µ αR ;βR (ω). In the expressions that follow, one of the matrix element indices R or R always equals R, making the use of the expansions (33,34,35) in deriving (38,39,41) sensible. As well, we are always able to manipulate the expressions for the Λ (1) αR ;αR (R; ω) that appear in such a way that the Maxwell fields are evaluated at R. Collectively, this results in the net linear response of the moments associated with R being related to the electric field, the magnetic field, and the symmetrized derivative of the electric field evaluated at R, and facilitates the passage to a relation between the macroscopic polarization and magnetization and the Maxwell fields (see (14,15) and Appendix B).
A. Linear response of the electric moments
Dipole response to the electric field
We begin with the linear response of a site electric dipole moment to the Maxwell electric field, µ (E) R (ω). While ρ (1) αR ;αR (x, R; t) depends on the magnetic field, it does not depend on the electric field; the compositional contribution µ i(E;II) R (ω) vanishes. The linear response of this quantity to the electric field is thus entirely dynamical -it is solely due to η (E) αR ;βR (ω) -and is given by From (14) we identify which is gauge invariant in that it is independent of U nα (k). In general (44) is not symmetric under exchange of Cartesian components i and l. However, if the unperturbed crystal is time-reversal symmetric, then one can show χ il E (ω) is equal to χ li E (ω); in the ω → 0 limit the exchange of these indices is always symmetric, and if absorption is neglected, then χ il E (ω) is equal to χ li E (−ω) [33]. To obtain (43) we have implemented previously introduced [19] identities, and in the remainder of this section we often do so; Eq. (8,14,15) of that work are particularly relevant.
We now take into account the spatial variation of the Maxwell electric field. The compositional contribution vanishes, as in the response calculated above; the linear response is entirely dynamical, and it is given by The two distinct contributions appearing in the braces of (45) originate individually from the first and second lines of (39), while the contribution from final line of (39) vanishes. Via (14) we again identify the relevant response tensor. We explicitly symmeterize the indices labeling Cartesian components that are contracted with the symmeterized derivative of the Maxwell electric field, and we find Notably γ ijl (ω) = γ ilj (ω); the underlying presence of this symmetry -even if the indices j and l were not contracted with an object symmetric in those indices, F jl (x, ω), in (45) -can be recognized by identifying that the objects carrying these indices in (46) originate from the second term of (36), which was used in (31) to obtain (39). There j and l are clearly symmetric as the components of x and R a commute. Unlike χ il E (ω), γ ijl (ω) is gauge dependent.
Dipole response to the magnetic field
As ρ αR ;αR (x, R; t) does depend on the magnetic field, there are nonvanishing compositional and dynamical contributions to µ βR ;αR (x, R; ω) be the part of ρ (1) βR ;αR (x, R; ω) that is proportional to the magnetic field, the compositional contribution is given by Note that, in going from the first to the final equality, we ensure (using Eq. (29) of Ref. [18]) the ∆(R 1 , y, R; ω) that enters via ρ (B) αR ;αR (y, R; ω) (see Eq. (28,45) of Ref. [18]) is in a form that, upon implementing (35), the magnetic field is evaluated at R. Writing β il(II) P as the compositional contribution to β il P (ω) (see (15)), we have which is again gauge dependent. Interestingly, β il(II) P is independent of frequency and is identical to the compositional contribution to the tensor describing the linear response of the electric dipole moment to a uniform dc magnetic field (see Ref. [19]).
The form of the dynamical contribution is similar to (43) but with η αR ;βR (ω); denoting its contribution to β il P (ω) by β il(I) which is also gauge dependent. The two distinct terms appearing in the braces of (49) originate individually from the first two lines of (41), and the contribution from final line of (41) vanishes. We separate out the frequencyindependent terms that appear in (49) and combine them with (48). Together these terms give rise to the previously found [19,23,24] OMP tensor, α il = α il G + δ il α CS , where α CS is termed the Chern-Simons contribution and α il G the cross-gap contribution; the expressions for these are given in Appendix C. The remaining terms are used in the construction of an explicitly frequency-dependent response tensor, α il P (ω), which vanishes in the ω → 0 limit. In all, then, we find where we have defined Notably α il P (ω) arises due to the linear response of the single-particle density matrix η αR ;βR (ω) alone, making it entirely the result of a dynamical contribution. Here α CS and α il P (ω) are gauge dependent, but α il G is not.
Quadrupole response to the electric field
The compositional contribution to q ij(E) R (ω) vanishes as ρ (1) αR ;αR (x, R; t) does not depend on the electric field. The dynamical contribution involves η (E) αR ;βR (ω) and again takes the form of (43), except that it will be the second moment (see (11)) of ρ (0) βR ;αR (y, R) that will appear rather than the first. Using the expression for q ij(E) R (ω) that results, from the second of (14) we identify another gauge-dependent response tensor, with χ ijl Q (ω) = χ jil Q (ω). This symmetry of the response tensor is a consequence of the symmetry in the definition (11) of q ij R (t). Notably, both χ ijl Q (ω) and γ ijl (ω) arise from dynamical contributions alone, and are of similar form apart from an energy derivative term that appears in γ ijl (ω).
B. Linear response of the magnetic dipole moment to the electric field
The expression (12) for a site magnetic dipole moment shows that there are two contributions; an "atomic-like" contribution arising due to j R (y, t), and an "itinerant" contribution arising due toj R (y, t) [18]. We denote the contribution of the first of these to the linear response of the site magnetic dipole moment to the Maxwell electric field ν (14) We now identify these contributions.
Response of the atomic-like contribution
As j αR ;αR (y, R; t) does not depend on the electric field,ν R (ω), we compare to (14) and extract
Response of the itinerant contribution
In contrast, sincej αR ;αR (y, R; t) does depend on the electric field, there will be a nonvanishing compositional We find the compositional contribution to bẽ which, like (48), does not depend on frequency. To ensure that the electric field is evaluated at R, in reaching (56) we have used the form of F µR1;νR2 αR ;βR (t) presented above in the expression forj αR ;αR (y, R; t) (Eq. (60,61,62) of Ref. [18]), and set R a = R. The dynamical contribution is β il(I) While (55) and (56) are generally gauge dependent, (57) is only gauge dependent if there are degeneracies present in the unperturbed system. Very generally, there is a simplification that occurs when (55,56,57) are summed to form the total response tensor (54); the gauge-dependent terms appearing in (57) cancel with terms appearing in (55), and as a result the gauge-dependent terms appearing in the total β il M (ω) do not explicitly depend on the energies E nk . In all we have where we have separated out the dc-like terms, α li = α li G + δ il α CS , as in (50), and defined Like α il P (ω), α li M (ω) is entirely a consequence of a dynamical contribution. The form of (59) is similar to that (51) found for α il P (ω), apart from a term related to an energy derivative. Also, like α il G , α il P (ω) and α li M (ω) are "cross-gap" contributions; that is, they depend on both initially occupied and unoccupied Bloch energy eigenstates, and their corresponding energies. Unlike α il G , however, both α il P (ω) and α li M (ω) are gauge dependent.
A qualitative feature shared by the response tensors γ ijl (ω), α il P (ω), α li M (ω), and χ ijl Q (ω) is that they are all gauge dependent. Moreover, the explicitly gaugedependent terms within these tensors are of a similar form; the terms that involve the objects W a nm are all linear in W a nm , and also involve the energies E nk and the non-Abelian Berry connection ξ b nm . This is in contrast to what is found at the level of uniform and static Maxwell fields, where the only gauge dependence of such a tensor enters via the Chern-Simons contribution (C3) to the OMP tensor [19,23,24]. There the explicitly gaugedependent term of α CS involves the W a alone and gives rise to a discrete ambiguity associated with the OMP tensor.
V. MACROSCOPIC CHARGE AND CURRENT DENSITIES
We now construct expressions for the linear response of the macroscopic charge and current densities to the Maxwell fields, and as well identify the effective conductivity tensor σ il (q, ω) to first order in q.
A. The macroscopic current density
Retaining only the contributions to the multipole moments that are linearly induced by the Maxwell fields and that are explicitly included in (15), implementing them into the expressions (16,17) to obtain the linear response of the current density and, following (13), writing this as where α il = α il G +δ il α CS . Of the response tensors appearing here, only χ il E (ω) and α il G are gauge invariant. The rest, which are α CS , α il P (ω), α il M (ω), γ ijl (ω), and χ ijl Q (ω), are all gauge dependent. Yet the linear response of the current density J (1) (x, ω) is in fact gauge invariant. To see this, first note that α CS appears in (60) in the form vanishing via Faraday's law. So in considering the bulk response (60) we can discard α CS , replacing α il by α il G . For the other gauge-dependent terms, we re-express each response tensor as a sum of a gauge-invariant contribution, denoted by a breve accent, and a gauge-dependent contribution. We then find (see Appendix D); that is, the sum of the gaugedependent contributions vanishes. Thus the linear response of the current density is gauge invariant, as expected.
B. The macroscopic charge density
A similar analysis holds for the linear response ρ (1) (x, t) of the charge density to the Maxwell fields, which follows from (16). Again, retaining only the contributions to P (x, t) that are explicitly included in (17), those involving the electric dipole and quadrupole moments, and retaining only the contributions to the electric dipole and quadrupole moments that are linearly induced by the Maxwell fields and that are explicitly included in (15), for the frequency components ρ (1) (x, ω) we obtain (62). Again the Chern-Simons coefficient α CS makes no contribution, since it appears in a form that vanishes because the Maxwell magnetic field necessarily satisfies ∇·B(x, t) = 0. This is analogous to the scenario for J (1) (x, ω). As was the situation there, we expect (62) to be gauge invariant as a whole. Separating out the explicitly gauge-dependent terms as before, we find (see Appendix E) the expression (63), which is gauge invariant, as expected. We note that the expressions (61,63) satisfy continuity, as also expected.
C. The effective conductivity tensor
Finally, we can identify the linear dependence on q of the effective conductivity tensor σ il (q, ω). Fourier transforming (2) to position space and comparing with (61), we can identify (65), which agrees with the usual optical conductivity tensor found via the Kubo formula in the long wavelength limit [18]. Then, defining the dc limit of σ ilj (ω) as (66) and implementing Faraday's law, we can identify (67). All of (65,66,67) are gauge invariant, as expected.
In the absence of time-reversal symmetry, the σ il (ω) of (65) is nonsymmetric and can lead to the rotation of the plane of polarization of light as it propagates through the medium; this can be thought of as an "internal" Faraday effect, as illustrated by the discussion of the molecular crystal limit in the next section. The σ ilj (ω) of (67) is generally nonvanishing and nonsymmetric with respect to the exchange of any of its indices, even in the presence of time-reversal symmetry. But if that symmetry is present, then σ ilj DC will vanish and the resulting σ ilj (ω) describes what has been called natural optical activity [7]. In general the tensor σ ilj (ω) can be evaluated at frequencies above the band gap, and thus can be used to describe both optical rotatory dispersion and circular dichroism. Earlier work [7] considered σ ilj (ω) at frequencies below the band gap, where E ck − E vk ≠ ω for all c, v, and k; here c (v) are the band indices labeling Bloch energy eigenstates of the unperturbed Hamiltonian that are initially unoccupied (occupied). To compare our results with theirs, in our expression (67) for σ ilj (ω) we can take the 0 + limit immediately without introducing any divergences, and we follow them [7] in adopting the notation " . =" to identify equalities that only formally hold in this limit. Introducing the shorthand E cvk ≡ E ck − E vk , and putting σ ilj (ω) = Re[σ ilj (ω)] + iIm[σ ilj (ω)], we find (68) and (69), where we have adopted the quantity B ab nm previously introduced in Ref. [7] (see Appendix F). This is in agreement with the orbital electronic contribution to σ ilj (ω) found by Malashevich and Souza [7], as expected. Notably the only nonvanishing contribution to σ ilj (ω) in the ω → 0 limit is due to σ ilj DC , which is purely imaginary, as α il G is real. Thus, in this limit, (68) is expected to vanish, which it does.
VI. THE MOLECULAR CRYSTAL LIMIT
We now consider our response tensors χ il E (ω), γ ijl (ω), χ ijl Q (ω), α il P (ω), and α li M (ω) in the molecular crystal limit. That is, we consider a periodic array of molecules where the orbitals associated with a molecule at a given lattice site share no common support with those of molecules associated with other lattice sites; again, we take the external electric and magnetic fields to which the molecules respond to be the macroscopic Maxwell fields, neglecting any local field corrections. We denote the response tensors in this limit by a circle accent.
We discussed the approach to this limit from the full crystalline expressions earlier [19]; in essence, this limit can be reached by taking the ELWFs (26) to be eigenfunctions of H 0 (x, p(x)), in addition to the condition on the common support of these functions mentioned above [34]. The former condition can be achieved by taking E nk → E n and U nα (k) → δ nα , with the consequent simplification of the matrix elements involved. Again restricting ourselves to frequencies below the band gap, as in the second part of Section V C, and implementing these substitutions, (44) reduces to a molecular-limit expression in which E n′n ≡ E n′ − E n . In the presence of time-reversal symmetry this tensor is symmetric under the exchange of Cartesian components i and l, but in general it is not, and we have only χ il E (ω) . = χ li E (−ω).
These results follow the pattern of the corresponding tensor for the more general crystalline system (see the text surrounding Eq. (44)). Note that even were it the only response tensor present, an asymmetric χ il E (ω) would be sufficient to lead to the rotation of the polarization of light as it propagates through a medium, as can be easily confirmed. In this molecular crystal limit it is easy to give an example of how this might arise. Suppose, for example, that the breaking of time-reversal symmetry necessary for the asymmetric χ il E (ω) occurs because each molecule -or, simpler, atom -is subject to a dc magnetic field that is incorporated in the unperturbed atomic Hamiltonian. Then, if light propagates in the direction of the dc magnetic field, the rotation of its plane of polarization that results is just the Faraday effect, which is well known in atomic systems and indeed has a variety of applications [35]. Next, (46) and (53) simplify in the same way. Recall from previous work [19] the expression for α il (with prefactor e 2 /(2mcΩ uc )), which in this limit takes a correspondingly simplified form. Further, (51) also simplifies; combining this with the dc-like contribution, the full response of the polarization to the magnetic field is obtained. Finally, (59) simplifies as well; combining this with the dc-like contribution, the full response of the magnetization to the electric field is obtained. Physically one expects that an equivalent way to derive these expressions would be to solve for the linearly induced moments of the individual molecules; since local field corrections are being neglected, the fields to which they respond are the Maxwell fields, and the limiting response tensors above should be equal to the appropriate molecular response tensors multiplied by the number of molecules per unit volume, here equal to Ω −1 uc . The molecular calculations can be made with the usual multipole moment Hamiltonian [11,12], which, including the moments relevant here, can be written as the sum of Ĥ 0 mol and the coupling of the molecular moments to the fields, where Ĥ 0 mol is the Hamiltonian in the absence of any Maxwell fields; E i (t), F ij (t), and B i (t) are the Cartesian components of the electric field, its symmetrized derivative, and the magnetic field evaluated at the position of the molecule; and μ̂ i , q̂ ij , ν̂ i P , and ν̂ i D are the indicated components of the operators for the electric dipole and quadrupole moments, and the paramagnetic and diamagnetic dipole moments of the molecule. The diamagnetic dipole moment is neglected here since it is not involved in optical activity, but the matrix elements of the other moments can be written in terms of the "position" and "momentum" matrix elements (71,75) involving the {W v0 (x)} and the {W c0 (x)}, now identified with the filled and empty orbitals of a molecule fixed at the origin.
The result is that (72,73,74,76,77) are indeed the appropriate molecular response tensors divided by Ω uc . The molecular calculation also clarifies certain symmetries in the expressions in the molecular crystal limit. For example, in this case one can immediately identify a relation of the form χ ijl Q (ω) . = γ lij (−ω), and the equivalent relation α li P (ω) . = α il M (−ω). The first of these holds because the response calculations leading to both quantities involve the different-time commutator of the electric dipole and the electric quadrupole moment operators, while the second holds because the response calculations leading to both involve the different-time commutator of the electric dipole and the paramagnetic dipole moment operators. These symmetries no longer hold in the full crystal calculation, where the site multipole moments are not the result of the expectation values of site operators, but rather are evaluated in terms of the single-particle Green function.
VII. CONCLUSION
We have presented a theory for the effective conductivity tensor σ il (q, ω) of a class of insulating crystalline solids at zero temperature. In retaining terms that are at most linear in q, we extract tensors σ il (ω) and σ ilj (ω) that describe phenomena involving the rotation of the plane of polarization of light as it propagates through a medium; the former contributes through its antisymmetric part only when time-reversal symmetry is broken in the unperturbed system, and can be considered as describing an "internal" Faraday effect, while the latter contributes more generally and describes optical activity. Although we have restricted ourselves to the independent particle approximation, and have neglected spin effects and the motion of ion cores, within these approximations our expression for σ ilj (ω) describes both optical rotatory dispersion and circular dichroism.
Our approach is based on introducing microscopic polarization and magnetization fields, from which the charge and current density expectation values can be found. The corresponding macroscopic fields of elementary electrodynamics can then be defined as the spatial averages of those microscopic fields; the "free" macroscopic charge and current densities that can generally arise vanish in the linear response of the class of insulators we consider. With the use of a set of Wannier functions, we associate portions of these microscopic fields with each lattice site, thereby introducing site polarization and magnetization fields from which site multipole moments are extracted.
We then construct macroscopic multipole moments from these site multipole moments, and from their linear response to the electromagnetic field we identify the tensors describing the response of the electric dipole moment per unit volume P i (x, t) to the electric field E l (x, t), to the symmetrized derivative of the electric field F jl (x, t), and to the magnetic field B l (x, t); the response of the electric quadrupole moment per unit volume Q ij (x, t) to E l (x, t); and the response of the magnetic dipole moment per unit volume M i (x, t) to E l (x, t). From these tensors we construct σ il (ω) and σ ilj (ω). Due to its focus on identifying site quantities, our strategy allows for an easy comparison with results in the "molecular crystal limit", where the electrons associated with a molecule at one lattice site cannot move to another site. But it certainly does not require that idealization.
In the limit of uniform and static electric and magnetic fields we recover the magnetoelectric effect described earlier by others [23,24] and us [19], the latter calculation using the approach implemented here. There the first-order modifications of both P i due to B l and of M l due to E i are described by the orbital magnetoelectric polarizability (OMP) tensor α il , which is nonvanishing only if both time-reversal and spatial inversion symmetry are broken in the unperturbed system. At finite frequencies the previously identified contributions to the OMP tensor, α il G and δ il α CS , remain as contributions to the response of P i (x, ω) to B l (x, ω) and of M l (x, ω) to E i (x, ω). However, additional explicitly frequency-dependent contributions, α il P (ω) and α il M (ω), to the total response tensors emerge, with the result that the tensors describing the linear response of P i (x, ω) to B l (x, ω) and of M l (x, ω) to E i (x, ω) generally differ. These additional contributions are classified as "cross-gap" contributions, like α il G , but are gauge dependent. Thus, the net cross-gap contributions would be given by α il G + α il P (ω) and α il G + α il M (ω), respectively. The terms α il P (ω) and α il M (ω) that arise and differentiate the responses result from contributions that we identify as "dynamical." Furthermore, as the finite frequency generalization of the "compositional" contributions to the response tensors is trivial, and because α il P (ω) and α il M (ω) are manifestly "cross-gap" contributions, the Chern-Simons contribution to the finite frequency response tensors remains unchanged; that is, the finite frequency generalization of the Chern-Simons contribution to these response tensors is identical to that in the limit of uniform and static Maxwell fields.
In the molecular crystal limit the Chern-Simons contribution, which does not contribute to the bulk macroscopic charge and current densities that are linearly induced by the Maxwell fields, becomes gauge invariant [36]. As well, in that limit the response tensor characterizing the finite frequency linear response of P i (x, ω) to B l (x, ω), and the response tensor characterizing that of M l (x, ω) to E i (x, ω), are related; this relation does not hold in general beyond the molecular crystal limit. Similarly, the relations between the tensors describing the linear response of P i (x, ω) to F jl (x, ω) and of Q ij (x, ω) to E l (x, ω) that hold in the molecular crystal limit do not hold generally. This is because in the molecular crystal limit the site multipole moments can be associated with expectation values of associated operators familiar from molecular physics, whereas for a crystal in which charges can move more freely a Green function approach was used to define them.
Generally, these macroscopic multipole moments were introduced with the use of Wannier functions associated with each lattice site, and thus P i (x, t), Q ij (x, t), and M i (x, t) are "gauge dependent" in the sense that they depend on the choice of these Wannier functions. A natural choice, of course, would be a set of ELWFs. However, we showed that whatever choice is made the expressions for the linear response of the macroscopic charge and current densities to the Maxwell fields are gauge invariant. Thus our expression for σ ilj (ω), as well as that for σ il (ω), can be evaluated without any calculation -or any thought -of the Wannier functions that underpin our approach. At frequencies below the band gap we found agreement with earlier work restricted to that frequency range [7].
Yet, while they do not appear explicitly in the final expression for σ il (ω) or σ ilj (ω), it is the site multipole moments that can be introduced with the aid of these Wannier functions, and the microscopic polarization and magnetization fields on which the approach is based, that make possible the natural connection and comparison with the molecular crystal limit. This should lead to an understanding of which features of the optical activity of any material of interest can be associated with physics beyond that limit. As well, the use of such site quantities in our approach offers the possibility of considering the optical response of a finite system, where simply identifying the bulk tensors σ il (ω) and σ ilj (ω) is not sufficient, and will lead to the description of other linear and nonlinear optical response features that depend on the variation of the electromagnetic field throughout a finite crystal. We plan to turn to these generalizations in future publications.
VIII. ACKNOWLEDGMENTS

Appendix A: Expansion of the relators

The "relators" allow us to obtain the microscopic polarization and magnetization fields from the charge and current density expectation values. They also arise in the way the Maxwell fields enter the dynamical equations governing such quantities. Thus, an expansion of the relators is relevant for the identification of the electric and magnetic moments and in expanding the equations of motion of quantities associated with the electron field in terms of powers of the Maxwell fields and their derivatives. As a consequence, the expansion parameter u appearing in the relator expansions can be used to identify which perturbative modifications to the various site multipole moments due to a particular Maxwell field, or derivative of that field, appear at the same "order." We now show this.
The expansions of Ω j y (x, t) and Ω 0 y (x, t), (33,34), derived previously can be more easily derived using a formal expansion of the "relators", s i (w; x, y) and α ij (w; x, y), about u = 0. We begin with their definitions (A1), under the choice of a straight-line path; see Ref. [18]. Recall that we have previously defined these quantities in (A2,A3), and found for nearly uniform Maxwell fields the approximate forms (A4,A5) of Ω a y (x, ω), which we have implemented in this work. We now find these approximate expressions in a different way. We write the first of (A1) as the expansion (A6). Used in (A3), and following a partial integration with respect to w, this immediately gives (A5). Notice that the first term of (A5) originates from the O(u 0 ) term of the s i -relator expansion (A6), and the second term from the O(u) term. Similarly, we expand the second of (A1), for α ij (w; x, y), to the same order, O(u), and find (A7). Using this in (A2) we immediately arrive at (A4). Then the explicitly retained term of (A4) originates from an O(u) term of the α ij -relator expansion. Thus, (A4) and the second term of (A5) appear at the same order of the expansion parameter u. This is consistent with Faraday's law, as the spatial derivatives of E(x, ω) are related to frequency factors times B(x, ω). Thus such terms appear at the same "order" with respect to the Maxwell fields and their derivatives kept in an expansion. It appears that the expansion parameter u captures this information. Now the site electric and magnetic multipole moments can be found from the "site" polarization and magnetization fields, respectively, using these same relator expansions. The site electric dipole moment (10) originates from the O(u 0 ) term of the s i -relator expansion (A6), while the site electric quadrupole moment (11) originates from the O(u) term of the s i -relator expansion. The site magnetic dipole moment (12) originates from the O(u) term of the α ij -relator expansion (A7). However, it is not only via the expansion of those relators that relate the microscopic charge and current densities to the microscopic polarization and magnetization fields that the expansion parameter u enters. When finding the linear response of the single-particle density matrix, (38,39,41), the quantities (A4,A5) are used. Thus, modifications to the site electric dipole moment appear at least at order O(u 0 ), but not all modifications to this quantity appear at this order; for instance, (43) appears at O(u 0 ), while (47) and (49) appear at O(u). Furthermore, modifications to the site electric quadrupole and the site magnetic dipole moments appear at least at order O(u); for example, (53) and (55)-(57) appear at O(u). In this work, we only consider the contributions to the linear response of a site quantity appearing at most at O(u); we neglect higher-order modifications, such as those leading to the magnetic susceptibility, which would appear at O(u 2 ), those related to spatial derivatives of the magnetic field, or those related to the linear response of higher-order multipole moments.
Appendix B: Microscopic and macroscopic fields
In this Appendix we describe approaches to constructing macroscopic fields from the microscopic fields appearing in (6).
One approach (I) often adopted to treat infinite crystals is to start from the Fourier transform to wavevector space of all the quantities of interest; for the current density, for example, we would have j(q, t), etc. If j(q, t) is nonzero only for a single, small q, then the variation in the current density is sinusoidal. If one wants to consider less trivial variations, then one needs to treat a range of qs. To do this one can introduce a macroscopic field associated with each microscopic field -e.g., J (q, t) with j(q, t) -by setting J (q, t) = j(q, t) for some restricted region of reciprocal space near the origin -say, |q| ≤ ∆ −1 , where ∆ is a length satisfying ∆ ≫ a, with a on the order of a lattice constant -and J (q, t) = 0 for other q in reciprocal space. Also choosing ∆ ≪ λ, where λ characterizes a typical range of variation of the fields that one is trying to capture, the macroscopic fields can describe excitations in the crystal characterized by typical length scales much larger than the lattice constant.
Another approach (II) with the same goal starts in position space rather than reciprocal space, and introduces a smooth weighting function w(x) to extract a macroscopic field L(x) from the associated microscopic field l(x) [37], identifying the macroscopic field at a point x with the average of the associated microscopic field in the neighborhood of x. We take w(x) to be a smooth positive function, peaking at x = 0 and spherically symmetric about that point, dropping off continuously as |x| → ∞ with a characteristic length scale ∆ satisfying the conditions given above, and with an integral over all space equal to unity. A typical example would be a Gaussian function, w(x) = w II (x), where w II (x) = e −|x|²/∆² /(∆ 3 π 3/2 ).
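As a quick consistency check (not part of the original derivation), one can verify the unit normalization of this Gaussian choice and compute its wavevector-space form; the Fourier convention w(q) = ∫ d³x e^(−iq·x) w(x) is assumed here:

```latex
% Normalization and Fourier transform of the Gaussian weighting function,
% assuming the convention w(q) = \int d^3x\, e^{-i q\cdot x}\, w(x).
\int d^{3}x\; w_{II}(x)
  = \frac{1}{\Delta^{3}\pi^{3/2}}
    \left( \int_{-\infty}^{\infty} e^{-x^{2}/\Delta^{2}}\, dx \right)^{3}
  = \frac{(\Delta\sqrt{\pi})^{3}}{\Delta^{3}\pi^{3/2}} = 1,
\qquad
w_{II}(q) = e^{-\Delta^{2}|q|^{2}/4}.
```

Under this convention w II (q) falls off on the scale |q| ~ ∆ −1 , consistent with the wavevector cutoff used for w I (q) in approach (I).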
The two approaches can be formally related, of course, because from (B1) we have L(q) = w(q)l(q), and by formally setting w I (q) = θ(∆ −1 − |q|), where θ(q) is the Heaviside step function, we recover the first approach. It has the advantage that constructing a macroscopic field from its associated microscopic field is a projection in wavevector space; thus, choosing w(x) = w I (x), if the operation (B1) is repeated there is no additional change. On the other hand, the w I (x) that results extends far beyond |x| = ∆, and as well takes on negative values. Indeed, any w(q) which, like w I (q), has a vanishing second derivative in some direction q̂ about q = 0 will lead to a w(x) which must take on negative values, since it has a vanishing second moment. Thus the second approach, where one begins with a smooth and well-behaved averaging function in position space (and note that w II (q) = exp(−∆ 2 |q| 2 /4)), seems a better choice if one wants to understand the averaging physically, and with it one can envision a treatment of finite media and interfaces. In this paper we only concern ourselves with nominally infinite crystals, so the two approaches lead to essentially the same results; we indicate the small differences below, but most of what we say would apply to either.
We adopt the semiclassical approximation, where the electromagnetic field is treated classically, and in the Maxwell equations for the microscopic electric and magnetic fields, e(x, t) and b(x, t), we take ρ(x, t) and j(x, t) (6) as the microscopic charge-current density of the crystal. Using the averaging procedure (B1) to identify the macroscopic fields from their microscopic counterparts, we immediately find that those macroscopic fields satisfy the macroscopic Maxwell equations in the form ∇ · D(x, t) = 4πρ F (x, t), c∇ × H(x, t) = 4πJ F (x, t) + ∂D(x, t)/∂t, where D(x, t) = E(x, t) + 4πP (x, t) and H(x, t) = B(x, t) − 4πM (x, t), etc. As mentioned in the text, we refer to the macroscopic fields E(x, t) and B(x, t) as the "Maxwell fields." Using the expansions (9) in the expression (7) for the total p(x, t) and m(x, t), and then spatially averaging using (B4), we find (4), where the macroscopic electric dipole moment per unit volume, electric quadrupole moment per unit volume, and magnetic dipole moment per unit volume are given by (B5), respectively. Since ρ F (x, t) and J F (x, t) vanish in the problem at hand, upon implementing (4) in the macroscopic Maxwell equations (B3), P i (x, t), Q ij (x, t), and M i (x, t) serve as the only source terms at this level of analysis. The remaining task is to establish the constitutive relations (5). We can do this by inserting (14) in (B5). The terms that will appear involve Σ R w(x − R)L(R, ω), where here L(R, ω) is one of the macroscopic fields E l (R, ω), B l (R, ω), or F jl (R, ω). To investigate this kind of sum we note that Σ R w(x − R)e iq·R = (1/Ω uc ) w(q) e iq·x + (1/Ω uc ) Σ G≠0 w(q + G) e i(q+G)·x (B7), where the G are reciprocal lattice vectors, and we have used the standard lattice-sum (Poisson summation) identity. If we choose w(q) = w I (q), then the q that will contribute to L(R, ω) are such that the second term in the final equality of (B7) rigorously vanishes; from the first term in that expression we see that, since w I (q) acts as a projector, we will have Σ R w(x − R)L(R, ω) = (1/Ω uc ) L(x, ω), exactly. If we choose w(q) = w II (q), then there will be corrections to this, since w II (q) does not act as a projector. However, the corrections will be small given that the inequalities (B2) are assumed to be satisfied, and we can redefine our local field corrections to include them. We then find that (B5,14,B8) lead to (5), the form of our constitutive relations.
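The step leading to (B7) uses a standard Poisson-summation identity over the lattice. A short sketch of that step, written with an assumed Fourier convention and not taken verbatim from the original, is:

```latex
% Standard lattice-sum (Poisson summation) identity, with lattice sites R,
% reciprocal lattice vectors G, and unit-cell volume \Omega_{uc}:
\sum_{R} e^{i q\cdot R} = \frac{(2\pi)^{3}}{\Omega_{uc}} \sum_{G} \delta^{(3)}(q - G).
% Writing w(x-R) through its Fourier transform and applying this identity gives
\sum_{R} w(x-R)\, e^{i q\cdot R}
  = \frac{1}{\Omega_{uc}}\, w(q)\, e^{i q\cdot x}
  + \frac{1}{\Omega_{uc}} \sum_{G \neq 0} w(q+G)\, e^{i(q+G)\cdot x},
```

which is the form of (B7) quoted above; the G ≠ 0 terms are the ones that either vanish exactly (for w I ) or are absorbed into the local field corrections (for w II ).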
Appendix C: List of response tensors
We here list all the response tensors that were found in this work. The derivation of these response tensors, including the statement of the assumptions that have been made, and the identification of the quantities they relate, is presented in Sections II-IV. The response tensor χ il E (ω) is gauge invariant. For all the other tensors, the portion indicated with a breve accent is gauge invariant.
Joint Modality Features in Frequency Domain for Stress Detection
Rich feature extraction is essential to train a good machine learning (ML) framework. These features are generally extracted separately from each modality. We hypothesize that richer features can be learned when modalities are jointly explored, and that such joint modality features can perform better than those extracted from individual modalities. We study two physiological signal modalities, electrodermal activity (EDA) and electrocardiogram (ECG), to investigate this hypothesis and to achieve three objectives for subject-independent stress detection. First, and for the first time in the literature, we apply our proposed framework in the frequency domain. The frequency-domain decomposition of the signal effectively separates it into periodic and aperiodic components, and by focusing on each band of the signal spectrum we can correlate their behaviour. Second, we show that our framework outperforms late fusion, early fusion and other notable works in the field. Finally, we validate our approach on four benchmark datasets to show its generalization ability.
I. INTRODUCTION
Stress is defined as the nervous system's reaction to a danger or an instruction [1]. Stress has been taken seriously in recent years as it affects many people. This tendency could be due to changing work styles, cultural demands, varying lifestyles, etc. [2]. In some circumstances, stress can be beneficial up to a point in high-pressure situations such as work, exams, and the like. Stress is no longer beneficial once it crosses a certain level; beyond that level it harms an individual's emotional state, health, quality of life, and productivity [3]. If stressful events occur frequently and a person becomes highly concerned, the body remains stressed most of the time, leading to severe health issues [4]. As a result, the importance of stress detection systems has grown compared to the situation that existed a decade ago. Protecting individuals from the growing effects of stress is critical, mainly because stress is unavoidable. As a result, timely stress diagnosis and control are crucial for improving an individual's mental health and overall well-being [5].
Automatic stress detection mainly uses three modalities: psychological, physiological, and behavioral [6]. The Hypothalamic Pituitary Adrenal (HPA) axis and the Autonomic Nervous System (ANS) are the two key components that respond to stress by attempting to restore physiological balance [7]. This response is reflected in changes in heart activity, sweat gland activity, skin temperature, etc. As effective stress markers, physiological signals can thus provide information on ANS activity. In addition, among the physiological signals, ECG and EDA provide a realistic view of an individual's stress level [8]. The frequency-domain analysis of physiological signals has received less attention than time-domain analysis, even though the transitory properties of a signal can be understood from its frequency-domain interpretation [9] and frequency-domain analysis is particularly useful when looking for periodic behavior in a signal [10]. This paper describes a joint modality feature learning method for stress detection in the frequency domain. The proposed method uses a deep neural network to learn a joint-modal mapping. The ECG and EDA frequency bands are identified, and features are extracted from the PSD. These features are used for joint modality feature learning.
This study differs from earlier works in the following aspects. Most physiological signal-based stress detection studies used time-domain and time-frequency-domain features; frequency-domain analysis, despite its importance, receives less attention. As a result, we incorporate joint modality feature learning in the frequency domain for stress detection in this study. We use autoencoders to learn a joint representation from the features of the different modalities. The frequency bands of ECG and EDA that contribute the most to stress detection are also evaluated.
The main contributions of this work are summarized as follows: 1) Frequency domain analysis is performed on ECG and EDA signals. The frequency bands of the ECG and EDA have been identified. We analyze the performance of each frequency band of ECG and EDA separately to identify the band that performs best for stress detection.
2) The ECG and EDA signals are divided into fixed-duration segments of varying lengths. The frequency analysis framework developed above is investigated for each segment duration separately to study the influence of segment duration on overall performance. 3) We propose an auto-encoder-based framework to learn a joint modality feature representation from ECG and EDA signals. Results obtained by using all the bands (whole signal) and by using the individually best-performing bands (band-level) are analyzed. 4) We build an optimal CRNN-SE model consisting of convolutional and Long Short Term Memory (LSTM) layers and Squeeze-Excitation modules for use as a classifier in all of our experiments. 5) Finally, we evaluate the developed framework on four benchmark datasets to study its generalization capability.
The remaining paper is structured as follows. Section II reviews recent works on joint modality feature learning and frequency domain analysis of physiological signals. The research gap has been identified, and the objectives of the current proposal have been established. Section III contains details of our proposed frameworks. Section IV presents the results obtained and analysis performed on four benchmark datasets. Section IV-E compares the performance of the proposed method with other appropriate methods from the recent literature, and Section V concludes the paper.
II. RELATED WORKS
This section reviews prior works in joint modality feature learning and frequency domain analysis on physiological signals.
A. JOINT MODALITY FOR NON PHYSIOLOGICAL SIGNAL APPLICATIONS
Zhen et al. [11] proposed a CNN-based cross-modal learning framework for text-image matching. The modalities used were images and text. Two sub-networks (an image CNN and a text CNN) with weight-sharing constraints at the fully connected layer were developed to learn the cross-modal correlation between the modalities. A discrimination loss was used for cross-modal learning. A linear classifier was trained using the features obtained from the cross-modal representation space. For text-image matching, a modality-invariant framework was proposed by Liu et al. [12]. The proposed framework fine-tunes a pre-trained CNN image network and an RNN text network with an auxiliary adversarial loss to improve the distribution consistency of the two groups of embeddings (image and text). The distributions of images and text were more similar after adversarial learning, which improved retrieval accuracy. A cross-modal representation for audio-video retrieval was proposed by Surís et al. [13]. Visual and audio embeddings were obtained by projecting them into a common feature space with deep neural networks. The joint features were used for a retrieval task in which the query could come from either of the two modalities. Cross-entropy was employed as the classification loss function and was optimized together with a cosine similarity loss to provide the best results.
A modality-invariant (MI) representation for multimodal sentiment analysis was proposed by Hazarika et al. [14]. Text, image and video were used for multiclass classification using a Transformer. Joint modality features were obtained by training an encoder with text, image and video. In MI learning, all modalities for the task are mapped to a common subspace for distributional alignment. Although multimodal signals come from a variety of sources, they are all used to achieve the same goal. Individual modalities are projected into a common subspace and aligned by minimising the Central Moment Discrepancy (CMD) loss. The learned representations are used as joint modality feature representations.
B. AUTOENCODER BASED WORKS IN FREQUENCY DOMAIN
A Frequential Stacked Sparse Auto-Encoder (FSSAE) was proposed by Feng et al. [15] for detecting Sleep Apnea (SA) using ECG features. The RR intervals are the input to the FSSAE module, which transforms the time-domain RR intervals into a frequency-domain representation. Mean Square Error (MSE) was used to calculate the reconstruction loss. Features retrieved from the hidden layer were used to train a separate Time-dependent, cost-sensitive (TDCS) model. An auto-encoder-based system for detecting epilepsy using electroencephalogram (EEG) data was proposed by Sharathappriyaa et al. [16]. The Harmonic Wavelet Packet Transform (HWPT) and the Katz approach (yielding the Fractal Dimension (FD)) are applied to the source EEG signal. The FD and HWPT outcomes were supplied to an auto-encoder to map a high-dimensional vector into a lower-dimensional embedding. This lower-dimensional embedded feature vector was found to yield higher classification rates. The cost function used to train the autoencoder was MSE. An approach for classifying emotional states in the valence-arousal plane using a stacked autoencoder was proposed by Bagherzadeh et al. [17]. Physiological signals from the DEAP database, including electromyogram (EMG), electroencephalogram (EEG), and other peripheral signals, were used. Time and spectral features were extracted from these source signals and used to train multiple stacked autoencoders, with MSE as the reconstruction loss. The majority voting method was used to make the final classification decision. A Supervised Denoising Autoencoder (SDAE) to learn a low-dimensional representation of ECG dynamics to detect false arrhythmia alarms was proposed by Lehman et al. [18]. MSE and binary cross-entropy were used to calculate the reconstruction and classification losses.
However, the use of autoencoders for joint modal feature learning in physiological signals, particularly in the frequency domain, has received relatively little attention. Hence, we propose a framework for subject-independent stress detection using features extracted from the ECG and EDA signals.
III. METHODOLOGY
An outline of the proposed framework is given in Figure 1. Frequency bands of EDA and ECG signals are identified. Features are extracted from the PSD. These features are used to learn a joint modality feature representation using an Autoencoder. The obtained joint modality features are used to train a CRNN-SE model to differentiate between stressed and unstressed subjects. Each of the modules is explained in detail below.
A. DATASET DETAILS
The following four benchmark datasets are used in this study.
1) ASCERTAIN
The electroencephalogram (EEG), EDA, and ECG physiological signals, and facial activity recordings of 58 subjects are included in this dataset. The average age of the participants was 30. The physiological signals produced by subjects watching emotional videos were recorded. 36 video clips from [19] were used; the length of the videos was 58 to 128 seconds. The sampling rates of EDA and ECG were 128 Hz and 256 Hz, respectively. The subjects were asked to give valence and arousal ratings on a 7-point scale, expressing their emotional perception after seeing each video clip. The valence rating ranges from -3 to 3, and the arousal rating ranges from 0 to 6 [20]. Based on the valence and arousal ratings [21], we assigned the label 1 to stressed samples and 0 to unstressed samples. In the 2-D valence-arousal plane, as shown in Figure 2, high arousal with low valence (HALV) is considered stressed. As a result, those with high arousal and low valence were labeled as stressed, and others as unstressed. The mean value of the ratings is used to determine whether arousal or valence is high or low.
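As a minimal illustration of this mean-threshold labeling rule (not code from the original work; the array names and the use of NumPy are our assumptions), the HALV rule can be written as:

```python
import numpy as np

def halv_labels(arousal: np.ndarray, valence: np.ndarray) -> np.ndarray:
    """Label a sample as stressed (1) when arousal is above its mean
    and valence is below its mean (HALV), otherwise unstressed (0)."""
    high_arousal = arousal > arousal.mean()
    low_valence = valence < valence.mean()
    return (high_arousal & low_valence).astype(int)

# Example usage with made-up ratings (ASCERTAIN uses a 7-point scale).
arousal = np.array([5.0, 1.0, 6.0, 2.0])
valence = np.array([-2.0, 3.0, -1.0, 2.0])
print(halv_labels(arousal, valence))  # -> [1 0 1 0]
```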
2) CLAS
The photoplethysmography (PPG), EDA, and ECG physiological data were collected from 62 subjects with a mean age of 20. The sampling rate was 256 Hz. Most of the subjects were students. The subjects were involved in five different activities, including three problem-solving tasks and two perceptive tasks. Image and video-clip stimuli were used to provoke the emotional reactions of the subjects in the perceptive tasks. 16 emotionally classified 30-second clips from the DEAP database [23] were used as video-clip stimuli. We had 59 subjects after eliminating subjects who did not have complete information. Stress labels were assigned using predefined stimulus tags, which are provided in the dataset [24].
3) MAUS
The dataset captured simple physiological signals under various mental load situations. The N-back task was used to create a mental workload in 22 subjects, 20 of whom were male and 2 of whom were female. GSR, Wrist-PPG, Fingertip-PPG, and ECG signals were recorded for 35 minutes, with a sampling rate of 100 Hz for the Wrist-PPG and 256 Hz for the others. There was a five-minute rest period at the start of the trial. The N-back task of six trials was performed after the rest interval. In the N-back task, the subject had to remember the digit presented N positions earlier in a succession of rapidly shown one-digit numbers, and was instructed to respond by pressing the space bar on the keyboard when a stimulus was identical to the N-th number before the current stimulus. The intricacy of the tasks served as ground truth. As a higher level of N generates a greater level of mental effort, 2- and 3-back tasks were labeled as "high" mental workload states, and 0-back tasks were labeled as "low" [25].
4) WAUC
The study involved 48 participants who performed the NASA Revised Multi-Attribute Task Battery II under three different activity level conditions. The speed of a stationary bike or a treadmill was changed to manipulate physical activity. Six neural and physiological modalities were recorded during the activity: ECG, EDA, breathing rate, electroencephalography, skin temperature, blood volume pulse, and 3-axis accelerometer. After each experimental section, subjects were asked to complete the NASA Task Load Index questionnaire. The NASA Task Load Index questionnaire rating was converted to a binary value and subjects were labeled (low mental workload or high mental workload) using the average rating as a threshold, which is given in the dataset [26]. We had 45 subjects after removing those subjects who lacked the necessary information.
For subject independence, we fixed the training and testing subject IDs. The first 42, 43, 18 and 36 subject samples of the ASCERTAIN, CLAS, MAUS and WAUC datasets, respectively, are used for training. The remaining 16 subject samples each of ASCERTAIN and CLAS, 4 subject samples of MAUS and 9 subject samples of WAUC are used for testing. We addressed the class imbalance problem by applying the Synthetic Minority Oversampling Technique (SMOTE) [27] to the training data.
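A minimal sketch of this oversampling step, applied only to the training split as described above (the feature matrix shape and label arrays are placeholders; imbalanced-learn's SMOTE is assumed as the implementation):

```python
import numpy as np
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

# X_train: (n_samples, n_features) frequency-domain features of the training subjects
# y_train: binary stress labels (1 = stressed, 0 = unstressed)
X_train = np.random.rand(200, 91)                      # placeholder: 51 ECG + 40 EDA features
y_train = np.r_[np.ones(40), np.zeros(160)].astype(int)

smote = SMOTE(random_state=42)
X_bal, y_bal = smote.fit_resample(X_train, y_train)
print(X_bal.shape, np.bincount(y_bal))                 # classes are now balanced
# The test split is left untouched so evaluation remains subject independent.
```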
B. FREQUENCY BAND AND FEATURE EXTRACTION
Following prior works in the literature by Kwon et al., the ECG and EDA signals are decomposed into frequency bands. The power spectral density (using Welch's approach) of the Heart Rate Variability (HRV) extracted from each band of the ECG is computed; the frequency-domain module of the Python library pyHRV [32] is used for this purpose. From these PSDs we extracted a total of 51 frequency-domain measures, including peak frequencies, relative powers, logarithmic powers, absolute powers, and so on. A complete list of the 51 measures is available in [32]. The power spectral density (using Welch's approach) of each band of the EDA is also computed. From these PSDs we extracted a total of 40 (5 bands with 8 features each) statistical features, namely the mean, median, minimum, maximum, variance, standard deviation, kurtosis and skewness.
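A minimal sketch of the EDA side of this step, using SciPy's Welch estimator as a stand-in for the exact implementation. The band edges below are illustrative placeholders, except for the b band (0.15-0.25 Hz), which is the band reported later in the paper; the eight statistics match those listed above:

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosis, skew

# Illustrative EDA band edges in Hz; only the b band (0.15-0.25 Hz) is taken
# from the paper, the remaining edges are placeholder assumptions.
EDA_BANDS = {"vlf": (0.0, 0.045), "a": (0.045, 0.15), "b": (0.15, 0.25),
             "c": (0.25, 0.40), "d": (0.40, 0.50)}

def eda_band_features(segment: np.ndarray, fs: float) -> np.ndarray:
    """Return 8 statistics of the Welch PSD for each EDA band (5 x 8 = 40 values)."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 256), nfft=4096)
    feats = []
    for lo, hi in EDA_BANDS.values():
        p = psd[(freqs >= lo) & (freqs < hi)]
        p = p if p.size else np.zeros(1)          # guard against empty bands
        feats += [p.mean(), np.median(p), p.min(), p.max(),
                  p.var(), p.std(), kurtosis(p), skew(p)]
    return np.asarray(feats)

# Example: a 5-second EDA segment sampled at 128 Hz (placeholder signal).
print(eda_band_features(np.random.rand(5 * 128), fs=128.0).shape)  # (40,)
```

The ECG measures are obtained analogously from the HRV spectrum; the paper itself relies on pyHRV's frequency-domain measures for that part.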
Figure 3. Overview of the proposed auto-encoder used to learn the joint modality representation: ECG and EDA features are concatenated (U ECG_EDA ) and given as input to the encoder, and the embedding-layer output h 2 (.) is taken as the joint modality feature representation and used to train a CRNN-SE model.
C. AUTO-ENCODER BASED JOINT MODALITY LEARNING MODULE
The ECG and EDA modalities are simultaneously mapped to a single subspace, termed the joint modality subspace, and we use an auto-encoder to learn this subspace. Different from other works in the literature, we investigate this joint (also referred to as shared, cross, or common subspace in the literature) modality subspace in the frequency domain for the first time.
We propose an auto-encoder-based framework to achieve this objective. The architecture of the proposed Joint Modality Auto-encoder (JMAE) is shown in Figure 3. Firstly, we concatenate the ECG features U ECG and the EDA features U EDA into one single input vector, U ECG_EDA . The first, second and third fully connected layers are h 1 (.), h 2 (.) and h 3 (.) respectively. The last layer is an output layer, Y joint , of the same length as the input vector U ECG_EDA . The first, second and third hidden layers constitute the parameter vector θ(.) to be learnt by minimizing a cost (reconstruction) function. The cost function is selected such that the distributions of ECG and EDA are aligned in the joint subspace.
Algorithm 1 (training the JMAE; only a fragment of the pseudo-code survives): the encoder maps U ECG_EDA to the hidden representations h n (.), the decoder returns a reconstruction Y joint from h n (.), the reconstruction loss (MSE, cosine similarity, or KL divergence) is computed between U ECG_EDA and Y joint , and the parameters θ are updated to minimize this loss until training terminates.

Based on different works in the frequency domain, we investigated the following three cost functions: MSE, cosine similarity, and Kullback-Leibler (KL) divergence. The cost function represents the difference between the input U ECG_EDA and the reconstructed Y joint . The proposed model was trained with the Adam optimizer using the default learning rate and 64 as the mini-batch size. The pseudo-code for training the JMAE is summarized in Algorithm 1.
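A minimal PyTorch sketch of a JMAE of this shape, using the layer widths reported later for the whole-signal case (95-100-95) and the MSE reconstruction loss; the class and variable names are ours, and details such as the activation functions are assumptions not specified in the text:

```python
import torch
import torch.nn as nn

class JMAE(nn.Module):
    """Joint Modality Auto-Encoder: reconstructs the concatenated ECG+EDA feature vector."""
    def __init__(self, in_dim: int = 91, h1: int = 95, h2: int = 100, h3: int = 95):
        super().__init__()
        self.h1 = nn.Sequential(nn.Linear(in_dim, h1), nn.ReLU())
        self.h2 = nn.Sequential(nn.Linear(h1, h2), nn.ReLU())   # joint modality embedding
        self.h3 = nn.Sequential(nn.Linear(h2, h3), nn.ReLU())
        self.out = nn.Linear(h3, in_dim)                        # Y_joint, same length as input

    def forward(self, u):
        z = self.h2(self.h1(u))          # h2(.) is used as the joint feature
        return self.out(self.h3(z)), z

u_ecg_eda = torch.randn(64, 91)          # a mini-batch of concatenated 51 ECG + 40 EDA features
model = JMAE()
optim = torch.optim.Adam(model.parameters())   # default learning rate, as in the paper
loss_fn = nn.MSELoss()                          # one of the three losses compared

optim.zero_grad()
y_joint, joint_features = model(u_ecg_eda)
loss = loss_fn(y_joint, u_ecg_eda)
loss.backward()
optim.step()
print(joint_features.shape)              # (64, 100) -> later fed to the CRNN-SE classifier
```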
1) MSE
The MSE is calculated as shown in Eqn. 1, i.e., MSE = (1/N) Σ i (a i − p i )², where a i is the target value (U ECG_EDA ) and p i is the predicted value (Y joint ). The cost function value ranges from 0 to ∞. The reconstructed Y joint is more similar to the input U ECG_EDA if the MSE value is near 0; otherwise they are dissimilar.
2) Cosine similarity
The cosine similarity cost is computed between a i , the target value (U ECG_EDA ), and p i , the predicted value (Y joint ), as shown in Eqn. 2: Cos(a, p) = 1 − (a · p)/(∥a∥∥p∥). The cost function has a value between 0 and 1. A value near 0 implies that Y joint is similar to U ECG_EDA , while a value near 1 indicates that they are dissimilar.
3) KL divergence

The KL divergence is a distance measure that computes the similarity between a i , the target value (U ECG_EDA ), and p i , the predicted value (Y joint ), as shown in Eqn. 3, KL(a ∥ p) = Σ i a i log(a i /p i ) (computed on suitably normalized vectors). The cost function value ranges from 0 to ∞. The two distributions (U ECG_EDA and Y joint ) are similar if the value is close to 0; otherwise the distributions are dissimilar.
The results obtained with each loss are compared in Table 3 of the results section.
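For concreteness, a small NumPy sketch of the three reconstruction costs as written above (our own helper functions; the softmax normalization applied before the KL term is an assumption, since the paper does not state how the vectors are normalized):

```python
import numpy as np

def mse(a, p):
    return np.mean((a - p) ** 2)

def cosine_cost(a, p):
    return 1.0 - np.dot(a, p) / (np.linalg.norm(a) * np.linalg.norm(p))

def kl_cost(a, p, eps=1e-12):
    # Assumption: vectors are turned into distributions with a softmax before KL.
    a_d = np.exp(a) / np.exp(a).sum()
    p_d = np.exp(p) / np.exp(p).sum()
    return np.sum(a_d * np.log((a_d + eps) / (p_d + eps)))

a = np.random.rand(91)                 # U_ECG_EDA (51 ECG + 40 EDA features)
p = a + 0.05 * np.random.randn(91)     # Y_joint, a close reconstruction
print(mse(a, p), cosine_cost(a, p), kl_cost(a, p))  # all close to 0
```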
D. CLASSIFIER
We selected a CRNN-SE model having 2 convolutional layers, one LSTM layer and two SE modules as our classifier in all our experiments; details of this choice are given in Appendix A. For the frequency domain analysis, each signal is broken into segments of 5 s duration each; details of this choice are given in Appendix B.
All the models are trained with the Adam optimizer using the default learning rate and 64 as the mini-batch size. Binary Cross-Entropy (BCE), given by Eqn. 4 as BCE = −(1/N) Σ i [y act i log(y pred i ) + (1 − y act i ) log(1 − y pred i )], is taken as the loss function. Here, y act i is the actual label and y pred i is the predicted label for each of the N samples.
An early-stopping strategy controls the training duration: training stops if the loss does not decrease for 30 epochs in succession. The accuracy and F1-score are used to evaluate the performance of the various models.
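A minimal PyTorch sketch of a classifier in this spirit: two 1-D convolutional blocks, each followed by batch normalization, max pooling and a squeeze-and-excitation module, then an LSTM layer, two fully connected layers and a sigmoid output trained with BCE. The channel widths, kernel sizes and SE reduction ratio are our assumptions, not values given in the paper:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: re-weights channels by a learned gating vector."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):                      # x: (batch, channels, length)
        w = self.fc(x.mean(dim=2))             # squeeze over the length dimension
        return x * w.unsqueeze(-1)             # excite

class CRNNSE(nn.Module):
    def __init__(self, in_features: int = 100):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.BatchNorm1d(16),
            nn.MaxPool1d(2), SEBlock(16),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.BatchNorm1d(32),
            nn.MaxPool1d(2), SEBlock(32))
        self.lstm = nn.LSTM(input_size=32, hidden_size=32, batch_first=True)
        self.fc = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    def forward(self, x):                      # x: (batch, feature_len) joint features
        x = self.conv(x.unsqueeze(1))          # -> (batch, 32, feature_len // 4)
        out, _ = self.lstm(x.transpose(1, 2))  # treat the reduced length as a sequence
        return self.fc(out[:, -1, :]).squeeze(-1)

model = CRNNSE()
y_hat = model(torch.randn(8, 100))             # 8 joint feature vectors of length 100
loss = nn.BCELoss()(y_hat, torch.randint(0, 2, (8,)).float())
print(y_hat.shape, float(loss))
```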
IV. RESULTS AND DISCUSSION
This sections presents the results obtained by applying our proposed framework on the four benchmark datasets.
A. SELECTION OF FREQUENCY BAND
To study the performance of each ECG and EDA frequency band, the features obtained from each band are used to train a separate CRNN-SE classifier. Table 1 shows the frequency band analysis for the ECG, and Table 2 shows the frequency band analysis for the EDA. The results show that the HF band (0.15-0.40 Hz) of the ECG and the b band (0.15-0.25 Hz) of the EDA achieved the highest accuracy and F1 score for all four datasets. This means that in the 0.15-0.25 Hz range, both ECG and EDA have features with higher discriminative capacity for identifying stress. For a hardware implementation, a low-pass filter can be used to extract these richer features from the frequency transform of the ECG and EDA signals. It will be interesting to pursue whether this band range is valid for other physiological signals, e.g. EEG.
B. BAND LEVEL VS WHOLE SIGNAL
We investigated the proposed framework on the whole signal (using all the ECG and EDA frequency bands) as well as at band level (using the bands with the highest performance as obtained in Section IV-A). For the whole signal's performance, 51 frequency-domain features from the ECG signal and 40 frequency-domain features from the EDA signal are used to train a JMAE whole module. The first hidden layer h 1 (.) is a fully connected layer of length 95, the second hidden layer h 2 (.) is a fully connected layer of length 100, and the third hidden layer h 3 (.) is another fully connected layer of length 95. The joint modality features obtained from the JMAE whole are used to report the results in the third and fourth columns of Table 3. We validated the proposed model by performing K-fold cross-validation on the highest-performing model (Loss-MSE). The K value is chosen to be 5. The joint features obtained from the JMAE model are split into 5 folds. Classification accuracy and F1-score (mean ± standard deviation) are given in Table 4. On all the datasets the cross-validation results outperformed the previous results (Table 3) by 2.7-4% (absolute). We infer that this increase is due to subject dependence during cross-validation.
C. T-SNE VISUALISATION
To further investigate the joint feature learning achieved by our model, we plot t-distributed Stochastic Neighbour Embedding (t-SNE) projections before and after joint feature learning. The t-SNE approach projects multi-dimensional points onto two-dimensional or three-dimensional spaces such that points that are close in the original space remain close in the projection; similarly, in the t-SNE projections, distant points remain far apart. With t-SNE, we project the joint features into a 2-D space. The feature visualization of U ECG_EDA (regular features) and h 2 (.) (joint features learnt), using the MSE cost function on the whole signal of all the benchmark datasets, is shown in Figure 4. The red dots represent ECG features, and the green dots represent EDA features. Joint feature learning aims to bring the features of different modalities into a shared space. In the visualization, we observe close overlapping among the modalities (ECG and EDA) after joint feature learning. This indicates that the modality gap between the distributions of the modalities is significantly reduced.
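A minimal sketch of such a projection with scikit-learn (the arrays stand in for the ECG and EDA features; the zero-padding used to give both modalities a common width, and the plotting details, are our assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

ecg = np.random.rand(300, 51)        # placeholder ECG frequency-domain features
eda = np.random.rand(300, 40)        # placeholder EDA frequency-domain features

def plot_tsne(a, b, title):
    # Pad to a common width so both modalities can be embedded together (assumption).
    width = max(a.shape[1], b.shape[1])
    pad = lambda m: np.pad(m, ((0, 0), (0, width - m.shape[1])))
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(
        np.vstack([pad(a), pad(b)]))
    plt.scatter(*emb[:len(a)].T, c="red", s=5, label="ECG")
    plt.scatter(*emb[len(a):].T, c="green", s=5, label="EDA")
    plt.title(title); plt.legend(); plt.show()

plot_tsne(ecg, eda, "Before joint feature learning (U_ECG_EDA components)")
```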
D. GENERALIZATION CAPABILITIES
The proposed model is tested on four benchmark datasets to assess the framework's generalization capability and to ensure that it is not overfitting to a specific dataset collected in a given environment. To this end, we ensured that the four benchmark datasets we used were gathered in various scenarios: CLAS and ASCERTAIN were collected while subjects watched emotional video clips, whereas MAUS and WAUC were collected while subjects performed workload-inducing tasks. We found that the performance on all four datasets followed the same patterns.
E. COMPARISON WITH OTHER WORKS
This section contrasts the results obtained by our proposed JMAE framework with recent works on the ASCERTAIN, CLAS, MAUS and WAUC datasets. An overview of the metrics (accuracy, F1-score and AUC) is given in Table 5. It is noted that the majority of the stress detection studies used time- and frequency-domain features [24], [25], [33]-[36], [39]-[41] and [26]. Our proposed JMAE-based features are learned from the frequency-domain measures; hence, they perform better than the time- and frequency-domain feature-based frameworks on the ASCERTAIN, MAUS and WAUC datasets by 15-17%.
Most works [24], [33]-[35], [38], [39] and [25] reported performance in the subject-dependent scenario. The performance of these works is usually higher owing to prior knowledge of the testing subject during the training process itself. However, our proposed framework also outperforms most of these works. A few works utilized traditional handcrafted features in conjunction with Machine Learning (ML) models such as Support Vector Machines, Naive Bayes and Random Forests [24], [25], [33]-[35], [38], [39] and [26]. A few more trained end-to-end deep learning models such as CNNs [36], [37]. The proposed framework instead trains a DL model (the JMAE) and uses its intermediate-layer outcome to train a further DL model (the classifier). Our framework outperformed existing ML and DL works on the ASCERTAIN, MAUS and WAUC datasets by 8-17%. Using ECG biomarkers, several stress-related abnormalities can be detected (Coronary Artery Disease (CAD) [42], myocardial ischemia [43], stroke, atrial fibrillation, cardiac arrhythmias [44]). Using the EDA/GSR biomarker, some other abnormalities caused by stress can also be detected (brain and heart attack [45], epilepsy [46], blood pressure [47], depression [48]). Traditional approaches built separate classifiers using these modalities (biomarkers) and then took the final decision (late fusion techniques) of stressed or not. Our approach concatenates the two biomarkers (feature fusion up to this point) and then learns a joint representation (our contribution) to yield the best feature representation of the biomarkers for stress detection. This is in line with the clinical practice of making a diagnosis by simultaneously monitoring multiple physiological signals; clinical decisions are rarely made by monitoring only one physiological signal. Our results are better than those of other works in the literature that are based on single modalities or on early and late fusion of modalities. It is interesting to note that the band-level features based framework (JMAE band ) performs better than all the other works on the ASCERTAIN and MAUS datasets by 11-15%. This reinforces the richer nature of our proposed JMAE-based features.
The results indicate that learning joint features of different modalities in a shared space can enhance the performance of the models. The proposed model is able to perform better than other existing works on the ASCERTAIN, MAUS, and WAUC datasets. On the CLAS dataset, the accuracy of [39] is higher due to ensemble voting on a subject-dependent model.
V. CONCLUSION
We proposed a joint modality features-based framework in the frequency domain for stress detection. We validated our framework using two physiological signal modalities, EDA and ECG. The frequency bands of ECG and EDA are identified, and features extracted from the PSD are used to train CRNN models with SE modules. The proposed framework was tested on four benchmark datasets. The High Frequency (HF) band (0.15-0.40 Hz) of ECG and the b frequency band (0.15-0.25 Hz) of EDA were found to have the most impact on the overall performance. Our promising findings encourage us to continue further study into joint modality learning with more than two modalities.
APPENDIX A SELECTION OF CRNN ARCHITECTURE
The following sections provide information on selecting the number and location of the different convolutional layers, LSTM layers, and SE modules. Each convolutional layer is always followed by batch normalization and max pooling layers. All the models have two fully connected layers (FC1 and FC2) and a sigmoid output layer. The performance of the individual modalities on two datasets for the different models is presented in Table 7. Model 7 yielded the highest performance on the ASCERTAIN EDA and CLAS EDA features, and Model 5 yielded the highest performance on the ASCERTAIN ECG and CLAS ECG features. We selected the Model 7 architecture for the rest of the experiments using ECG features and EDA features.
APPENDIX B SELECTION OF SEGMENT DURATION
Model 7 is used as the framework for the EDA features and ECG features. Each signal (ECG/EDA) is divided into segments of fixed duration. Four cases are considered: 2 sec, 5 sec, 10 sec and 15 sec duration each. In each case, Model 7 is trained, and the performances obtained are reported in Table 8. The highest performance is observed for a segment duration of 5 sec. The overall performance of the 5 sec segmented signals (Table 8, rows 2 and 6, for both ECG and EDA features) is higher than the baseline performance of the full signal (Table 7, rows 1 and 5 for ECG features, rows 3 and 6 for EDA features).

V RAMANA MURTHY ORUGANTI received his Masters and PhD degrees in Electrical Engineering from IIT Delhi, India. He is an Assistant Professor in the Department of Electrical and Electronics Engineering, Amrita Vishwa Vidyapeetham, India. His past affiliations include NUS (Singapore), NTU (Singapore), University of Canberra (Australia) and Carnegie Mellon University (US). His research focuses on medical image processing and affective computing. He is a Member of IEEE and the ACM.
"Computer Science"
] |
Novel Dual Beam Cascaded Schemes for 346 GHz Harmonic-Enhanced TWTs
The applications of terahertz (THz) devices in communication, imaging, and plasma diagnostics are limited by the lack of high-power, miniature, and low-cost THz sources. To develop a high-power THz source, the high-harmonic traveling wave tube (HHTWT) is introduced, based on the principle that an electron beam modulated by electromagnetic (EM) waves can generate high-harmonic signals. Analysis and simulation results show that amplifying a high-harmonic signal is a promising method to realize a high-power THz source. For further improvement of power and bandwidth, two novel dual-beam schemes for high-power 346 GHz TWTs are proposed. The first TWT comprises two cascaded slow wave structures (SWSs), of which one SWS generates a THz signal from an injected millimeter-wave signal and the other amplifies the THz signal of interest. The simulation results show that the output power exceeds 400 mW from 340 GHz to 348 GHz when the input power is 200 mW from 85 GHz to 87 GHz. A peak power of 1100 mW is predicted at 346 GHz. The second TWT is implemented by connecting a pre-amplification section to the input port of the HHTWT. A power of 600 mW is achieved from 338 GHz to 350 GHz, and the 3-dB bandwidth is 16.5 GHz. In brief, the two novel schemes have advantages in peak power and bandwidth, respectively. These two dual-beam integrated schemes, each constituted by two TWTs, also feature rugged structure, reliable performance, and low cost, and can be considered promising high-power THz sources.
Introduction
Terahertz (THz) devices are widely used in high data-rate communication systems, plasma diagnostics, hazardous material detection, medical imaging, etc. However, the development of THz technology faces some challenges, such as the lack of THz sources combining high power, miniaturization, and low cost. Semiconductor THz sources and vacuum electronic THz sources are two common kinds. Although semiconductor THz sources can produce output power up to the milliwatt level, they usually suffer from a high upfront cost. As a compromise, vacuum electronic devices (VEDs) may deliver higher output power at lower cost [1][2][3][4][5][6][7]. In 2004, a compact THz free electron laser device was introduced by Stuart R A, with 1 kW pulsed power from 0.3 THz to 3 THz [8]. In 2010, Khanh Nguyen developed a high-gain multi-beam traveling wave tube (TWT) whose operating frequency range varies from 200 GHz to 250 GHz [9]. In 2011, Istok proposed a series of backward-wave oscillators (BWOs) in which a grating line is utilized as the slow wave structure (SWS); these devices can deliver several milliwatts of output power at 1.4 THz [10]. In 2012, Paoloni et al. presented a cascade backward-wave amplifier operating at 1 THz [11]. From 2012 to 2016, Tucek et al. discussed a series of vacuum electronic amplifiers, including a 100 mW 670 GHz prototype device driven by a novel solid-state source [12], a compact, microfabricated vacuum electronic amplifier with 39.4 mW of maximum output power from 0.835 THz to 0.875 THz [13], and a 29 mW 1.03 THz vacuum electronic amplifier with 20 dB of saturated gain and 5 GHz of instantaneous bandwidth [14]. In 2018, a folded waveguide (FWG) TWT was fabricated by Armstrong et al., with over 300 mW of power in 231.5-235 GHz [15]. In 2020, Pan Pan et al. proposed a G-band continuous wave TWT; a saturated power of 20 W is generated from 217 GHz to 219.4 GHz [16].
For the development of nuclear fusion energy, an understanding of a critical plasma phenomenon, the transport of the plasma, is necessary. Collective Thomson scattering at THz frequencies has been proven to be an adequate technique to map anomalous electron density fluctuations in the plasma without perturbing its behavior. The optically pumped far-infrared (FIR) laser is a practical radiation source for this technique and is applied in the National Spherical Torus Experiment (NSTX) [17]. However, its high price, large volume, and relatively low power restrict the mapping region. To extend the dimension of the plasma diagnostic, BWOs operating at 346 GHz are promising devices due to their low cost, large output power, easy assembly, and compact volume. C. Paoloni et al. designed a 0.4 W double corrugated waveguide (DCW) BWO and a 1 W double-staggered grating (DSG) BWO [18]. J. Feng et al. designed a grooved single grating (GSG) structure for a 346 GHz BWO; the GSG circuit was fabricated by UV lithographie-galvanoformung-abformung (LIGA) microelectromechanical technology [19].
It should be noted that the BWO places strict demands on its power supply, because a very stable supply is required to keep the frequency stable. The phase noise of the generated THz signal, which is caused by power supply voltage ripple, should be reduced to ensure a low bit-error rate in THz communication [20]. To relax the requirement for a high-performance power supply and to obtain a pure frequency spectrum of the output power, we developed a THz source named the high-harmonic TWT (HHTWT) [6,21]. Based on the HHTWT, one new HHTWT and two novel types of THz sources operating at 346 GHz are proposed in this paper to improve the output power. The HHTWT generates the THz-band electromagnetic (EM) wave by amplifying an E-band signal. Compared with conventional THz signal sources, the use of a high-power E-band signal source in the HHTWT allows a considerable signal to be injected into the SWS. This prevents the input signal from being interfered with, or even drowned out, by the noise caused by the electron gun and by imperfections in SWS fabrication.
One of the novel THz sources, named the cascaded enhanced HHTWT (CE-HHTWT), outputs THz power by amplifying the signal generated by the HHTWT. The other, named the pre-amplified HHTWT (PA-HHTWT), amplifies the THz-band EM wave by inputting a relatively high-power fundamental wave into the HHTWT. This paper is organized as follows: the HHTWT and the two novel types of THz sources are introduced in Sections 2, 3, and 4; each section contains the operating principle, the SWS design methodology, and the simulation results. The analysis and design work is accomplished with CST Particle Studio. Section 5 is a summary of this paper.
Operating Principle of HHTWT
An HHTWT operating at 346 GHz is introduced first; it utilizes an FWG as the SWS. The FWG is a promising type of SWS with wide bandwidth and high power capability. Compared with other conventional SWSs such as the helix, the FWG is easy to fabricate and assemble. Within it, the wave transmission path is folded back upon itself multiple times, with a beam tunnel passing through its center. Energy exchange is achieved by synchronizing the longitudinal energy flow speed with the electron beam velocity.
The schematic of the HHTWT SWS is shown in Figure 1. Ports 1 and 4 are the input and output ports, respectively. Attenuators are applied to match the two severed ports (Ports 2 and 3). The electron beam is sent from the electron gun into the tunnel. The SWS consists of three sections: the modulation section, the drift tube, and the radiation section. The modulation section operates at E band. The radiation section operates at THz band, corresponding to the fourth harmonic of the input signal. The drift tube between the modulation section and the radiation section is cut off for the EM wave, so only the electron beam can pass. In the modulation section, the velocity of the electron beam is modulated by the E-band input signal. When the electron beam traverses the drift tube, the velocity modulation is transformed into longitudinal density modulation. If the cutoff frequency of the radiation section is 300 GHz, the fundamental wave and lower-order harmonic waves are cut off in the radiation section, and only high-frequency EM waves are excited and amplified by the high-order harmonic beam current. Hence, when inputting a signal at 86.5 GHz, we can get a 346 GHz output signal.
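To make the harmonic-selection logic above concrete, the short sketch below checks which harmonics of the 86.5 GHz drive lie above a 300 GHz cutoff; the frequencies are taken from the text, and the snippet is only an illustration of the selection rule, not part of the original design flow.

```python
# Illustrative sketch (not from the paper): which harmonics of the 86.5 GHz
# drive signal can propagate in a radiation section with a 300 GHz cutoff.
f_drive_ghz = 86.5      # E-band input frequency (from the text)
f_cutoff_ghz = 300.0    # cutoff frequency of the radiation section (from the text)

for n in range(1, 6):
    f_n = n * f_drive_ghz
    status = "propagates (can be amplified)" if f_n > f_cutoff_ghz else "cut off"
    print(f"harmonic {n}: {f_n:.1f} GHz -> {status}")
# Only the 4th harmonic (346 GHz) and above exceed the cutoff, so the
# radiation section selectively amplifies the 346 GHz component.
```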
Compared with a conventional FWG TWT, the HHTWT is characterized by adopting two high-frequency structures operating in two different bands. The power of a conventional THz-band FWG TWT is restricted by the high loss, which leads to low gain and an overly long SWS; the low input power typically available at THz frequencies exacerbates these deleterious effects. The introduction of the modulation section modulates the electron beam efficiently, and it also mitigates the demand for a high-power THz signal source by using a high-power millimeter-wave source instead.
SWS Design
The two sections of the HHTWT, i.e., the modulation section and the radiation section, have different operating bands: the modulation section works at E band, and the radiation section operates at THz band. The dispersion curve and interaction impedance are obtained by 3D EM simulation. For the modulation section, the phase shift at the center frequency is set to 1.41π, which ensures enough bandwidth, and the interaction impedance is then made as high as possible. For the radiation section, the operating voltage should be the same as that in the modulation section, and the dispersion curve should also be flat to broaden the bandwidth; hence, the phase shift at the center frequency is set to 1.49π. Figure 2 shows the dispersion curves, the associated electron beam lines, and the interaction impedances of the modulation and radiation sections. The structural parameters of the HHTWT are shown in Table 1, in which a, b, h, p, and r are the width of the broad edge of the waveguide, the length of the narrow side of the waveguide, the height of the straight rectangular waveguide, the axial period length, and the radius of the electron beam channel, respectively. Between the modulation section and the radiation section, the drift tube is adopted to decrease the risk of self-oscillation and to realize the transition from velocity modulation to density modulation. The determination of the lengths of the two sections and the drift tube is discussed later.
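As a rough illustration of the synchronism condition behind these phase-shift choices, the sketch below estimates the beam velocity for the 18.4 kV operating voltage quoted in the PIC setup and the axial pitch that would give a 1.41π phase shift per period at 86.5 GHz. The resulting pitch is only a back-of-the-envelope estimate and is not the value listed in Table 1.

```python
import math

# Back-of-the-envelope synchronism check (assumptions: relativistic beam
# velocity from the 18.4 kV quoted in the PIC setup; pitch derived from the
# 1.41*pi phase shift per period at 86.5 GHz). Not the actual Table 1 values.
c = 2.998e8                # speed of light, m/s
me_c2_eV = 511.0e3         # electron rest energy, eV
V_beam = 18.4e3            # beam voltage, V

gamma = 1.0 + V_beam / me_c2_eV
beta = math.sqrt(1.0 - 1.0 / gamma**2)
v_beam = beta * c          # ~0.26 c

f = 86.5e9                     # modulation-section center frequency, Hz
phase_shift = 1.41 * math.pi   # phase shift per period (from the text)

# Synchronism: slow-wave phase velocity ~ beam velocity,
# v_p = omega * p / phase_shift  =>  p = phase_shift * v_beam / omega
p_sync = phase_shift * v_beam / (2 * math.pi * f)
print(f"beam velocity ~ {v_beam:.3e} m/s ({beta:.3f} c)")
print(f"synchronous pitch estimate ~ {p_sync*1e3:.3f} mm")
```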
The following simulation results are obtained by CST Particle Studio. The operating voltage and current are set to 18.4 kV and 10 mA in the Particle-in-Cell (PIC) simulation, respectively. As the frequency increases, the transmission loss caused by the skin effect becomes significantly higher. The surface roughness, which is determined by the manufacturing technology, also has a significant effect on the transmission loss [21,22]. The precision computer numerical control (CNC) lathe and electric discharge machining (EDM) are the main, mature processing methods and are widely applied in the fabrication of THz-band and E-band SWSs [22,23]. By referencing the experimental cases in [22,23] and summarizing our engineering experience in [21], the effective conductivities of the sections operating at E band and THz band are set to 3.5 × 10^7 S/m and 1 × 10^7 S/m, respectively.
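For orientation, the DC beam power implied by these PIC settings, and the electronic efficiency implied by the roughly 300 mW output reported later, can be estimated as in the sketch below; this is an illustrative calculation, not a figure from the paper.

```python
# Illustrative estimate (assumptions: single 18.4 kV, 10 mA beam; ~300 mW
# saturated output at 346 GHz as reported in the simulation results).
V_beam = 18.4e3        # beam voltage, V
I_beam = 10e-3         # beam current, A
P_out = 0.3            # output power, W (approximate, from Figure 9)

P_beam = V_beam * I_beam           # DC beam power, W
efficiency = P_out / P_beam * 100  # electronic efficiency, %
print(f"DC beam power: {P_beam:.0f} W")
print(f"electronic efficiency: {efficiency:.2f} %")   # ~0.16 %
```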
Due to the low input power, the length of the modulation section should be long enough to ensure deep electron beam modulation. Hence, we construct 100 periods of the modulation SWS and determine the length of the modulation section from the bunching state of the electron beam. The frequency of the input signal is 86.5 GHz, and the input power is 200 mW.
The phase space of the beam electrons in the modulation section is shown in Figure 3. It shows that the amplification process is in the linear state before 38 mm. Hence, the length of the modulation section is set to 38 mm.
In order to determine the length of the drift tube, a series of current monitors is placed every other millimeter along the drift tube. Figure 4 plots the Fourier transform of the electron current signal at the end of the drift tube. The fundamental component and the next three higher harmonic components are shown in it. However, only the fourth harmonic wave can be excited and amplified, because the fundamental wave and the other lower-order harmonic waves are cut off in the radiation section. In general, the length of the drift tube is set at the position where the fourth harmonic current is the largest. Figure 5 depicts the relative amplitude of the fourth harmonic current versus the length of the drift tube. The length of the drift tube is determined as 12 mm. Figure 6 shows the phase space graph of the electron beam at the end of the drift tube. It depicts that the electron beam is modulated by the EM wave and stays in the linear state; the energy is still stored in the electron beam.
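The harmonic-content analysis behind Figures 4 and 5 amounts to Fourier-transforming the monitored beam current and reading off the amplitude at 4 × 86.5 GHz. The sketch below shows this procedure on a synthetic bunched-current waveform; the waveform and its harmonic amplitudes are invented for illustration and do not reproduce the simulated monitor data.

```python
import numpy as np

# Synthetic bunched beam current (illustrative only): DC component plus a few
# harmonics of the 86.5 GHz drive, mimicking what a current monitor records.
f0 = 86.5e9                      # fundamental frequency, Hz
fs = 40 * f0                     # sampling rate, Hz
t = np.arange(0, 200 / f0, 1 / fs)
i_beam = (10e-3                              # DC current, A
          + 2e-3 * np.cos(2 * np.pi * f0 * t)
          + 1e-3 * np.cos(2 * np.pi * 2 * f0 * t)
          + 0.5e-3 * np.cos(2 * np.pi * 4 * f0 * t))

# FFT and extraction of the fourth-harmonic amplitude (as in Figures 4 and 5).
spectrum = np.fft.rfft(i_beam) / len(i_beam) * 2
freqs = np.fft.rfftfreq(len(i_beam), 1 / fs)
idx4 = np.argmin(np.abs(freqs - 4 * f0))
print(f"4th harmonic (~{freqs[idx4]/1e9:.1f} GHz) amplitude: "
      f"{abs(spectrum[idx4])*1e3:.2f} mA")
```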
The length of the radiation section is chosen where the output power reaches the saturation point. Figure 7 shows the output power versus the length of the HHTWT at 346 GHz. Hence, the length of the radiation section is 34 mm. The phase space graph of the electron beam at the end of the radiation section is shown in Figure 8. Figure 8 shows that the fourth harmonic is excited. Compared with Figure 6, the velocity of the central cluster decreases significantly in Figure 8, which means the electron beam transfers a lot of energy to the EM wave during the beam-wave interaction in the radiation section.
For convenience of reference, the parameters of HHTWT are summarized in Table 2.
The Simulation Results
The output power of the HHTWT is shown in Figure 9. It can deliver about 300 mW at 346 GHz. As illustrated in Figure 10, the spectrum of the output power is concentrated at 346 GHz. Figure 11 plots the output power from 340 to 348 GHz. When a 200-mW signal is input from 85 to 87 GHz, over 100 mW of power can be delivered over the 8-GHz bandwidth.
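For readers who prefer logarithmic units, the sketch below converts the quoted HHTWT power levels to dBm and expresses the 86.5 GHz to 346 GHz conversion gain in dB; the input and output values are taken from the text, and the conversion itself is standard.

```python
import math

def dbm(p_watt):
    """Convert power in watts to dBm."""
    return 10 * math.log10(p_watt / 1e-3)

P_in = 0.2     # 200 mW input at 86.5 GHz (from the text)
P_out = 0.3    # ~300 mW output at 346 GHz (from Figure 9)

print(f"input:  {dbm(P_in):.1f} dBm at 86.5 GHz")     # ~23.0 dBm
print(f"output: {dbm(P_out):.1f} dBm at 346 GHz")     # ~24.8 dBm
print(f"conversion gain: {10*math.log10(P_out/P_in):.2f} dB")  # ~1.76 dB
```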
Operating Principle of CE-HHTWT
In order to amplify the power of the HHTWT, a novel structure named the cascaded enhanced HHTWT is proposed on the basis of the HHTWT. It is featured by the introduction of another electron beam, which forms the dual-beam THz-band CE-HHTWT.
The CE-HHTWT is demonstrated in Figure 12. Connecting an amplification section to the output port of the HHTWT forms the dual-beam THz-band CE-HHTWT. The THz signal enters the amplification section from the radiation section. Beam-wave interaction occurs between a new electron beam and the input signal of the amplification section. Compared with the HHTWT, the efficiency and the output power of the dual-beam CE-HHTWT are improved.
SWS Design Methodology of CE-HHTWT
As shown in Figure 12, the amplification section operates at THz band. To simplify design and fabrication, the amplification section adopts the same structural parameters as the radiation-section SWS. The parameters of the two electron beams in the CE-HHTWT are the same as those in the HHTWT. The length of the amplification section is chosen where the output power reaches the saturation point. Figure 13 shows the output power versus the length of the amplification section at 346 GHz. Hence, the length of the amplification section is 34 mm. The parameters of the CE-HHTWT are summarized in Table 3.
The Simulation Results
The output power of the CE-HHTWT is shown in Figure 14. The input signal is 200 mW at 86.5 GHz. Furthermore, a signal of 1100 mW is obtained at 346 GHz.
The Fourier transform of the output signal is shown in Figure 15. Figure 16 shows the gain property and the bandwidth property of the CE-HHTWT. At 346 GHz, the output power is saturated when the input power is 200 mW. Figure 16b shows that more than 400 mW of output power can be achieved from 340 to 348 GHz. The maximum power is 1100 mW at 346 GHz. The 3-dB bandwidth of the CE-HHTWT is 5 GHz.
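As a side note, the 3-dB bandwidth quoted here can be read directly off a power-versus-frequency sweep. The sketch below shows the generic procedure on made-up sample points; the frequencies and powers are illustrative and are not the simulated CE-HHTWT curve.

```python
import numpy as np

# Generic 3-dB bandwidth extraction from a power sweep (illustrative data only;
# these points are NOT the simulated CE-HHTWT results).
freq_ghz = np.array([340, 342, 344, 346, 348])
power_mw = np.array([420, 650, 900, 1100, 500])

p_peak = power_mw.max()
threshold = p_peak / 2            # -3 dB corresponds to half the peak power
above = freq_ghz[power_mw >= threshold]
bw_3db = above.max() - above.min()   # coarse estimate; a finer sweep or
                                     # interpolation would be used in practice
print(f"peak: {p_peak} mW at {freq_ghz[power_mw.argmax()]} GHz")
print(f"approximate 3-dB bandwidth: {bw_3db} GHz")
```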
Operating Principle of PA-HHTWT
The PA-HHTWT is presented in Figure 17. A pre-amplification section is connected to the input port of an HHTWT, which constitutes the other new TWT. The fundamental signal is first amplified by an electron beam. Then, as an input signal, the amplified fundamental signal interacts with a new electron beam to generate the THz signal in the HHTWT.
Compared with the HHTWT, the electron beam in the PA-HHTWT is modulated more deeply. Hence, the output power can be improved.
SWS Design Methodology of PA-HHTWT
For the PA-HHTWT, the pre-amplification section plays the role of amplifying the fundamental wave to the several-watt level. The pre-amplification section uses the same structural parameters as the modulation section. The length of the pre-amplification section is set to 51 mm. The power versus axial length curve and the output signal are shown in Figures 18 and 19, respectively. The input power is set to 200 mW, and a 4.5-W output power is then obtained.
The length of the modulation section is chosen to make the EM energy modulate the beam as deeply as possible, rather than amplify the EM wave. Figure 20 depicts the input and output signals of the modulation section at 86.5 GHz. When the length of the modulation section is 20.48 mm, the output power is barely higher than the input power, because the energy of the electron beam is almost unchanged within the modulation process. This behavior is also validated in Figure 21. The electron beam is modulated more deeply than in the HHTWT and the CE-HHTWT. Meanwhile, there is no obvious nonlinear characteristic.
The method to determine the lengths of the drift tube and the radiation section is similar to that in the HHTWT and CE-HHTWT. Eventually, the lengths of the drift tube and the radiation section are chosen as 4 mm and 10.2 mm, respectively. For convenience, the parameters of the PA-HHTWT are shown in Table 4.
The Simulation Results
The output power of the PA-HHTWT at 346 GHz is shown in Figure 22. The output power is 970 mW. The spectrum of the output signal is shown in Figure 23.
Conclusions
To develop high-power THz sources, an HHTWT operating at 346 GHz is introduced and analyzed in this paper. It can output a THz signal by amplifying the fourth harmonic component of an E-band RF signal. The simulation results demonstrate that amplifying the fourth harmonic signal is a promising way to obtain a high-power THz signal. Furthermore, two power-enhanced schemes, the CE-HHTWT and the PA-HHTWT, are proposed in this paper. The CE-HHTWT yields THz power by amplifying the signal generated by the HHTWT. The PA-HHTWT amplifies the THz-band EM wave by importing a high-power fundamental wave into the HHTWT. The operating principle and design methodology of the two schemes are also described in this paper, including the high-frequency characteristics and the length determination of each section. To validate the working principle, which relies upon Pierce's linear theory, the simulation results are predicted by CST. Driven by two 18.4 kV, 10 mA electron beams, the CE-HHTWT can generate over 400 mW of power in 340-348 GHz with an input signal of 200 mW from 85 GHz to 87 GHz. The peak power is 1100 mW at 346 GHz. The output power of the PA-HHTWT is over 600 mW in 338-350 GHz, and the 3 dB bandwidth reaches 16.5 GHz. The simulation results show that the CE-HHTWT and PA-HHTWT have advantages in peak power and wide bandwidth, respectively. Compared with conventional THz sources driven by high-power solid-state power amplifiers, these two schemes, both constituted by two TWTs, have superiorities in structural strength and cost.
In future studies, the CE-HHTWT and PA-HHTWT SWSs can be fabricated by using high-precision CNC milling. Periodic permanent magnet (PPM) focusing and a dual-beam electron gun will be adopted in the electron optical system. To validate the feasibility of the design, the insertion loss, output power, bandwidth, and phase noise characteristics will be tested. Thus, the CE-HHTWT and PA-HHTWT schemes are promising approaches to realize practical high-power THz sources for plasma diagnostics, communication, and radar. | 8,563.8 | 2021-01-16T00:00:00.000 | [
"Physics",
"Engineering"
] |
Dependence evaluation of factors influencing coal spontaneous ignition
Coal spontaneous combustion is determined by a variety of factors. Testing can describe the changes in coal spontaneous combustion with various factors; however, the dependence of spontaneous combustion on these factors is unclear. Therefore, reliability theory was used to deduce the functional relationship describing the dependence of coal spontaneous combustion on various factors and to construct a model algorithm for predicting the probability of occurrence of coal spontaneous combustion, which is adopted to evaluate and rank the degree of influence of the various factors. Effective prevention methods are proposed by strengthening the role of the most important factors. The results show that, taking the duration of coal heating to 150°C as the measurement standard of coal spontaneous combustion, the duration of the initial stage of coal heating increases linearly with the increase of the specific heat capacity, thermal conductivity, and moisture content of the coal. With the increase of the oxygen concentration, oxidation rate, and initial temperature of the coal, the duration of the initial stage of coal heating decreases exponentially. With the increase of the gas seepage velocity in the coal body, the duration of the initial heating stage changes in a parabolic manner. At the same time, the probability of spontaneous combustion decreases exponentially with the increase of the specific heat capacity and moisture content of the coal. The probability of coal spontaneous combustion increases linearly with the increase of the coal thermal conductivity, oxygen concentration, gas seepage velocity, and rate of oxidation. The dependence of the coal spontaneous combustion probability on the different factors can be ranked as follows: coal temperature > gas seepage velocity > specific heat capacity > oxidation rate > oxygen concentration > moisture content > thermal conductivity.
The coal industry is a powerful guarantor underpinning the rapid development of China's national economy, but its mining is restricted by gas, fire, water, dust, and other disasters [1,4]. Spontaneous ignition of coal depends on many factors; in recent years, scholars have conducted research into the factors affecting coal spontaneous ignition [5,6]. Zhang et al. [7] and Wang et al. [8] analyzed the influence of metamorphism on the oxidation and exothermic behavior of coal through spontaneous combustion test data pertaining to coal samples with different degrees of metamorphism. Wen et al. [9] and Xiao et al. [10] experimentally assessed the spontaneous combustion characteristics of coal under different oxygen volume fractions, and analyzed the characteristic temperature, mass loss, thermal effect, and thermal reaction kinetic parameters of coal sample oxidation. Pan et al. [11-13] carried out in-depth research on the thermal and microstructural characteristics of coal in the process of oxidative spontaneous combustion, and concluded that the thermal properties of coal spontaneous combustion have obvious segmental characteristics, which are not affected by metamorphism degree and heating rate, and that thermal dynamic balance is an important reason for the change of microstructure. In the oxidation process, the evolution of the spatial arrangement of the molecular structure of coal is reversible, and the -OH group is conducive to the oxidation of coal. In the low-temperature oxidation stage of coal, the higher the preoxidation degree, the higher the wetting heat value and the higher the spontaneous combustion risk of the coal. Xi et al. [14-16] studied the influence of air leakage on the spontaneous combustion process of coal, and described the trend in the volume of gas generated by the reaction of coal and oxygen after changes in key parameters. Li et al. [17] and Wang et al. [18] simulated the temperature rise of the oxidation of residual coal under high temperature in deep goaf to study the influence of different coal rock temperatures on the spontaneous combustion process. By combining differential scanning calorimetry with a programmed temperature rise, Zhang et al. [19] estimated the influence of different water contents on the heat absorption and heat release parameters of coal samples. Although beneficial results have been achieved in research on the factors influencing spontaneous coal combustion, these factors are numerous in type, complex in structure, and interrelated in influence, and studies on the dependence of coal spontaneous combustion on the various factors are insufficient [20,21].
In the measurement of the factors influencing coal spontaneous ignition, because the properties of the studied samples are different in each experiment, the resulting data are often significantly different, and the degree of influence of each factor cannot be described. Even if, on the basis of these data, the trend of the spontaneous combustion process of coal and the changes in the parameters affecting it can be obtained, it is difficult to rank the factors in order of importance or to describe mathematically the dependence of the process on each factor. Therefore, to evaluate the degree of influence of the various factors on coal spontaneous ignition, the most effective method is to establish a mathematical model of the dependence of coal spontaneous ignition on the various factors [24-26]. Based on the thermodynamic theory of gas flow in a coal medium, a model of coal heat transfer and gas seepage was established, and the influence of the influencing factors on coal spontaneous combustion was analyzed numerically. At the same time, a mathematical model for the probability of coal spontaneous ignition was established to predict the dependence of coal spontaneous ignition on the various factors.
Basic theory and mathematical model
Coal is a porous medium, and its internal heat transfer, moisture transfer, and gas flow affect each other. Heat and water vapor transfer in a porous structure is a coupled phenomenon. In establishing transient heat transfer models in porous structures, the following basic assumptions are made: (1) the structural properties of the coal are uniform; (2) each phase is in thermal equilibrium; (3) the volume change of the coal due to moisture and temperature is ignored; and (4) natural convection within the material is ignored. The basic equation of one-dimensional heat transfer in porous media can then be written as a transient conduction equation with a source term, where φ is the porosity of the porous medium, %; ρ the density of the porous medium, kg/m³; C the heat capacity of the porous medium, J/(mol K); λ the thermal conductivity of the porous medium, W/(m K); and q represents the heat sources.
For the coal heat and moisture transfer model with internal gas seepage, factors such as oxidation heat, convective heat transfer due to gas seepage, and evaporation-driven heat absorption need to be considered. The coal heat and moisture transfer equation and the oxygen concentration diffusion equation can then be written accordingly [27-29]. In Formula (2), the oxidation source term represents the heat of oxidation, the gradient term along x represents the heat transfer during gas seepage, and ρ_w C_w ∂W/∂t denotes the latent heat of vaporization of the water; ρ_c is the density of coal, kg/m³; C_c is the heat capacity of the coal, J/(mol K); T is the coal body temperature, °C; λ_c is the thermal conductivity of the coal, W/(m K); Q is the thermal effect of the oxidation reaction, J/m³; C is the oxygen concentration, %; η_0 denotes the rate of oxidation, m³/(kg·s); E is the activation energy, J/mol; R is the ideal gas constant, 8.314 J/(mol K); ρ_g is the gas density, kg/m³; V is the gas seepage velocity, m/s; C_g is the heat capacity of the gas, J/(mol K); C_w denotes the specific heat of evaporation of water, J/(kg K); W is the water content of the coal, %; ρ_w is the density of water, kg/m³; and D is the molecular diffusion coefficient of oxygen, m²/s.
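The display equations themselves were lost in extraction. A plausible reconstruction of the governing equations, assembled from the variable definitions above and from standard coal self-heating models, is sketched below; the exact terms and signs of the original Formula (2) may differ.

```latex
% Hedged reconstruction (not verbatim from the paper) of the governing
% equations, written from the variable definitions given in the text.
% Heat balance in the coal with oxidation heating, seepage convection and
% evaporation:
\rho_c C_c \frac{\partial T}{\partial t}
  = \lambda_c \nabla^2 T
  + Q\,\eta_0\, C\, \rho_c \exp\!\left(-\frac{E}{RT}\right)
  - \rho_g C_g V \frac{\partial T}{\partial x}
  - \rho_w C_w \frac{\partial W}{\partial t}
% Oxygen transport with diffusion, seepage and consumption by oxidation:
\varphi \frac{\partial C}{\partial t} + V \frac{\partial C}{\partial x}
  = D \nabla^2 C - \eta_0\, C\, \rho_c \exp\!\left(-\frac{E}{RT}\right)
```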
Initial boundary value conditions:
where T_0 is the initial temperature of the coal, °C; α the heat transfer coefficient of the coal, W/(m²·K); α_m the mass transfer coefficient of the coal, s⁻¹; C_0 the initial concentration of oxygen, %; W_0 the initial moisture content, %; and L the length of the coal body, m.
Factors influencing coal spontaneous ignition
There are many factors affecting coal spontaneous combustion, but the temperature change during the coal spontaneous combustion process mainly depends on the thermal conductivity (λ_c), specific heat capacity (C_c), initial water content (W_0), oxidation rate (η_0), initial temperature of the coal (T_0), gas seepage velocity (V), and initial oxygen concentration (C_0). Therefore, the influence of each main factor on coal spontaneous ignition was analyzed with the other parameters fixed, as shown in Figure 1.
The other basic parameters of the coal body are: L = 3.0 m, ρ_c = 1200 kg/m³, φ = 0.3, Q = 1.26 × 10⁷ J/m³, E = 6.28 × 10⁴ J/mol, C_0 = 21%, ρ_g = 1.29 kg/m³, C_w = 4200 J/(kg K), C_g = 1.003 J/(mol K), ρ_w = 1000 kg/m³, D = 1.5 × 10⁻¹¹ m²/s [30], and α = 0.01 W/(m² K) [31]. To simplify the calculation, the temperature at which the water evaporates was set to 100°C, and this temperature was held until the coal was completely dry. Heating of the coal to 150°C was taken as the onset of spontaneous ignition, and the degree of influence of the various factors on the time to reach it was evaluated.
Specific heat capacity of coal
The change of the specific heat capacity of coal has a significant influence on spontaneous combustion (Figure 1A). With the increase of the specific heat capacity of coal, the duration of the initial heating stage of the coal increases linearly. For example, when the specific heat capacity of coal is increased by 1.42 times (from 1200 to 1600 J/kg K), the initial phase of coal heating is prolonged by 1.39 times. The functional relationship between the coal heating time and the coal specific heat capacity can then be expressed as a linear function, where C_c is the specific heat capacity of coal, J/(kg K), and t_c is the time to reach 150°C, h.
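The linear relationships quoted in this and the following subsection are fits to the numerical results of Figure 1. As an illustration of how such a fit is obtained, the sketch below fits a straight line t_c = a·C_c + b to hypothetical (specific heat capacity, heating time) pairs; only the procedure, not the sample values, corresponds to the paper.

```python
import numpy as np

# Hypothetical (C_c, t_c) pairs mimicking Figure 1A; the real values come from
# the numerical heat-transfer model, not from this sketch.
C_c = np.array([1200.0, 1300.0, 1400.0, 1500.0, 1600.0])   # J/(kg K)
t_c = np.array([355.0, 390.0, 425.0, 460.0, 495.0])        # h, to reach 150 deg C

# Least-squares linear fit t_c = a * C_c + b, as used for the reported relations.
a, b = np.polyfit(C_c, t_c, deg=1)
print(f"t_c ~= {a:.3f} * C_c + {b:.1f}  (h)")
```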
Thermal conductivity of coal
Although the duration of the initial heating stage of coal increases linearly with the thermal conductivity of coal (Figure 1B), the change of thermal conductivity has a weak influence on this duration. For example, a twofold increase in the thermal conductivity of coal (from 0.15 to 0.30 W/m K) only increases the heating time to the critical temperature by a factor of 1.27. The functional relationship between the heating duration of the region with a fixed temperature and the thermal conductivity of coal can likewise be expressed as a linear function, where λ_c is the thermal conductivity, W/(m K). Figure 1A,B indicate that increasing the specific heat capacity and thermal conductivity of coal can delay coal spontaneous combustion, but cannot prevent it.
Initial oxygen concentration
The increase of oxygen concentration reduces the initial heating time of the coal (Figure 1C); that is, a decrease of the oxygen concentration can significantly slow down or prevent spontaneous ignition. Numerical calculation shows that the character of the coal heating process can be significantly changed by a change of the oxygen concentration in the gas stream. For the given coal reaction properties and oxygen concentrations below 15%, spontaneous ignition will not occur within the calculated 750 h. Moreover, as the concentration of oxygen in the air stream decreases, so does the optimal air speed that ensures the maximum temperature rise. When the oxygen concentration is reduced from 20% to 15% (i.e., the oxygen concentration is reduced by 25%), the time for spontaneous combustion to develop to the adopted critical temperature increases from 492.52 to 739.07 h (the initial heating time increases by 50.06%). This fully shows that this parameter has a significant influence on coal spontaneous combustion, because the decrease of oxygen content leads to a sharp decline in the heating rate. The coal heating time can be expressed as a function of the oxygen content of the air, where C_0 is the oxygen concentration, %. The analysis of the effect of oxygen concentration on the initial heating time of coal shows that inert gas injection into the goaf is conducive to preventing the development of coal spontaneous ignition [32,33]. However, experiments [34] show that the critical oxygen concentration that prevents spontaneous combustion of coal is variable and depends on many other factors. The adsorption properties of the coal and the gas penetration rate have the greatest influence on the critical oxygen concentration. For example, at an oxygen concentration of 15%, the experimental result is similar to the calculation here; if the coal adsorbability is increased by 1.5 times, the initial heating time is about 1000 to 1400 h, creating conditions for the formation of a high-temperature environment. The rate of gas penetration in the coal produces a similar effect: if the oxygen concentration is 15% and the other basic parameters of the coal pile remain unchanged, gas seepage through the coal body will still cause coal spontaneous combustion within 700 to 800 h when the seepage velocity is reduced to 5 × 10⁻⁴ m/s. Therefore, when gas injection into the goaf is implemented to prevent coal spontaneous combustion, the oxygen concentration must be kept reduced for a long time, and a large amount of inert gas is needed.
Figure 1. Relationship between the calculated results for the influencing factors and the heating time when the coal is heated to 150°C: (A) specific heat capacity, (B) thermal conductivity, (C) oxygen concentration, (D) gas flow seepage velocity, (E) moisture content, (F) oxidation rate, and (G) initial temperature of coal.
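The exponential dependence on oxygen concentration can be illustrated with the two data points quoted above (492.52 h at 20% O2 and 739.07 h at 15% O2). The sketch below solves for an assumed form t_c = A·exp(−k·C_0) passing through them; this is only an illustrative fit, not the equation reported in the paper.

```python
import math

# Two (oxygen concentration, heating time) points quoted in the text.
c1, t1 = 20.0, 492.52   # %, h
c2, t2 = 15.0, 739.07   # %, h

# Assume t_c = A * exp(-k * C0) and solve for k and A from the two points.
k = math.log(t2 / t1) / (c1 - c2)
A = t1 * math.exp(k * c1)
print(f"t_c ~= {A:.0f} * exp(-{k:.4f} * C0)  (h, C0 in %)")
print(f"check at 15 %: {A * math.exp(-k * 15):.1f} h")   # ~739 h
```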
Gas seepage velocity
The gas seepage velocity in coal is within the range of 10⁻⁵ to 10⁻² m/s, and the influence of gas velocity in the range of 10⁻⁴ to 10⁻³ m/s on coal spontaneous combustion was evaluated here (Figure 1D). Figure 1D indicates that the change in gas seepage velocity has a non-monotonic influence on coal spontaneous ignition. For each specific case, there is an optimal velocity giving the maximum rate of temperature rise, which depends on the set of characteristics of the coal pile and the environmental effects. That is, with the increase of the flow velocity in the coal, the duration of the initial heating stage changes in a parabolic manner, which can be expressed as a function of V_c, the gas seepage velocity, m/s. According to the analysis in Figure 1D, if spontaneous combustion is to be prevented by reducing or increasing the air velocity penetrating the coal, the influence of uneven aerodynamic resistance in the goaf on the change of air velocity in the coal must be considered. Therefore, a change of the air leakage through the goaf may save some parts from the optimal spontaneous ignition conditions, but may increase the possibility of internal fires in other parts. For technical and technological reasons, it is often difficult to prevent coal spontaneous combustion through different air leakage rates.
Initial water content of coal
Figure 1E shows the influence of the initial coal water content on the time for the coal to reach 150°C. In the calculations, only the slowing of heating due to heat loss from liquid-phase evaporation at a fixed temperature of 100°C is considered. As can be seen from Figure 1E, increasing the water content of the coal is an effective way to prolong the initial stage of spontaneous combustion. There is a linear relationship between the heating time to the adopted temperature and the water content of the coal, where W_0 is the water content of the coal, %.
Figure 1E shows that a fourfold increase in coal water content (from 10% to 40%) results in a 1.32-fold increase in the time required to heat the coal to the critical temperature (150°C); however, delaying spontaneous combustion by increasing the water content of coal depends on many other factors. To a large extent, the delay caused by water evaporation is affected by the chemical reactivity of the coal, the air penetration rate, and so on. For example, the evaporation time of the water increases as the chemical reactivity of the coal decreases. Therefore, it is better to use heat-resisting agents that can increase the water content of the coal [19,35].
Oxidation rate
The chemical reactivity of coal with oxygen is the number of interactions between oxygen molecules and the active centers of the coal, expressed here as the oxidation rate (η_0). Like the oxygen concentration, the chemical reactivity of coal with oxygen also has a significant effect on spontaneous ignition, as shown in Figure 1F.
In the corresponding relation, η_0 is the oxidation rate, m³/(kg s).
As can be seen from Figure 1F, with the increase of oxidation rate, the heating time at the initial stage of coal spontaneous ignition decreases exponentially, that is, with the decrease of oxidation rate, the heating time at the initial stage of coal spontaneous ignition increases exponentially.
This is because an inhibiting (anti-heating) agent changes the oxidation rate of the coal, in particular through the blocking effect of the film formed on the surface of the coal, which prevents or hinders the infiltration of oxygen molecules to the reaction centers of the coal, since the agent reacts with the coal and deactivates it.
Initial temperature of coal
Figure 1G shows the influence of the initial temperature of the coal on the duration of the initial heating stage. As can be seen from Figure 1G, as the initial temperature of the coal mass increases, the duration of the initial heating stage decreases exponentially; that is, as the initial temperature of the coal decreases, the duration of the initial heating stage increases exponentially. For example, if the initial temperature is reduced by a factor of 1.5 (decreased by 10°C, from 30°C to 20°C), the initial heating period is doubled. The critical initial temperature of the coal that prevents the development of spontaneous combustion depends on the properties of the coal. The chemical reactivity of the coal may change this parameter and the nature of the spontaneous combustion process to the greatest extent. With the increase of reactivity, the critical initial temperature that prevents spontaneous ignition of oxidized coal decreases. Reducing the initial temperature of the coal is the best way to prevent endogenous fire.
In the corresponding relation, T_0 is the initial temperature of the coal, °C.
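The strong sensitivity to the initial coal temperature is consistent with the Arrhenius factor exp(−E/RT) in the oxidation term. Using the activation energy E = 6.28 × 10⁴ J/mol listed earlier, the sketch below compares the relative oxidation-rate factors at 20°C and 30°C; the roughly twofold ratio matches the doubled heating period quoted above. This is an illustrative check, not a calculation from the paper.

```python
import math

# Relative Arrhenius oxidation-rate factor exp(-E/RT) at two initial coal
# temperatures, using the activation energy given in the parameter list.
E = 6.28e4          # activation energy, J/mol (from the text)
R = 8.314           # ideal gas constant, J/(mol K)

def arrhenius(T_celsius):
    T = T_celsius + 273.15
    return math.exp(-E / (R * T))

ratio = arrhenius(30.0) / arrhenius(20.0)
print(f"oxidation-rate factor at 30 C / at 20 C ~= {ratio:.2f}")  # ~2.3
```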
In conclusion, the change of initial temperature of the coal has the greatest influence on spontaneous ignition.The occurrence of coal spontaneous ignition can be prevented by reducing the temperature of factors inducing coal spontaneous ignition to the lowest level: the value of each parameter, however, corresponds strictly to a particular coal property and depends on a combination of its properties and environmental impacts.The combined action of several factors that independently prevent coal spontaneous ignition may enhance or weaken the overall preventive effect of the introduced interference.
Theoretical derivation
In the case of spontaneous combustion of coal in the goaf of coal seams, it is necessary to estimate the possibility of a fire in a specific area at time t. According to the analysis of the degree of influence of the above factors on coal spontaneous ignition and the goaf model (Figure 2), reliability theory is adopted to establish a prediction model for the probability of occurrence of coal spontaneous ignition.
Based on the dynamic process of coal spontaneous combustion, the functional relation for the coal spontaneous combustion occurrence rate is given as follows: f(t) is the occurrence rate of spontaneous combustion of coal, which is a complex function of the spontaneous combustion factors and the process parameters of the goaf; X* = (x_1, x_2, …, x_n) represents the spontaneous combustion factors; Y* = (y_1, y_2, …, y_m) represents the process parameters. These parameters are obtained by mathematical statistics on test results. In engineering practice, it is difficult to obtain the initial information on coal spontaneous combustion and other complex processes in field tests, making it necessary, in studying the probability of coal spontaneous combustion, to establish the mathematical equation of the process according to the physical laws of coal spontaneous combustion. Based on the goaf layout, a model algorithm for the coal spontaneous combustion probability is constructed.
When the coal temperature rises above the critical temperature T_ct (T > T_ct), combustion becomes possible. Formula (12) is then expressed in terms of F, the possibility of a fire occurring, and T, the temperature of the coal, °C. In Equation (13), dF/dT can be assumed to follow a normal distribution when the number of samples is sufficiently large, where σ is the root-mean-square deviation between the critical temperature and its expected value under fixed conditions, °C. According to the heat balance equation, the rate of change of the coal temperature can be obtained [36-39], where dT/dy is the temperature gradient along the air seepage direction, °C/m; ∇²T is the Laplacian of the temperature; and the remaining term denotes the residual heat loss.
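The display form of the normal-distribution assumption is missing from the extracted text; a standard way to write it, consistent with the definition of σ above, is the following hedged reconstruction (not necessarily the paper's exact equation).

```latex
% Hedged reconstruction of the assumed normal law for dF/dT (the original
% display equation is not reproduced in the extracted text).
\frac{\mathrm{d}F}{\mathrm{d}T}
  = \frac{1}{\sigma\sqrt{2\pi}}
    \exp\!\left[-\frac{(T - T_{ct})^{2}}{2\sigma^{2}}\right]
```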
Equation (15) shows that the spontaneous combustion process depends on the heat exchange conditions in the coal, and the heat exchange problem reduces to finding the temperature field in the goaf and the heat transferred by the coal and rock mass to the surrounding medium. According to the heat exchange conditions, the temperature field T = T(x, y, z, t) has a definite form in any given case. Here, the goaf can be considered as the temperature field of a plane of finite size. According to the Veinik analysis, the approximate equation of the temperature curve can be expressed in terms of T_0, the initial temperature, °C; T_s, the surface temperature of the coal rock mass at time t, °C; and Y, the depth of the heating layer at time t, m. By substituting Equation (17) into Equation (16), the equation of the temperature curve is obtained, where the exponent n is determined according to the particular conditions of the problem.
In addition, it is assumed that at time t = 0 a constant source of relative power q begins to act on the entire coal mass. In the absence of external heat exchange, the temperature at all points in the coal will change at the same rate. It can be found from the heat balance equation that the temperature is time dependent. In time dt, the heat released per unit volume of the coal body can be written down; due to this heat release, the temperature of the coal body increases by dT, and the heat per unit volume of the coal body can then be expressed accordingly, where C_w is the specific heat of water.
Combining Equations (19) and (20) and integrating with the initial condition (t = 0, T = T_0) in Equation (21), we obtain the temperature rise law, where q = Qη_0Cρ_c. If external heat exchange is superimposed during this process, some of the heat will be lost to the surrounding medium. According to Equation (22), the approximate equation of coal cooling can be written, where T_Y is the mass temperature at depth Y from the surface. Due to thermal conduction, the heat transferred through the surface of the coal is given by Fourier's law, where S is the surface area of the coal, m². In this case, the normal to the surface lies in the same direction as the Y-axis, so the isothermal surfaces in the plane are parallel to the surface; therefore, (24) can be rewritten. Differentiating the temperature curve of Equation (25) with respect to y, the temperature gradient is obtained; for the surface of the coal, y = Y. The heat lost by the coal through conduction in time dt and the heat lost from the coal through convection then follow. In Equations (23), (28), and (29), T_h changes linearly with time.
By substituting T_Y from Equation (25) into Equation (30) and setting T_s = T_0, we obtain Equation (31), in which Y is a function of time. The relationship between Y and t can be found from the heat balance equation, giving Equation (32), where Y_0 is half the size of the coal body in the direction of gas seepage (m).
By substituting Y from Equation (32) into Equation (31) and taking the derivative with respect to time t, the approximate equation for the temperature of the coal field can be obtained. Substituting the resulting expression for the rate of temperature change into the expression for the rate of occurrence of spontaneous ignition, Equation (13), and carrying out the mathematical transformation yields the probability of spontaneous ignition of the coal.
| Numerical analysis of the probability of coal spontaneous ignition
According to the probability function of coal spontaneous ignition, a MATLAB program can be used to estimate the probability of spontaneous ignition of coal in the goaf at any time t. At the same time, the probability of coal spontaneous ignition under the influence of different factors was estimated, and the dependence of coal spontaneous ignition on those factors was then explored. The basic parameters of the goaf are: coal density ρ_c = 1200 kg/m³, initial oxygen concentration C_0 = 21%, gas density ρ_g = 1.29 kg/m³, and specific heat of water C_w = 4200 J/(kg·K). The influences of thermal conductivity (λ_c), specific heat capacity (C_c), moisture content (W), oxidation rate (η_0), coal temperature (T), gas seepage velocity (V), and oxygen concentration (C) on the probability of coal spontaneous combustion are considered.
| Variations of coal spontaneous combustion with time
Under different conditions, time has an important effect on coal spontaneous combustion in the goaf. The variations of the probability of coal spontaneous ignition with duration under different influencing factors are shown in Figure 3: for given thermal conductivity (λ_c), specific heat capacity (C_c), water content (W), oxidation rate (η_0), coal temperature (T), gas seepage velocity (V), and oxygen concentration (C), the probability of spontaneous combustion of goaf coal increases linearly with time. However, the different factors influence the probability of coal spontaneous ignition differently. For example, with increases in the specific heat capacity of coal (C_c) and the water content of coal (W), the probability of coal spontaneous ignition decreases, whereas with increases in thermal conductivity (λ_c), oxidation rate (η_0), coal temperature (T), gas seepage velocity (V), and oxygen concentration (C), the probability of coal spontaneous combustion increases. The factors affect the probability of coal spontaneous ignition to very different degrees. To describe the importance of these factors and rank them, this research fixed the time and changed each parameter in turn to compare the resulting changes in the probability of coal spontaneous ignition. For example, at 240 and 480 h, the changes in the probability of coal spontaneous ignition caused by the different factors are shown in Table 1; at 480 h, the specific heat capacity of coal increases from 1100 to 1500 J/(kg·K). With the increase of coal specific heat capacity, the probability of coal spontaneous ignition decreases exponentially (Figure 4A). As the thermal conductivity of coal increases, the probability of coal spontaneous ignition increases linearly (Figure 4B), but the rate of increase is small, that is, the influence of this factor is small. With the increase of oxygen concentration, the probability of coal spontaneous ignition increases linearly (Figure 4C), and the rate of increase is large, that is, the influence of this factor is obvious. With the increase of gas seepage velocity, the probability of coal spontaneous ignition increases linearly (Figure 4D), and the rate of increase is large, that is, the influence of this factor is significant.
With the increase of coal moisture content, the probability of coal spontaneous ignition decreases exponentially (Figure 4E). When the moisture content is small (less than 20%), the probability of coal spontaneous ignition is significantly affected; however, when the moisture content is large, the probability of coal spontaneous ignition is less significantly affected. With the increase of oxidation rate, the probability of coal spontaneous ignition increases linearly (Figure 4F), and the influence of this factor is relatively small. With the increase of coal temperature, the variation of the probability of coal spontaneous ignition can be divided into two stages (Figure 4G): a stage of very low probability (the coal oxidation rate is very low, equivalent to the incubation period of spontaneous combustion) and a stage of sharply increasing probability; in the self-heating stage the probability of coal spontaneous ignition increases sharply with coal temperature, so this factor exerts a particularly significant influence.
To sum up, although the probability of coal spontaneous ignition is determined by a variety of factors, there are clear differences in the degree of dependence on the different factors. For example, for a given kind of coal, whose specific heat capacity, thermal conductivity, water content, and rate of oxidation change little, the ranking of the important factors affecting the probability of coal spontaneous ignition is: temperature of the coal > gas seepage velocity > oxygen concentration. Existing methods for preventing spontaneous ignition can then be improved by acting on the most important factors.
| CONCLUSION
1. Taking the duration of coal heating to 150°C as the measurement standard for coal spontaneous ignition, the duration of the initial coal heating stage was found to increase linearly with increases in coal specific heat capacity, thermal conductivity, and water content. With increases in oxygen concentration, oxidation rate, and initial coal temperature, the duration of the initial heating stage decreases exponentially. With increasing flow velocity in the coal, the duration of the initial heating stage varies parabolically.
2. Based on the heat balance equation and reliability theory, a probability prediction model of coal spontaneous ignition was deduced theoretically, and a probability model algorithm for predicting coal spontaneous ignition was proposed.
3. The probability of coal spontaneous ignition increases linearly with time for each influencing factor. Meanwhile, the probability of coal spontaneous ignition decreases exponentially with increases in coal specific heat capacity and water content. With increases in thermal conductivity, oxygen concentration, gas seepage velocity, and rate of oxidation, the probability of spontaneous ignition of the coal increases linearly. With increasing coal temperature, the probability of coal spontaneous combustion can be divided into two stages: a stage of very low probability (the coal oxidation rate is very low) and a stage of rapidly increasing probability.
4. The probability of coal spontaneous ignition depends on different factors, whose influences rank as follows: temperature of the coal > gas seepage velocity > specific heat capacity > oxidation rate > oxygen concentration > water content > thermal conductivity.
5. One of the main advantages of this research method is that it can freely vary any parameter of the spontaneous combustion process, in any combination with the other parameters, to numerically analyze the probability of coal spontaneous combustion. The research provides a basis for improving the accuracy of predictions of the risk of spontaneous combustion of coal.
FIGURE 4 Variations of the probability of spontaneous combustion of coal: (A) specific heat capacity, (B) thermal conductivity, (C) oxygen concentration, (D) gas seepage velocity, (E) moisture content, (F) oxidation rate, and (G) coal temperature.
| Variations of coal spontaneous ignition with influencing factors
To describe the changes in the probability of coal spontaneous ignition when different influencing factors change, 120, 240, 360, and 480 h were selected, and the influencing factors were taken as variables. The curves of changes in the probability of coal spontaneous ignition are shown in Figure 4. As illustrated in Figure 4, the longer the time, the greater the probability of coal spontaneous ignition under the different factors. At the same time, the probability of coal spontaneous ignition varies with the parameters.
TABLE 1 Changes in the probability of coal spontaneous combustion.
BayesSMILES: Bayesian Segmentation Modeling for Longitudinal Epidemiological Studies
The coronavirus disease of 2019 (COVID-19) is a pandemic. To characterize its disease transmissibility, we propose a Bayesian change point detection model using daily actively infectious cases. Our model builds on a Bayesian Poisson segmented regression model that can 1) capture the epidemiological dynamics under the changing conditions caused by external or internal factors; 2) provide uncertainty estimates of both the number and locations of change points; and 3) adjust for any explanatory time-varying covariates. Our model can be used to evaluate public health interventions, identify latent events associated with spreading rates, and yield better short-term forecasts.
Introduction
A newly identified coronavirus, SARS-CoV-2, is a lethal virus for humans. It has caused a worldwide pandemic of the disease known as COVID-19. As reported by the Johns Hopkins University Center for Systems Science and Engineering (JHU-CSSE), the COVID-19 pandemic has spread to 188 countries and territories, with more than 14 million confirmed cases by the end of July 2020. The extremely rapid spreading of the disease and the increasing burden on healthcare systems have become major public health problems. In response to the public health demand to "flatten the curve" (Akiyama et al., 2020), both federal and local governments in the United States (U.S.) have enforced a wide range of public health measures, such as border shutdowns, travel restrictions, and quarantine.
As a consequence, the importance of understanding the COVID-19 dynamics is steadily increasing in the contemporary world. In epidemiology, the basic reproduction number, denoted by R_0, is commonly used to evaluate the transmissibility of an infectious disease like COVID-19. R_0 is interpreted as the expected number of secondary cases produced by a typical case in a closed population. During the outbreak of an epidemic, R_0 can be affected by intervention strategies. For example, social measures that decrease the contact rate between individuals would control R_0. Isolating or treating the infected cases could lower the R_0 value as well. Another concept in epidemic theory is the effective reproduction number R_t, which describes the number of people who can be infected by an individual at any specific time t in a population. R_t is time-specific since it accounts for the varying proportions of the population that are immune to the disease over time. There are many recent studies implementing the SIR model (Kermack and McKendrick, 1927) or its modified versions to analyze COVID-19 transmissibility in terms of R_0 or R_t (see e.g. Chen et al., 2020; Alvarez et al., 2020; Kantner and Koprucki, 2020; Gostic et al., 2020; Cooper et al., 2020). Furthermore, several studies have incorporated information on social measures to understand the COVID-19 dynamics all over the world. For instance, Dehning et al. (2020) combined the SIR model with Bayesian inference to study the time-varying spreading rate of COVID-19 in Germany. Song et al. (2020) extended the SIR model by considering the quarantine protocols with a focus on understanding the time-course dynamics of COVID-19 in Hubei, China. Giordano et al. (2020) enriched the SIR model with additional compartments to account for under-diagnosis, which could explain the gap between the actual infection dynamics and the perception of the COVID-19 outbreak in Italy. Because of the heterogeneity in susceptibility and social dynamics, COVID-19 transmissibility differs among locations and changes over time. U.S. local governments have implemented different interventions since mid-March to combat the spread of COVID-19. Therefore, the basic reproduction numbers should vary spatiotemporally.
The basic reproduction number of an epidemic event changes due to societal and political actions. Effective social measures such as business closures and stay-at-home orders could help lower the transmission rate and thus induce changes in R_0. By studying the variations in R_0 over time, we can evaluate the dynamic transmissibility of infectious diseases like COVID-19. For instance, during the outbreak of severe acute respiratory syndrome (SARS) in China around 2003, an R_0 ≈ 3.0 was reported for the onset stage of SARS in Hong Kong (Riley et al., 2003; Lloyd-Smith et al., 2003). Later on, it dropped to about 1.1 due to stringent control measures (Chowell et al., 2004). Decreases in R_0 captured the evolution of SARS transmission dynamics under the approach of efficient diagnosis coupled with patient isolation measures. A recent study in Germany (Dehning et al., 2020) estimated the variations in COVID-19 transmission rates for four pre-labeled phases partitioned by three time points corresponding to the three major government interventions. Meanwhile, Song et al. (2020) extended the standard SIR model by introducing a transmission rate modifier, which takes different pre-specified decay functions under different macro or micro quarantine measures over time. These studies have enabled public health workers to analyze and evaluate the time-course dynamics of COVID-19 and motivated us to develop a method that can automatically detect the important transitioning time points that occurred during the outbreak of an epidemic, while characterizing the transmission dynamics.
We propose a method named BayesSMILES, which is short for Bayesian Segmentation ModelIng for Longitudinal Epidemiological Studies, to study the dynamics of COVID-19 transmissibility and to evaluate the effectiveness of mitigation interventions. BayesSMILES adopts a Bayesian Poisson segmented regression model to detect multiple change points based on the daily infectious COVID-19 cases. This novel model can 1) capture the epidemiological dynamics under the changing conditions caused by external or internal factors; 2) quantify the uncertainty in both the number and locations of change points; and 3) adjust for any relevant explanatory time-varying covariates that may affect the infectious case numbers. In addition, BayesSMILES integrates the change point information to quantify the COVID-19 transmissibility by estimating the basic reproduction numbers in different segments. We demonstrate that our approach can improve the accuracy of the change point detection compared with a widely used change point search method on simulated data. Applying the proposed BayesSMILES to the U.S. state-level COVID-19 daily report data, we find that the detected change points correlate well with the timing of publicly announced interventions. We also demonstrate that a stochastic SIR model incorporating change point information can provide a better short-term forecast. In all, BayesSMILES enables us to understand the disease dynamics of COVID-19 and provides useful insights for the anticipation and control of current and future pandemics.
The rest of the paper is organized as follows. We review the traditional susceptible-infectious-recovered (SIR) model in Section 2. In Section 3, we describe the framework of BayesSMILES. The Markov chain Monte Carlo (MCMC) algorithm and posterior inference procedures are described in Section 4. We provide a comprehensive simulation study to illustrate the performance of the proposed method against a competing approach in Section 5. In Section 6, we analyze the COVID-19 data for U.S. states using the proposed BayesSMILES. Finally, we conclude with remarks in Section 7 and provide information about implementation in Section 8.
Review of the SIR Model
The susceptible-infected-removed (SIR) model was developed by Kermack and McKendrick (1927) to simplify the mathematical modeling of human-to-human infectious diseases. It is a fundamental compartmental model in epidemiology. At any given time, each individual in a closed population of size N is assigned to one of three distinctive compartments: susceptible (S), infectious (I), or removed (R, being either recovered or deceased). The standard SIR model in continuous time, which models the flow of people from S to I and then from I to R, is described by the following set of nonlinear ordinary differential equations (ODEs):

dS(t)/dt = −β S(t) I(t)/N,  dI(t)/dt = β S(t) I(t)/N − γ I(t),  dR(t)/dt = γ I(t),   (1)

for t > 0, subject to S(t) + I(t) + R(t) = N. Here β > 0 is the disease transmission rate and γ > 0 is the removal (recovery or death) rate. Conceptually, susceptible individuals become infectious (S → I) and then are ultimately removed from the possibility of spreading the disease (I → R) due to death or recovery with immunity against reinfection. The rationale behind the first equation in (1) is that the number of new infections during an infinitesimal amount of time, −dS(t)/dt, is equal to the number of susceptible people, S(t), times the product of the contact rate, I(t)/N, and the disease transmission rate β. The third equation in (1) reflects that infectious individuals leave the infectious population, i.e. add to dR(t)/dt, at a rate of γI(t). The second equation in (1) follows from the first and third ones as a result of dS(t)/dt + dI(t)/dt + dR(t)/dt = 0. Assuming that only a small fraction of the population is infected or removed in the onset phase of an epidemic, we have S(t)/N ≈ 1 and thus the second equation reduces to dI(t)/dt = (β − γ)I(t), revealing that the infectious population is growing if and only if β > γ. As the expected lifetime of an infected case is given by γ^{−1}, the ratio β/γ is the average number of new infectious cases directly produced by an infected case in a completely susceptible population. Since it is a good indicator of the transmissibility of an infectious disease, epidemiologists name it the basic reproduction number, R_0 = β/γ, in the context of a standard SIR model, or the effective reproduction number, R_t = β_t/γ_t, in the context of a time-variant SIR model, where β and γ are replaced by β(t) and γ(t) in (1). The standard SIR model is appealing due to its simplicity, and it can be extended in many different ways to better characterize the disease, such as considering vital dynamics, adding more compartments, and allowing more possible transitions between compartments. For instance, the susceptible-exposed-infectious-recovered (SEIR) model includes an additional compartment accounting for the incubation period, and the susceptible-infectious-recovered-susceptible (SIRS) model allows recovered individuals to return to a susceptible state. For a comprehensive summary, see Bailey et al. (1975), Becker and Britton (1999), Allen (2008), or Andersson and Britton (2012). It is worth noting that some modified SIR models are currently being used to model the COVID-19 outbreak under under-reporting scenarios (see e.g. Flaxman et al., 2020; Riou et al., 2020; Song et al., 2020).
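To make the compartment flows concrete, the following sketch integrates the three ODEs with a simple forward-Euler step. The population size, rates, horizon, and step size are illustrative choices, not values tied to any dataset discussed here.

```python
# Forward-Euler integration of the standard SIR ODEs described above.
# Population size, rates, horizon, and step size are illustrative assumptions.
import numpy as np

def simulate_sir(N=1_000_000, I0=100, beta=0.3, gamma=0.1, days=160, dt=1.0):
    S, I, R = N - I0, float(I0), 0.0
    path = [(S, I, R)]
    for _ in range(int(days / dt)):
        new_inf = beta * S * I / N * dt   # -dS/dt = beta * S * I / N
        new_rem = gamma * I * dt          # dR/dt  = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rem, R + new_rem
        path.append((S, I, R))
    return np.array(path)

path = simulate_sir()
print(path[:3])          # S + I + R stays equal to N at every step
print(path[:, 1].max())  # peak of the infectious compartment
```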
Data notations
During a pandemic such as COVID-19, the most accessible and complete data are the daily reported numbers of confirmed cases and deaths. Suppose N is the total population size in a given region. Let C = (C_1, . . . , C_T) and D = (D_1, . . . , D_T) be the sequences of cumulative confirmed case and death numbers observed at T successive equally spaced points in time (e.g. day), where C_t and D_t ∈ N for t = 1, . . . , T. For a region in which recovery cases are closely monitored day by day, we use E = (E_1, . . . , E_T) to denote the sequence of cumulative recovery case numbers. Thus, due to the compositional nature of the basic SIR model, the three trajectories can be constructed as S = (S_1, . . . , S_T) with S_t = N − C_t, R = (R_1, . . . , R_T) with R_t = D_t + E_t, and I = (I_1, . . . , I_T) with I_t = N − S_t − R_t = C_t − D_t − E_t. For a region in which recovery cases do not exist or are under-reported, we consider both R and I as missing data and reconstruct these two sequences according to the last equation of (1) with a pre-specified constant removal rate γ. Specifically, we set I_1 = C_1 and R_1 = 0, and generate R_t = R_{t−1} + ⌈γI_{t−1}⌉ and I_t = I_{t−1} + (C_t − C_{t−1}) − (R_t − R_{t−1}) from t = 2 to T sequentially, where ⌈·⌉ : R_+ → N denotes the ceiling function. For the choice of γ, we suggest estimating its value from publicly available reports in some region where confirmed, death, and recovery cases are all well documented, or from prior epidemic studies subject to the same under-reporting issue in the actual data. Lastly, given a vector Y = (Y_1, . . . , Y_T) and some initial value Y_0 (for example, Y can be C, D, E, S, I, or R), we use Ẏ = (Ẏ_1, . . . , Ẏ_T) to denote the lag-one difference of Y, where Ẏ_t = Y_t − Y_{t−1} for t = 1, . . . , T; that is, Ẏ_t is the difference between two adjacent observations. Tables 1 and 2 summarize the data notations as well as the key notations of the models introduced in Sections 3.3 and 3.4, respectively.
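A minimal sketch of this reconstruction, following the recursion stated above with a pre-specified γ; the γ value and the toy confirmed-case series are assumptions for illustration.

```python
# Reconstructing the removed (R) and actively infectious (I) trajectories from
# cumulative confirmed cases C with a pre-specified removal rate gamma, using
# the recursion described above. gamma and the series C are toy values.
import math

def reconstruct_IR(C, gamma=0.1):
    I, R = [C[0]], [0]                               # I_1 = C_1, R_1 = 0
    for t in range(1, len(C)):
        R_t = R[-1] + math.ceil(gamma * I[-1])       # R_t = R_{t-1} + ceil(gamma * I_{t-1})
        I_t = I[-1] + (C[t] - C[t - 1]) - (R_t - R[-1])
        R.append(R_t)
        I.append(I_t)
    return I, R

C = [10, 14, 20, 29, 41, 58, 80]
I, R = reconstruct_IR(C)
print(I)
print(R)
```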
Modeling epidemic dynamics via a modified stochastic SIR model
An SIR model has three trajectories, one for each compartment. The compositional nature of the three trajectories, i.e. S(t) + I(t) + R(t) = N, implies that we need only two of them to solve the ODEs shown in (1). As mentioned previously, assuming S(t) ≈ N for all t results in dI(t)/dt = (β − γ)I(t) and further leads to an exact solution: I(t) = I(0) exp{(β − γ)t}. For modeling the daily reported actively infectious data I, we utilize a discrete-time version of the dynamics shown in (1) during the onset phase of a pandemic. Specifically, we suppose the infectious population size at time t is sampled from a Poisson model,

I_t | α_t ∼ Poi(N α_t),   (3)

where α_t = I_0 exp{(β − γ)t}/N is a redefined time-varying transmissibility parameter that depends on the initial infectious population size I_0, the disease transmission rate β_t, the removal rate γ, and any latent factors (e.g. public health interventions, social behavior, virus mutation, healthcare quality, etc.) that may affect the disease transmissibility. This model automatically accounts for measurement errors and uncertainties associated with the counts. Note that (3) can be generalized to a negative binomial (NB) model, i.e. I_t | α_t ∼ NB(N α_t, φ_I) if needed, where φ_I is a dispersion parameter aiming to account for over-dispersion that might be observed in I.

Table 1. Summary of data notations and change point model notations.
C: the cumulative confirmed case numbers
E: the cumulative recovery case numbers
I: the actively infectious case numbers
R: the cumulative removed case numbers
X: the design matrix, whose first column is an all-one vector for the intercept, whose second column is (1, . . . , T) for the time effect, and whose remaining columns make up the covariate matrix
γ ∈ R_+: the pre-specified constant removal rate
α: the adjusted time-varying transmission rates
α̃: the adjusted time-varying transmission rates on the logarithmic scale, α̃_t = log α_t
ζ: the change point indicator
z: the allocation vector encoding the locations of change points
n = (n_1, . . . , n_K), n_k ∈ N: the number of time points in each segment
b_k (the columns of B): the regression coefficients for each segment
ε: the regression errors (i.e. the process errors)
H: the diagonal variance-covariance matrix that defines the prior on each column of B, i.e. b_k
ω ∈ (0, 1): the probability of being a change point a priori
a_ω, b_ω: the hyperparameters for ω
Others: the matrix transpose operator; ⌈·⌉, the ceiling function; δ(·), the indicator function; I_n = Diag(1, . . . , 1), the n-by-n identity matrix; 0_n, the n-dimensional all-zero column vector.
Detecting change points via a Poisson segmented regression model
Our change point detection builds upon the above modified stochastic time-variant SIR model (3) with stationary transmissibility α t in a certain segment. Particularly, we assume that β t only changes at certain time points. Identifying those change points is of significant importance, as it not only enables us to characterize the temporal dynamics of the pandemic but also helps policymakers evaluate the effectiveness of the past and ongoing mitigation and intervention strategies.
In this paper, the change points are defined as those discrete time points that significantly alter the disease transmission rate β_t between two adjacent segments, given a constant removal rate γ across all time points. We introduce a latent binary vector ζ = (ζ_1, . . . , ζ_T), ζ_t ∈ {0, 1}, with ζ_t = 1 if time point t is a change point and ζ_t = 0 otherwise. We set ζ_1 = 1 by default, interpreting the first time point as the "zeroth change point." The time points can then be partitioned into segments bounded by adjacent change points. Thus, we use another vector z = (z_1, . . . , z_T), z_t ∈ {1, . . . , K}, to reparameterize ζ, where we define z_t = Σ_{u=1}^{t} ζ_u. Thus, z_t = k indicates that time point t is in segment k, that is, between the (k − 1)-th and k-th change points. The total number of change points excluding the first time point is K − 1. Note that ζ is the lag-one difference of z, i.e. ζ_t = z_t − z_{t−1} with ζ_1 = 1. Figure 1 shows the representations of ζ and z for a simulated time-series dataset (T = 10) with two change points.
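The two parameterizations can be converted into each other mechanically, as the short sketch below shows for a toy vector mirroring the Figure 1 setting.

```python
# Converting between the change point indicator zeta and the allocation vector
# z: z is the cumulative sum of zeta, and zeta is the lag-one difference of z
# (with zeta_1 = 1). The toy vector mirrors the Figure 1 setting (T = 10, K = 3).
import numpy as np

zeta = np.array([1, 0, 0, 0, 1, 0, 0, 1, 0, 0])   # change points at t = 5 and t = 8
z = np.cumsum(zeta)                                # segment label for each time point
zeta_back = np.concatenate(([1], np.diff(z)))      # recovers zeta from z

print(z)          # [1 1 1 1 2 2 2 3 3 3]
print(zeta_back)  # [1 0 0 0 1 0 0 1 0 0]
```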
To infer ζ or z given the sequence of infectious population sizes I, we adopt a Poisson segmented regression framework, which can be written as

I_t | α_t ∼ Poi(N α_t),   α̃_t = x_t b_{z_t} + ε_t,   (4)

where α̃_t = log α_t and x_t = (1, t, x_{t,1}, . . . , x_{t,p−2}) is a p-dimensional row vector of covariates that includes a scalar of one for the intercept, time t, and p − 2 explanatory variables observed at time t if applicable. Those explanatory variables could contain the number of tests, weather information, mobility reports, or other necessary and accessible time-varying measures important to adjust for during a longitudinal epidemiological study. The vector b_k = (b_{1,k}, . . . , b_{p,k}) is a p-dimensional column vector of segment-specific coefficients that includes an intercept representing the proportion of infectious people on the logarithmic scale, i.e. b_{1,k} = log(I_0/N), in segment k, and a slope accounting for the time-varying disease transmission rate, i.e. b_{2,k} = β_t − γ.
Figure 1: An example of time-series data (T = 10) with two change points (K = 3) and its associated parameterizations in terms of ζ and z, respectively. Red triangles are change points, while black circles are not. Note that the first time point is treated as the "zeroth change point."

Let X denote the design matrix, which combines all x_t's as rows, and B denote the corresponding coefficient matrix, which combines all b_k's as columns. For simplicity's sake, we assume the process errors ε_1, . . . , ε_T are independent and identically Gaussian distributed with zero mean and segment-specific variance, i.e. ε_t ∼ N(0, σ²_k). To ensure the identifiability of all model parameters, we try a set of considerably small values for the σ²_k's and employ a robust cross-validation method, Pareto-smoothed importance sampling leave-one-out (PSIS-LOO) cross validation, to determine the best choice (Vehtari et al., 2017).
Let α_k be the sequence of all α_t's in segment k, i.e. α_k = (α_{c_k}, . . . , α_{c_k+n_k−1}), where we denote c_k = min{t : z_t = k} as the location of the (k−1)-th change point and n_k = Σ_{t=1}^{T} δ(z_t = k) as the number of time points in segment k, with δ(·) being the indicator function. We can rewrite the second equation in (4) as α̃_k | b_k ∼ MN(X_k b_k, σ²_k I_{n_k}), where I_{n_k} is an n_k-by-n_k identity matrix and X_k collects the rows x_{c_k}, . . . , x_{c_k+n_k−1} of the design matrix X. We assume a zero-mean multivariate normal distribution for b_k, that is, b_k ∼ MN(0_p, H), where 0_p is a p-by-1 all-zero column vector and H = Diag(h_0, . . . , h_{p−1}) is set to be a diagonal variance-covariance matrix. For a weakly informative setting, we recommend choosing a large value for each h_j. Through this prior specification, we are able to marginalize out the nuisance parameter b_k and obtain α̃_k ∼ MN(0_{n_k}, X_k H X_k' + σ²_k I_{n_k}). Consequently, the collapsed model of (4) combines the Poisson observation model I_t | α_t ∼ Poi(N α_t) with this marginal multivariate normal distribution on each α̃_k. To complete the model specification, we impose an independent Bernoulli prior on ζ, ζ | ω ∼ Π_{t=2}^{T} Bern(ζ_t; ω), where ω is interpreted as the probability of a time point being a change point a priori. We further relax this assumption by allowing ω ∼ Be(a_ω, b_ω) to achieve a beta-Bernoulli prior. In practice, we suggest the constraint a_ω + b_ω = 2 for a vague hyperprior on ω (Tadesse et al., 2005). In addition, we set the prior probability of ζ to zero if two adjacent time points are jointly selected as change points; in other words, a segment should consist of at least two time points.
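To make the collapsed likelihood concrete, the sketch below evaluates the log density of α̃_k under the marginalized multivariate normal MN(0, X_k H X_k' + σ²_k I_{n_k}); the toy segment length, covariates, data values, and variance are assumptions, while h_0 = 10,000 and h_1 = 10 follow the weakly informative setting used later.

```python
# Evaluating the collapsed log density of a segment-level vector
# alpha_tilde_k ~ MN(0, X_k H X_k' + sigma_k^2 I) after marginalizing b_k.
# Segment length, covariates, data values, and sigma_k^2 are toy assumptions.
import numpy as np
from scipy.stats import multivariate_normal

n_k = 6
t = np.arange(1, n_k + 1)
X_k = np.column_stack([np.ones(n_k), t])            # intercept and time effect
H = np.diag([10_000.0, 10.0])                       # prior variances (h_0, h_1)
sigma2_k = 0.001
alpha_tilde_k = np.log(1e-4) + 0.05 * t             # toy log-transmissibility values

cov = X_k @ H @ X_k.T + sigma2_k * np.eye(n_k)
loglik = multivariate_normal.logpdf(alpha_tilde_k, mean=np.zeros(n_k), cov=cov)
print(loglik)
```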
Estimating basic reproduction numbers via a stochastic SIR model
Given the multiple change points ζ, we estimate the basic reproduction number R_0 = β/γ for each segment via a stochastic version of the standard SIR model in (1), which needs only the cumulative confirmed case numbers C. This is because recovery data exist in only a few U.S. states, which would otherwise make both model inference and prediction infeasible. This model considers both the removed and the actively infectious cases as missing data and mimics their relationship as in some compartmental models in epidemiology. Specifically, we assume the number of new removed cases at time t, i.e. Ṙ_t, is sampled from a Poisson distribution with mean γI_{t−1}, that is, Ṙ_t ∼ Poi(γI_{t−1}) = Poi(γ(N − C_{t−1} − R_{t−1})), where γ should be pre-specified. Based on this simplification, we rewrite the discrete version of the first equation in (1) as Ċ_t ≈ β*_k S_{t−1} I_{t−1}/N, where β*_k is the common disease transmission rate for all the time points in segment k. We further assume the new case number observed at time t, i.e. Ċ_t, is sampled from an NB model,

Ċ_t | β*_k, φ_k ∼ NB(β*_k S_{t−1} I_{t−1}/N, φ_k),   (6)

as it automatically accounts for measurement errors and uncertainties associated with the counts. Following most epidemiological models, we assume this stochastic process is a Markov process, where the present state (at time t) depends only upon its previous state (at time t − 1). The setting above builds upon the standard SIR model. It is worth noting that the oversimplified assumptions of the proposed stochastic SIR model, as well as bias in data reporting, might undermine the reliability of the estimates of the disease transmission rates β*_k's and the resulting basic reproduction numbers R_0k's. However, they can still be used as a proxy to indicate the transmissibility dynamics of an infectious disease. We could consider additional compartments as seen in the susceptible-infectious-susceptible (SIS) model, the susceptible-infectious-recovered-deceased (SIRD) model, the susceptible-exposed-infectious-removed (SEIR) model, and the susceptible-exposed-infectious-susceptible (SEIS) model (see a comprehensive summary in Bailey et al. (1975)). The effects from the additional compartments could be incorporated by reparameterizing the mean function in the NB distribution, as shown in Equation (6), which is left as future work. For the prior distribution of the segment-specific dispersion parameter φ_k, we choose a gamma distribution, φ_k ∼ Ga(a_φ, b_φ) for k = 1, . . . , K. We recommend small values, such as a_φ = b_φ = 0.001, for a weakly informative setting. This model, on average, mimics the epidemic dynamics and is more flexible than deterministic epidemiological models. For each segment k, we assume β*_k comes from a gamma distribution with hyperparameters that make both the mean and the variance of the transformed variable β*_k/γ equal to 1. Table 2 summarizes the notations for the model parameters described above.

Table 2. Summary of notations for the stochastic SIR model.
Ċ: the new confirmed case numbers
Ṙ: the new removed case numbers, which are treated as missing data
β*_k: the segment-specific disease transmission rates
φ_k: the segment-specific dispersion parameters
R_0k: the basic reproduction numbers for each segment
a_φ, b_φ: the hyperparameters for all φ_k's
MCMC algorithms for detecting change points
The full data likelihood and the priors of the change point detection model combine the collapsed model described above, namely the Poisson observation model together with the marginal multivariate normal distribution on each α̃_k, with the beta-Bernoulli prior terms Be-Bern(ζ_t; a_ω, b_ω). The following updates are performed in each MCMC iteration.
Update the change point indicator ζ: We update the binary latent vector ζ via an add-delete-swap algorithm. We randomly select an entry in ζ, say ζ_t, and change its value to ζ_t^new = 1 − ζ_t to form a new ζ^new. This is an add step if ζ_t^new = 1 and a delete step otherwise. The swap step is performed every ten iterations, where we randomly select a change point, say ζ_t = 1, and swap the values between the t-th and (t ± 1)-th entries in ζ to form a new ζ^new. We accept the proposed ζ^new with probability min(1, m_MH), where the acceptance ratio m_MH is the ratio of the posterior probabilities of ζ^new and ζ multiplied by the proposal density ratio J(ζ ← ζ^new)/J(ζ^new ← ζ). Here we use J(· ← ·) to denote the proposal probability distribution for the selected move. Note that the last proposal density ratio equals one. This step simultaneously updates the segmentation vector z, as it can be constructed from ζ.

Update the adjusted time-varying transmission rates α: For each segment partitioned by ζ, we update the α_t's within the same segment, say segment k, sequentially by using a random walk Metropolis-Hastings (RWMH) algorithm. We first propose a new value α_t^new, replacing only the corresponding entry of α_k = (α_{c_k}, . . . , α_{c_k+n_k−1}). Then we accept the proposed value α_t^new with probability min(1, m_MH), where the acceptance ratio compares the full conditional density at the proposed and current values. Note that the proposal density ratio cancels out for this RWMH update. The computation of the multivariate normal (MN) probability density involves matrix inversion, which can be time-consuming, particularly when n_k is large. To significantly improve the computational efficiency, we follow Zhou and Guan (2018) to approximate the exact inversion under an appropriate choice of H that satisfies an asymptotic condition. As mentioned previously, H is a p-by-p diagonal matrix, where the first entry h_0 corresponds to the variance of the normal prior on b_{1,k}. Under the asymptotic condition h_0 ≥ h_j, ∀j ≠ 0, the inversion of an n_k-by-n_k matrix is reduced to the inversion of a p-by-p matrix (see more details in Appendix A1). In practice, we set h_0 = 10,000 and h_1 = . . . = h_{p−1} = 10 to ensure this asymptotic condition. The full details of the approximation method are available in Appendix A1.
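The add-delete move can be sketched schematically as follows. The helper log_posterior is a placeholder standing in for the collapsed likelihood plus the beta-Bernoulli prior, and the swap step and the no-adjacent-change-points constraint are omitted for brevity; this is a generic Metropolis-Hastings sketch, not a transcription of the authors' sampler.

```python
# Schematic add-delete Metropolis-Hastings move on the change point indicator
# zeta. log_posterior() is a placeholder for the collapsed likelihood plus the
# beta-Bernoulli prior; the swap move and the rule forbidding adjacent change
# points are omitted, so this is a generic MH sketch, not the exact sampler.
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(zeta):
    # Placeholder: in the real sampler this would sum the collapsed segment
    # likelihoods and the Be-Bern(zeta_t; a_omega, b_omega) prior terms.
    return -abs(int(zeta.sum()) - 4)

def add_delete_move(zeta):
    t = rng.integers(1, len(zeta))        # never touch t = 1 (zeta_1 = 1 by design)
    zeta_new = zeta.copy()
    zeta_new[t] = 1 - zeta_new[t]         # add step if it becomes 1, delete otherwise
    log_ratio = log_posterior(zeta_new) - log_posterior(zeta)
    if np.log(rng.uniform()) < min(0.0, log_ratio):
        return zeta_new
    return zeta

zeta = np.zeros(20, dtype=int)
zeta[0] = 1
for _ in range(1000):
    zeta = add_delete_move(zeta)
print(zeta, zeta.sum())
```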
MCMC algorithms for estimating basic reproduction numbers
Once the change points are determined, we aim to estimate the basic reproduction numbers R_0's across the different segments and quantify their uncertainties based on the cumulative confirmed cases C only. According to Section 3.4, the full data likelihood of the stochastic SIR model is the product of the NB model in (6) over all time points, and the priors are the gamma distributions on the segment-specific transmission and dispersion rates, where β* = (β*_1, . . . , β*_K) and φ = (φ_1, . . . , φ_K) denote the collections of transmission and dispersion rates of all segments. For the hyperparameters, we set a_β = 1 and b_β = 1/γ so that both the expectation and the variance of the basic reproduction number R_0 = β*_k/γ are equal to one. With a pre-defined removal rate γ, we propose the following updates in each MCMC iteration.
Generate R based on C: We assume I_1 = C_1 and R_1 = 0, i.e. all the confirmed cases are capable of passing the disease to all susceptible individuals in a closed population at time point t = 1. Then we sample Ṙ_2 ∼ Poi(γI_1), where γ is a pre-specified tuning parameter and Ṙ_2 = R_2 − R_1 is the new removed case number at time point t = 2. Due to the compositional nature of the SIR model, we can compute I_2 = I_1 + Ċ_2 − Ṙ_2, where Ċ_2 = C_2 − C_1 is the number of new confirmed cases at time point t = 2. Next, we repeat this process of sampling Ṙ_t ∼ Poi(γI_{t−1}) and computing I_t = I_{t−1} + Ċ_t − Ṙ_t, t = 3, . . . , T, to generate the sequence R used in every iteration.
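A sketch of this data-augmentation step, which resamples the removed-case sequence from C in every iteration; the toy confirmed-case series is assumed.

```python
# Data-augmentation step: given cumulative confirmed cases C and a removal rate
# gamma, draw the new removed cases from Poi(gamma * I_{t-1}) and update I.
# The toy confirmed-case series below is assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)

def sample_R_given_C(C, gamma=0.1):
    T = len(C)
    I, R = np.empty(T), np.empty(T)
    I[0], R[0] = C[0], 0.0                           # I_1 = C_1, R_1 = 0
    for t in range(1, T):
        R_dot = rng.poisson(gamma * I[t - 1])        # new removed cases at time t
        R[t] = R[t - 1] + R_dot
        I[t] = I[t - 1] + (C[t] - C[t - 1]) - R_dot  # compositional update
    return I, R

C = np.array([10, 14, 20, 29, 41, 58, 80, 110])
I, R = sample_R_given_C(C)
print(I)
print(R)
```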
Update the dispersion parameters φ: For each segment, we update φ_k by using an RWMH algorithm. We first propose a new φ_k^new, whose logarithmic value is generated from a random walk proposal, with only the k-th entry of φ replaced. Then we accept the proposed value φ_k^new with probability min(1, m_MH), where the acceptance ratio compares the joint density of the data and parameters at the proposed and current values of φ_k. Note that the proposal density ratio cancels out for this RWMH update.
Update the disease transmission rates β*: For each segment, we update β*_k by using an RWMH algorithm. We first propose a new β*_k^new, whose logarithmic value is generated from a random walk proposal, with only the k-th entry of β* replaced. Then we accept the proposed value β*_k^new with probability min(1, m_MH). Note that the proposal density ratio cancels out for this RWMH update.
Posterior inference
We explore posterior inference for the parameters of interest by postprocessing the MCMC samples after the burn-in iterations. We start by obtaining a point estimate of the change point indicator ζ from its MCMC samples {ζ^(1), . . . , ζ^(U)}, where u indexes the MCMC iteration after burn-in. One way is to choose the sampled ζ that maximizes the posterior probability, i.e. the maximum-a-posteriori (MAP) estimate ζ̂_MAP. The corresponding ẑ_MAP can be obtained by taking the cumulative sum of ζ̂_MAP. An alternative estimate relies on the computation of the posterior pairwise probability matrix (PPM), where the probability that time points t and t' are assigned to the same segment is estimated by p_{tt'} = Σ_{u=1}^{U} δ(z_t^(u) = z_{t'}^(u))/U. This estimate utilizes the information from all MCMC samples and is thus more robust. After obtaining this T-by-T co-clustering matrix, denoted by P = [p_{tt'}]_{T×T}, a point estimate of z can be approximated by minimizing the sum of squared deviations of its association matrix from the PPM, that is, ẑ_PPM = argmin_z Σ_{t<t'} (δ(z_t = z_{t'}) − p_{tt'})². The corresponding ζ̂_PPM can be obtained by taking the difference between consecutive entries of ẑ_PPM and setting the first entry to one. To construct a "credible interval" for a change point, we utilize its local dependency structure across all MCMC samples of ζ belonging to its neighbors. Due to the nature of the MCMC algorithm described in Section 4.1, if a time point t is selected as a change point, i.e. ζ_t = 1, then its nearby time points must not be change points. Thus, the correlation between the MCMC sample vectors of ζ_t and ζ_{t±s} tends to be negative when s is small. We define the credible interval of a change point as the two ends of the run of its nearby consecutive time points for which the MCMC samples of ζ are significantly negatively correlated with that of the change point. This can be done via a one-sided Pearson correlation test with a pre-specified significance level, e.g. 0.05. Although this way of quantifying the uncertainty of change points is not rigorous, it performs very well in the simulation study and yields reasonable results in the real data analysis.
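The PPM-based point estimate can be computed directly from posterior draws of z. In the sketch below, simulated draws stand in for real MCMC output, and the minimization is restricted to the sampled partitions, which is a common approximation to searching over all possible z.

```python
# Posterior pairwise probability matrix (PPM) from MCMC samples of z, and a
# point estimate chosen as the sampled partition closest to the PPM in squared
# deviation. Simulated draws stand in for real sampler output.
import numpy as np

rng = np.random.default_rng(2)
T, U = 12, 200
# Fake posterior draws: one change point near t = 7 that jitters by one position.
z_samples = np.array([(np.arange(T) >= 6 + rng.integers(-1, 2)).astype(int) + 1
                      for _ in range(U)])

# p[t, t'] = fraction of draws in which time points t and t' share a segment.
ppm = np.mean(z_samples[:, :, None] == z_samples[:, None, :], axis=0)

def loss(z):
    assoc = (z[:, None] == z[None, :]).astype(float)
    return np.sum((assoc - ppm) ** 2)

z_ppm = min(z_samples, key=loss)
print(np.round(ppm, 2))
print(z_ppm)
```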
Once the change points are determined, an approximate Bayesian estimator of the disease transmission rate β*_k for each segment k can be obtained by simply averaging over all of its MCMC samples, β̂_k = Σ_{u=1}^{U} β*_k^(u)/U. In addition, a quantile estimate or credible interval can be obtained. Lastly, we summarize the basic reproduction number in each segment k as R̂_0k = β̂_k/γ.
Prediction
Conditional on the change point locations, we can predict the cumulative or new confirmed cases at any future time T_f by Monte Carlo simulation based on the information in the last segment K only. Specifically, from time T + 1 to T_f, we sequentially generate the new removed and new confirmed case numbers from the stochastic SIR model in Section 3.4, using the posterior samples of β*_K and φ_K from the last segment. Then, both short-term and long-term forecasts can be made by summarizing the resulting (T_f − T)-by-U matrix of MCMC samples; for instance, the predictive numbers of cumulative and new confirmed cases at a future time point can be estimated by their averages across the U Monte Carlo draws.
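A hedged sketch of the forecasting recursion is given below. The negative-binomial mean β*_K S_{t−1} I_{t−1}/N follows the stochastic SIR description above, but since the paper's exact mean parameterization in Equation (6) is reconstructed rather than quoted, all numeric inputs and the NB parameterization should be read as illustrative.

```python
# Monte Carlo forecast from the last segment K: sequentially draw new removed
# and new confirmed cases forward in time. The NB mean beta_K * S * I / N and
# its (mean, dispersion) parameterization are assumed forms consistent with the
# stochastic SIR description; all numeric inputs are toy values.
import numpy as np

rng = np.random.default_rng(3)

def nb_draw(mean, phi):
    # NB with E = mean and Var = mean + mean^2 / phi (assumed parameterization).
    return rng.negative_binomial(phi, phi / (phi + mean)) if mean > 0 else 0

def forecast(S_T, I_T, C_T, beta_K, phi_K, gamma, N, horizon=14, n_draws=500):
    out = np.empty((n_draws, horizon))
    for u in range(n_draws):
        S, I, C = S_T, I_T, C_T
        for h in range(horizon):
            R_dot = rng.poisson(gamma * I)                   # new removed cases
            C_dot = nb_draw(beta_K * S * I / N, phi_K)       # new confirmed cases
            S, I, C = S - C_dot, max(I + C_dot - R_dot, 0.0), C + C_dot
            out[u, h] = C
    return out

draws = forecast(S_T=9.8e5, I_T=5_000, C_T=20_000,
                 beta_K=0.12, phi_K=10.0, gamma=0.1, N=1e6)
print(draws.mean(axis=0))   # posterior predictive mean of cumulative cases
```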
Simulation
We used simulated data to evaluate the performance of our BayesSMILES method in terms of both change point detection and basic reproduction number estimation. It is shown that the proposed Bayesian framework outperforms an alternative change point detection method.
The generative model
The three trajectories S, I, and R, each of length T = 120, were generated in the following way. We first divided the T = 120 time points into K = 4 segments of equal length; that is, the true change points were t = 31, t = 61, and t = 91. To mimic the disease transmissibility dynamics across the segments, we chose segment-varying disease transmission rates β*_k while fixing the removal rate at γ = 0.03. Let R_0 be a K-vector whose entries give the reproduction numbers of the segments, computed as β*_k/γ for k = 1, . . . , K. We considered four scenarios for the set (β*_1, β*_2, β*_3, β*_4), corresponding to 1) R_0 = (3.0, 1.2, 2.0, 0.8); 2) R_0 = (3.0, 2.3, 1.5, 0.8); 3) R_0 = (3.0, 1.8, 0.8, 1.6); and 4) R_0 = (3.0, 2.0, 1.1, 0.5). Then, based on the stochastic version of the standard SIR model, we sampled S_t and R_t from negative binomial (NB) distributions and obtained I_t sequentially from t = 1 to T, where N = 1,000,000, the initial infectious and removed counts were I_0 = 100 and R_0 = 0, and the dispersion parameters were φ_S = φ_R = 10. Note that the generative scheme has an NB error structure, which differs from our model assumption of a Poisson error structure. We repeated the above steps to generate 50 independent datasets for each setting of R_0. Figure 2 displays the temporal patterns of the simulated infectious counts I for the four scenarios.
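A sketch of a comparable generative scheme is shown below. The exact NB parameterization used for S_t and R_t is not spelled out in the text, so placing NB noise on the daily increments, with variance mean + mean²/φ, is an assumption made to keep the toy trajectories well behaved; it is not necessarily the authors' scheme.

```python
# Simulating S, I, R with segment-varying transmission rates and negative-
# binomial noise around the SIR updates. Placing the NB noise on the daily
# increments, with Var = mean + mean^2 / phi, is an assumption for this sketch.
import numpy as np

rng = np.random.default_rng(4)

def nb(mean, phi):
    return rng.negative_binomial(phi, phi / (phi + mean)) if mean > 0 else 0

def simulate(R0=(3.0, 2.0, 1.1, 0.5), gamma=0.03, T=120, N=1_000_000,
             I0=100, phi_S=10.0, phi_R=10.0):
    seg = T // len(R0)
    beta = np.repeat([r * gamma for r in R0], seg)   # true change points at t = 31, 61, 91
    S, I, R = [N - I0], [I0], [0]
    for t in range(T):
        new_inf = nb(beta[t] * S[-1] * I[-1] / N, phi_S)   # noisy new infections
        new_rem = nb(gamma * I[-1], phi_R)                 # noisy new removals
        S_t = max(S[-1] - new_inf, 0)
        R_t = R[-1] + new_rem
        I_t = max(N - S_t - R_t, 0)
        S.append(S_t); I.append(I_t); R.append(R_t)
    return np.array(S), np.array(I), np.array(R)

S, I, R = simulate()
print(I[:10], I[-10:])
```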
Evaluation criteria
To evaluate the change point detection, we may rely on either the binary change point indicator vector ζ or the time point allocation vector z. For the choice of ζ, a change point is considered to be correctly identified if its location is within a local window of the true position (Killick and Eckley, 2014). The selection of the window size is ad hoc and may bias the evaluation. In addition, since change points and non-change points are usually of very different sizes, most binary classification metrics are not suitable for model comparison here. Thus, we chose metrics that quantify the agreement between the true and estimated allocation vectors, i.e. z and ẑ. The two classic performance metrics for the analysis of clustering results are the adjusted Rand index (ARI) and mutual information (MI), proposed by Hubert and Arabie (1985) and Steuer et al. (2002), respectively. ARI is the corrected-for-chance version of the Rand index (Rand, 1971) and serves as a similarity measure between two allocation vectors. Let a, b, c, and d be the numbers of pairs of time points that are, respectively, a) in the same segment in both the true and estimated partitions; b) in different segments in the true partition but in the same segment of the estimated one; c) in the same segment of the true partition but in different segments of the estimated one; and d) in different segments in both the true and estimated partitions. Then, the ARI can be computed as ARI = 2(ad − bc)/[(a + b)(b + d) + (a + c)(c + d)].
The ARI usually yields values between 0 and 1, although it can take negative values (Santos and Embrechts, 2009). The larger the index, the more similar z and ẑ are, and thus the more accurately the method detects the actual times at which change points occurred. An alternative metric is MI, which measures the information about one variable that is shared by the other (Steuer et al., 2002). Let m_{kk'} = Σ_{t=1}^{T} δ(z_t = k)δ(ẑ_t = k') be the number of time points shared between the k-th segment in the true z and the k'-th segment in the estimated ẑ. Then, MI can be computed as MI = Σ_{k} Σ_{k'} (m_{kk'}/T) log[(m_{kk'} T)/(n_k n̂_{k'})], where K̂ is the number of segments in ẑ and the n̂_{k'}'s are the segment lengths for segments 1, . . . , K̂ in ẑ. It yields non-negative values; the larger the MI, the more accurate the partition result.
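Both metrics can be computed directly from the two allocation vectors; the sketch below implements the pair-count form of ARI and the contingency-table form of MI on toy vectors.

```python
# Computing ARI (via the pair counts a, b, c, d) and MI (via the contingency
# table) between a true and an estimated segment allocation. Toy vectors shown.
import numpy as np
from itertools import combinations

def ari(z_true, z_hat):
    a = b = c = d = 0
    for s, t in combinations(range(len(z_true)), 2):
        same_true, same_hat = z_true[s] == z_true[t], z_hat[s] == z_hat[t]
        a += same_true and same_hat
        b += (not same_true) and same_hat
        c += same_true and (not same_hat)
        d += (not same_true) and (not same_hat)
    return 2 * (a * d - b * c) / ((a + b) * (b + d) + (a + c) * (c + d))

def mutual_info(z_true, z_hat):
    T = len(z_true)
    mi = 0.0
    for k in np.unique(z_true):
        for kp in np.unique(z_hat):
            m = np.sum((z_true == k) & (z_hat == kp))
            if m > 0:
                mi += (m / T) * np.log(m * T / (np.sum(z_true == k) * np.sum(z_hat == kp)))
    return mi

z_true = np.repeat([1, 2, 3], 10)
z_hat = np.repeat([1, 2, 3], [9, 11, 10])
print(ari(z_true, z_hat), mutual_info(z_true, z_hat))
```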
To quantify how well a method estimates the dynamic transmissibility across the different segments, we used the root mean square error (RMSE), which measures the deviation between the true and estimated values of R_0 over all T time points: RMSE = √[(1/T) Σ_{t=1}^{T} (R_{0,z_t} − R̂_{0,ẑ_t})²], where R_{0,z_t} and R̂_{0,ẑ_t} denote the true and estimated basic reproduction numbers at time point t. A smaller value of RMSE indicates a more accurate estimation of the R_0's.
Results
As for the MCMC setting of the change point detection, we ran 40,000 MCMC iterations and discarded the first half as burn-in. We adopted a weakly informative setting with a_ω = 0.1 and b_ω = 1.9 in the beta-Bernoulli prior for the change point indicator vector ζ. We set H = Diag(h_0, h_1) with h_0 = 10,000 and h_1 = 10 as the covariance matrix in the prior distribution of the b_k's. Finally, we let σ²_k take ten equally spaced values ranging from 0.0001 to 0.01 on the logarithmic scale (base 10) in the PSIS-LOO cross validation. In fitting the stochastic SIR model, we ran 100,000 MCMC iterations with the first half as burn-in. As suggested in Waqas et al. (2020), the value of the removal rate γ could be estimated by (T − 1)^{−1} Σ_{t=2}^{T} (R_t − R_{t−1})/I_t for each simulated dataset. Then, we set a_β = 1 and b_β = 1/γ so that both the prior expectation and the prior variance of the basic reproduction number R_0 = β*_k/γ are equal to 1. We first checked the performance of BayesSMILES on a single simulated dataset, randomly selected from the 50 replicates in Scenario 4 (marked as the blue line in Figure 2). Note that we did the same for the remaining three scenarios, and the related results are summarized in Appendix A2. Figure 3(a) demonstrates the change point detection result based on the Poisson segmented regression model. The red dashed and blue solid lines represent the true and estimated change point locations, respectively, while the gray ribbons represent the 95% credible intervals for the identified change points. As we can see, BayesSMILES successfully detected the three true change points, as each of the 95% credible intervals covered the truth.
The resulting values of ARI and MI were 0.93 and 1.28, respectively. The stochastic SIR model introduced in Section 3.4 was then fitted to quantify the disease transmissibility in each segment bounded by the identified change points. Figure 3(b) shows the posterior distributions of the R_0k's for k = 1, 2, 3, 4 from their MCMC samples. The red dashed and blue solid lines pinpoint the true values and the posterior means of the R_0k's, while the two black solid lines mark the boundaries of their 95% credible intervals. Clearly, the true values were within their corresponding 95% credible intervals. The final RMSE for the R_0 estimation was 0.38 for this single simulated dataset.
To the best of our knowledge, there is no method like BayesSMILES that can detect latent change points while characterizing the transmission dynamics through an SIR model. Thus, in setting up a comparison study, we considered a two-stage approach that first identifies multiple change points of the time-series data based on a likelihood-based framework, and then estimates the basic reproduction numbers between each pair of nearby change points, following the stochastic SIR model introduced in Section 3.4. The alternative change point model assumes that time points within one segment follow a normal distribution with a distinct mean and/or variance from its nearby segments (Hinkley, 1970; Jen and Gupta, 1987), and it uses the likelihood ratio test (LRT) to detect multiple change points. An algorithm named binary segmentation (Edwards and Cavalli-Sforza, 1965; Sen and Srivastava, 1975) is commonly used to compute the test statistics for the LRT with high efficiency (Killick et al., 2012). In our case, to detect change points using this alternative approach, named the likelihood ratio test with binary segmentation (LRT-BinSeg), we input the logarithmic scale of I into the function cpt.meanvar in the related R package changepoint (Killick and Eckley, 2014) for each of the simulated datasets. We set the maximum number of possible change points to 5 for the binary segmentation algorithm. Note that this restriction was not applicable to the alternative algorithms provided in the changepoint package; in practice, we found that those algorithms tended to over-select the number of change points. Figure 4(a) and (b) exhibit the change point detection performance for the four scenarios of R_0. Our BayesSMILES performed much better than LRT-BinSeg with respect to change point detection under both performance metrics, ARI and MI. For instance, the ARI of BayesSMILES increased by 39.29% to 122.16% over LRT-BinSeg among the four scenarios, while the growth in MI was up to 60.54%. Figure 4(c) compares the ability to capture the transmission dynamics in terms of RMSE, which depends on the change point detection accuracy. As expected, our BayesSMILES yielded smaller RMSE values across all scenarios since its identified change point locations were more accurate. In all, the simulation study demonstrated the strengths of BayesSMILES.
Analysis of COVID-19 Data
In this section, we applied BayesSMILES to the U.S. state-level COVID-19 daily report data provided by JHU-CSSE COVID-19 Data Repository 1 . Several recent COVID-19 studies also based their analyses on this resource (see e.g. Dong et al., 2020;Zhou and Ji, 2020;Toda, 2020). We first performed a preprocessing step to ensure the quality of the infectious data I for the model fitting. Due to the fact that recovery cases are not recorded in some states, we treated I and R as missing data and reconstructed the two sequences according to the process described in Section 3.1. The cumulative confirmed case numbers C were collected for each U.S. state starting from an early stage of the pandemic outbreak. In particular, we chose the starting time for each state as when there were at least ten confirmed COVID-19 cases for that state. We also set the removal rate γ = 0.1 as suggested by Pedersen and Meneghini (2020) and Weitz et al. (2020). Since different states could have different starting times, we further trimmed the sequences I and R for each state based on the latest starting time available. Finally, we set March 22, 2020, as the new starting time (t = 1) for all 50 states, and let July 19, 2020, be the 1 https://github.com/CSSEGISandData/COVID-19 . CC-BY 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity.
We used the same hyperparameter and algorithm settings as described in Section 5.3. We ran four MCMC samplers for 40,000 iterations each, with the first half discarded as burn-in, for the change point detection model to ensure reliable results. We randomly initialized the starting points for each chain. We assessed the concordance between the four chains based on the Pearson correlation coefficients of the marginal posterior probabilities of inclusion (PPIs), π(ζ_t | ·) ≈ (1/U) Σ_{u=1}^{U} δ(ζ_t^{(u)} = 1). For our real data analysis in this paper, we obtained coefficient values ranging from 0.951 to 0.997, indicating good concordance among the four MCMC chains. Concordance among the marginal PPIs was confirmed by inspecting their scatter plots across each pair of MCMC chains. Furthermore, we used Gelman and Rubin's convergence diagnostic (Gelman et al., 1992) to assess the convergence of the segment-specific basic reproduction numbers R0k's to their posterior distributions. The potential scale reduction factors were all below 1.1, ranging from 1.001 to 1.045, indicating that the MCMC chains for the stochastic SIR model, run for 100,000 iterations, had been run for a satisfactory number of iterations. Convergence was also confirmed by inspecting the trace plots.
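For illustration, the concordance and convergence checks described above could be computed along the following lines (an R sketch under assumed data structures, not the authors' code; the coda package is assumed available):

# Sketch of the between-chain diagnostics (hypothetical inputs).
# 'chains' is a list of four U x T binary matrices of post-burn-in draws of the
# change point indicators zeta; 'r0_draws' is a list of four mcmc objects holding
# the segment-specific R0 draws for the same parameters.
library(coda)

ppi <- lapply(chains, colMeans)          # marginal PPI of each time point, per chain
cor(ppi[[1]], ppi[[2]])                  # pairwise Pearson correlation (repeat for all pairs)
plot(ppi[[1]], ppi[[2]])                 # scatter plot to inspect concordance visually

gelman.diag(mcmc.list(r0_draws))$psrf    # Gelman-Rubin potential scale reduction factors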
Detecting change points for U.S. states
To keep the paper at a reasonable length, we limit our analysis to the four U.S. states with the highest cumulative confirmed case counts as of July 19, 2020: New York, Texas, California, and Florida. The results for the remaining 46 states are available at https://shuangj00.github.io/BayesSMILES/ (see details in Section 8).
Figure 5 displays the detected change points, as well as the estimated basic reproduction numbers R0 across segments, for the four states. The credible interval associated with each identified change point is represented by a gray ribbon. In general, the change points detected by BayesSMILES captured the important COVID-19 events that might have affected transmission rates. For instance, some change points reflected the positive effects of preventative strategies such as lockdown, while others explained the "bounce back" in confirmed cases after reopening. Table 4 lists the change point locations and their potentially related events for the four states.
In New York, the first change point was estimated to be March 28. The posterior mean of the basic reproduction number decreased from 2.24 (between March 22 and March 27) to 1.63 (between March 28 and April 8). Notably, March 28 was the date when the Centers for Disease Control and Prevention (CDC) issued a 14-day domestic travel advisory for non-essential persons, which presumably alleviated the situation in populated states such as New York. The second change point appeared around April 9, and the R0 of the third segment dropped to 0.98 with a 95% credible interval of [0.76, 1.25]. This matched the exact day when New York state posted its first drop in ICU admissions since the COVID-19 outbreak began. The third change point was around April 27. Though no direct intervention was issued in late April, the mayor of New York City announced that all major events had been canceled starting from April 20. This action could have had a positive effect in controlling the outbreak, and our estimation from the SIR model suggested a further decrease in the basic reproduction number to 0.66 with a 95% credible interval of [0.54, 0.81]. We observed another change point around June 18, close to the Phase II reopening of New York state on June 22. During Phase II reopening, restaurants were allowed to open for outdoor dining, stores opened for in-person retail, and more services resumed operation under strict limitations. Thus, we saw a slight "bounce back" in R0 from 0.66 to 0.82. The last change point was on June 29. As expected, the basic reproduction number increased to 1.04 with a 95% credible interval of [0.84, 1.29] in the last segment. Although there was no public announcement around June 29 (credible interval June 28 to July 4), we suspect that increased social interaction during the Independence Day long weekend (July 3 to July 5) could be responsible for the increase in transmission dynamics.
In Texas, five change points were detected. The first change point was estimated to be March 28, the same day as the first one for New York state. For a similar reason, the policy of mandatory 14-day quarantines for travelers entering Texas could explain the decrease in the basic reproduction number (from 2.97 to 2.07). The second change point was around April 9, with a further drop of R0 to 1.14 (95% credible interval [0.96, 1.35]). We found that the Texas Governor had extended the state's disaster declaration for an additional 30 days on April 12. The extension aimed at protecting the health and safety of Texans by ensuring adequate capacity to support communities; organizations such as the State Operations Center and the Strategic National Stockpile would continuously supply the state government with the resources needed to protect residents. May 25 was detected as the third change point, and it was the first time that R0 increased after the two drops; the estimated basic reproduction number was 1.29 with a 95% credible interval of [1.02, 1.62]. This increase around May 25 could be due to the Governor's updated executive order issued on May 26, which allowed additional services and activities to open for phase II reopening. The next change point was around June 16, and R0 further increased to 1.72 with a 95% credible interval of [1.40, 2.11]. According to the prediction reported by the University of Texas at Austin's COVID-19 Modeling Consortium at the end of May, there might be a significant increase in the number of cases and hospitalizations beginning in mid-June (news from KXAN). Here, the change point location and the increased basic reproduction number were consistent with the results of that report. The last change point was around June 28, with an estimated decrease in R0 to 1.42 (95% credible interval [1.17, 1.72]). Notably, the Texas Governor issued multiple executive orders from late June to early July to mitigate the disease spreading. For instance, the executive order on June 26 reemphasized the limited occupancy for all business establishments in Texas. According to an executive order on July 2, all Texans were required to wear a face covering in public spaces in counties with 20 or more positive COVID-19 cases. On the same day, the Governor announced an update to the June 26 executive order with additional measures to slow the spread of COVID-19.
In Florida, the first estimated change point was April 3, two days after the statewide stay-at-home order for Florida. We estimated that the basic reproduction number decreased from 2.70 to 1.28 after this change point. The second change point appeared around the middle of April. Starting from April 13, some counties, such as Osceola County, enforced face covering in public places. This could explain why we observed a slight decrease in R0 from 1.28 to 0.92, with a 95% credible interval of [0.73, 1.15]. The next change point was located around May 13, and R0 in this new stage went above 1 again, with a posterior mean of 1.19 and a 95% credible interval of [0.96, 1.50]. We noticed that Florida entered phase I reopening on May 18, which could have led to the "bounce back" situation. The fourth change point was around June 7, two days after the phase II reopening in Florida, in which Universal Orlando opened its parks to the general public for the first time in months; we observed that R0 increased again to 1.81 with a 95% credible interval of [1.50, 2.19]. In the last segment (after June 27), our results revealed a slight drop in the basic reproduction number from 1.81 to 1.45. This change was potentially related to the requirement of facial coverings in the four most populated cities in Florida: Tampa, Orlando, Miami, and Jacksonville, where face mask mandates went into effect starting from June 19, 20, 25, and 29, respectively. Therefore, the drop in transmissibility at the end of June may be explained by the effectiveness of wearing face masks as a non-pharmaceutical practice.
In California, we detected two change points. California was the first state to announce a lockdown during the COVID-19 pandemic, and its stay-at-home order became effective on March 19. Our change point detection could miss these early actions because the data we analyzed started from March 22. The first selected change point was on April 5, with the basic reproduction number decreasing dramatically in the transition to the second segment (from 2.20 to 1.20). The second change point was on June 17, and R0 increased to 1.35 in the last segment, with a 95% credible interval of [1.12, 1.62]. According to the California Governor, higher-risk businesses and venues (e.g. movie theaters, bars, gyms) were allowed to reopen with restrictions on June 12. Hence, the increase in the basic reproduction number could be a consequence of reopening; similar observations were made in New York and Texas.
Clustering U.S. states based on their change point locations
We applied BayesSMILES to all 50 U.S. states. Based on the results, we sought to derive an overall picture of the COVID-19 dynamics across states. We summarized the temporally detected change points of the 50 states into common patterns, and then labeled each state by matching its specific change point pattern to the common patterns. In particular, for each state we calculated the marginal posterior probability of inclusion (PPI) for all time points, where the PPI for a time point t was computed from the B MCMC samples retained after burn-in: p_t^PPI = (1/B) Σ_{b=1}^{B} ζ_t^{(b)}. We then obtained the vector p^PPI = (p_1^PPI, . . . , p_T^PPI). Each entry of p^PPI is a value between 0 and 1, representing the proportion of iterations in which time t was selected as a change point.
Next, we computed an overall pattern by averaging the vectors p^PPI across the 50 states. We noticed that some time points were rarely or never selected as change points, which naturally suggested that we could group the time points. To illustrate this, we trimmed the top 20% of p_t^PPI values for each time t (Figure 6). The trimming step provided a clear pattern and highlighted the groups of dates that were commonly identified as change points. We observed three time spans, as shown in Figure 6: March 27 to April 11, May 1 to May 10, and May 22 to July 3. For each state, we defined its cluster label based on the corresponding change point detection results. If a given state had at least one change point (including its credible interval) between March 27 and April 11, the first element of its cluster label was set to "Change"; otherwise, it was set to "Stable". We repeated the same process to determine the second and third elements of the label for each state. In the end, each state was assigned to a cluster label "Change-Change-Change", "Change-Stable-Change", or "Stable-Change-Change". The map in Figure 7 colors each of the 50 states based on its cluster label, where green, yellow, and pink correspond to the temporal patterns "Change-Stable-Change", "Change-Change-Change", and "Stable-Change-Change", respectively. Interestingly, three of the four states analyzed above, New York, Texas, and California, belonged to the same category, "Change-Stable-Change". Other states in this category include Georgia, Arizona, North Carolina, and Louisiana. All of these states were among the top ten states with the most confirmed COVID-19 cases. We noticed that the phase I statewide reopening for all of these states occurred in mid-May (May 15 for Georgia, May 13 for Arizona, May 8 for North Carolina, May 15 for Louisiana). Accordingly, our model did not report any change points for these states between May 1 and May 10. The remaining states among the top ten with the most confirmed COVID-19 cases, including Florida, Illinois, and New Jersey, were labeled "Change-Change-Change", and all of them had a change point between May 1 and May 10. As discussed in Section 6.1, Florida had a change point around May 13 with a credible interval of [May 8, May 18]. According to Executive Order 2020-32 issued by the Illinois Governor, the state entered phase II reopening starting on May 1 with a modified stay-at-home order. For New Jersey, the statewide stay-at-home order was not lifted until June 9.
However, our model suggested a change point around the end of April, with a credible interval of [April 25, May 3] and a drop in R0 from 1.11 to 0.68 (details available at https://shuangj00.github.io/BayesSMILES/). We noticed that on May 3 the New Jersey Governor announced a multi-state agreement to develop a regional supply chain for personal protective equipment, other medical equipment, and testing. This joint-state protective measure allowed for efficient delivery and reliability of medical equipment across states and therefore made the best use of life-saving resources in the face of the COVID-19 outbreak.
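The cluster-label construction described above can be sketched in R as follows (hypothetical object names; not the original implementation):

# Sketch of the cluster labeling rule (hypothetical inputs).
# 'cp_intervals' is a data.frame with one row per detected change point, containing
# the state name and the credible-interval bounds 'lower' and 'upper' as Date values.
windows <- list(c("2020-03-27", "2020-04-11"),
                c("2020-05-01", "2020-05-10"),
                c("2020-05-22", "2020-07-03"))

label_state <- function(cp) {
  parts <- vapply(windows, function(w) {
    overlaps <- any(cp$upper >= as.Date(w[1]) & cp$lower <= as.Date(w[2]))
    if (overlaps) "Change" else "Stable"
  }, character(1))
  paste(parts, collapse = "-")
}

labels <- sapply(split(cp_intervals, cp_intervals$state), label_state)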
Predicting new confirmed cases for U.S. states
Reliable and accurate short-term forecasting of the new daily confirmed COVID-19 cases Ċ_{T_f} at a future time T_f is important for both policy-makers and healthcare providers. We illustrated how to use BayesSMILES to predict the new confirmed cases in Section 4.4. The idea is to make the short-term forecast based only on the observed data in the last available segment, ensuring that only the most recent disease characteristics are utilized. We compared BayesSMILES with the standard stochastic SIR model in which all observed data from the first time point are used; we named this model FullDataSIR. Figure 8 shows the true values of the new daily confirmed COVID-19 cases and the predictions made by BayesSMILES and FullDataSIR for the four major states. The 7-day forecast was chosen from July 20 to 26.
First of all, the predictive mean from FullDataSIR tended to be larger than that from BayesSMILES. This was because the basic reproduction numbers in the early stage (i.e. from late March to early April) were usually very large due to the lack of effective interventions; as a consequence of including those data, FullDataSIR inflated the predictions. We then quantified the prediction accuracy using the mean absolute percentage error (MAPE). The MAPE for the 7-day forecast is defined as MAPE = (100%/7) Σ_{T_f} |Ċ_{T_f} − Ĉ_{T_f}| / Ċ_{T_f}, where Ċ_{T_f} and Ĉ_{T_f} are the observed and predicted new confirmed cases at a future time T_f and the sum runs over the seven forecast days. The smaller the MAPE value, the more accurate the prediction. The numerical summary is shown in Table 3. For New York, Texas, and Florida, the MAPEs from BayesSMILES were much smaller than those from FullDataSIR, suggesting a better performance of BayesSMILES. For California, however, the two methods performed almost the same in terms of the short-term forecast.
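For concreteness, the MAPE above could be computed as in this small R sketch (hypothetical vectors; not the authors' code):

# Mean absolute percentage error over a 7-day forecast horizon.
# 'observed' and 'predicted' are length-7 vectors of new daily confirmed cases.
mape <- function(observed, predicted) {
  100 * mean(abs(observed - predicted) / observed)
}
mape(observed, predicted)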
Conclusion
In this paper, we proposed BayesSMILES, a Bayesian segmentation model for analyzing longitudinal epidemiological data, to characterize the transmission dynamics of an infectious disease such as COVID-19. Our approach includes a Bayesian Poisson segmented regression model to detect multiple change points from the sequence of daily actively infectious case counts. The identified change points correspond to latent events that significantly altered disease spreading rates, while the resulting segments are characterized by unique epidemiological patterns. We further describe the disease transmissibility for each segment using a stochastic time-invariant SIR model, assuming that the transmission rate remains the same until the next change point. Our model outputs a series of basic reproduction numbers R0 over stages to track the changes in spreading rates during a pandemic.
We applied BayesSMILES to analyze the COVID-19 daily report data of the 50 U.S. states. Our results showed that the COVID-19 outbreak declined substantially after stringent interventions were implemented in several states, including New York, Texas, and Florida. Meanwhile, the identified change points matched well with the timelines of publicly announced intervention strategies. The change in the basic reproduction number between two adjacent segments might be used to quantify the effectiveness of an intervention, which could help us understand the impact of different control measures. Several downstream analyses based on the BayesSMILES results were conducted. In particular, we clustered the temporal patterns of the 50 U.S. states based on their change point locations, which revealed an interesting spatial pattern related to the COVID-19 dynamics. Lastly, we demonstrated that our method can also improve the short-term forecasting of new daily confirmed cases.
A potential issue of BayesSMILES is that the change point locations, which are identified in the Poisson segmented regression model, are treated as fixed when estimating the basic reproduction numbers with the stochastic SIR model. Such a two-stage approach might underestimate the uncertainties in the R0k's. A diagnostic method named simulation-based calibration (SBC) (Talts et al., 2018) is available to assess whether the model inference has properly quantified the uncertainty; using SBC to evaluate the soundness of the current MCMC sampling methods could be a future exploration. Another potential extension of the current work is to utilize advanced versions of the MH algorithm in the MCMC schemes, for example MH with delayed rejection (Mira et al., 2001), the combination of delayed rejection and adaptive Metropolis samplers (Haario et al., 2006), the multiple-try Metropolis (Liu et al., 2000; Martino, 2018), or the methods discussed in Liang et al. (2011). One may also extend the Poisson error structure in the change point detection model to a negative binomial distribution for modeling over-dispersed count data. Furthermore, the current BayesSMILES framework can be generalized to characterize temporal patterns in other epidemiological data; to do so, the segmented regression model should not be restricted to countable outcomes. Due to concerns about data accuracy, the results provided by the proposed method must be interpreted with caution. For instance, the number of confirmed cases depends heavily on test capacity, and the number of recovered cases may suffer from under-reporting. How to improve the statistical power and prediction accuracy under those circumstances is worth investigating.
Software
We provide software in the form of R/C++ code on GitHub at https://github.com/shuangj00/BayesSMILES. It includes a tutorial on implementing BayesSMILES, using the U.S. state-level COVID-19 data as an example.
Besides, we have designed a website, https://shuangj00.github.io/BayesSMILES/, to summarize the inference results for the 50 U.S. states as a supplement to Section 6. The website shows (1) the detected change points for each U.S. state and (2) the COVID-19 transmission dynamics based on the segment-varying basic reproduction numbers R0, including their posterior means and 95% credible intervals.
A1. Approximate the multivariate normal density function
This section provides the details of the multivariate normal density approximation used to improve computational efficiency in Section 4.1. We consider a general setting as follows. Let y be an n × 1 vector, W be an n × q matrix, U be an n × p matrix, and X = (W, U) (an n × (p + q) matrix). Let Σ be a (q + p) × (q + p) diagonal matrix whose first q diagonal elements are h_0 and whose last p diagonal elements are h_1. Suppose h_0, h_1, σ² > 0. By the Woodbury identity and Sylvester's determinant identity, we can rewrite the quadratic form as σ² y^T (XΣX^T + σ² I_n)^{-1} y = y^T y − y^T X (X^T X + σ² Σ^{-1})^{-1} X^T y, together with a corresponding expression for the determinant |XΣX^T + σ² I_n|, where | · | denotes the matrix determinant and I_n is the n × n identity matrix. Define Ũ (respectively ỹ) as the residual after regressing out W from U (respectively y). Zhou and Guan (2018) showed that the expressions in (10) and (11) can be further simplified when h_0 → ∞. The results are summarized below, with the proof available in the supplement of Zhou and Guan (2018).
The conclusions in Lemma 1 can be used to improve the computational efficiency of evaluating the multivariate normal probability density function, shown in Equation (7), in our model. Within each segment k we assumed α_k ∼ MN(X_k b_k, σ²_k I_{n_k}). Under the prior specification discussed in Section 3.3, we have α_k ∼ MN(0, X_k H X_k^T + σ²_k I_{n_k}), and the corresponding p.d.f. of α_k is given in Equation (14), where n_k denotes the segment length. Next, we simplify the calculation of the determinant |X_k H X_k^T + σ²_k I_{n_k}| and the inverse (X_k H X_k^T + σ²_k I_{n_k})^{-1} by using Lemma 1. Consider U = (t_1, . . . , t_{n_k})^T and let W be a column vector of 1's of length n_k. The vector y in our case corresponds to α_k, and Σ = H = diag(h_0, h_1). Lemma 1 states that the inverse and determinant of an n_k × n_k matrix can be reduced to those of a p × p matrix, and in our case p = 1 (since the regression model includes only "time" as a covariate besides the intercept). Therefore, the computational benefit can be substantial when n_k is large. We first derive the formula for P as follows.
Next, according to Equation (12), the determinant |X_k H X_k^T + σ²_k I_{n_k}| can be approximated by |(h_1/σ²_k) Ũ Ũ^T + I_{n_k}| × |h_0 W W^T + σ²_k I_{n_k}| when h_0 → +∞, and Ũ can be derived explicitly. To calculate |h_0 W W^T + σ²_k I_{n_k}|, note that W W^T is an n_k × n_k matrix; applying Sylvester's determinant lemma then gives a simplified form of the expression in Equation (16). Combining the results in Equations (15) and (17), we can approximate the matrix determinant in (14) as h_0 → ∞. Next, it is straightforward to derive the exponent part of Equation (14) using the result in Equation (12); we then approximate the exponent part of Equation (14) under the condition that h_0 → ∞.
We further introduce the notation used in Equations (18) and (19). By combining the approximations in (18) and (19), we obtain the final approximation of the density. In practice, assuming that all covariates are standardized, this approximation works well if h_0/h_1 is large enough. We suggest using h_0 = 10,000 and h_1 = 10, which yielded highly accurate approximations in our simulations.
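The matrix identities used above can be checked numerically on a toy example; the R sketch below (not from the original code; all names are hypothetical) verifies the Woodbury-based simplification of the quadratic form with the suggested h_0 and h_1 values:

# Numerical check of the Woodbury-based simplification (toy example).
set.seed(1)
n <- 50; h0 <- 10000; h1 <- 10; sigma2 <- 1
W <- matrix(1, n, 1)                        # intercept column
U <- matrix(scale(1:n), n, 1)               # standardized "time" covariate
X <- cbind(W, U)
Sigma <- diag(c(h0, h1))
y <- rnorm(n)

direct   <- sigma2 * t(y) %*% solve(X %*% Sigma %*% t(X) + sigma2 * diag(n)) %*% y
woodbury <- t(y) %*% y - t(y) %*% X %*% solve(t(X) %*% X + sigma2 * solve(Sigma)) %*% t(X) %*% y
all.equal(c(direct), c(woodbury))           # TRUE up to numerical tolerance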
A2. Results for Single Simulated Dataset
This section provides additional simulation results for Scenarios 1, 2, and 3 described in Section 5.1. For each scenario, we randomly selected a single simulated dataset from the 50 replicates. Figure 9 shows the PPM estimates of the change point indicator ζ. The red dashed and blue solid lines represent the true and estimated change point locations, respectively, while the gray ribbons indicate the corresponding 95% credible intervals. As we can see, BayesSMILES correctly identified the true change points. The resulting ARI values were 0.87, 0.85, and 0.95, respectively, while the MI values were 1.21, 1.18, and 1.31, respectively. Figure 10 shows our estimates of the basic reproduction number R0k for each segment partitioned by the identified change points. The red dashed and blue solid lines pinpoint the true values and posterior means of the R0k's, while the two black solid lines mark the boundaries of the 95% credible intervals. In all scenarios, the true values were within their corresponding 95% credible intervals. The R0 RMSEs for the single datasets from the three scenarios were 0.33, 0.36, and 0.14, respectively.
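For reference, the evaluation metrics quoted above could be computed as in the following R sketch (hypothetical vectors; the mclust package is assumed for the adjusted Rand index):

# Sketch of the evaluation metrics (hypothetical inputs).
# 'truth' and 'estimate' assign each time point to a segment; 'r0_true' and 'r0_hat'
# are the true and posterior-mean basic reproduction numbers per segment.
library(mclust)
ari     <- adjustedRandIndex(truth, estimate)   # agreement of the two segmentations
rmse_r0 <- sqrt(mean((r0_true - r0_hat)^2))     # RMSE of the R0 estimates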
[Figure 9, panel (c): change point detection performance for Scenario 3.]
Figure 10: Simulation study: the posterior distributions of the R0k's for k = 1, 2, 3, 4, estimated from the segmented time-series data given the three identified change points shown in Figure 9; panel (c) shows the basic reproduction number estimation performance for Scenario 3. The red dashed and blue solid lines are the true and estimated values of the R0k's, respectively. The two black solid lines are the lower and upper bounds of the 95% credible intervals.
| 35,992.6 | 2020-10-08T00:00:00.000 | [
"Medicine",
"Environmental Science",
"Mathematics"
] |
The feasibility of measuring the activation of the trunk muscles in healthy older adults during trunk stability exercises
Background As the older adult population increases, the potential functional and clinical burden of trunk muscle dysfunction may be significant. An evaluation of risk factors is required, including the impact of the trunk muscles in terms of their temporal firing patterns, amplitudes of activation, and contribution to spinal stability. Therefore, the specific purpose of this study was to assess the feasibility of measuring the activation of trunk muscles in healthy older adults during specific leg exercises with trunk stabilization. Methods 12 asymptomatic adults 65 to 75 years of age were included in the study. Participants performed a series of trunk stability exercises while bilateral activation of abdominal and back extensor muscles was recorded by 24 pairs of Meditrace™ surface electrodes. Maximal voluntary isometric contractions (MVIC) were performed for electromyographic (EMG) normalization purposes. EMG waveforms were generated, and amplitude measures as a percentage of MVIC were calculated along with ensemble average profiles. 3D kinematic data were also recorded using an electromagnetic sensor placed at the left lateral iliac crest. Furthermore, a qualitative assessment was conducted to establish the participants' ability to complete all experimental tasks. Results Excellent quality abdominal muscle activation data were recorded during the tasks. Participants performed the trunk stability exercises with an unsteady, intermittent motion, but were able to keep pelvic motion to less than 10°. The EMG amplitudes showed that during these exercises, on average, the older adults recruited their abdominal muscles at 15–34% of MVIC and their back extensors at less than 10% of MVIC. There were similarities among the abdominal muscle profiles. No participants reported pain during the testing session, although 3 (25%) of the participants reported delayed onset muscle soreness during follow-up that was not functionally limiting. Conclusion Older adults were able to successfully complete, with some minor modifications, the trunk stability protocol that was developed for younger adults. The collected EMG amplitudes were higher than those reported in the literature for young healthy adults. The temporal waveforms for the abdominal muscles showed a degree of synchrony among muscles, except for the early activation of the internal oblique prior to lifting the leg off the table.
Background
Biomechanical research has demonstrated the role of trunk muscle activation during functional activities and exercise [1]. The significance of the trunk muscles for spinal stability has previously been established in working-age adults and has been linked to the prevention and rehabilitation of low back pain (LBP) [2]. In particular, endurance and coordination of trunk muscle activity are key characteristics for maintaining the stability of the spine and therefore decreasing the effects of low back pain [1]. Furthermore, it has been demonstrated that trunk muscle activity occurs during locomotion, and while its precise role is not well understood, the evidence suggests that the trunk muscles fulfill a critical role in the dynamic control of posture [3]. A link between trunk muscle function, low back pain, and physical function in older adults has been reported [4]. It is clear that both the abdominal and the back extensor muscles are key components that contribute to spinal stability [1] and functional stability [3]. Given these interrelationships and the desire to prevent falls in this group, there is a need for better assessments of trunk muscle activation in older adults.
LBP is one of the most common medical disorders in older persons, affecting up to half of those over 65 years of age [5]. In the Iowa 65+ Rural Health Study, LBP was reported by 23.6% of older women and by 18.4% of older men in the year prior to the survey [6]. There is an enormous health care cost for LBP in older adults, as nearly 75% of persons with this disorder may utilize medical and chiropractic services, while 25% have at least one hospitalization directly related to LBP, and 5% have low back surgery [6]. Furthermore, 15 to 40% of the elderly respondents reported some type of disability, in the form of limitations in walking, sitting, bending over, and performing household chores, while 21% of the participants attributed sleep disturbance to LBP [6].
In older adults, trunk extensor strength declines more quickly with aging than does appendicular strength [7]. This loss of strength in axial muscles has been associated with kyphosis [8]. Research has revealed that older adults have a delayed onset of trunk muscle activation, a reduction in the amplitude of paraspinal muscle firing, and smaller stretch reflexes in response to dynamic stability perturbations [9]. Furthermore, in a cross-sectional analysis of older adults using multivariate models, isometric trunk extensor strength was demonstrated to be predictive of maximal walking speed among women, but not men [10]. A more recent study demonstrated that higher fat infiltration around the spine in persons aged 70 to 79 years was associated with reduced functional capacity, as reflected by compromised balance [4]. Finally, Hwang et al. (2008) confirmed that feed-forward responses of the paraspinal muscles are compromised in older adults, based on response times reported for expected and unexpected perturbations.
With the older adult population (65+ years) being a fast-growing age group in North America, we should expect to see the incidence of LBP and functional decline increase. An evidence-based approach to treatment is required to help mitigate the potential for spiraling health care costs and increased disability. The potential functional and clinical burdens of trunk muscle dysfunction in this population will be significant. Therefore, further evaluation of risk factors is required, including the impact of the trunk muscles with respect to their temporal firing patterns, amplitude of activation, and contribution to spinal stability. Methods for measuring trunk muscle function have previously been established for younger adults [11,12], but very little is known about these methods in the older adult population. This study will focus on a leg-loading exercise that requires the abdominal muscles to respond in order to minimize lumbo-pelvic motion [11,13,14].
The long-term goal is to improve diagnostic assessment tools for the "trunk" stabilizing muscles, and to better understand how impairments such as LBP affect the neuromuscular responses, so that we may improve our understanding of therapeutic exercises used in the management of LBP, especially in the older population. The specific objectives of this study were i) to assess the feasibility for measuring the activation of the trunk muscles in an older adult population, ii) to determine the safety of a method for measuring co-activation of trunk muscles, and iii) to quantify trunk muscle activation patterns during a leg-loading task that has been used as both an assessment tool and a therapeutic exercise.
The original protocol was used to investigate neuromuscular patterns of working-age adults (20-50 yrs) during a dynamic exercise stability test [11] that was designed to challenge spinal stability. Previous work has illustrated that those with LBP respond to the demands of dynamic challenges differently than those without LBP [13,15-18]. Although there may be serious consequences of neuromuscular impairments of the trunk muscles, including their effects on LBP and mobility in older adults, the authors are not aware of any published study that focuses on the characterization of trunk muscle activation of older adults during a therapeutic exercise.
Participants
Sixteen asymptomatic healthy adults between 65 and 80 years of age were recruited through word of mouth to participate in the study. Participants were excluded if they had a history of LBP, previous abdominal or back surgery, previous spinal fracture, or any other major musculoskeletal, cardio-respiratory, or neurological condition. Throughout the screening process, 4 individuals (2 males, 2 females) were excluded due to medical problems (arrhythmia, coronary artery disease, osteoporosis), leaving 12 individuals (7 males, 5 females) to participate in the study.
Screening & Questionnaires
Prior to participation, all individuals were required to read and sign an informed consent approved by the Capital Health District's Research Ethics Board. All participants were interviewed to determine any medical conditions that would prevent their participation in the study, their health status, and information regarding participation in regular abdominal exercise and fitness routines.
Participants were asked to attend two testing sessions: the first one being 30 minutes in duration, and the second session 2 hours. During the first session, a postural and neurological assessment was conducted by a physiotherapist to screen for any obvious fixed abnormal spinal postures (kyphosis, lordosis, or scoliosis) and lower extremity neuromuscular deficits (myotomal strength, reflexes, and dermatomal sensation). Furthermore, a mental status examination was performed to ensure adequate cognitive ability to participate in the research study (score > 23) [19]. Standard demographic data was collected from each individual, including age, sex, occupation, and anthropometric data such as mass (in kilograms), height (in centimeters), and waist circumference (in centimeters). Body mass index (BMI) was calculated from the height and mass measures. Demographic data is found in Table 1.
Trunk Stability Exercise Protocol
Three levels of a trunk stability exercise protocol [11,16] were used to progressively challenge the individual's stabilizing system without placing high mechanical loads on the lumbar vertebrae. During the first testing session, participants were shown and given a verbal description of each exercise and told to practice each of the 3 levels 5 times on 3 separate occasions before returning for their second testing session.
During the second testing session the trunk stability exercises were performed in random order to minimize effects of fatigue and learning. Each exercise was repeated three times with a one-minute rest between trials. The start and end position for each exercise level was standardized, with participants lying in a supine position and knees flexed to 90°. Participants were given instructions to activate the abdominal muscles prior to lifting or extending their legs.
In Level 1 of the exercise, participants were instructed to lift their right foot off the table, flex the knee and the hip to 90° and have the thighs contact the wooden frame, then lift the left foot off to the same position, then lower the legs (left then right) to return to the starting position. Level 2 of the exercise protocol added a right knee extension phase after the left thigh came in contact with the wooden frame, as shown in Figure 1. The right heel slid along the table until the knee was fully extended, upon which the hip and knee were then flexed and returned to the 90° hip angle position, and subsequently both legs were lowered to the starting position (left then right). Level 3 repeated the sequence of Level 2, except that the foot did not touch the bed until the knee was fully extended [11]. At full extension, only the heel was briefly tapped on the bed before the right leg was returned to the wooden frame without further contact with the bed.
Electronic switches were placed on the right thigh and the plantar surface of the right foot to identify temporal events so that the motion could be subdivided into distinct phases of lift, extend, and lower. The total exercise was defined from right foot off the bed to right foot back on the bed. For levels 2 and 3, the extension phase was from knee off the wooden frame to knee back on the wooden frame.
Normalization Exercises
Normalization exercises followed the trunk stability exercises to elicit the maximal voluntary isometric contraction (MVIC). Participants performed a randomized series of isometric exercises in an attempt to recruit all motor units for amplitude normalization purposes. Exercises for each muscle site were chosen based on previous work [20-22] and muscle functional testing procedures [23]. While other methods of normalization have been used to answer specific questions [24], normalizing to MVIC is an acceptable standard for comparison, but the results need to be interpreted within the limitations of this procedure [25]. Using a variety of exercises is important to elicit maximal contractions from the abdominal muscles [21,26], and providing feedback and motivation to the participant is necessary to increase the potential for eliciting a maximum effort [27]. The exercises included a resisted sit-up, back extension, lateral bend, and trunk rotation, all of which have been used in previous studies on asymptomatic controls and those with low back pain [11,13,16]. In all exercises, the participants were stabilized using Velcro straps, and manual resistance was applied to minimize potential movement. Each exercise was repeated twice with a two-minute rest between exercises to minimize fatigue.
Electromyography (EMG)
Surface EMG (3-AMT-8, Bortec™, Canada) was recorded from 24 muscle sites on the abdomen (12) and back (12) while participants performed the trunk stability exercise tasks. The electrode placements were selected to provide information on bilateral sites for a comprehensive set of abdominal and back extensor muscle sites [28], and the sites are consistent with de Seze's recent work [29]. Medi-trace™ Ag/AgCl surface electrodes (10 mm diameter, bipolar configuration, 30 mm centre-to-centre) were attached after standard skin preparation to the left and right sides of: i) lower rectus abdominis (LRA): centered on the muscle belly midway between the umbilicus & the pubis [30]; ii) upper rectus abdominis (URA): centered on the muscle belly midway between the sternum & the umbilicus [30]; iii) external oblique anterior fibres (EO1): over the 8th rib adjacent to the costal cartilage [31]; iv) external oblique lateral fibres (EO2): 15 cm lateral to the umbilicus oriented at 45° [21]; v) external oblique posterior fibres (EO3): midpoint between the lowest part of the ribcage and the iliac crest; and vi) internal oblique (IO): centered in the triangle formed by the inguinal ligament, the lateral border of the rectus abdominis sheath, and the line between the anterior superior iliac spine [32]. For the back extensors, electrodes were attached at four sites on the erector spinae: vii) and viii) lumbar longissimus, 3 cm from the spinous process at lumbar level 1 (L1-3) and level 3 (L3-3) [33-35]; ix) and x) lumbar iliocostalis.

Figure 1 Level 2 of the exercise progression as per the description in the text. The heel slides along the table until the right knee is fully extended. Electronic switches are located on the right foot [B], which contacts the metal plate located on the table [A], and the right thigh [C], which contacts the wooden frame [D], to identify temporal events so that the motion may be divided into distinct phases. The total exercise was from foot off the bed to foot back on the bed. For levels 2 and 3, the knee extension phase was from knee off the wooden frame to knee back on the wooden frame.
There were some difficulties in locating landmarks for EMG electrode placement and electrode adherence due to skin movement. The electrodes were placed during standing and then verified using palpation and resisted exercises to ensure proper placement and to assess cross talk while participants were in the supine position. The exercises for verification attempted to isolate the specific muscle site and were based on manual muscle testing [23,37]. The back extensor sites were stable between the standing and lying position. However, minor adjustments were made for a small number of subjects in which movement due to adipose tissue occurred between standing and lying supine for some abdominal sites. The normalization exercises were used as a validation for proper placement, since exercises were aimed at recruiting specific abdominal muscles and minimizing activity in other muscles.
Motion Capture
A Flock of Birds Motion Capture™ system was used to record the angular motion of the pelvis during the tasks. A sensor was placed on the antero-superior portion of the left lateral iliac crest. The sensors detected changes in 3D motion with respect to a global reference, providing an overall measure of motion, but the measurements cannot be related directly to an anatomical reference (Figure 2). The motion data was synchronized to the EMG data via the external sensors, with each motion profile normalized to 100% time. The motion data was used to ensure minimal movement of the trunk and pelvis and to confirm that participants were able to maintain their lumbar pelvic position of a neutral spine throughout all tasks.
Data processing and analysis
Root mean square amplitudes were calculated for the total exercise for all three levels and for the extension phase for levels 2 and 3. These amplitudes were normalized to the %MVIC [11,38]. In addition, the raw EMG signals were full-wave rectified and low pass filtered at 6 Hz using a second order recursive Butterworth filter to yield a linear envelope waveform. The waveforms were time normalized for the total exercise and then amplitude normalized to the %MVIC. Ensemble average waveforms were calculated for each muscle, and a coefficient of variation was calculated for the waveforms [39].
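The processing chain just described can be sketched as follows; the sampling rate, array names, zero-lag filter application and the particular coefficient-of-variation formula are illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch of the EMG processing described above. The sampling rate,
# variable names, zero-lag (filtfilt) application of the 6 Hz Butterworth filter
# and the CV formula are assumptions, not details from the paper.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # assumed sampling rate in Hz

def rms_amplitude(emg):
    """Root-mean-square amplitude over an exercise (or phase) window."""
    return np.sqrt(np.mean(emg ** 2))

def linear_envelope(emg, fs=FS, cutoff=6.0, order=2):
    """Full-wave rectify, then low-pass filter at 6 Hz with a 2nd-order Butterworth."""
    rectified = np.abs(emg)
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, rectified)

def time_normalize(envelope, n_points=101):
    """Resample an envelope to 0-100% of the exercise duration."""
    x_old = np.linspace(0.0, 100.0, len(envelope))
    x_new = np.linspace(0.0, 100.0, n_points)
    return np.interp(x_new, x_old, envelope)

def percent_mvic(values, mvic_amplitude):
    """Express amplitudes as a percentage of the MVIC amplitude."""
    return 100.0 * np.asarray(values) / mvic_amplitude

def ensemble_stats(trials):
    """Ensemble average and one common waveform coefficient of variation (%)
    across repeated trials (rows = time-normalized %MVIC waveforms)."""
    trials = np.asarray(trials)
    mean_wave = trials.mean(axis=0)
    cv = 100.0 * np.sqrt(np.mean(trials.var(axis=0))) / np.mean(mean_wave)
    return mean_wave, cv
```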
The maximum difference in angular motion around the three axes was calculated during the entire exercise for all three levels and for the leg extension phase of levels 2 and 3.
Participants
Participants were recruited from the local community and had several years of university/college education (oceanographer (2), teacher/professor (3), social worker (2), supervisor, superintendent, engineer, librarian, and physician). Four older adults were excluded after the first session due to health problems pertaining to the health screening questionnaire, as previously detailed in the methods section. All 12 participants (7 men and 5 women) participated in both testing sessions, leaving a 0% drop-out rate.
Trunk Stability Exercise
Throughout the first session, the older adults were found to have difficulty following and remembering instructions. This problem was resolved by simply demonstrating the actions, in addition to verbalizing the instructions. It was found that the older adults performed the exercises with intermittent, jerky motions as opposed to a smooth motion. An attempt was made to correct this throughout the first session and participants were instructed to practice these smoothed movements at home. Subsequently, performance was improved from a qualitative perspective during the second session.
Normalization and Validation Exercises
Three participants (25%) were instructed to perform the normalization exercises at a sub-maximal effort given their history of hypertension. Furthermore, it was necessary to remind the participants to breathe throughout the trunk muscle contractions, due to the possibly harmful increase in intra-thoracic/intra-abdominal pressure associated with the Valsalva maneuver. The male participants reported "not being able to push as hard if they did not hold their breath". In contrast, it is uncertain whether the women exerted a true maximum effort, based on self-report and the observed absence of visible exertion. During normalization trials the men were not hesitant to express fatigue and the need to rest between trials, but the women often reported "feeling fine" and being "ready to go again" immediately following each trial. However, in order to minimize the effects of fatigue, the standard rest time (120 sec) was applied for both groups.
Motion
Participants were able to minimize pelvic motion during the total exercise to an average of less than 10°, and to less than 5° during the extension phase of the trunk stability tasks, for all three axes, as shown in Table 2.

Table 2 note: Means (SD) are displayed for Yaw (motion about the z-axis), Pitch (motion about the y-axis), and Roll (motion about the x-axis) with respect to a global reference.

Figure 2 Motion capture. The Flock of Birds Motion Capture™ sensor was placed on the antero-superior portion of the left lateral iliac crest. Note that the z-axis is positioned perpendicular to the sensor. This sensor records motion with respect to a fixed global reference and not an anatomical reference.
Electromyography
Although there were some challenges in fixing the electrodes to the participants' skin, namely 1) difficulty in locating landmarks for EMG electrode placement because of loose skin and excess adipose tissue, and 2) electrode adherence to the skin during movement, all EMG signals were of high quality, with excellent signal-to-noise ratios. Examples of the abdominal signals are illustrated in Figure 3. There was low-amplitude noise from the magnetic motion sensors, but this had no effect on the abdominal muscles and no visible effect on the back extensor muscles. Pilot work showed that the low-level noise decreased the further the sensor was from the transmitter, and that it was negligible at a distance greater than 60 cm; for the exercise tasks, this distance was greater than 60 cm. The mean amplitude for all 24 muscle sites is presented in Table 3. As expected, participants activated the back extensor muscles to low amplitudes, with all sites less than 10% MVIC. The abdominal muscles were recruited to amplitudes from 15% to 35% MVIC (lower amplitudes for level 1, higher amplitudes for level 3). The ensemble average profiles for levels 1-3 are shown in Figure 4, and the coefficients of variation for these waveforms are found in Table 4. These profiles illustrate how the muscles respond to the changes in loading throughout the trunk stability tasks. Of note is the higher bilateral internal oblique amplitude at the beginning of all exercise levels.
Pain
No participants reported pain during the testing, and all participants were able to complete the full protocol with some minor adjustments to the normalization exercises as previously discussed. Follow-up phone calls were made to all participants following the testing sessions. Three men reported mild delayed-onset muscle soreness, although they were not limited in their daily function.
Discussion
Generally, older participants were able to perform the exercises necessary for surface EMG measurement of trunk muscle activation patterns without significant difficulty, much like their younger cohorts demonstrated in previous studies [11,16]. Most importantly, they were able to perform the normalization procedures and exercise tasks safely, and without any major adverse health sequelae. For the former, however, three participants were instructed not to perform maximal level exercises because of their hypertension. All three were males, and their percent MVIC values were not the highest for the test exercise. It was assumed that they produced an effort that was within normal variability for maximum efforts. The implications of this alteration in procedure are discussed below. The results have important positive implications for future studies that focus on the activation of trunk muscles by demonstrating the feasibility of collecting comprehensive electromyographic data during this standardized task in healthy older adults.
Modifications were made to the protocol for older persons, both to increase the margin of safety and to enhance the probability of success in performing the tasks. For example, during the normalization procedure, participants were constantly reminded to breathe throughout the isometric contractions in order to reduce the potential effect of exercise-induced high blood pressure from a continued Valsalva maneuver, as found in a previous study [40]. Although there were minor challenges in participants learning to perform the trunk stability exercises, the addition of a live demonstration of these tasks to the participants helped to overcome any major obstacles to older adults being able to successfully perform these tasks. Adipose tissue in the older adult participants made it challenging for the investigators to properly palpate musculoskeletal landmarks, and ensure that the electrodes were secure. Shifting of the electrodes occurred during the tasks, and some of the electrodes were required to be readjusted. However, this could happen in younger participants with increased waist circumference, and is not a challenge strictly for the older cohorts. Nevertheless, despite the challenges in the older adults, the validation exercises confirmed that the electrodes were properly placed and quality electromyography signals were recorded from the trunk muscles.
Excellent quality data were successfully recorded in the older adults during the leg-loading exercises. It was important that the older adults performed the exercise correctly, and the motion data support that they were able to minimize pelvic motion during performance of the exercise. The recorded motions gave an overall assessment of the pelvic motion with respect to a global reference, and all angular displacement differences were less than 10 degrees. The older adults did not produce high levels of activity in the back extensor sites (< 10% MVIC) during performance of the exercise, although they were higher than reports for younger adults [11,16]. The results also showed a degree of symmetry between the left and the right abdominal muscle sites, although a few (3/12) abdominal muscles had greater than a 4% MVIC difference between sides for the single-leg extension exercises (i.e., EO1 for levels 2 and 3, and EO3 for level 2). The increase in amplitude of activation from levels 1 to 3 is consistent with reports for younger adults. However, the abdominal muscle amplitudes were slightly higher than amplitudes reported for younger adults [11,16]. This may reflect the decrease in strength in older adults [41] and consequently their need to work at a higher percentage of MVIC to perform the same task. As previously mentioned, three participants did not perform maximal exercises for normalization purposes due to their heart conditions. This could also explain the slightly higher amplitude for the older adults compared to the literature for young adults [11,16]. While it is recognized that there are limitations in using maximum voluntary activations for normalization purposes, and others have explored different normalization procedures [24,42], the results do provide an indication of the amplitude of muscle activity during the exercise compared to the voluntary maximum and allow for comparisons among muscles [43]. There will be variability in how forcefully individuals voluntarily contract their muscles; on average, voluntary effort has been shown to reach 94% of the maximum amplitude produced during burst superimposition for the knee extensors [27]. In the present case, the average difference between including and excluding the participants who performed sub-maximal efforts was 0.76% MVIC. The results show that for all exercise levels the activation did not exceed 35% of MVIC, so at best these exercises produced moderately low activation amplitudes in this group. It was also noted that, in studies that use MVICs for normalization purposes, comparisons among studies show consistency in amplitudes and allow conclusions to be drawn with respect to the demand that an exercise places on specific muscles [44]. Therefore, the implication of sub-maximal efforts is that the amplitudes as a percentage of maximum for the test exercises were overestimated, and the exercises elicited even lower percent MVICs than presented in Table 3.
Figure 4 Sample ensemble average waveforms.
Since the amplitudes recruited for the abdominal muscles were less than 35% of MVIC, even the highest exercise level would not suffice as a strength training protocol. This conclusion is based on the American College of Sports Medicine guidelines for exercise training in older adults [41]. However, the exercises appear to be beneficial for recruiting trunk muscles, especially the abdominal muscles that are not normally recruited in everyday activities. Furthermore, the coordination of trunk muscle activity that would be encouraged by performing the exercises in this protocol is considered important for spinal stability [44]. The coefficients of variation for this older adult group are similar to those reported for younger asymptomatic subjects [16], but lower than those reported for young adults with chronic low back pain [13].
The profiles illustrate that the participants activated their abdominal muscles to a relatively consistent level throughout the exercise and did not have specific responses to the different phases within the exercise (i.e., a constant co-activation among the synergists rather than a response to the different loading associated with the leg lift/lower phases and the leg extension phase). While one may argue that the instructions were specific to stabilize the pelvis, resulting in co-activity, an asynchronous firing pattern was previously reported for those who had low back pain who were given the same instructions [15]. The higher level of activation of the internal oblique prior to the second leg lifting from the table suggests that it has an important role initially for stability, whereas the other muscles respond to the leg lifting portion of the exercise. This pattern for the internal oblique may have been a result of the instructions to stabilize the pelvis prior to lifting the foot off the table.
There were several limitations regarding the generalizability of this study. Firstly, all of the participants were well-educated and therefore they may have been able to perform this task with a lesser degree of difficulty than an older adult who was not as well educated. This may be especially true, since some of the participants may have been more likely to engage in procedural, executive tasks in their daily work environment. Secondly, only 3 of the participants were over 70 years of age, and the extent that "older old" adults, such as those persons who were over 80 years of age, could perform the tasks was not determined in this study, and should be explored in future studies. Finally, these volunteers were healthy, and the degree to which older adults with back pain or mobility impairments could or would participate has not been determined. However, it is the authors' opinion that the protocol that was utilized could be sufficiently modified to accommodate for individuals with various impairments.
Another concern with the current protocol may be the normalization procedure using maximal contractions. Kasman [43] acknowledged that, while a participant with pathology may not be able to activate their muscles to a maximum amplitude, it is still important to report the percentage of the maximal voluntary amplitude at which they are willing to recruit. Previous research demonstrated that even participants with joint pathology and pain are able to voluntarily activate their muscles to over 94% of their maximum when provided with learning and feedback [27]. While Lewek's study [27] was performed on lower limb muscles and not trunk muscles, it should be noted that these exercises have been used as a method for normalization for those with chronic low back pain with no adverse effects [13]. Given the lack of a gold standard for EMG normalization, while the MVIC has its limitations and must be interpreted within them, it is still the best standard for comparison among muscles [25,45].
Further research will be required to determine whether the protocol can be safely applied to an older adult population with various musculoskeletal disorders, such as LBP, again paralleling the research conducted on younger adults. A comparison study on younger sex-matched adults will help to elucidate the nature of the differences.
Conclusion
Older adults were able to successfully complete the trunk stability exercise protocol to measure trunk muscle activation that was previously used for younger adults, with some minor modifications to the protocol with respect to instructions and normalization exercises. There were no adverse effects reported during or following the procedure. Quality EMG data from 12 bilateral trunk muscle sites were recorded during performance of the exercise allowing for study of co-activation among muscles during specific phases of the protocol. The older adults used low to moderate activation amplitudes to perform the trunk stability exercises. These results provide a foundation for future studies to evaluate the utility of EMG recordings of the trunk muscles during a limb-loading task as a clinical assessment tool for older adults with pathology. The paper established that this protocol for evaluating trunk muscle function was feasible, and that quality EMG records were obtained from this older adult group.
| 7,282.8 | 2008-12-04T00:00:00.000 | [
"Engineering",
"Medicine"
] |
Quantum Approach to Cournot-type Competition
The aim of this paper is to investigate Cournot-type competition in the quantum domain with the use of the Li-Du-Massar scheme for continuous-variable quantum games. We derive a formula which, in a simple way, determines a unique Nash equilibrium. The result concerns a large class of Cournot duopoly problems, including competition where the demand and cost functions are not necessarily linear. Further, we show that the Nash equilibrium converges to a Pareto-optimal strategy profile as the quantum correlation increases. In addition to illustrating how the formula works, we provide the readers with two examples.
The Li-Du-Massar scheme appears to be a generally accepted quantum scheme for duopoly examples. It provides a "minimal" quantum structure of a two-player strategic-form game with a continuum of strategies. The scheme, originally designed for Cournot duopoly, enables the players to avoid an inefficient Nash equilibrium by means of quantum resources. Moreover, it preserves the uniqueness of the solution [10,22]. To date, however, the result has been proved only for a specific Cournot duopoly example. Thus, the natural question arises whether the uniqueness of the Nash equilibrium and its efficiency (Pareto optimality) hold in a more general setting. A natural generalization of the classically played Cournot duopoly is due to [23] (see also [24]), where the payoff functions of the players are assumed to depend on the demand and cost functions. Then suitable requirements on the payoff functions, such as concavity of the demand function and convexity of the cost function, imply a unique Nash equilibrium. This work is intended to generalize the above-mentioned fact to the game played according to the Li-Du-Massar model.
Our presentation is self-contained and designed to be accessible without a background in game theory or quantum theory. The work starts with the necessary preliminaries from game theory. We also provide the reader with the idea of the quantum game scheme introduced in [10]. In Section 4 our main results are stated and proved.
Preliminaries on Game Theory
For completeness of exposition, we recall some of the standard facts on game theory that will be needed throughout the paper.
The basic object studied in game theory is a game in strategic form [25].
Definition 1 A game in strategic form is a triple (N, (S i ) i∈N , (u i ) i∈N ) in which -N = {1, 2, . . . , n} is the set of players, -S i is the set of strategies of player i, for each player i ∈ N , -u i : S 1 × · · · × S n → R is a function associating each vector of strategies s = (s i ) i∈N with the payoff u i (s) to player i, for each player i ∈ N .
The Cournot duopoly problem is one of the earliest economic models of competition between two players [26]. Players 1 and 2 offer quantities q_1 and q_2 of a homogeneous product and compete for the same market of potential buyers. The price of the product is a decreasing function that depends on the total quantity q = q_1 + q_2. Based on [27], the Cournot duopoly example can be viewed as a strategic-form game (N, (S_i)_{i∈N}, (u_i)_{i∈N}) with the components defined as follows: 1. the set of players is N = {1, 2}; 2. player i's strategy set is S_i = [0, ∞) with typical element q_i; 3. player i's payoff function u_i is given by (1), where P(q_1, q_2), specified in (2), represents the price of the product, and c is a marginal cost such that a > c > 0.
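The display equations (1) and (2) are not reproduced here. Based on the classical equilibrium strategy (a − c)/3 and the Pareto-optimal profile ((a − c)/4, (a − c)/4) quoted later, they presumably take the standard Cournot form below (a reconstruction, not a quotation from the original):

```latex
% Reconstruction of (1)-(2); not verbatim from the source.
\begin{align}
u_i(q_1, q_2) &= q_i\,P(q_1, q_2) - c\,q_i, \qquad i = 1, 2, \tag{1}\\
P(q_1, q_2) &= \begin{cases} a - (q_1 + q_2), & q_1 + q_2 < a,\\ 0, & q_1 + q_2 \ge a. \end{cases} \tag{2}
\end{align}
```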
Nash equilibrium is a fundamental solution concept of strategic-form games. It is a way of predicting reasonable results of the game, in particular, the result of the Cournot duopoly example. A Nash equilibrium is a strategy vector at which each strategy is a best reply to the other strategies [25].
Definition 2 A strategy vector s* = (s*_1, . . . , s*_n) is a Nash equilibrium if for each player i ∈ N and each strategy s_i ∈ S_i the following is satisfied:

The Cournot competition defined by (1) and (2) has a unique Nash equilibrium. The existence of the Nash equilibrium is due to the concavity of the payoff functions. In general, the following theorem can be proved [28].
Theorem 1 Let X_1 and X_2 be two compact convex sets in R^m and R^n, respectively. Let u_1, u_2 : X_1 × X_2 → R be continuous functions such that u_1(x_1, x_2) is concave in x_1 for every fixed x_2 and u_2(x_1, x_2) is concave in x_2 for every fixed x_1. Then the game determined by u_1 and u_2 has a Nash equilibrium.

One can check (see, for example, [27]) that the Nash equilibrium in the Cournot competition is not efficient. The players can benefit from playing the strategy profile ((a − c)/4, (a − c)/4).
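The display formulas belonging to Definition 2 and to the Cournot equilibrium are not reproduced above; based on the standard definitions and on the values (a − c)/3 and (a − c)/4 used later in the paper, they presumably read as follows (a reconstruction, not a quotation from the original):

```latex
% Reconstruction; not verbatim from the source.
\begin{gather}
u_i(s^*_1,\dots,s^*_n) \;\ge\; u_i(s^*_1,\dots,s^*_{i-1},\, s_i,\, s^*_{i+1},\dots,s^*_n)
  \quad\text{for all } s_i \in S_i,\ i \in N, \\[4pt]
q^*_1 = q^*_2 = \frac{a - c}{3}.
\end{gather}
```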
Definition 3 Given a collection of payoff functions u_1, . . . , u_n for an n-person nonzero-sum game, we say that a strategy profile (x*_1, . . . , x*_n) is Pareto-optimal if there is no strategy profile (x_1, . . . , x_n) such that u_i(x_1, . . . , x_n) ≥ u_i(x*_1, . . . , x*_n) for every i ∈ {1, . . . , n}, with strict inequality for at least one i.
The Li-Du-Massar Quantum Duopoly Scheme
Let us recall the key elements of the Li-Du-Massar approach to duopoly examples [10] (see [29] for more details). Let |00⟩ be the initial state and J(γ) = e^{−γ(a†_1 a†_2 − a_1 a_2)} be a unitary operator, where γ ≥ 0 and a†_i (a_i) represents the creation (annihilation) operator of electromagnetic field i. Player i's strategies are unitary operators of the form (7). The operator J(γ) and the strategy profile then determine the final state. The quantity q_i (or the price p_i in the case of Bertrand duopoly examples) is obtained by performing the measurement X_i = (a†_i + a_i)/√2 on the final state; the result is given by (9). We obtain the quantum extension of the classical Cournot duopoly by substituting (9) into (1). We see from (7) that player i's strategies can be identified with choosing the real number x_i. Furthermore, (9) shows that the scheme correlates the players' strategies, and the higher the value of γ, the stronger the correlation between x_1 and x_2.
It is worth pointing out that the resulting outputs (9) are not in units of x i 's. Given x 1 and x 2 fixed, we see at once that q i increases with γ , for i = 1, 2.
For convenience, we normalize (9). This can be done by setting (11); it follows easily from (11) that the resulting quantities become (12). Both ways of describing the correlation of x_1 and x_2 are equivalent when studying the Cournot duopoly by means of the Li-Du-Massar scheme. One can check that substituting (12) into (1) results in the unique Nash equilibrium (x*_1, x*_2) given by (13). In the case of (9) the Nash equilibrium strategy is (14), which is simply (13) divided by e^γ. Using (12) is more convenient, for example, for comparing the classical and quantum equilibria. Note that strategy (13) ranges from the classical equilibrium strategy (a − c)/3 to the strategy (a − c)/4, which is part of the Pareto-optimal profile.
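The formulas (9), (12) and (13) are not reproduced in the text above. From the expression (x_1 cosh γ + x_2 sinh γ)/e^γ used in the proof of Proposition 1 and from the stated limits (a − c)/3 and (a − c)/4, they presumably read as follows (a reconstruction, not a quotation):

```latex
% Reconstruction of (9), (12), (13); not verbatim from the source.
\begin{align}
q_i &= x_i\cosh\gamma + x_j\sinh\gamma, \qquad j \neq i, \tag{9}\\
q_i &= \frac{x_i\cosh\gamma + x_j\sinh\gamma}{e^{\gamma}}, \tag{12}\\
x^{*}_1 &= x^{*}_2 = \frac{(a - c)\cosh\gamma}{e^{\gamma} + 2\cosh\gamma}. \tag{13}
\end{align}
```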
Quantum Approach to Generalized Cournot Duopoly
The generalization of the Cournot duopoly, as presented in [24], assumes that the price P(q_1, q_2) of the product is a function of the demand D(q), which depends on the total quantity q = q_1 + q_2. The cost function C(q_i) returns the cost of producing q_i units of the good. As a result, player i's payoff function is of the form (15). We now determine the payoff functions u_i according to the Li-Du-Massar approach. It is easily seen that the normalized quantities (12) satisfy the equation q_1 + q_2 = x_1 + x_2. By substituting (12) into (15) we obtain the payoff functions (16) for x_1, x_2 ≥ 0. As the following proposition states, the payoff functions (16) determine a game with a unique Nash equilibrium under some further restrictions on D and C.
Proposition 1 Suppose that the demand function D is a twice-differentiable function of the total quantity, with a negative and decreasing derivative on (0, a), and with D(a) = 0 and D(x) = 0 for x ≥ a, for some a > 0.
Let the cost function C(x_i) be a strictly increasing, twice-differentiable, non-negative convex function. Then the game defined by (16) has exactly one Nash equilibrium, given by (x*, x*). The Nash equilibrium strategy is determined by the unique solution of equation (19) in the interval 0 < x < a.
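Equations (16) and (19) are likewise not reproduced in the text. Under the normalized quantities (12), the payoff and the symmetric first-order condition presumably take the following form; this reconstruction is consistent with the linear example (it gives x* = (a − c) cosh γ/(e^γ + 2 cosh γ)) and with Remark 1 below, but it is an inference, not a quotation:

```latex
% Reconstruction of (16) and (19); not verbatim from the source.
\begin{align}
u_i(x_1, x_2, \gamma) &= \frac{x_i\cosh\gamma + x_j\sinh\gamma}{e^{\gamma}}\,D(x_1 + x_2)
  - C\!\left(\frac{x_i\cosh\gamma + x_j\sinh\gamma}{e^{\gamma}}\right), \qquad j \neq i, \tag{16}\\
\frac{\cosh\gamma}{e^{\gamma}}\,\bigl(D(2x) - C'(x)\bigr) + x\,D'(2x) &= 0. \tag{19}
\end{align}
```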
Proof Note that under the above assumptions the second derivative of u_i with respect to x_i is negative. Hence u_i, for i = 1, 2, is strictly concave in x_i, and by Theorem 1 there exists a Nash equilibrium in the strategic-form game defined by (16). According to the assumptions on D(x_1 + x_2), the demand vanishes for x_1 + x_2 ≥ a, and in that case player i's payoff reduces to −C((x_{1(2)} cosh γ + x_{2(1)} sinh γ)/e^γ). Since this quantity is negative for x_1 + x_2 ≥ a and γ ≥ 0, and C is strictly increasing, player i is better off choosing x_i = 0 rather than x*_i ≠ 0. Furthermore, it cannot be the case that (x*_1, x*_2) = (0, 0). Indeed, ∂u_1(x_1, x_2, γ)/∂x_1 is continuous with respect to x_1 and ∂u_1(0, 0, γ)/∂x_1 > 0. By the sign-preserving property, ∂u_1(x_1, 0, γ)/∂x_1 > 0 in a nonempty interval (0, ε). It follows that u_1(x_1, 0, γ) is strictly increasing in (0, ε). Hence player 1 gains from switching from strategy x_1 = 0 to x_1 > 0. Let us assume that x*_i > 0 for i = 1, 2. Since x*_1 is a Nash equilibrium strategy, the payoff function u_1(x_1, x*_2, γ) attains its maximum at x_1 = x*_1; this gives the first-order conditions (23) and (24). By assumption, the demand function D depends merely on x_1 + x_2; this clearly forces ∂D/∂x_1 = ∂D/∂x_2. Subtracting (23) from (24) yields (26). Suppose, contrary to our claim, that x*_1 > x*_2. Since C is convex, the derivatives ∂C/∂x_i for i = 1, 2 are increasing. Moreover, ∂D(x_1 + x_2)/∂x_1 < 0 for x_1 + x_2 ≠ 0. Using the fact that the inequality x_1 > x_2 is equivalent to x_1 cosh γ + x_2 sinh γ > x_2 cosh γ + x_1 sinh γ for all γ ∈ [0, ∞), we conclude that the left-hand side of (26) is negative. So it must be the case that x*_1 ≤ x*_2. However, by a similar argument, (x*_1, x*_2) with x*_1 < x*_2 does not satisfy (26). We thus get x*_1 = x*_2 = x*. Note that, by the chain rule, the derivatives (28) and (29) hold. Since we can restrict our attention to x_1 = x_2 = x, the derivatives (28) and (29) in this case can be written as (30). Taking into account (30) we can simplify (23) and (24) to a single equation, which is equivalent to (19). The proof is completed by showing that the equilibrium strategy x* is the unique root of (19). Denote by h(x) the left-hand side of (19). Then, by assumption, h(x) is positive for x sufficiently close to 0. Since D is continuous, D(a) = 0, and it has a negative and decreasing derivative, h(y) is negative for y ∈ (0, a/2) sufficiently close to a/2. Direct consideration of dh(x)/dx shows that h(x) is strictly decreasing in 0 < x < a/2. Hence h(x) has a unique root in 0 < x < a/2. This finishes the proof.
Remark 1 Proposition 1 becomes a reformulation of Theorem 7.2.4 (see [24]) in the case γ = 0. According to that theorem, the Nash equilibrium strategy is supposed to be the unique solution of equation (34), provided the functions D and C satisfy assumptions similar to those given in Proposition 1. In fact, (34) does not lead us to the Nash equilibrium, as the term D'(2x) in (34) needs to be handled with greater care. Applying (34) to the Cournot duopoly example mentioned in the Preliminaries (see also Example 1 below) yields the equation a − 4x − c = 0, whose solution is part of the Pareto-optimal outcome. In order to avoid this issue we need to take the derivative of D(2x) with respect to 2x.
In what follows, we apply Proposition 1 to determine Nash equilibria in the quantum Cournot-type competition. The following example gives us a look at how (19) simplifies the analysis required to find Nash equilibria compared to [10,22].
Example 1 Consider the classical Cournot duopoly example studied in [10,22], with the linear demand and cost functions of the Preliminaries. In that case (19) becomes a linear equation in x, which leads to a unique solution. This is equivalent to the result obtained by means of the best-response correspondences [22].
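As an illustration of how equation (19), in the reconstructed form above, pins down the equilibrium for the linear example, the following sketch solves it numerically and compares the result with the closed-form expression (a − c) cosh γ/(e^γ + 2 cosh γ); both the code and the closed form are illustrative reconstructions, not material from the paper.

```python
# Numerical check of the reconstructed equilibrium condition (19) for the linear
# Cournot example D(q) = a - q, C(q) = c*q. Illustrative only.
import numpy as np
from scipy.optimize import brentq

def equilibrium_strategy(D, dD, dC, a, gamma):
    """Solve cosh(g)/e^g * (D(2x) - C'(x)) + x * D'(2x) = 0 on (0, a/2)."""
    k = np.cosh(gamma) / np.exp(gamma)
    f = lambda x: k * (D(2 * x) - dC(x)) + x * dD(2 * x)
    return brentq(f, 1e-9, a / 2 - 1e-9)

a, c = 1.0, 0.1
D  = lambda q: max(a - q, 0.0)   # demand
dD = lambda q: -1.0              # D'(q)
dC = lambda x: c                 # marginal cost C'(x)

for gamma in (0.0, 0.5, 1.0, 2.0, 5.0, 10.0):
    x_num = equilibrium_strategy(D, dD, dC, a, gamma)
    x_closed = (a - c) * np.cosh(gamma) / (np.exp(gamma) + 2 * np.cosh(gamma))
    print(f"gamma = {gamma:4.1f}   numeric = {x_num:.6f}   closed form = {x_closed:.6f}")

# At gamma = 0 the strategy equals (a - c)/3 = 0.3; as gamma grows it approaches
# (a - c)/4 = 0.225, the Pareto-optimal value quoted in the text.
```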
As the next example illustrates, Nash equilibria can also be easily found when the demand function or the cost function is not linear.
Example 2 Let the demand and cost functions be given by (38).
Equation (19) then takes an explicit form and yields a unique positive solution. Having determined the Nash equilibrium in the game defined by (38), we are now in a position to compare the resulting payoffs for different values of γ. Focusing on strategy profiles of the form (x, x), we can use u_i(x_1, x_2, 0) to study the payoffs in both the classical and the quantum game, as we have u_i(x, x, 0) = u_i(x, x, γ). A direct calculation shows that the resulting equilibrium payoff increases as γ increases to infinity (see Fig. 1).
In the classical version of the Cournot problem presented in Example 1, the Nash equilibrium is not efficient. Both players could strictly benefit from playing ((a − c)/4, (a − c)/4), the Pareto-optimal profile that is the limit of the Nash equilibria in the Li-Du-Massar approach to the game as γ goes to infinity. The same applies to Example 2. The limit of the right-hand side of inequality (41) is the maximal payoff that both players can gain in the game. It turns out that this is a general result.

Fig. 1 The values of the payoff function u_i in Example 2 for a = 1.

Proposition 2 Let (x*(γ), x*(γ)) be a Nash equilibrium in the Li-Du-Massar approach to the generalized Cournot duopoly. Then the strategy profile (x*, x*) with x* = lim_{γ→∞} x*(γ) is Pareto-optimal.
Proof We obtain the Pareto-optimal profile in the classical Cournot duopoly by solving the maximization problem (42). By the definition of u_1 and u_2, and since C is convex, it is sufficient to consider the maximization of a single-variable function in order to obtain a solution of (42). Write g(x) = xD(2x) − C(x). Then (46) holds in the interval 0 < x < a/2. We conclude from (46) that g is strictly concave in 0 ≤ x ≤ a/2, and finally that a local maximum x* of g is unique and global. Clearly x* satisfies (47). Note that (47) is the limit of (19) as γ goes to infinity. We know from Proposition 1 that (19) gives a Nash equilibrium strategy in the quantum Cournot duopoly. Thus, (19) determines the Pareto-optimal equilibrium as γ approaches infinity. This is the desired conclusion.
Conclusions
Studies on quantum game theory so far have given us a lot of information about how specific games can be described in the quantum domain. The work presented in this paper was an attempt to generalize some of the existing results rather than examine another game. Our research has shown that the results concerning a Cournot duopoly example can be extended to a wide class of games. In every case of Cournot-type competition covered here, including nonlinear demand and cost functions, the Li-Du-Massar approach to the game implies a unique Nash equilibrium that converges to the Pareto-optimal outcome as the entanglement measure goes to infinity. The equilibrium can easily be found by solving an equation of similar complexity to the one in the classical case.
| 3,882.2 | 2017-10-30T00:00:00.000 | [
"Physics"
] |
Enteric Neural Crest Differentiation in Ganglioneuromas Implicates Hedgehog Signaling in Peripheral Neuroblastic Tumor Pathogenesis
Peripheral neuroblastic tumors (PNTs) share a common origin in the sympathetic nervous system, but manifest variable differentiation and growth potential. Malignant neuroblastoma (NB) and benign ganglioneuroma (GN) stand at opposite ends of the clinical spectrum. We hypothesize that a common PNT progenitor is driven to variable differentiation by specific developmental signaling pathways. To elucidate developmental pathways that direct PNTs along the differentiation spectrum, we compared the expression of genes related to neural crest development in GN and NB. In GNs, we found relatively low expression of sympathetic markers including adrenergic biosynthesis enzymes, indicating divergence from sympathetic fate. In contrast, GNs expressed relatively high levels of enteric neuropeptides and key constituents of the Hedgehog (HH) signaling pathway, including Dhh, Gli1 and Gli3. Predicted HH targets were also differentially expressed in GN, consistent with transcriptional response to HH signaling. These findings indicate that HH signaling is specifically active in GN. Together with the known role of HH activity in enteric neural development, these findings further suggested a role for HH activity in directing PNTs away from the sympathetic lineage toward a benign GN phenotype resembling enteric ganglia. We tested the potential for HH signaling to advance differentiation in PNTs by transducing NB cell lines with Gli1 and determining phenotypic and transcriptional response. Gli1 inhibited proliferation of NB cells, and induced a pattern of gene expression that resembled the differential pattern of gene expression of GN, compared to NB (p<0.00001). Moreover, the transcriptional response of SY5Y cells to Gli1 transduction closely resembled the transcriptional response to the differentiation agent retinoic acid (p<0.00001). Notably, Gli1 did not induce N-MYC expression in neuroblastoma cells, but strongly induced RET, a known mediator of RA effect. The decrease in NB cell proliferation induced by Gli1, and the similarity in the patterns of gene expression induced by Gli1 and by RA, corroborated by closely matched gene sets in GN tumors, all support a model in which HH signaling suppresses PNT growth by promoting differentiation along alternative neural crest pathways.
Introduction
Peripheral neuroblastic tumors (PNTs) comprise a spectrum of neural crest (NC) derived neoplasms that occur along the sympathetic chain, ranging in state of differentiation and malignancy. Ganglioneuromas (GNs) are benign tumors at the most differentiated end of the spectrum, composed of large neuronal cells, surrounded by satellite cells resembling glia. Neuroblastomas (NBs) span the rest of the spectrum, varying in malignancy and displaying variable degrees of neural and glial differentiation [1]. Clinically, the position of a PNT along the spectrum is not invariably fixed, as metastatic, poorly differentiated tumors can undergo spontaneous regression. When these tumors regress, they may necrose or transform into GNs [2]. PNTs thus demonstrate a dynamic inverse correlation between differentiation and malignancy, suggesting that inducing differentiation may be a potent therapeutic strategy [3].
Developmental signaling pathways may regulate differentiation in PNTs. To identify candidate developmental signals, we analyzed differential expression of markers and determinants of neural crest differentiation in GN and NB. We have previously demonstrated that transcription factors advancing NC development are expressed at higher levels in GN than NB [4]. Using transcriptome-wide microarray analysis of GN and NB, we have found that GNs, characterized by their advanced glial and neuronal differentiation, diverge from the sympathetic phenotype of their sites of origin. These highly differentiated, stroma-rich tumors express genes typical of enteric neural and glial fate and function, including key elements on the Sonic Hedgehog (SHH) pathway. We propose that Hedgehog pathway activity re-directs PNTs from a sympathetic nervous system (SNS) differentiation trajectory to an alternative neural crest trajectory, that of the enteric nervous system (ENS).
SHH is a potent regulator of development throughout the nervous system, and a potential oncogene. SHH is a mitogen for diverse CNS progenitors, including cerebellar granule cell precursors [5], and cells of the subventricular zone of the cerebrum [6]. In the cerebellum, mutations that enhance HH signaling are oncogenic, driving the formation of human medulloblastomas, both sporadic and syndromic [7], and murine medulloblastomas in transgenic Ptc+/− and SmoA1 mice [8,9]. During peripheral neuro-ectodermal development, SHH is critical for specific differentiation of NC cells of the cranium and the gut [10,11].
As NC cells emerge from the dorsal neural tube, they are outside the domain of SHH emanating from the ventral floor plate [12]. Cranial NC cells, however, encounter SHH as they migrate. HH signaling induces cranial NC cells to diverge from typical neuroectodermal fate, giving rise to tissues characteristically of mesodermal origin, including cranio-facial cartilage and bone, and the cardiac outflow tracts [10]. NC cells from the vagal region migrate into the embryonic gut and encounter SHH produced by endodermal cells. For cranial and enteric NC cells, SHH acts as mitogen and morphogen, regulating both proliferation and differentiation [11,13].
While HH signaling is essential for NC development, and is oncogenic in CNS progenitors, a role for HH signaling in PNTs has not previously been defined. Our finding of increased HH activity in GN suggested that the HH pathway might promote PNT differentiation. To test the potential of HH signaling to affect PNT behavior, we transduced NB cells with Gli1, a central effector of the transcriptional response to HH signaling. We then analyzed Gli1-transduced cells for effects on cell growth and gene expression and compared the set of genes induced in NB cells by Gli1 to the sets of genes induced by retinoic acid (RA) or differentially expressed in GN compared to NB. Our findings demonstrate that HH signaling can re-direct developmental potential in PNTs, away from the typical SNS phenotype, promoting alternative differentiation.
Differential gene expression
Analysis of gene expression by microarray of 79 PNTs, including 11 GNs and 68 NBs, strongly differentiated the two sets of tumors. For each gene in the array, we compared expression values for GNs and NBs by modified t-test [14]; 2500 probe sets representing 2107 genes were differentially expressed in GN and NB with a p-value of <0.0001 (Table S1). Among these differentially expressed genes, several patterns emerged, demonstrating that GNs differ from NBs not only in extent of differentiation but also in specification within the range of potential NC fates.
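The 'modified t-test' of reference [14] is not specified in the text; purely as an illustration of a per-probe-set screen at the threshold used here, a stand-in version with Welch's t-test might look like the sketch below (the data layout and numbers are hypothetical).

```python
# Illustrative per-probe-set differential-expression screen between GN and NB.
# Welch's t-test stands in for the modified t-test of reference [14]; the
# expression matrices and sample counts below are simulated, not real data.
import numpy as np
from scipy import stats

def differential_probes(gn_expr, nb_expr, alpha=1e-4):
    """gn_expr, nb_expr: arrays shaped (n_probesets, n_samples).
    Returns indices of probe sets with GN-vs-NB p-value below alpha."""
    _, p = stats.ttest_ind(gn_expr, nb_expr, axis=1, equal_var=False)
    return np.where(p < alpha)[0], p

rng = np.random.default_rng(0)
gn = rng.normal(size=(12625, 11))   # hypothetical array-sized matrix, 11 GNs
nb = rng.normal(size=(12625, 68))   # 68 NBs
hits, pvals = differential_probes(gn, nb)
print(f"{len(hits)} probe sets pass p < 1e-4 (few are expected for pure noise)")
```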
Catecholamine biosynthesis enzymes and sympathetic neuropeptides are expressed more strongly by NB than GN

Consistent with SNS phenotype, NBs expressed comparatively high levels of the catecholamine biosynthesis enzymes tyrosine hydroxylase (TH), dopa-decarboxylase (DDC) and dopamine beta-hydroxylase (DBH; Table 1). These proteins are strongly expressed by sympathetic neurons. Chromogranin A and B (CHGA, CHGB) and neuropeptide Y (NP-Y) are similarly expressed throughout the sympathetic chain [15] and, in our analysis, were specifically expressed in NBs (Table 1). The relatively high expression of sympathetic markers in NB was confirmed by qRT-PCR. GNs, in contrast, expressed these sympathetic markers poorly (Table 1).
We confirmed the differential expression of sympathetic markers in NB by immunocytochemistry for TH, DBH, CHGA and CHGB. All of these markers demonstrated similar expression patterns. Typical images of CHGA and DBH staining are shown in Figure 1. In NB and adrenal medulla, large clusters of cells strongly express TH and CHGA proteins in a cytoplasmic distribution. In both GN and enteric ganglia, expression was weaker and was limited to a subset of neurons, with many neurons demonstrating no expression. Immunostaining for CHGB appeared almost identical to CHGA, while TH strongly resembled DBH (data not shown).
Differential expression of AP-2 transcription factors distinguishes GN from the sympathetic lineage
Transcription factors AP2-alpha and AP2-beta are homologous genes expressed in succession during SNS development and differentially expressed in GN and NB. Pre-migratory NC cells express AP2-alpha, and this protein is essential for all NC development to proceed [16]. In contrast, post-migratory sympathoblasts express AP2-beta, which is required for adrenergic differentiation [17]. Microarray analysis confirmed, as we have previously demonstrated, that AP2-alpha is expressed at higher levels in GN compared to NB [4]. In contrast, NBs expressed markedly higher levels of AP2-beta (Table 2). The down-regulation of AP2-alpha and complementary up-regulation of AP2-beta demonstrate SNS commitment specific to NB and not found in GN.

Proteins specific to the enteric nervous system are enriched in GN

While GNs expressed low levels of sympathetic markers, genes abundant in the ENS were strongly expressed in GNs. To discern ENS from SNS phenotype we analyzed expression of calcitonin gene related peptide (CGRP), vasoactive intestinal peptide (VIP), glial fibrillary acidic protein (GFAP), and endothelin B receptor (EDNRB). CGRP and VIP are highly expressed in both enteric neurons and dorsal root ganglion cells [18], but only sporadically detectable in sympathetic neurons [19,20]. GFAP is widely expressed by enteric glia but uncommon in Schwann cells and limited to the non-myelinating subset [21,22]. EDNRB is expressed during development in both sympathetic and enteric progenitors as well as melanoblasts but functionally required only for enteric and melanocytic differentiation [23,24]. Microarray analysis and qRT-PCR demonstrated that CGRP, VIP, GFAP and EDNRB were all expressed at markedly higher levels in GN relative to NB (Table 1).
To demonstrate differential expression of ENS markers at the cellular level, we used immunocytochemistry to detect GFAP and CGRP in tumors and control tissues (Fig. 2). In NB, as in adrenal medulla, rare scattered cells expressed CGRP. In contrast, in GN, strong CGRP expression was detected in cell bodies of almost all ganglion cells and in neuritic processes. A similar pattern appeared in the ENS, with frequent cytoplasmic labeling of neurons, and intense labeling of axons ( Fig. 2A). GFAP immunostaining in NB was restricted to narrow chords of stroma. GFAP labeling was also seen in rare cells of the adrenal medulla. GN and ENS, however, demonstrated extensive GFAP labeling with a similar fibrillary appearance (Fig. 2B). We have previously demonstrated robust expression of myelin proteins by glial cells in GN [4]; the combination of myelin proteins and GFAP distinguishes the glia of GN from nonmyelinating Schwann cells, suggesting instead an enteric glial phenotype. Taken together, the differential expression of multiple markers in GN confirms divergence from the sympathetic differentiation trajectory, toward an ENS phenotype, an alternative NC fate.
Hedgehog pathway ligand and transcription factors are expressed at high levels in GNs, with predicted effects on known hedgehog target genes

Three homologous transcription factors are activated by the HH pathway: GLI1, GLI2 and GLI3. Transcriptional activation of GLI1 is a sensitive reporter of HH pathway activity [25]. Microarray and qRT-PCR demonstrated differential expression of GLI1 and GLI3 in GN (Table 1). Of the 3 HH ligands, SHH and IHH were not expressed at significantly different levels in GN or NB (data not shown). DHH, however, was specifically expressed in GN (Table 1).
To determine the impact of differential expression of HH pathway genes in GN we examined the expression of genes known to be regulated by HH signaling. Microarray analysis demonstrated upregulation of established HH targets, including IGFBP6 [26], cyclin D2 [27] and NR4a1 [28] (Table 2). To validate these observations, we used qRT-PCR to confirm up-regulation of IGFBP6 in a second independent set of GN (Table 1). Genes known to be down-regulated by HH pathway activity, including anti-mullerian hormone (AMH) [28] and plakoglobin [29] were specifically down-regulated in the GNs ( Table 2). The differential expression of HH targets in GN demonstrates the active influence of the HH pathway.
Gli1 alters the proliferation rate of NB cells
To test whether HH pathway activity might direct PNT behavior, we transduced NB cell lines with Gli1 together with GFP or with GFP alone, and then sorted transduced cells to >95% purity by FACS and analyzed proliferation rate and gene expression. We verified Gli1 activity by transducing a GLI-activated luciferase reporter plasmid (data not shown). Transduction of Gli1 profoundly affected the proliferation of NB cells growing in culture (Fig. 3a,b).
We studied effects of Gli1 over-expression on five NB cell lines with different characteristic patterns of growth: the rapidly proliferating lines SH-SY5Y, BE2(S), BE2(N), BE2(C), and the slow-growing cell line SH-EP1. Gli1 reduced the proliferation rate of each of the rapidly growing lines, while minimally affecting the proliferation rate of SH-EP1 (Fig. 3a). In contrast, in explanted murine cerebellar granule cell progenitors (CGCPs), transduction with Gli1 increased proliferation rate 19-fold (Fig. 3a). To confirm that Gli1 reduced the proliferation of NB cells, we prepared and sorted to purity an independent set of SH-SY5Y cells transduced with Gli1 and GFP (SY-Gli) or with GFP only (SY-GFP) and quantified mitoses by PH3 immunostaining. SY-Gli1 cells demonstrated a 50% decreased mitotic rate compared to SY-GFP (Fig. 3b). Immunostaining for cleaved caspase-3, in contrast, demonstrated no increase in apoptosis in the two populations (data not shown).
Gli1 induces NB cells to express genes directing NC development
We determined the transcriptional response of NB cells to Gli1 by comparing gene expression of SY-Gli1 and SY-GFP by microarray hybridization. Diverse genes involved in NC development were differentially expressed. We found that Gli1 induced differential expression of 193 out of 13,089 genes (228 out of 22,215 probe sets) on the U133 2.0 array with a p-value of <0.0001 (Table S2). Of these 193 genes, we selected 12 developmentally relevant genes for confirmatory study by qRT-PCR, using independently prepared sets of transduced SY5Y cells raised in triplicate. qRT-PCR confirmed differential expression of 11 of 12 genes with a fold change of 2 or greater (Table 3). Up-regulation of only one of these 12 genes, SOX2, could not be confirmed. We also measured the expression of these 12 genes by qRT-PCR in SH-EP1 cells transduced with Gli1 or GFP. 6 of the 12 genes tested were up-regulated by Gli1 in SH-EP1 cells with a fold change of 2 or greater (Table 3), demonstrating both common elements in the transcriptional response to Gli1 and elements that vary with the cellular context.
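The text reports qRT-PCR fold changes without detailing the calculation; a common convention is the 2^(−ΔΔCt) method, sketched below under that assumption with entirely hypothetical Ct values.

```python
# Hedged sketch: fold change by the 2^(-ddCt) convention, assuming normalization
# to a reference (housekeeping) gene. The method and the Ct values are
# illustrative assumptions, not data or procedures taken from the paper.
def fold_change(ct_target_gli, ct_ref_gli, ct_target_gfp, ct_ref_gfp):
    d_ct_gli = ct_target_gli - ct_ref_gli   # normalize target to reference, SY-Gli1
    d_ct_gfp = ct_target_gfp - ct_ref_gfp   # same for SY-GFP control
    dd_ct = d_ct_gli - d_ct_gfp             # Gli1-transduced relative to control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for one transcript:
print(fold_change(ct_target_gli=24.1, ct_ref_gli=18.0,
                  ct_target_gfp=26.3, ct_ref_gfp=18.1))   # about 4.3-fold up
```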
Among the 193 genes regulated by Gli1 in SY5Y cells were crucial determinants of cranial and enteric NC development (Table 4). Cranial NC genes up-regulated by Gli1 included KAL1 [30,31], MSX2 [32] and EDNRA [33,34]. Genes critical to ENS development that were induced by Gli1 included RET [35] and GFRA2 [36]. Gli1-regulated genes could also be grouped by the different cell types they specified (Table 4): KAL1, RET and GFRA2 regulate neural fates, while MSX2 and EDNRA direct the development of bony NC derivatives. Gli1 induced the glial differentiation marker PMP22 and the neuronal marker VMAT2, demonstrating the potential of Gli1 to advance both neural and glial differentiation (Table 4). The observed changes in proliferation and gene expression induced by Gli1 demonstrate the potential for HH signaling to alter the behavior and differentiation trajectory of NB cells.
Gli1 advances differentiation of SY5Y cells
To test further whether Gli1 advanced the differentiation of NB cells in culture, we compared the transcriptional responses of SH-SY5Y cells to either Gli1 transduction or treatment with the differentiation agent RA. RNA from SY-GFP cells treated for 48 hours with either 10 µM all-trans retinoic acid (ATRA) or vehicle was analyzed by microarray, and the resulting datasets were compared to determine the sets of genes differentially expressed in response to RA. Comparison of three replicates of each condition, with p<0.0001, defined a set of 326 genes differentially expressed in SY-Gli compared to SY-GFP, and a set of 2123 differentially expressed in SY-GFP treated with ATRA or vehicle. The overlap of these two sets was striking: 147 genes were regulated by both ATRA and Gli1 in the same direction (p<0.00001 by hypergeometric test). The close correspondence between transcriptional response to Gli1 and RA supports the interpretation that Gli1 drives NB cells toward differentiation.

Figure 2. Enteric NC markers CGRP (A) and GFAP (B) were expressed more strongly in GN than NB. Antibodies to CGRP labeled rare cells (blue arrowheads) in NB and adrenal medulla (Ad), while labeling neurons (blue arrowheads) and processes (blue arrows) throughout both GNs and enteric ganglia (gut). Antibodies to GFAP labeled stromal cords in NB and rare adrenal medullary cells (blue arrowheads). In contrast, labeling was intense and widespread in GN and in ganglia of the gut (outlined by dotted white lines). doi:10.1371/journal.pone.0007491.g002
The differential expression of HH pathway genes and targets in GN, the divergence of GN from SNS fate, and the induction of cranial and enteric NC genes in NB cells by Gli1, suggested that HH signaling might drive PNTs toward the GN phenotype. In order to test the hypothesis that HH activity could induce PNTs to become GNs, we determined whether the transcriptional response of NB cells to Gli1 resembled the specific gene expression pattern of GN. We compared the set of genes regulated by Gli1 in SY5Y cells to the set of genes differentially expressed in GN, compared to NB. We found highly significant overlap between the two sets. Of the 2108 genes found on the U95A array to be differentially expressed in GN with a p-value <0.0001, 1811 are represented on the U133 2.0 array and could be used for comparison. In our initial comparison of SY-Gli and SY-GFP, 193 of the 13,089 genes on the U133 2.0 array were differentially expressed with p<0.0001. Of these 193 genes, 57 genes were differentially expressed in GN compared to NB, a correlation very unlikely to occur by chance (p<0.00001 by hypergeometric test). Thus the effect of Gli1 transduction was to slow the proliferation of SH-SY5Y cells, and induce a transcriptional response that resembled both the response to RA and the specific gene expression pattern of GN.
Absence of effect on N-Myc and up-regulation of RET by Gli1
The decreased proliferation and advanced differentiation of NB cells in response to Gli1 contrasted sharply with the increased proliferation of Gli1-transduced CGCPs. These alternative responses to HH activity might be mediated by crucial, cell-specific differences in the underlying transcriptional response to Gli1. N-Myc is up-regulated by HH pathway activity in CGCPs and functions as an essential effector of Shh-driven proliferation [37]. N-MYC is also an established oncogene in neuroblastoma. We have previously found that the RET proto-oncogene is up-regulated by RA in NB cell lines and plays an essential role in RA-induced differentiation [38]. We investigated the potential induction of N-MYC by Gli1 in PNTs by comparing N-MYC mRNA and protein levels in SY-Gli and SY-GFP cells. N-MYC mRNA expression measured by microarray did not correlate with Gli1 transduction (data not shown). Western blot (Fig. 3c) confirmed that N-MYC protein levels were not substantially altered by Gli1. In contrast, RET protein was strongly induced in SY-Gli cells, consistent with the promotion of differentiation.
Differentiation in GN: SNS vs ENS phenotype
Transcriptome analysis of PNTs revealed divergent differentiation trajectories of GN and NB. Among the many differentially expressed genes, we were able to discern a clear shift in markers for NC subsets: NBs expressed sympathetic markers while GNs expressed a pattern of differentiation markers most consistent with the ENS. Like developing ENS cells, GNs demonstrated activity of the HH pathway, marked by expression of both GLI1 and GLI3 and modulation of known HH target expression in predictable directions.
The ENS phenotype of neurons and glia in GN represents divergence from the characteristic pattern of SNS development in PNTs. GNs typically occur at sites along the sympathetic chain, and their progenitor cells have thus migrated as sympathoblasts. At some point after migration, however, under the influence of HH, these progenitors differentiate as ENS neurons and glia. The phenomenon of GNs arising from sites of metastatic NB, moreover, indicates this divergence may occur after malignant transformation has taken place. While the characterization of GNs as resembling ENS is novel, it has long been known that children with GN may present with gastrointestinal complaints. Enteric dysmotility in these patients has been attributed to secretion by the tumor of enteric neuropeptides [39]. The present findings place this enteric presentation within the context of NC development.
HH pathway as morphogenic and tumor suppressive
During development, HH is both a mitogen and a morphogen, regulating both proliferation and differentiation. The diverse potential effects of HH signaling allow for a range of plausible roles in GN pathogenesis. Although HH-mediated Gli1 activation can drive proliferation and tumorigenesis of cerebellar neural progenitors, and we found that Gli1 transduction increased the proliferation of CGCPs, in NB cells, Gli1 inhibited proliferation and induced expression of RA-responsive genes and genes upregulated in GN. Our observation that Gli1 decreased proliferation of NB cells is not consistent with an oncogenic effect.
We suggest that HH acts in GN as a morphogen, driving not proliferation, but specific fate choice, promoting PNT differentiation rather than growth. In the developing zebrafish retina, SHH signaling promotes the differentiation of neuroblasts, and inhibition of SHH supports progenitor proliferation [40]. In hippocampal neural stem cells, moreover, GLI1 can limit proliferation by inducing apoptosis [41]. Accordingly, GNs may demonstrate the highest differentiation specifically because they have achieved the highest levels of HH activity, depleting tumor stem cells through apoptosis or differentiation. While we did not observe prominent induction of apoptosis in SY-Gli cells, we cannot rule out an apoptotic effect on a sub-population of cells with stem-like properties. A tumor suppressive role for HH signaling would account for the benign, differentiated phenotype of GNs and for the decrease in proliferation of NB cells on transduction with Gli1. This hypothesis may be tested in vivo, and the role of HH signaling in NC development and PNT pathogenesis may be further elucidated, through xenograft studies and animal models of primary PNTs, using targeted activation of Smo to drive HH activity [42] and targeted activation of MYCN to induce PNTs [43].
DHH and Schwann cell mediated differentiation in GN
The mechanism of HH pathway activation in GN is not clear. The HH pathway may be activated through a cell-autonomous mechanism, as seen in medulloblastomas in which mutant SMO is constitutively active [9]. We did not detect differential expression of SMO in GN, but the possibility of activating mutations in GN remains to be evaluated. Alternatively, HH signaling may be activated by ligand binding.
HH signaling in GN could be mediated by DHH, which, like GLI1 and GLI3, is differentially expressed in these benign tumors. Schwann cells in peripheral nerves secrete DHH [44], and GNs are largely composed of cells that resemble Schwann cells. Several lines of evidence have demonstrated that Schwann cells secrete factors that promote differentiation of NB cells [45,46]. DHH may contribute to this inductive interaction, and glial cells within GN may, unlike typical Schwann cells [47], respond to DHH in an autocrine manner, to enforce and advance differentiation.
Human Tissue Samples
This study was conducted according to the principles expressed in the Declaration of Helsinki. The study was approved by the Memorial Sloan-Kettering Cancer Center Institutional Review Board (IRB) and Human Tumor Utilization Committee (HTUC). All patients provided written informed consent for the collection of samples and subsequent analysis.
Cell culture techniques
Neuroblastoma cell lines were kindly provided by R. Ross and B. Spengler (Fordham University, Fordham, NY). HEK293E cells (Invitrogen) were used for retroviral packaging. All cell lines were maintained in Opti-MEM with Glutamax (cat#51985; Invitrogen) supplemented with 10% FCS and penicillin/streptomycin at 37°C in a humidified environment with 5% CO2. For RA treatment, SY-GFP cells were plated in triplicate sets in 10 cm plates and allowed to adhere for 24 hours, after which 1/1000th volume of 10 mM retinoic acid dissolved in 100% EtOH, or vehicle alone, was added to the medium. After 48 hours in ATRA, cells were processed for microarray.
Murine Gli1 and GFP were transduced into NB cells using a bicistronic retroviral plasmid, pLZIR-GLI (gift of Robert Wechsler-Reya, PhD, Duke University). Control cells were transduced with GFP, using pWZL-GFP. Retroviral stocks were prepared by cotransfecting HEK293E cells with Gag-Pol and VSVG plasmids along with either pLZIR-GLI or pWZL-GFP, using Fugene 6.0 (cat#11814443001; Roche). Packaging cells were transferred to 32°C 24 hours after transfection, and viral stocks were harvested at 48 and 36 hours. For transduction, NB cells were exposed to retroviral stock at 32°C for 4-6 hours, recovered for 3-4 days at 37°C and sorted for GFP expression by FACS. GFP+ populations were then recovered and expanded through 2-3 passages before use in experiments.
Proliferating cells were labeled with EdU and Alexa647 using the Invitrogen Click-iT EdU Alexa Fluor 647 Flow Cytometry Assay kit (cat# A10202), then analyzed by FACS. For PH3 and caspase staining, 50,000 cells per well were plated in 96-well plates and maintained in culture for 2 days, then fixed in 4% formaldehyde, immunostained for PH3 and counterstained with DAPI. We performed automated quantification of PH3+ and DAPI+ cells with 8-fold replicates using an INCell 1000 automated microscope (GE/Amersham Biosciences).
Microarray analysis
Microarray hybridization of tumor-derived samples was carried out on Affymetrix U95A chips, as previously described [48]. RNA was prepared from SY-Gli and SY-GFP cell lines using Trizol Reagent (Invitrogen cat# 15596026), processed and analyzed on Affymetrix U133 2.0 chips according to the manufacturer's protocol. All microarray expression data were processed with Robust Multichip Analysis (RMA; R software; the R Foundation for Statistical Computing) [49]. Differential expression analysis was performed to identify genes differentially expressed between sample groups by applying an empirical Bayes t-test to each gene [14]. All microarray data are described in accordance with MIAME guidelines. Microarray data are deposited in the caArray Array Data Management System, Experiment Identifier gersh-00292.
To derive gene lists for comparison, a p-value cutoff of 0.0001 was used to select differentially expressed genes. The number of tests (i.e. probesets) with p<0.0001 expected by chance equals 0.0001 × the total number of probesets. For the U95 platform, which was used for the tumor sample study, the total number of probesets is 12,533, and hence 1 probeset is expected to have a p-value <0.0001 just by chance. For the U133A platform, which was used for the cell line study, the total number of probesets is 22,215, and hence 2 probesets are expected to have a p-value <0.0001 just by chance. The actual sets of genes demonstrating differential expression with p<0.0001 were compiled, and a hypergeometric test was then used to calculate the probability of overlapping sets.
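These calculations are straightforward to reproduce. The sketch below, in Python, applies the same logic to the counts quoted above and in the Results (13,089 genes in the comparison universe, 1,811 GN-regulated genes represented on the array, 193 Gli1-regulated genes, 57 in the overlap); it is an illustrative re-implementation, not the authors' analysis code, and the variable names are ours.

```python
from scipy.stats import hypergeom

# Expected number of probesets passing p < 0.0001 by chance alone
alpha = 0.0001
print(alpha * 12_533)   # U95 platform   -> ~1.3 probesets expected by chance
print(alpha * 22_215)   # U133A platform -> ~2.2 probesets expected by chance

# Hypergeometric test for the overlap between two differentially expressed gene sets
population = 13_089   # genes on the U133 2.0 array used as the comparison universe
gn_genes   = 1_811    # GN-regulated genes represented on that array
gli1_genes = 193      # Gli1-regulated genes in SY5Y cells
overlap    = 57       # genes present in both sets

# P(overlap >= 57) if the Gli1-regulated set were drawn at random from the array
p_value = hypergeom.sf(overlap - 1, population, gn_genes, gli1_genes)
print(f"P(overlap >= {overlap}) = {p_value:.2e}")
```

The expected overlap under random sampling would be roughly 193 × 1811 / 13089 ≈ 27 genes, so observing 57 yields the very small p-value reported.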
Quantitative real-time reverse transcriptase PCR (qRT-PCR)
We confirmed gene expression patterns noted by microarray by qRT-PCR using a 2-step method. Total RNA for template was purified from tumor and cell line samples independent from those used for microarray; 12-18 tumor samples and 12 cell line samples were used in each qRT-PCR assay. First-strand cDNA was synthesized using the Superscript III kit (Invitrogen). We performed qRT-PCR reactions on the BioRad iCycler with IQ SYBR Green Master Mix (170-8882; BioRad). All reactions were performed in duplicate. All primer pairs were designed to span at least one intron-exon boundary, except for VIP, which has no introns (Table S3). Each primer pair was validated by RT-PCR and the products were cloned and sequenced to confirm specificity. Cloned PCR products were then used as copy number standards for qRT-PCR. Each primer pair yielded measured Ct values in control reactions that varied linearly with the log of copy number over at least 6 orders of magnitude. We normalized all values to the measured expression of beta actin (BA). Fold change was computed by comparing the median copy number for each group.
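As a concrete illustration of this quantification scheme, the sketch below converts Ct values to copy numbers via a standard curve built from the cloned PCR products, normalizes each gene to beta actin, and reports fold change as the ratio of group medians. The standard-curve parameters and Ct values are invented for illustration; this is not the authors' analysis code.

```python
import numpy as np

def ct_to_copies(ct, slope, intercept):
    """Convert a Ct value to copy number using a standard curve:
    Ct = slope * log10(copies) + intercept  (slope is negative)."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical standard-curve parameters (slope, intercept) and Ct measurements
std_curve = {"RET": (-3.4, 38.0), "ACTB": (-3.3, 36.5)}
ct_gli = {"RET": np.array([24.1, 24.3, 24.0]), "ACTB": np.array([17.2, 17.4, 17.1])}
ct_gfp = {"RET": np.array([27.8, 27.5, 27.9]), "ACTB": np.array([17.3, 17.1, 17.2])}

def normalized_copies(ct, gene):
    copies_gene = ct_to_copies(ct[gene], *std_curve[gene])
    copies_actb = ct_to_copies(ct["ACTB"], *std_curve["ACTB"])
    return copies_gene / copies_actb   # expression relative to beta actin

fold_change = (np.median(normalized_copies(ct_gli, "RET"))
               / np.median(normalized_copies(ct_gfp, "RET")))
print(f"RET fold change (SY-Gli vs SY-GFP): {fold_change:.1f}")
```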
Immunocytochemistry and Western blots
All immunocytochemistry was performed on paraffin embedded material from pathology archives of The Memorial Sloan-Kettering Cancer Center (MSKCC). Antigen retrieval and staining techniques were as previously described [4]. Cultured cells were not subjected to antigen retrieval. For Western blots, SY-Gli and SY-GFP cells were prepared in triplicate wells, and lysed in situ, then subjected to SDS-PAGE, and developed using chemiluminescence. Sources of primary antibodies and concentrations used are as listed (Table S3).
"Biology",
"Medicine"
] |
Structure analysis of free and bound states of an RNA aptamer against ribosomal protein S8 from Bacillus anthracis
Several protein-targeted RNA aptamers have been identified for a variety of applications and although the affinities of numerous protein-aptamer complexes have been determined, the structural details of these complexes have not been widely explored. We examined the structural accommodation of an RNA aptamer that binds bacterial r-protein S8. The core of the primary binding site for S8 on helix 21 of 16S rRNA contains a pair of conserved base triples that mold the sugar-phosphate backbone to S8. The aptamer, which does not contain the conserved sequence motif, is specific for the rRNA binding site of S8. The protein-free RNA aptamer adopts a helical structure with multiple non-canonical base pairs. Surprisingly, binding of S8 leads to a dramatic change in the RNA conformation that restores the signature S8 recognition fold through a novel combination of nucleobase interactions. Nucleotides within the non-canonical core rearrange to create a G-(G-C) triple and a U-(A-U)-U quartet. Although native-like S8-RNA interactions are present in the aptamer-S8 complex, the topology of the aptamer RNA differs from that of the helix 21-S8 complex. This is the first example of an RNA aptamer that adopts substantially different secondary structures in the free and protein-bound states and highlights the remarkable plasticity of RNA secondary structure.
INTRODUCTION
Over the past several years, high-resolution structure studies of ribonucleoprotein complexes have revealed a wealth of detailed information on structural motifs that contribute to protein-RNA specificity (1)(2)(3)(4)(5). RNA molecules employ a diverse repertoire of secondary structure motifs including bulged nucleotides, non-canonical base pairs and base triples, terminal (hairpin) loops and internal loops to create architectures that serve as protein-specific conformational signatures. Internal loops, regions of double-stranded nucleic acid within a base-paired helix that do not maintain Watson-Crick secondary structure, occur in a variety of RNA systems and widely differ in their size and nucleotide content (6). Hydrogen bonding, base stacking and divalent metal ion coordination can stabilize complex folds of these regions, but with a few notable exceptions such as the loop E motif, kink-turns, tetra-loops, C-loops and the A-minor motif, it remains difficult to predict the interactions that form among the internal loop nucleotides in free or protein-bound forms of an RNA (6,7).
The complex formed between bacterial ribosomal protein S8 (r-protein S8) and 16S rRNA is a well-studied interaction that is specified by an internal loop. The binding of r-protein S8 to 16S rRNA has been extensively characterized using a variety of techniques including chemical modification and protection assays (8)(9)(10), filter binding assays (11)(12)(13) and mutagenesis (13,14). These studies showed that the majority of protein-RNA contacts localize to helices 21 and 25 and that a minimal RNA fragment located in helix 21 is sufficient to confer specificity and high affinity to the S8-RNA interaction (12). This primary binding site consists of a helix interrupted by an internal loop of seven phylogenetically conserved nucleotides (Figure 1). The same conserved secondary structure element is found in the 5′ untranslated region of the rplE gene at the beginning of the spc operon (15). The translation of genes encoded by the spc operon, including those of S8 and several other ribosomal proteins, is repressed by the binding of r-protein S8 at this site (15).
In addition to structural conservation of the primary RNA binding sites for S8, the overall fold of S8 r-proteins is conserved. The S8 protein has two domains, N- and C-terminal (16)(17)(18), and the arrangement of α-helices and β-sheet strands that make up the domains is maintained when S8 binds to RNA (19)(20)(21). In addition, many of the intermolecular interactions between r-protein S8 and RNA are the same for helix 21 and the spc mRNA binding sites. Notably, these protein-RNA interactions are maintained within the 30S ribosomal RNA subunit. These complexes reveal that the S8-RNA binding involves electrostatic and hydrogen-bond interactions and shape complementarity.

Figure 1. Sequence and secondary structures of primary RNA binding sites for r-protein S8. The natural binding sites are helix 21 from Bacillus 16S rRNA and the spc mRNA from E. coli. The RNA aptamer constructs used for the structural studies are RNA-1 (NMR) and RNA-2 (X-ray) and the randomized element from the selection is boxed. 5′-fluorescein-labeled RNA hairpins were prepared by ligation of a chemically synthesized oligonucleotide (italics) with enzymatically synthesized oligonucleotides.
The primary RNA binding site for r-protein S8 contains non-canonical structural elements important for specificity and affinity. A previous systematic evolution of ligands by exponential enrichment (SELEX) study suggested the presence of a base triple (G597-C643)-U641 located in the internal loop of helix 21 (22). The RNA molecules that bound tightly to the S8 protein contained nucleotide combinations at the positions corresponding to 597/643/641 that were isosteric with the proposed (G-C)-U base triple. In addition to the base triple, an adenine nucleotide corresponding to the invariant A642 was present in the selected aptamers (22), underscoring the importance of this residue. In the free RNA, U641 participates in a bifurcated hydrogen bond with the G597-C643 base pair and the A642 base (23). The internal loop also contains the base triple A595-(A596-U644) (23). These elements are important for shaping the trajectory of the sugar-phosphate backbone to display a distinctive set of structural features and are preserved in S8-RNA complexes (19)(20)(21)(24). A complementary study involving the randomization of residues 597/641/643 and performed in vivo using Escherichia coli confirmed the functional importance of the nucleotide triplet (25).
The nucleotide sequence and secondary structure of the primary binding site for r-protein S8 on helix 21 are highly conserved (Figure 1B). These conserved elements impose a shape on the RNA that optimizes electrostatic and van der Waals interactions with the protein surface. The S8 protein contains a secondary RNA binding site with a large electropositive surface associated with helix 25 in the 30S subunit but does not display sequence specificity. To search for RNA secondary structures that differ from the conserved bacterial motif and investigate how RNA sequence might adapt to binding restrictions imposed by the S8 protein, a SELEX experiment was performed. The selection was based on an RNA stem-loop scaffold containing symmetric and asymmetric internal loops of 16 randomized nucleotides. An RNA aptamer sequence that is not predicted to adopt the structural features of the highly conserved asymmetric internal loop motif of the natural binding site was chosen for structure analysis. The affinity of the aptamer for the S8 protein is 2-fold tighter than the affinity exhibited by the native helix 21. In the free state, the internal loop of the hairpin stem contains G-A, U-U and A-A mismatches with an overall helical A-form geometry. To bind r-protein S8, the internal loop undergoes a dramatic rearrangement of secondary structure to form a base triple and a base quartet. Many of the protein contacts observed in native S8-RNA complexes are now recapitulated in a novel manner in the S8-aptamer complex. It is remarkable that a molecule whose secondary structure is far removed from the native target forms many of the same contacts as the natural binding site. This is the first example of an RNA aptamer shown to have one dominant secondary structure in the free state and a substantially different structure in the protein-bound state. The S8-aptamer complex demonstrates the remarkable plasticity of RNA to form unexpected structures that meet biological function.
Protein expression and purification
The Bacillus anthracis and E. coli S8 r-proteins were expressed as N-terminal 6X-His tagged fusion proteins. The rpsH genes were polymerase-chain-reaction-amplified from genomic DNA, cloned into the pET28b vector (Novagen) using the NheI-XhoI restriction sites, and the sequences confirmed. The proteins were expressed in BL21(DE3) cells, isolated in the form of inclusion bodies, and dissolved with 7 M urea as described (13). The B. anthracis S8 r-protein also was expressed from cells cultured in M9 media supplemented with 50 mg/l selenomethionine. The urea-solubilized S8 r-protein solutions were applied to an affinity (Ni2+) column that was equilibrated with buffer A (7 M urea, 0.1 M NaH2PO4, 10 mM Tris-HCl, pH 8.0 and 2 mM β-mercaptoethanol). The column was washed with buffer A plus 20 mM imidazole and 500 mM NaCl and the protein was eluted using 200 mM imidazole in buffer A. Fractions were collected and those containing S8 (>95% purity) were combined and the protein renatured over 3 days via serial dialysis in buffer B (50 mM KCl, 20 mM sodium cacodylate, pH 6.8) containing decreasing molar concentrations of urea: 7.0, 4.0, 2.0, 1.0, 0.5 and 2× 0.0 M. The refolded S8 proteins were concentrated (Amicon) and quantified using the Bradford method.
In vitro aptamer selection
The RNA aptamer selection was performed as described (26-28) using 5′-rNTPs. The RNA transcript was designed to form a hairpin with the sequence 5′-GAGGCUUCCU(NX)CUUCGG(NY)GGGAAGCCUC-3′. X and Y designate the number of randomized nucleotides (X = 7, 8, 9 and Y = 9, 8, 7) so that the sum of X and Y was fixed at 16. The aptamer sequence chosen for study was identified after 10 rounds of selection and forms a secondary structure with a symmetric internal loop, in contrast to the asymmetric internal loop found in natural S8 binding sites. Additional details of the selection are given in Supplementary Information.
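For context, the theoretical diversity of such a library can be estimated directly from the design. The short calculation below is a back-of-envelope illustration (not taken from the paper) of how many distinct sequences the 16 randomized positions can encode across the three (X, Y) arrangements.

```python
# Each (X, Y) combination fixes where the 16 random nucleotides sit,
# so the total design space is 3 * 4**16 distinct sequences.
combinations = [(7, 9), (8, 8), (9, 7)]
per_combination = 4 ** 16            # ~4.3e9 sequences for 16 random positions
total = len(combinations) * per_combination
print(f"{per_combination:.2e} sequences per loop arrangement, {total:.2e} in total")
```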
Preparation of RNA samples
The aptamer molecule for X-ray crystallography (Figure 1) was purchased (Dharmacon). The aptamer molecules for NMR (Figure 1) were synthesized in vitro using T7 RNA polymerase and a synthetic DNA template. Unlabeled and isotopically labeled RNA molecules were prepared as described (29). The polyacrylamide gel electrophoresis (PAGE) purified RNA molecules were dialyzed extensively against 10 mM KCl, 10 mM sodium cacodylate, pH 6.6 and 0.02 mM EDTA and lyophilized. The RNA samples were suspended in 0.35

Two-dimensional (2D) 13C-1H HSQC spectra were collected to identify 13C-1H chemical shift correlations. Sugar spin systems were assigned using 3D HCCH-TOCSY (8 ms and 24 ms DIPSI-3 spin lock) experiments, and 2D HCN experiments were used to identify intra-residue base-ribose correlations. Pyrimidine C2 and C4 resonances were assigned from H6-C2 and H5-C4 correlations using 2D H(CN)C and 2D CCH-COSY experiments, and a 2D H(N)CO experiment was used for uridine NH-[C2, C4] resonances (30)(31)(32). Sequential assignments and distance constraints for the non-exchangeable resonances were derived at 26 °C from 2D 1H-1H NOESY spectra (τm = 90, 180 and 320 ms) and 3D 13C-edited NOESY spectra (τm = 180 and 360 ms). Assignments and distance constraints for the exchangeable resonances were derived at 12 °C from 2D NOESY spectra (τm = 160 and 360 ms) acquired in 90% 1H2O. 3J(H-H), 3J(P-H) and 3J(C-P) coupling constants were estimated using DQF-COSY, 31P-1H HetCor and CECT-HCP (33) experiments, respectively. NOE peak intensities were classified as very strong, strong, medium, weak, or very weak and distance constraints applied (Supplementary Table S1).
Structure refinement was carried out with simulated annealing and restrained molecular dynamics (rMD) calculations using Xplor-NIH v2.19 (34). The aptamer was generated as a linear molecule and starting coordinates were based on A-form geometry. Beginning with the energy-minimized starting coordinates, 50 structures were generated using 18 ps of rMD at 1200 K with hydrogen bond, NOE-derived distance and base-pairing restraints. The system then was cooled to 25 K over 12 ps of rMD. Force constants used for the calculations were increased from 2 kcal mol−1 Å−2 to 30 kcal mol−1 Å−2 for the NOE constraints and from 2 kcal mol−1 rad−2 to 30 kcal mol−1 rad−2 for the dihedral angle constraints. After minimization, NOESY spectra were re-examined for predicted NOEs absent from the constraint list. The calculations were repeated using revised constraint lists and eight structures were selected for the final refinement using criteria that included lowest energies, fewest constraint violations and fewest predicted unobserved NOEs. A second round of rMD was performed on these structures using a starting temperature of 300 K followed by cooling to 25 K over 28 ps of rMD. The eight refined structures were analyzed using Xplor-NIH and Chimera. The data and structure statistics are reported in Supplementary Table S1.
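The ramping of restraint weights across the cooling stage can be pictured with a small helper like the one below; the linear schedule and number of steps are illustrative choices and do not reproduce the exact Xplor-NIH protocol used in the study.

```python
def ramp(start, end, steps):
    """Linearly scale a value (e.g. a restraint force constant) over a schedule."""
    return [start + (end - start) * i / (steps - 1) for i in range(steps)]

# NOE distance restraints: 2 -> 30 kcal mol^-1 A^-2 while cooling from 1200 K to 25 K
noe_weights = ramp(2.0, 30.0, steps=10)
temperatures = ramp(1200.0, 25.0, steps=10)
for t, k in zip(temperatures, noe_weights):
    print(f"T = {t:6.1f} K   k_NOE = {k:5.1f} kcal/mol/A^2")
```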
Crystallization and structure of B. anthracis S8 and B. anthracis S8-aptamer complex
Crystals of B. anthracis S8 were obtained by sparse matrix screening of S8 at 10 mg/ml at 4 °C. Preliminary results were followed by manual optimization of the successful condition using the sitting drop vapor diffusion method. The best-quality crystals were grown in 48-51% Tacsimate at 20 °C. No cryoprotectants were required for cryopreservation in liquid nitrogen.
Data collection and processing
Diffraction data sets for S8 protein were collected at 100 K at 1.9 Å at the Cornell High Energy Synchrotron Source beam line. The data were integrated, scaled and merged using the HKL-2000 package (35). B. anthracis r-protein S8 crystallized in space group C2221 with the unit cell parameters a = 118.33, b = 148.82, c = 68.62 Å, α = γ = 90°, β = 98.7°. Data collection and processing statistics are listed in Supplementary Table S2.
Crystals of S8-RNA were passed briefly through cryoprotectant solutions consisting of 0.3 M sodium citrate pH 7.0, 100 mM sodium chloride, 10 mM spermidine supplemented with 5, 10, 15, or 20% (v/v) glycerol. Diffraction data for the S8-aptamer complex were collected to 2.6 Å resolution using a NOIR-1 Molecular Biology Consortium (MBC) detector system at beamline 4.2.2 at the Advanced Light Source synchrotron (Berkeley, CA). The crystal belonged to space group P212121 with unit cell parameters a = 55. The data were processed using D*TREK (36) with Rmerge = 9.2% and completeness 99.9%; in the outermost resolution shell, Rmerge was 64.3% and completeness 99.9%.
Structure determination
The structure of B. anthracis S8 was solved by the standard method of single anomalous dispersion (SAD). Heavy atom sites from the metabolically incorporated selenomethionines were found by the online application SHARP (37). The SAD electron density map was calculated using CCP4 (38), and map integration and model building were performed with the program O (39). Molecular replacement for three copies in the asymmetric unit, refinement and composite omit maps were computed using CNS (40). The model was then rebuilt manually and further refined. The final structure has an R factor of 22.3% and Rfree of 23.3%.
A molecular replacement solution for the S8-aptamer complex was found using the program Phaser (41) from the CCP4 suite (38), with the B. anthracis S8 r-protein (solved in-house) as a search model. The initial solution suggested one monomer per asymmetric unit, consistent with the Matthews coefficient of 3.16 (65% solvent). The molecular replacement was further confirmed by the initial (2Fo-Fc) map generated using Coot (42), which clearly indicated electron density for the RNA aptamer not included in the original search model. The S8-RNA model has been refined to an R factor of 18.9% and Rfree of 27.0% (Supplementary Table S2). Ramachandran plots and root-mean-square deviations (rmsd) from ideality for bond angles and lengths for S8/RNA were determined using the structure validation program MolProbity (43).
Fluorescence anisotropy
A Beacon 2000 fluorescence polarimeter (PanVera Corp.) was used for the fluorescence anisotropy experiments. 5′-fluorescein-labeled RNA hairpin samples were extensively dialyzed against a buffer of 25 mM Tris-Acetate (pH 7.6) and 150 mM potassium acetate. RNA samples were heated to 90 °C for 60 s, snap cooled on ice and dialyzed against 25 mM Tris-Acetate (pH 7.6), 150 mM potassium acetate and 10 mM magnesium acetate. The concentration of RNA was kept constant at 1.0 nM and the concentration of S8 protein ranged from 1.0 to 500 nM. Samples were mixed by addition of protein solution to RNA and incubated at 4 °C for 30 min. Four measurements were averaged for each S8 concentration. Experiments were performed in triplicate. The apparent Kd values were determined from a non-linear least-squares fit of the data to a single-site binding model using GraphPad Prism 5 (GraphPad Software, Inc.).
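A minimal version of such a single-site fit could look like the sketch below. It assumes that the labeled RNA concentration (1 nM) is well below Kd so that free protein is approximately equal to total protein; the anisotropy values are invented for illustration, and the analysis in the paper was performed with GraphPad Prism rather than this code.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_site(p_total, a_free, a_bound, kd):
    """Anisotropy of 1 nM labeled RNA titrated with protein, assuming
    [protein]_free ~ [protein]_total (RNA concentration << Kd)."""
    frac_bound = p_total / (kd + p_total)
    return a_free + (a_bound - a_free) * frac_bound

# Hypothetical titration: S8 concentrations (nM) and measured anisotropies
p = np.array([1, 2, 5, 10, 25, 50, 100, 200, 350, 500], dtype=float)
r = np.array([62, 64, 69, 76, 95, 112, 130, 145, 152, 156], dtype=float)

popt, pcov = curve_fit(single_site, p, r, p0=(60, 160, 100))
kd, kd_err = popt[2], np.sqrt(np.diag(pcov))[2]
print(f"apparent Kd = {kd:.0f} ± {kd_err:.0f} nM")
```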
RESULTS
The SELEX experiment was performed to identify RNA sequences that do not maintain the conserved features of helix 21 (Figure 1) but retain the ability to bind the S8 protein with high affinity and specificity. The starting library was composed of molecules with 16 randomized nucleotide positions inserted within the stem of an RNA hairpin (Figure 1C). After 10 rounds of selection, the RNA pool was cloned and 40 inserts were sequenced. Alignment of the RNA aptamer sequences showed the presence of native-like (asymmetric internal loop) binding sites, including sequences corresponding to helix 21 of E. coli and Bacillus 16S rRNA, in addition to non-natural binding sites with symmetric internal loops. Electrophoretic mobility shift assays (EMSAs) were used to qualitatively assess the S8 binding of non-native-like aptamers, and the sequence containing a symmetric internal loop (Figure 1) was chosen for this study.
Fluorescence anisotropy was used to measure the affinity of the aptamer RNA for r-protein S8 from Bacillus and E. coli. Fluorescein-labeled RNA aptamer and an RNA molecule corresponding to the primary binding site on helix 21 were titrated with S8 protein and the change in anisotropy of the RNA monitored (Supplementary Figure S1). The Bacillus S8 protein binds the RNA aptamer with a Kd of 110 ± 30 nM and the helix 21 site with a Kd of 180 ± 60 nM. The affinities of E. coli r-protein S8 for the RNA molecules are 8-10-fold tighter, Kd = 19 ± 4 nM and Kd = 28 ± 7 nM for the aptamer RNA and helix 21 RNA, respectively. The affinity of E. coli r-protein S8 for the helix 21 sequence element is consistent with filter-binding measurements (9,10,12). Filter binding experiments using the archaeal Methanococcus vannielii S8 protein yielded an apparent Kd for its 16S rRNA helix 21 binding site of 220 nM, an affinity similar to that of the Bacillus S8 protein for helix 21 (44). S8 proteins from thermophilic and hyperthermophilic archaeal organisms show 10- to 100-fold tighter binding to their respective 16S rRNA targets (17,44). The affinity of Aquifex aeolicus S8 protein for the minimal RNA binding site is 1.5 nM, but the protein has very high affinity (0.018 nM) for an RNA construct containing the three-way junction formed by helices 20-21-22 (17).
Solution NMR resonance assignments of the aptamer molecule
The core of the aptamer sequence for NMR analysis was introduced into a hairpin capped by a UUCG tetraloop (Figure 1). Cross peaks in the NH 15N-1H HSQC spectrum are consistent with the predicted secondary structure, including the signature peak at 9.80 ppm from the UUCG tetraloop. Since the selection was performed in the presence of Mg2+, the NMR spectrum of RNA-1 was monitored to assess metal ion binding, but only the G-C base pair triplet at the end of the stem exhibited significant metal ion association. Therefore, the solution NMR study of the RNA aptamer was performed in the absence of multivalent cations.
Sequential assignments for the non-exchangeable resonances were made using 2D NOESY and 3D 13C-edited NOESY experiments. The sequential base-H1′ NOE connectivities at τm = 180 ms (Figure 2) are discontinuous between nucleotides U18 and U19 of the tetraloop and very weak at steps U12-G13 and U27-C28 within the internal loop. The loss of connectivity in the tetraloop is characteristic of the UUCG sequence. Most sequential base H6/H8 NOEs are observed except for A10-G11, U12-G13 and G13-A14 in the internal loop. Notably, none of the resonance pairs exhibit exchange broadening (Figure 2 and Supplementary Figure S2) and the nucleotides in the tetraloop are the only residues with ribose resonances that have anomalous chemical shifts. The inter-base pair NOE connectivities of the NH resonances are continuous from G2 to G30 and from U15 to G21. The U12 and U27 NH resonances are at 11.08 and 10.53 ppm, and the NH resonances of G11 and U25 are not observed. All cytidine NH2 resonances were assigned, including those of C28 (8.01 and 7.04 ppm), which are indicative of base pairing. The inter-nucleotide phosphate 31P resonances are clustered between −3.32 and −5.05 ppm, except for the U27pC28 31P resonance, which has a chemical shift of −2.40 ppm. A complete list of resonance assignments is given in Supplementary Table S2.
Solution structure of the RNA Aptamer
The global fold of the aptamer is a hairpin capped by a canonical UUCG tetraloop, with the stem interrupted by an eight-nucleotide internal loop (Figure 3). The internal loop is composed of nucleotides A10-G13 and A26-A29 and is flanked on one side by a distorted A14-U25 base pair. The internal loop is characterized by two non-standard base pairs, a sheared A-G and a U-U, and a Watson-Crick G-C base pair. The bases of A10 and A29 form an inter-strand stack with each other. The spectral data support the presence of these base-base interactions, but the arrangement of nucleotides is not as tightly ordered as observed in other structures. The H1′ resonance of U27 has a chemical shift of 5.03 ppm and is consistent with a partially sheared base pair configuration between G13 and A26 (31). The U12 and U27 residues that are adjacent to the G13-A26 pair form an asymmetric U-U base pair. The U12-U27 base pair is arranged with the hydrogen-bond pattern U7 N3H-U22 O4 and U22 N3H-U7 O2 (Supplementary Figure S2). As with the neighboring G13-A26 pair, the gap between the uridine bases is relatively wide and the imino protons are accessible for solvent exchange. Residue C28 pairs with G11, as indicated by the NH2 and C2 resonances of C23, but the G NH resonance exchanges with solvent and is not observed. The A10 and A29 bases each extend across the helix axis with A24 stacked on the G11-C28 base pair. This conformation is supported by unusually strong cross-strand H2-H1′ NOE cross peaks. The A10 base is laterally displaced toward the minor groove and is positioned slightly below the plane of the C9-G30 base pair. In the converged structures, the A10 NH2 consistently forms a hydrogen bond with the C9 O2. The moderately downfield-shifted A10 N6 resonance (82.5 ppm) is consistent with this hydrogen bond.
The sugar-phosphate backbone conformations of the aptamer nucleotides within the internal loop are surprisingly uniform (Figure 3). Only the G13 ribose has a C2′-endo ring pucker conformation, and the uniformly small (<5 Hz) P-C2′ coupling constants for the loop residues place the ε torsion angles in the trans conformation characteristic of A-form RNA. Although torsion angles α and ζ were left unconstrained, the ζ angle between U27 and C28 is trans-like rather than gauche− in all structures. This configuration is consistent with the relatively downfield 31P shift of the involved phosphate. α and ζ at other positions are consistently gauche− or exhibit trans/gauche− variability between converged structures.

Figure caption (fragment): the heavy atoms superimpose on the average structure with an average rmsd of 1.54 Å (Supplementary Table S2). The color scheme is: magenta, residues in the non-canonical core (A10-G13 and A26-A29); green, the tetraloop nucleotides (U18-G21); orange, nitrogen atoms; blue, oxygen atoms. (C) Arrangement of the U12-U27 and G13-A26 non-canonical pairs present in the aptamer core.
Crystal structure of the aptamer-S8 complex
The crystal structure of the aptamer RNA-2 (Figure 1) in complex with Bacillus ribosomal protein S8 was solved by molecular replacement using the structure of unliganded B. anthracis S8 and refined against a 2.6 Å data set. The refined model contains all 38 nucleotides of the aptamer and residues 4-132 of the S8 protein.
The S8 protein has two closely packed domains that are composed of the N- and C-terminal halves of the molecule (Figure 4). The N-terminal domain has an α/β fold in which the α-helices stack on the surface of the β-strands. The C-terminal domain contains a short (six residue) α-helix pressed against an anti-parallel four-strand β-sheet. A fifth strand perpendicular to the helix and sheet connects these two elements. The structure of the aptamer-bound protein is very similar to that of the free protein (0.65 Å rmsd of the backbone atoms). The majority of residues that contact the aptamer are in the C-terminal domain of S8 and are generally located in turns at the ends of the β-strands. The distribution and arrangement of these secondary structure elements is largely the same as reported for other S8 proteins from thermophilic and mesophilic bacteria (16)(17)(18)(19)(20)(21).
The structure of aptamer RNA-2 is well defined, with a global fold of a hairpin terminated on one end by the UUCG tetraloop. The tetraloop nucleotides adopt the archetypal conformation, with U1 and G4 of the loop pairing and G4 adopting the syn configuration about the glycosidic bond. The canonical A-form helical stem of the aptamer is interrupted by an internal loop that includes nucleotides A10-A14 on the 5′ strand and U25-A29 on the 3′ strand. This central core of the aptamer has several non-standard structure features and is characterized by a complex network of hydrogen bonds among the bases. A10 and A29 at one end of the loop adopt a cis Watson-Crick/Watson-Crick base pair via an A10 N1-A29 NH2 hydrogen bond and a weak A10 H2-A29 N1 hydrogen bond (similar to that between A1912-A1927 in Haloarcula marismortui 23S rRNA). Stacked against the A10-A29 pair is a G11-(G13-C28) base triple. The base of G11 is coplanar with the Watson-Crick G13-C28 base pair and is joined to the pair through a G13 O6-G11 NH2 hydrogen bond. The G11 base is further locked into position by a hydrogen bond between G11 O6 and the U25 2′-OH. This base triple stacks on a base quartet composed of residues U12, A14, U25 and U27 (Figure 4). A14 and U27 form a buckled Watson-Crick A-U base pair. The U12 N3H and O4 atoms form hydrogen bonds with A14 N7 and N6H2, respectively. Residue U25 hydrogen bonds with both U12 (U25 N3H to U12 O4) and A14 (U25 O2 to A14 NH2) and is coplanar with A14 and U25 (Figure 4). The base of A26 stacks beneath U27 and is positioned by hydrogen bonds between the A26 NH2 and the U15 and U25 O2 atoms. A26 is displaced to the minor groove side of the helix axis and terminates the base stack. In contrast, the 3′ strand of the phosphate backbone maintains its pitch through the internal loop. In particular, the leapfrog effect of the U12 and G13 bases that occupy adjacent planes and the displaced A26 nucleotide alter the register of the phosphate groups along the 5′ and 3′ strands of the stem, respectively (Figure 5).
The interaction between r-protein S8 and the RNA aptamer involves one face of the RNA and extends from base pairs A4-U35 to C17-G22. This interaction buries approximately 923 Å2 of protein surface area, which is similar to the 870 Å2 and 940 Å2 reported for the E. coli S8-spc mRNA and Methanococcus jannaschii S8-rRNA complexes, respectively (19,20). There are about twice as many protein-RNA contacts arising from the C-terminal domain of r-protein S8 as from the N-terminal domain, and they include electrostatic, hydrogen-bond and hydrophobic interactions. All but one of the protein-RNA contacts involve the sugar-phosphate backbone; the only base interaction is between A26 N3 and the side chain hydroxyl of S107 (Figure 6). The side chain of K31 forms a salt bridge with the pC2 phosphate and the backbone amide forms a hydrogen bond with the pG1 phosphate. The side chains of T4 and Q57 interact with pA4 and the A4 2′-OH, and the side chain of K56 forms a hydrogen bond with the C36 2′-OH.
The interface between the RNA and the C-terminal domain of S8 includes specific interactions in the core of the RNA and non-specific interactions with the sugar-phosphate backbone of the internal loop and stem. The tetraloop nucleotides do not interact with the protein. The phosphoryl oxygens and 2′-OH groups of C16, C17, A24, U25, U27 and A26 form salt bridges or hydrogen bonds with side chain or backbone amide functional groups of E126, S107, G124, K110, S109, A91 and T123 (Figure 6). In addition to forming the only base-specific contact, the side chain of S107 also forms a hydrogen bond with the A26 2′-OH. Additional protein-RNA interactions in the complex are mediated by water molecules and include base contacts to internal loop residues: U27 O2 to E126 OE1 and C28 O2 to Y88 OH. Also, the peptide bond between the highly conserved residues S107-T108-S109 stacks against the purine ring of A26. An analogous stacking interaction is present in the complex between r-protein S8 and its natural RNA targets and involves A642 (20,21).
DISCUSSION
Ribosomal protein S8 is highly conserved among bacteria and archaea and serves as a translational repressor of ribosomal protein genes in the bacterial spc operon (15). The contacts between S8 and its RNA targets are largely the same within the contexts of the spc mRNA (19), helix 21 of 16S rRNA (20) and the 30S ribosomal subunit (21). Many of the native-like contacts also are present in the S8-aptamer complex, but the primary structure of the aptamer requires a novel network of nucleobase interactions to form the complex.
Comparison of the aptamer structures in the free and S8-bound states
The structure of the protein-free RNA aptamer in solution is well ordered and exhibits negligible intermediate-timescale dynamics. Nucleotides G11-A14 and U25-C28 form the central portion of the stem and the core binding site for r-protein S8. The non-canonical U12-U27 and G13-A26 base pairs are somewhat relaxed from idealized geometries and the purine rings of the A10-A29 mismatch lie on overlapping planes, leading to a small kink in the helix. The G-C and A-U base pairs that flank the internal loop exhibit increased solvent accessibility, as evidenced by rapid NH solvent exchange. The conformation of the RNA binding site core region is substantially altered in the complex (Figure 7). The G13-A26 pair is disrupted as the base of G13 leapfrogs over U12 to pair with C28 and forms a base triple via the minor groove edge of G11 (Figures 4 and 8) (45). The base of A26 is displaced from the helix but continues to stack beneath the plane of the adjacent U27 residue. The U12-U27 and A14-U25 base pairs are remolded into a base quartet tethered together by an array of hydrogen bonds largely involving functional groups of the major groove base edges (Figure 5) (45). Residue A14 continues to participate in a Watson-Crick-type base pair, but its pairing partner changes from U25 to U27. This arrangement of core nucleotide interactions appears to be unique among free RNA structures and RNA-ligand complexes (46).
Comparison of the S8-aptamer complex with native S8-RNA complexes
Despite the obvious sequence and structural differences between the native RNA sites and the aptamer, the structure of the aptamer is dramatically remodeled in the S8 complex to produce a conformation with remarkable similarities to native S8-RNA complexes (19)(20)(21). Alignment of residues and superposition of peptide backbone atoms from Bacillus S8 with those of E. coli S8 (21), M. jannaschii S8 (20) and A. aeolicus S8 (17) result in rmsds of 0.57 Å, 0.78 Å and 0.65 Å, respectively (Figure 7). Also, many of the intermolecular interactions common to the native S8-RNA complexes, which are generally well conserved, are recapitulated in the RNA aptamer-S8 complex. Shape complementarity, electrostatic and hydrogen-bond interactions are key features of the S8-RNA interaction. The invariant A642 in helix 21 participates in the only conserved base-specific contact, a hydrogen bond between the conserved serine 106 side chain (E. coli numbering) and the A642 N3 atom. A26 functionally replaces A642 in the S8-aptamer complex (Figure 6). The only other base contact in some of the natural S8-RNA complexes is a hydrogen bond between the G597 N2 and the Y85 side chain OH. In the archaeon M. jannaschii, R124 is positioned at the site that Y85 occupies in E. coli (and other eubacterial S8 proteins). The R124 side chain NH2 interacts with the G597 N3 and U598 O4. In the aptamer-S8 complex, an interaction analogous to Y85-G597 involving the G11-(G13-C28) base triple is not observed. Many of the other interactions present in the S8-RNA structures correlate with earlier biochemical analyses (11)(12)(13)(47)(48). One notable exception is the hydrogen bond from the S107 side chain to the A640 2′-OH present in native complexes. Substitution with deoxy-A at 640 does not affect protein binding (48). This interaction also is present in the S8-aptamer complex between the homologous S109 and A24 (which is isomorphic with A640).
The RNA selection was designed to identify alternative modes that the S8 protein could use to bind RNA. We expected the topology of the S8 binding site on helix 21 to be incompatible with a symmetric internal loop, but the S8-RNA interface is remarkably well preserved. In addition, a critical stacking interaction involving the purine ring of A642 and the T106-S107 (E. coli S8) peptide bond is recapitulated. This interaction is facilitated in 16S rRNA and spc mRNA by the odd number of nucleotides in the internal loop. In the aptamer, the stacking of the A26 base is made possible when the G13 and A26 bases shift above and below the plane of the base quartet, respectively. In the natural RNA targets, A642 participates in an i to i+1 base pair with residue 641 (20,21,23), and U25 and A26 form a similar hydrogen bond. Rotamer analysis reveals that the phosphate backbones at steps U25-A26 of the aptamer and N641-A642 of the natural RNA sites have the same geometry and that it is characteristic of i to i+1 base pairs (49). The base triple is another feature common to the aptamer and natural RNA binding sites. In E. coli helix 21, the triple is A595-(A596-U644) and in T. thermophilus, G595-(C596-G644). In the spc mRNA binding site, the corresponding base triple is A+80-(A+81-U+11). Although the base triples are isosteric, the non-Watson-Crick residue of the base triple in the aptamer RNA, G11-(G13-C28), is not sequential with either nucleotide of the base pair. This nucleotide topology difference for the aptamer base triple is reflected in the local geometry of the backbone on the face opposite the bound S8 protein.
The backbone geometry at the G11-U12 step is characteristic of the loop E motif (49), and although the corresponding positions of natural RNA sites are non-A-like, they do not have the loop E motif geometry (Figure 5). Thus, backbone perturbations caused by the symmetric internal loop of the aptamer are confined to the RNA face opposite the bound S8 protein (Figure 5).
Implications for aptamer-protein structure and interaction
Two sites on r-protein S8 interact with 16S rRNA: one site involves helix 21 and a second site involves helix 25. Therefore, S8 presents at least two surfaces for an RNA aptamer. The helix 21 binding site on S8 is lined with a strip of electropositive charge along which the phosphate backbone of the aptamer traverses from residues A4-C7 and U27-A29 (Supplementary Figure S3). Nodes of electropositive density also are centered at residues C17 and U25, but an electronegative patch in this primary binding site contours to the minor groove edge of A26. The electropositive surface charge that lines the secondary RNA binding site on S8 is more extensive than that on the primary face, but RNA binding in this region is weaker and non-specific. Given the potential for multiple charge-charge interactions, it is somewhat surprising that the secondary binding site was not identified as a preferred target during the selection. However, a site that accommodates multiple types of interactions (electrostatic, hydrogen bond, van der Waals) might be favored since electrostatic contributions toward binding diminish due to shielding effects caused by the increasing salt concentrations used during the selection.
Protein surfaces present specific structured sites, or epitopes, that are recognized by aptamers, and often the same protein epitope can bind aptamers of different sequence and potentially different structure (50)(51)(52). The characterization of most aptamer-protein interactions has been limited to affinity or kinetic measurements, with few high-resolution structures of aptamer-protein complexes reported (53) and only four complexes involving nucleic acid binding proteins (52)(54)(55)(56). Thus, although a protein epitope can bind aptamers from different sequence (and potentially of different structure) classes, the extent of similarity among the binding modes, the conservation of intermolecular interactions and the structural heterogeneity of the aptamers must largely be inferred.
Three complexes that offer a basis for comparison of free and bound aptamers, as well as comparison of binding modes of aptamer and natural targets, involve the MS2 capsid protein, the NF-κB p50 homodimer and nucleolin. Aptamers against the MS2 capsid protein have the same basic secondary structure as the natural RNA binding site, an RNA hairpin capped by a four-nucleotide loop, and form many of the same protein-RNA interactions (52). One class of aptamer, though, adopts a hairpin that contains a three-nucleotide loop, yet forms many of the same interactions with the capsid protein as the natural RNA ligand. In the case of the NF-κB p50 homodimer, the RNA aptamer forms a hairpin with a seven-nucleotide internal loop capped by a GNRA tetraloop (57). In the complex, the aptamer binds to each monomer of the dimer and forms several base-specific protein contacts. The structure differences between free and bound aptamer are small but include altered base stacking in the tetraloop and stabilization of a U-C base pair in the internal loop (54,57). Thus, for the MS2 capsid protein and the NF-κB dimer, not only do the natural nucleic acid binding sites serve as epitopes, but the aptamers bind the core regions using the same chemistry as the natural ligands. In addition, the conformations of the free and bound states of the aptamers are well ordered and exhibit few differences. In the case of nucleolin, the protein-RNA interactions that comprise a natural RNA ligand:nucleolin complex are a subset of the interactions present in the aptamer:nucleolin complex (58). Nucleotides that are not conserved within the natural RNA targets or that are not part of the consensus sequence of the aptamer RNA become ordered only upon protein binding (56,58).
As with the NF-κB and MS2 capsid protein complexes, the interactions between the aptamer and S8 recapitulate those of the native complexes. However, only the S8 aptamer has significant structural dissimilarity between free and protein-bound forms (Figure 8). The secondary structure properties of the aptamer also contrast with those of the natural targets of S8, which are the same in free and bound states (20,21,23,24). RNA aptamers against proteins that do not naturally bind nucleic acids also are found to adopt the bound conformation in the free state (59)(60)(61)(62)(63). The S8 aptamer is the first example of an RNA aptamer that adopts substantially different secondary structures in the free and protein-bound states. It is possible that the bound conformation of the S8 aptamer also is present in solution, albeit at very low abundance and in rapid exchange with the duplex conformation, which could be captured by r-protein S8. Although the number of examples is limited, the breaking and reorganization of multiple secondary structure elements within an RNA aptamer upon protein binding appears to be uncommon.
ACCESSION NUMBERS
Coordinates have been deposited in the Protein Data Bank under accession numbers PDB ID: 2LUN, solution structure of the RNA aptamer, and 4PDB, crystal structure of the S8-aptamer complex. Chemical shifts have been deposited in the Biomolecular Magnetic Resonance Bank under accession number BMRB ID: 18532.
"Biology",
"Chemistry"
] |
Fine grained event processing on HPCs with the ATLAS Yoda system
High performance computing facilities present unique challenges and opportunities for HEP event processing. The massive scale of many HPC systems means that fractionally small utilization can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and the (compute-limited) science. The ATLAS Yoda system provides this capability to HEP-like event processing applications by implementing event-level processing in an MPI-based master-client model that integrates seamlessly with the more broadly scoped ATLAS Event Service. Fine grained, event level work assignments are intelligently dispatched to parallel workers to sustain full utilization on all cores, with outputs streamed off to destination object stores in near real time with similarly fine granularity, such that processing can proceed until termination with full utilization. The system offers the efficiency and scheduling flexibility of preemption without requiring the application actually support or employ check-pointing. We will present the new Yoda system, its motivations, architecture, implementation, and applications in ATLAS data processing at several US HPC centers.
Introduction
With the increased data volume recorded during LHC Run2 and beyond, it becomes critical for the experiments to not only efficiently use all CPU power available to them, but also to leverage computing resources they don't own. From this perspective, high performance computing resources (HPC) are very valuable for HEP experimental computing. Due to the massive scale of many HPC systems, even fractionally small utilization of their computing power can yield large returns in processing throughput.
Let us consider one example: the Edison supercomputer at the National Energy Research Scientific Computing Center (NERSC), Berkeley, USA. With 130K Intel Haswell CPU cores, this machine was #25 on the TOP 500 Supercomputer Sites list in November 2014. One such machine, if hypothetically fully available to the ATLAS experiment [1], could satisfy all ATLAS needs in Geant4 [2] simulation (∼5 billion events/year).
Porting of regular ATLAS workloads (e.g. simulation, reconstruction) to HPC platforms does not come for free. For efficient usage of HPC systems the application needs to be flexible enough to adapt to the variety of scheduling options -from back-filling to large time allocations. In ATLAS this issue has been addressed by implementing a new approach to the event processing, a fine-grained Event Service [3], in which the job granularity changes from input files to individual events or event ranges. After processing each event range, the Event Service saves the output file to a secure location, such that Event Service jobs can be terminated practically at any time with minimal data losses.
Another requirement for efficient running on HPC systems is that the application has to leverage MPI mechanisms in order to be able to run on many compute nodes simultaneously. For this purpose we have developed an MPI-based implementation of the Event Service (Yoda), which is able to run on HPC compute nodes with no internet connectivity to the outside world.
In section 2 of this paper we describe the concept and the architecture of the Event Service. Section 3 describes the implementation details of Yoda and the flexibility it offers in choosing between available job scheduling strategies (back-filling vs allocation). Finally, in section 4 we present the current status of Yoda developments and the results of scaling Yoda up to 50K parallel event processors when running ATLAS Geant4 simulation [4] on the Edison supercomputer at NERSC.
ATLAS Event Service
A new implementation of the ATLAS production system [5] includes the JEDI (Job Execution and Definition Interface) extension to PanDA [6], which adds a new functionality to the PanDA server to dynamically break down the tasks based on optimal usage of available processing resources. With this new capability, the tasks can now be broken down at the level of either individual events or event clusters (ranges), as opposed to the traditional file-based task granularity. This allows the recently developed ATLAS Event Service to dynamically deliver to a compute node only that portion of the input data which will be actually processed there by the payload application (simulation, reconstruction, data analysis), thus avoiding costly prestaging operations for entire data files. The Event Service leverages modern networks for efficient remote data access and highly-scalable object store technologies for data storage. It is agile and efficient in exploring diverse, distributed and potentially short-lived (opportunistic) resources: "conventional resources" (Grid), supercomputers, commercial clouds and volunteer computing.
The Event Service is a complex distributed system in which different components communicate with each other over the network using HTTP. For event processing it uses AthenaMP [7], a process-parallel version of the ATLAS simulation, reconstruction and data analysis framework Athena. A PanDA pilot starts an AthenaMP application on the compute node and waits until it goes through the initialization phase and forks worker processes. After that, the pilot requests an event-based workload from the PanDA JEDI, which is dynamically delivered to the pilot in the form of event ranges. An event range is a string which, together with other information, contains the positional numbers of events within the file and a unique file identifier (GUID). The pilot streams event ranges to the running AthenaMP application, which takes care of the event data retrieval, event processing and output file production (a new output file for each range). The pilot monitors the directory in which the output files are produced and, as they appear, sends them to an external aggregation facility (Object Store) for final merging.
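As an illustration of how such event-level work units can be organized, the sketch below builds event ranges for a single input file. It is only a schematic: the actual PanDA/JEDI wire format is not given here, so the field names (rangeID, GUID, startEvent, lastEvent) are hypothetical.

```python
# Illustrative sketch only: event ranges are described as strings carrying
# positional event numbers plus a file GUID; the exact format is not quoted
# in the text, so the field names below are hypothetical.
import json

def make_event_range(range_id, guid, start_event, last_event):
    """Build one event-range work unit as a JSON string."""
    return json.dumps({
        "rangeID": range_id,        # unique identifier of this work unit
        "GUID": guid,               # identifier of the input file
        "startEvent": start_event,  # first positional event number in the file
        "lastEvent": last_event,    # last positional event number in the file
    })

def split_into_ranges(guid, n_events, events_per_range):
    """Break an input file of n_events into fixed-size event ranges."""
    return [
        make_event_range(f"{guid}-{i}", guid, first,
                         min(first + events_per_range, n_events) - 1)
        for i, first in enumerate(range(0, n_events, events_per_range))
    ]

# Example: a 1000-event file broken into 50-event ranges, as JEDI would do.
ranges = split_into_ranges("A1B2-C3D4", 1000, 50)
print(len(ranges), ranges[0])
```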
Yoda -Event Service on HPCs
Supercomputers are one of the important deployment platforms for Event Service applications. However, on most HPC machines there is no internet connection from compute nodes to the outside world. This limitation makes it impossible to run the conventional Event Service on such systems because the payload component needs to communicate with central services (e.g. job brokerage, data aggregation facilities) over the network.
In Summer 2014 we started to work on an HPC-specific implementation of the Event Service which would leverage MPI for running on multiple compute nodes simultaneously. To speed up the development process and also to preserve all functionality already available in the conventional Event Service, we reused the existing code and implemented lightweight versions of the PanDA JEDI (Yoda, a diminutive Jedi) and the PanDA Pilot (Droid), which communicate with each other over MPI. Figure 1 shows a schematic of a Yoda application, which implements the master-slave architecture and runs one MPI-rank per compute node. The responsibility of Rank 0 (Yoda, the master) is to send event ranges to the other ranks (Droids, the slaves) and to collect from them the information about the completed ranges and the produced outputs. Yoda also continuously updates event range statuses in a special table within an SQLite database file on the HPC shared file system. The responsibility of a Droid is to start an AthenaMP payload application on the compute node, to receive event ranges from Yoda, to deliver the ranges to the running payload, to collect information about the completed ranges (e.g. status, output file name and location) and to pass this information back to Yoda.
Yoda distributes event ranges between Droids on a first-come, first-served basis. When a Droid reports completion of an event range, Yoda immediately responds with a new range for this Droid. In this way, Droids are kept busy until all ranges assigned to the given job have been processed or until the job exceeds its time allocation and gets terminated by the batch scheduler. In the latter case, the data losses caused by such termination are minimal, because the output for each processed event range gets saved immediately in a separate file on the shared file system.
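The following minimal mpi4py sketch illustrates the first-come, first-served dispatch pattern described above. It is not the actual Yoda code; the message tags and payloads are invented for the example, and only the master/worker hand-off logic is shown.

```python
# Master/worker sketch of on-demand event-range dispatch.
# Run with e.g.:  mpirun -n 4 python yoda_sketch.py
from mpi4py import MPI

TAG_REQUEST, TAG_WORK, TAG_STOP = 1, 2, 3
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:                                   # "Yoda", the master
    work = [f"range-{i}" for i in range(100)]   # toy event ranges
    done, active = [], size - 1
    while active > 0:
        status = MPI.Status()
        report = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_REQUEST, status=status)
        if report is not None:                  # a completed range comes back
            done.append(report)
        if work:                                # immediately hand out new work
            comm.send(work.pop(0), dest=status.Get_source(), tag=TAG_WORK)
        else:                                   # nothing left: release the Droid
            comm.send(None, dest=status.Get_source(), tag=TAG_STOP)
            active -= 1
    print(f"master collected {len(done)} completed ranges")
else:                                           # "Droid", one worker per node
    comm.send(None, dest=0, tag=TAG_REQUEST)    # first request carries no report
    while True:
        status = MPI.Status()
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        result = {"range": task, "output": f"{task}.out", "status": "finished"}
        comm.send(result, dest=0, tag=TAG_REQUEST)  # report and ask for more
```

The pull-based pattern is what keeps all cores busy: a worker never waits for a predetermined share of the work, it simply asks for the next range as soon as it finishes the previous one.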
Connection with PanDA
In order to use Yoda for running ATLAS production workloads, it has to be connected with the ATLAS production system. For this purpose we have developed a special version of the PanDA Pilot (runJobHPC), which provides such connection. The runJobHPC application runs on the interactive compute nodes of HPC systems. Thus, it is able to communicate with central PanDA services over the network. The runJobHPC application pulls job definitions from the PanDA server and stages in all required input files on the HPC shared file system. It submits Yoda jobs to the HPC batch queue, monitors their statuses and also streams out the output files to an external aggregation facility (Object Store), where they are used by separate merge jobs for producing final outputs.
Job scheduling options
Yoda is flexible in defining duration and size of MPI jobs. We have successfully scaled Geant4 simulation within the Yoda system up to two thousand ranks and have not observed any performance penalties coming from the MPI communication between the ranks. No slowdowns in the average event processing time have been detected either.
On the other hand, the fact that event processors within Yoda write a new output file to the shared file system for each event range gives us the flexibility of preemption without the application needing to support or utilize check-pointing. Yoda jobs can be terminated practically at any time with minimal data losses; only the data corresponding to event ranges currently being processed will be lost. This means that Yoda jobs can be submitted to the HPC batch queue in a back-filling mode: the runJobHPC application can detect the availability of a number of HPC compute nodes for a certain period of time and promptly submit a properly sized Yoda job to the batch queue to utilize the available resources.
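A back-filling submission could be sized along the lines of the sketch below. The scheduler query is site-specific, so query_backfill_slots is a hypothetical stand-in returning (free nodes, available minutes) pairs rather than a real batch-system call.

```python
# Sketch of the back-filling decision described above (illustrative only).
def query_backfill_slots():
    # In reality this would parse the batch scheduler's back-fill report.
    return [(12, 25), (300, 40), (80, 110)]

def choose_yoda_job(slots, min_nodes=10, min_minutes=30, safety_margin=0.9):
    """Pick the largest usable slot and size the Yoda job to fit inside it."""
    usable = [(n, t) for n, t in slots if n >= min_nodes and t >= min_minutes]
    if not usable:
        return None
    nodes, minutes = max(usable, key=lambda s: s[0] * s[1])  # most node-minutes
    # Request slightly less than the advertised window so the job is not
    # killed while the last event ranges are being flushed to the file system.
    return {"nodes": nodes, "walltime_min": int(minutes * safety_margin)}

job = choose_yoda_job(query_backfill_slots())
print(job)   # e.g. {'nodes': 300, 'walltime_min': 36}
```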
Development status and performance tests
We have chosen ATLAS Geant4 simulation as a first use-case for Yoda (and for the Event Service in general). Simulation jobs used more than half of the ATLAS CPU budget on the Grid in 2014. Thus, by offloading simulation to other computing platforms (e.g. HPC, clouds), we can free a substantial amount of ATLAS Grid resources. Also, simulation is a CPU-intensive application with minimal I/O requirements and relatively simple handling of meta-data. These characteristics allowed us to make rapid progress in the development of a first version of the Event Service, which was delivered in Summer 2014. After that we switched our development efforts to Yoda. By reusing the code of the conventional Event Service, we developed a first working implementation of Yoda in October 2014 and presented it at Supercomputing-2014 in the DOE ASCR demo.
In early 2015, Yoda was validated by running a series of ATLAS Geant4 production jobs. The output of these jobs was compared to the output of the same simulation jobs on the Grid. The comparison confirmed that Geant4 simulation within Yoda on HPC and the simulation on the Grid produce the same results.
Also in early 2015 we ran a series of tests on the Edison supercomputer with the goal of checking how the performance of Yoda scales with the number of MPI-ranks. The results of these tests are presented in Figure 2, which shows very good scaling of Yoda up to 50K CPU-cores (more than 2K MPI-ranks). The key to such good scaling was to avoid heavy load on the Edison shared file system. This was achieved by delivering ATLAS software releases to the RAM of each compute node. Otherwise the initialization time of Yoda payload applications (AthenaMP) would not scale past 100 MPI-ranks.
Summary
We have developed Yoda, an MPI-based implementation of the Event Service, specifically for running ATLAS workloads on HPCs. ATLAS Geant4 simulations within Yoda have been successfully validated for physics, which proves that Yoda is ready to run ATLAS simulation production workloads on supercomputers. Thanks to its flexible architecture, Yoda allows efficient usage of available HPC resources by running jobs either in large time allocations or in back-filling mode. The performance tests have demonstrated that Yoda scales very well with the number of MPI-ranks, which makes it possible to efficiently run Yoda applications on thousands of HPC compute nodes simultaneously. | 2,760.8 | 2015-01-01T00:00:00.000 | [
"Computer Science",
"Physics"
] |
On some mathematical model of turbulent flow with intensive selfmixing
This paper describes a new non-laminar mathematical model of turbulent flow. The equations of the model are given and the main conservation laws for this model are proven.
The model allows a new approach to certain problems of turbulent flow. For example, it makes it possible to introduce models of turbulent chemical reactors, or to describe the unpredictable explosions of turbulence in a calm laminar flow. In this model we introduce mathematical tools that allow the definition of various mixing states of the fluid. It is possible to formulate, in the appropriate mathematical language, the idea that a given fluid can have various mixing states. It is also possible to determine the influence of a given mixing state on the flow parameters such as the velocity field and the density. The first formulation of the model is given in [1].
1. The general features of the model
For the balls $\omega_\varepsilon(x) = \{y \in \mathbb{R}^3 : |y - x| < \varepsilon\}$, $\varepsilon > 0$, consider the sets $V_\varepsilon(x, t) \subset \mathbb{R}^3$ of velocities of fluid particles moving at the time $t$ in $\omega_\varepsilon(x)$. The velocity sets may be measured. The laminar model and our model may be shortly characterized in the following way.
Laminar model. 1) The flow is described by a family of regular invertible mappings (diffeomorphisms), where $S(t, t_o, \omega)$ is the set containing at $t > t_o$ the whole fluid contained at $t_o$ in $\omega$, and nothing else. It means that the fluid contained in $\omega$ at the time $t$ does not mix with any fluid in the neighbourhood, and keeps its identity forever.
2) There exists the Euler velocity field $v(x, t)$. Our model. II. Instead of the field of Euler velocities (1), we assume only the existence of a field of velocity sets such that, for numbers $\Delta > 0$, $\varepsilon > 0$ small enough, the following approximation holds in the sense of measure. Based on the assumptions I and II we can formulate the integral conservation laws of mass, impulse, energy and impulse momentum. Moreover, we obtain an equivalent closed integro-differential system. In this way we obtain a model of fluid flow in which different fluid portions mix with one another, losing their identity.
To this end we have to introduce first the notion of the physical α-quantities. We begin with the definition of the α-density. This is a nonnegative function with the following properties. For the mass portion filling up ω at the time t, the following equalities hold where ̺(x, t) is the usual density, and κ > 0 is some scaling constant.
We shall construct an approximation of $\varrho(x, t, \alpha)$ explaining its physical sense. The sets $S(t + \Delta, t, \omega_\varepsilon(x))$, where $\Delta > 0$, $\varepsilon > 0$ are small enough, are the minimal sets that contain at the time $t + \Delta$ the whole fluid filling up $\omega_\varepsilon(x)$ at the time $t$. This fluid has in $S(t + \Delta, t, \omega_\varepsilon(x))$ at $t + \Delta$ the density $g(y)$ for $y \in S(t + \Delta, t, \omega_\varepsilon(x))$.
Using (2), we obtain the corresponding approximation of the α-density. In our model we use the general notion of the α-quantity. For example, we define the α-impulse as $\alpha\varrho(x, t, \alpha)$, so that the impulse of the fluid portion $m(\omega, t)$ is equal to $\kappa \int_\omega \int_A \alpha\,\varrho(x, t, \alpha)\,dx\,d\alpha$. If some physical expression contains a $k$-fold integration over the variable $\alpha$, then we multiply the integral by $\kappa^k$. Given the α-quantities, we may obtain the observed mean Euler velocity $v(x, t)$ of the flow as a limit. In almost all physical situations one may take some ball $A \subset \mathbb{R}^3$, for which we know that $\varrho(x, t, \alpha) = 0$ for $\alpha \in \mathbb{R}^3 \setminus A$, and we may integrate over $A$ only.
2.The Mass Conservation Law
We have to introduce the fluid portions $m(\omega, a, t)$, $a \subset A$, which are parts of the fluid in $\omega$ moving at the time $t$ with the velocities $\alpha \in a \subset A$. We have $|m(\omega, a, t)| = \kappa \int_\omega \int_a \varrho(x, t, \alpha)\,dx\,d\alpha$, $|m(\omega, A, t)| = |m(\omega, t)|$.
Moreover, we introduce the mass mixer $M(x, t, \alpha, \beta)$, which describes the following mixing processes. If in a mixing process one of the portions $m(\omega, a, t)$, $m(\omega, b, t)$ is underlined, then it means that we ask for the amount of mass (positive or negative, equal to the integrals in the frames on the right) which is transported to the underlined portion in the unit time. It turns out that $M(x, t, \alpha, \beta) = -M(x, t, \beta, \alpha)$. The following three mixing processes (4), (5), (6) should be taken into account in the mass conservation law for the portion $m(\omega, a, t)$, where $a \subset A$: (4) $m(\omega, a, t) \Longleftrightarrow m(\omega, A\setminus a, t)$, (5) $m(\omega, a, t) \Longleftrightarrow m(\mathbb{R}^3\setminus\omega, a, t)$, (6) $m(\omega, a, t) \Longleftrightarrow m(\mathbb{R}^3\setminus\omega, A\setminus a, t)$.
The amount of mass transported in the unit time in these processes to the portion $m(\omega, a, t)$ is respectively equal to the corresponding expressions, where $n(x)$ is the external unit normal vector to the boundary $\partial\omega$.
In (4) we used the properties (3) of the mass mixer $M(x, t, \alpha, \beta)$. We treat the process (5) as in the laminar model, without any mixer, since in this process we have on both sides of the boundary $\partial\omega$ fluids with the same kinematical characteristics. In the process (5) we take into account mass transport of the diffusion type with the constant $E > 0$. The process (6) is a mixing process of two fluid portions with different kinematical characteristics across the boundary $\partial\omega$. Therefore we have to introduce the boundary mass mixer $B$. It describes the amount of mass transported in the unit time to $m(\omega, a, t)$ in the process (6).
We shall use the following simple
Localization Theorem
Let $D : \mathbb{R}^7 \to \mathbb{R}$, $F : \mathbb{R}^7 \to \mathbb{R}$. Then the condition is equivalent to the following system of equations. Applying the Localization Theorem to the integral conservation law (7), we obtain the following integro-differential system equivalent to the integral law:
3.The Impulse Conservation Law
We introduce the following impulse mixer $J(x, t, \alpha, \beta)$. It is the function $J(x, t, \alpha, \beta) = \alpha M(x, t, \alpha, \beta) + i(x, t, \alpha, \beta)$, where $\alpha M(x, t, \alpha, \beta)$ describes the transport of impulse caused by the mass transport, and $i(x, t, \alpha, \beta)$ describes other types of impulse changes. The physical sense of the mixer $J(x, t, \alpha, \beta)$ is the following: it describes the amount of impulse transported in a unit time to the portion $m(\omega, a, t)$ in the corresponding process. On the one hand, the change of the impulse in a unit time in the portion $m(\omega, a, t)$ is equal to the corresponding integral. On the other hand, this change is equal to the force acting on $m(\omega, a, t)$. This force is the sum of the forces acting on $m(\omega, a, t)$ by the fluid portions $m(\omega, A\setminus a, t)$, $m(\mathbb{R}^3\setminus\omega, a, t)$, $m(\mathbb{R}^3\setminus\omega, A\setminus a, t)$, plus the external forces acting on $m(\omega, a, t)$.
Let us determine these forces. The forces acting on $m(\omega, a, t)$ are equal to the amount of impulse transported in a unit time to $m(\omega, a, t)$.
In the process m(ω, a, t) ⇐⇒ m(ω, A\a, t) (according to the definition of the impulse mixer) the amount of impulse transported to m(ω, a, t) in a unit time is equal to In the process the amount of impulse transported in a unit time to m(ω, a, t), is equal to: In the process the increase of impulse in the unit time in the portion m(ω, a, t) is equal to where we introduced the boundary impulse mixer J B (x, t, α, β). It is a matrix J B : R 10 → R 3 × R 3 . Finally, let us take into account the processes (4),(5),(6), and the external forces. Then, after changing the surface integrals into volume ones, we obtain the following integral form of the impulse conservation law for the portion m(ω, a, t): is the external force acting on m(ω, a, t). In the case of the earth gravitation field, we have f (x, t, α) = g̺(x, t, α), g = const.
Applying to (13) the Localization Theorem, we obtain the following integro-differential system equivalent to the integral impulse conservation law:
4.The Energy Conservation Law.
Let us introduce the α-inner energy of the flow. This is a function $\varepsilon(x, t, \alpha)$. Denote by $e(\omega, a, t)$ the inner energy of the fluid portion $m(\omega, a, t)$. Then the physical sense of the α-energy is explained by the formula $e(\omega, a, t) = \kappa \int_\omega \int_a \varepsilon(x, t, \alpha)\,\varrho(x, t, \alpha)\,dx\,d\alpha$.
Denoting by e(ω, t) the inner energy of the fluid portion m(ω, t), we obtain the following equality: Hence, we get The force F (ω, a, t, ) acting on the portion m(ω, a, t) is equal to the increase of the impulse of m(ω, a, t) in the unit time. Hence, from the impulse conservation law we obtain The increase of the energy of the portion m(ω, a, t) in the unit time is equal to the power due to the force F (ω, a, t). Let us evaluate it. Divide ω and a into small parts ∆ω ⊂ ω, ∆a ⊂ a, and notice that F (ω, a, t) acts on m(ω, a, t) is such a way that on the small parts m(∆ω, ∆a, t) ⊂ m(ω, a, t) the following forces are acting: The power due to F (ω, a, t) is the sum of the powers P (∆ω, ∆a, t) due to the forces F (∆ω, ∆a, t) where ∆ω ⊂ ω, ∆a ⊂ a. For o x∈ ∆ω, o α∈ ∆a, we obtain the following approximation for small ∆t: Finally, for the whole power P (ω, a, t) due to the force F (ω, a, t), we obtain Now, we can write down the integral energy conservation law for the portion m(ω, a, t): Putting a = A and taking into account (15), we obtain the following relation In this way, one determine the mean inner energy ε(x, t) for given α-density ̺(x, t, α). Applying to (16) the Localization Theorem, we obtain the following integro-differential system equivalent to the integral energy conservation law:
5.The Impulse Momentum Conservation Law.
Applying the principle "the derivative $\partial_t$ of the impulse momentum equals the momentum of the acting forces" together with our Impulse Conservation Law, we can formulate the Impulse Momentum Conservation Law for the fluid portion $m(\omega, a, t)$.
For $F, H \in \mathbb{R}^3$, we obtain the following integro-differential system equivalent to the integral Impulse Momentum Conservation Law.
6. The constitutive relation and the full closed system of the model
In order to formulate the constitutive relation, let us express approximately the boundary mixer $B(x, t, \alpha, \beta)$ as a function of the mixer $M(x, t, \alpha, \beta)$. To this end, we introduce the following notations, where $\varepsilon$ is a small positive number.
Consider two following complex mixing processes (1 o ) and (2 o ). These processes , taken together, give us an approximation of the process (6): The complex process 1 o gives us: This process represents a part of the process (6). The second complex process is of the form: This complex process gives us m(d ε ω, a, t) ⇐⇒ m(R 3 \ω, A\a, t). The process 1 o , (b) is of the type (5). Replacing in the description of the process (5) ̺(x, t, α) by κ A\a M(x, t, α, β)dβ and neglecting for simplicity the mass transport of diffusion type, we obtain that the amount of mass transported in the process (1 o ), (b) in a unit time to m(ω, a, t) is approximately equal to The process (2 o ).
The density of mass transported in the unit time in the process (2°)(a) to $m(d_\varepsilon\omega, A\setminus a, t)$ is given for $x \in d_\varepsilon\omega$, and the α-density of this mass is given accordingly. The process (2°)(b) is of the type (5), but in the opposite direction, from $\omega$ to $\mathbb{R}^3\setminus\omega$. Hence, neglecting for simplicity the mass transport of diffusion type, the amount of mass transported in the unit time in the process (2°)(b) to the portion $m(\mathbb{R}^3\setminus\omega, A\setminus a, t)$ is approximately equal to the corresponding expression. In the final description of the mass transport to $m(\omega, a, t)$ in the process (6) we have to take the last expression with a minus sign, because it describes the mass leaving $\omega$. Considering together the complex processes (1°) and (2°), we see that the amount of mass transported in the unit time in the process (6) to $m(\omega, a, t)$ is approximately equal to $-\kappa^2 \int_{\partial\omega}\int_a\int_{A\setminus a} \langle n(x),\, \alpha M(x, t, \alpha, \beta) + \beta M(x, t, \beta, \alpha)\rangle\,dx\,d\alpha\,d\beta$.
We have the integro-differential system of 16 equations: (11) 2 equations (the mass conservation law), (14) 6 equations (the impulse conservation law), (18) 2 equations (the energy conservation law), (19) 6 equations (the impulse-momentum conservation law). The system contains 18 unknown functions, α-quantities, and mixers. Based on the relation (20), we introduce the following constitutive relation, in which a new unknown function appears. In this way we reduce the number of unknown functions to 16, and we obtain the general closed system of the model.
Simplified Models.
The full integro-differential system of the model is very complicated and we are forced to seek some simplified models. There exist many possibilities for such simplification. We shall give here the simplest of them. The main idea is to suggest some special form of the mass mixer $M(x, t, \alpha, \beta)$, for example (22) $M(x, t, \alpha, \beta) = \Phi(\alpha, \beta)\,\mu(x, t, \alpha, \beta)$. If $d = |\beta|\varrho(x, t, \beta) - |\alpha|\varrho(x, t, \alpha) \le 0$, then the mass of a portion moving with the velocity $\beta$ loses (in the unit time), in favour of the portion moving with the velocity $\alpha$, an amount of mass proportional to $\cos\frac{\vartheta}{2}\,\varrho(x, t, \beta)\,r(Dd)$.
2° We introduce the form (22) of the mass mixer $M(x, t, \alpha, \beta)$, defining in this way the mixing state of the fluid. This is closely related to the impulse conservation law, since our mixer (22) decides how the impulse situation governs the mixing process.
Observe that for the flow described by the α-quantities the construction of the sets must be based on the following relation: $S(\tau + \Delta, \tau, \omega_\delta) \approx \bigcup_{y \in \omega_\delta}\big(y + \Delta V(y, \tau)\big)$, $\tau \ge t_o$, $\omega_\delta = \{y : |y - x| < \delta\}$; hence the smaller $\Delta > 0$, the better the approximation. The value of $\Delta > 0$ should be chosen according to the properties of the sets $V(x, t)$, $t \ge 0$, that appear in the initial and boundary conditions. If $V(x, t)$ changes rapidly in $x, t$, then we have to take $\Delta > 0$ small enough; otherwise $\Delta > 0$ may be larger.
The question of the determination of the parameters $\varepsilon > 0$ and $\kappa = \frac{3}{4\pi}\left(\frac{\Delta}{\varepsilon}\right)^3$ is as follows. First we must realize that without a strict definition of $\kappa$, the system (23) and the model are not defined in full. Taking $\kappa = \mathrm{const}$ larger or smaller, we decide that the mixing step in the flow considered will be larger or smaller, respectively.
For instance, this may be seen in our condition $S(t + \Delta, t, \omega_\varepsilon) \approx x + \Delta V(x, t)$.
The right side does not depend on $\varepsilon$. Hence, for large $\kappa$ and small $\varepsilon$ the set $\omega_\varepsilon$ is very small, and the very small mass portion $m(\omega_\varepsilon, t)$ must be spread over the set $x + \Delta V(x, t)$, which does not depend on $\varepsilon$.
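A small numerical illustration of the scaling constant discussed above is given below, assuming the reading $\kappa = \frac{3}{4\pi}(\Delta/\varepsilon)^3$ of the formula; the sample values of $\Delta$ and $\varepsilon$ are arbitrary.

```python
# Numerical check of kappa = (3 / (4*pi)) * (Delta / epsilon)^3 for a few
# arbitrary choices of the time step Delta and the ball radius epsilon.
from math import pi

def kappa(delta, eps):
    return 3.0 / (4.0 * pi) * (delta / eps) ** 3

for delta, eps in [(0.01, 0.01), (0.01, 0.005), (0.02, 0.01)]:
    print(f"Delta={delta}, eps={eps}  ->  kappa={kappa(delta, eps):.3f}")
```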
The first numerical experiments for our simplified model may be found in [2].
"Mathematics",
"Physics"
] |
Inclusive electron scattering off $^{12}$C, ${}^{40}$Ca, and ${}^{40}$Ar: effects of the meson exchange currents
The scattering of electrons on carbon, calcium, and argon targets is analyzed using an approach that incorporates the contributions to the electromagnetic response functions from the quasielastic (QE) and inelastic processes and from two-particle two-hole meson exchange currents ($2p-2h$ MEC). This approach describes well the whole energy spectrum of data at very different kinematics. It is shown that the accuracy of the $(e,e')$ cross section calculations in the region between the QE and delta-resonance peaks, where the $2p-2h$ MEC contribution reaches its maximum value, depends on the momentum transfer $|\mathbf{q}|$, and at $|\mathbf{q}|>500$ MeV the calculated and measured cross sections agree within the experimental uncertainties.
I. INTRODUCTION
The current [1,2] and future [3,4] long-baseline neutrino experiments aim at measuring the lepton CP violation phase, improving the accuracy of the value of the mixing angle $\theta_{23}$, and determining the neutrino mass ordering. To evaluate the oscillation parameters, the probabilities of neutrino oscillations as functions of neutrino energy are measured. The neutrino beams are not monoenergetic and have broad distributions that range from tens of MeV to a few GeV. This is one of the problems in achieving a high level of accuracy in the oscillation parameter measurements.
In this energy range, charged-current (CC) quasielastic (QE) scattering induced by both one- and two-body currents and resonance production are the main contributions to neutrino-nucleus scattering. The incident neutrino energy is reconstructed using calorimetric methods, which rely not only on the visible energy measured in the detector, but also on models of the neutrino-nucleus interactions that are implemented in neutrino event generators. In addition to its role in the reconstruction of the neutrino energy, the neutrino-nucleus scattering model is critical for obtaining background estimates and for correct extrapolations of the near detector constraints to the far detector in analyses aimed at determining the neutrino oscillation parameters.
The modeling of neutrino-nucleus interactions in the energy range $\varepsilon_\nu \sim 0.2 \div 5$ GeV is one of the most complicated issues of the neutrino oscillation experiments. The description of nuclear effects is one of the largest sources of systematic uncertainty despite the use of the near detector for tuning the nuclear models employed in the neutrino event generators. A significant systematic uncertainty arises from the description of scattering induced by the two-body meson exchange currents (MEC), which may produce two-particle two-hole final states. Such excitations are induced by two-body currents; hence, they go beyond the impulse approximation scheme, in which the probe interacts with only a single nucleon, corresponding to $1p-1h$ excitations. A poor modelling of the MEC processes leads to a bias in the reconstruction of the neutrino energy and thereby to large systematic uncertainties in the neutrino oscillation parameters [5].
In recent years, many studies have been presented to improve our knowledge of lepton-nucleus scattering [6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23][24]. Approaches that go beyond the impulse approximation were developed in Refs. [11-14, 18, 19, 21, 22]. As neutrino beams have broad energy distributions, various contributions to the cross sections can significantly overlap with each other, making it difficult to identify, diagnose, and remedy shortcomings of nuclear models. On the other hand, in electron scattering the energy and momentum transfer are known, and therefore measurements with kinematics and targets of interest to neutrino experiments give an opportunity to validate and improve the description of nuclear effects. Electron beams can be used to investigate physics corresponding to different interaction mechanisms, by measuring the nuclear response at energy transfers varied independently of the three-momentum transfer. The neutrino detectors are typically composed of scintillator, water, or argon. There is a large body of electron-scattering data on carbon and calcium and only a few data sets available for scattering on argon.
Weak interactions of neutrinos probe the nucleus in a similar way to electromagnetic electron interactions. The vector part of the electroweak interaction can be inferred directly from electron scattering, and the influence of the nuclear medium is the same as in neutrino-nucleus scattering. Precise electron-scattering data give a unique opportunity to validate the nuclear models employed in neutrino physics. A model unable to reproduce electron measurements cannot be expected to provide accurate predictions for neutrino cross sections. Thus, a detailed comparison with electron-scattering data (semi-inclusive and inclusive cross sections and response functions) is a necessary test for any theoretical model used to describe lepton-nucleus interactions.
In this work we test a joint calculation of the QE, $2p-2h$ MEC, and inelastic scattering contributions (the RDWIA+MEC+RES approach) on carbon, calcium, and argon, using the relativistic distorted-wave impulse approximation (RDWIA) for the quasielastic response and the meson exchange current response functions for $2p-2h$ final states presented in Ref. [15].
For calculation of inelastic contributions to the cross sections we adopt parameterizations for the single-nucleon inelastic structure functions given in Refs. [25,26], which provide a good description of the resonant structure in (e, e ′ ) cross sections and cover a wide kinematic region. We compare the RDWIA+MEC+RES predictions with the whole energy spectrum of (e, e ′ ) data, including the recent JLab data for electron scattering on carbon and argon.
We also perform a comparison and analysis of the calculated cross sections and data at the momentum transfer that corresponds to the region between the QE and ∆-resonance peak, where the 2p − 2h response is peaked.
In Sec.II we briefly introduce the formalism needed for studying electron scattering off nuclei with quasielastic, 2p − 2h MEC, and resonance production contributions. We also describe briefly the basic aspects of the models used for the calculations. The results are presented and discussed in Sec.III. Our conclusions are summarized in Sec.IV.
II. QUASIELASTIC, 2p−2h MEC, AND INELASTIC RESPONSES
We consider the inclusive electron-nucleus scattering in the one-photon exchange approximation. Here $k_i = (\varepsilon_i, \mathbf{k}_i)$ and $k_f = (\varepsilon_f, \mathbf{k}_f)$ are the initial and final lepton momenta, $p_A = (\varepsilon_A, \mathbf{p}_A)$ is the initial target momentum, $q = (\omega, \mathbf{q})$ is the momentum transfer carried by the virtual photon, and $Q^2 = -q^2 = \mathbf{q}^2 - \omega^2$ is the photon virtuality.
A. Electron-nucleus cross sections
In the inclusive reactions (1) only the outgoing lepton is detected, and the differential cross section can be written in terms of the lepton and hadron tensors, where $\Omega_f = (\theta, \phi)$ is the solid angle for the electron momentum, $\alpha \simeq 1/137$ is the fine-structure constant, $L_{\mu\nu}$ is the lepton tensor, and $W^{\mu\nu}$ is the electromagnetic nuclear tensor.
In terms of the longitudinal $R_L$ and transverse $R_T$ nuclear response functions, the cross section reduces to a sum of the two responses weighted by kinematic factors and multiplied by the Mott cross section. The coupling coefficients are kinematic factors depending on the lepton kinematics. The response functions are given in terms of components of the hadronic tensors and depend on the variables $(Q^2, \omega)$ or $(|\mathbf{q}|, \omega)$. They describe the electromagnetic properties of the hadronic system. The relations between the response functions and the cross sections for longitudinally ($\sigma_L$) and transversely ($\sigma_T$) polarized virtual photons involve $K = \omega - Q^2/2m$, the equivalent energy of a real photon needed to produce the same final mass state, where $m$ is the mass of the nucleon.
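The decomposition can be illustrated numerically as in the sketch below. Since the text does not write out the kinematic coupling coefficients, the standard longitudinal and transverse factors $v_L = (Q^2/|\mathbf{q}|^2)^2$ and $v_T = Q^2/(2|\mathbf{q}|^2) + \tan^2(\theta/2)$ are used here as an assumption of this sketch, and natural units ($\hbar = c = 1$) are taken.

```python
# Sketch of sigma_Mott * (v_L R_L + v_T R_T); the v_L, v_T forms are an
# assumption (standard Rosenbluth-type factors), not quoted from the paper.
import math

ALPHA = 1.0 / 137.036  # fine-structure constant

def mott(eps_i, theta):
    """Mott cross section (GeV^-2 per steradian) for beam energy eps_i in GeV."""
    return (ALPHA * math.cos(theta / 2)) ** 2 / (2 * eps_i * math.sin(theta / 2) ** 2) ** 2

def inclusive_xsec(eps_i, theta, omega, r_l, r_t):
    """Inclusive cross section for energy transfer omega and responses R_L, R_T."""
    eps_f = eps_i - omega
    q2 = eps_i**2 + eps_f**2 - 2 * eps_i * eps_f * math.cos(theta)  # |q|^2 (massless leptons)
    Q2 = q2 - omega**2                                              # photon virtuality
    v_l = (Q2 / q2) ** 2
    v_t = Q2 / (2 * q2) + math.tan(theta / 2) ** 2
    return mott(eps_i, theta) * (v_l * r_l + v_t * r_t)

# Toy numbers only: 0.5 GeV beam, 60 degrees, 0.1 GeV energy transfer.
print(inclusive_xsec(0.5, math.radians(60.0), 0.1, r_l=10.0, r_t=5.0))
```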
All the nuclear structure information and final state interaction (FSI) effects are contained in the electromagnetic nuclear tensor. It is given by an expression in which $J^\mu$ is the nuclear electromagnetic current operator that connects the initial nucleus state $|A\rangle$ and the final state $|X\rangle$. The sum is taken over the scattering states corresponding to all allowed asymptotic configurations. This equation is very general and includes all possible channels. Thus, the hadron tensor can be expanded as the sum of the $1p-1h$ and $2p-2h$ contributions, plus additional channels, including the inelastic electron-nucleus scattering $W_{in}$. The hadronic tensors $W_{1p1h}$, $W_{2p2h}$, and $W_{in}$ determine, correspondingly, the QE, $2p-2h$ MEC, and inelastic response functions. Therefore, the functions $R_i$ in Eq. (6) can be written as a sum of the QE ($R_{i,QE}$), MEC ($R_{i,MEC}$), and inelastic ($R_{i,in}$) response functions.
B. Model
We describe genuine QE electron-nucleus scattering within the RDWIA approach. This formalism is entirely based on the impulse approximation, namely one-body currents. In this approximation the nuclear current is written as a sum of single-nucleon currents, and the nuclear matrix element in Eq. (8) takes the corresponding one-body form, where $\Gamma^\mu$ is the vertex function and $t = \varepsilon_B q/W$ is the recoil-corrected momentum transfer. For electron scattering, we use the electromagnetic vertex function for a free nucleon, where $\sigma^{\mu\nu} = i[\gamma^\mu, \gamma^\nu]/2$, and $F_V$ and $F_M$ are the Dirac and Pauli nucleon form factors. We use the approximation of Ref. [27] for the Dirac and Pauli nucleon form factors and employ the de Forest prescription [28] and the Coulomb gauge for the off-shell vector current vertex $\Gamma^\mu$, because the bound nucleons are off-shell.
In the RDWIA calculations the independent particle shell model (IPSM) is assumed for the nuclear structure. In Eq. (11) the relativistic bound-state wave functions for nucleons $\Phi$ are obtained as the self-consistent solutions of a Dirac equation, derived within a relativistic mean-field approach from a Lagrangian containing $\sigma$, $\omega$, and $\rho$ mesons [29]. The nucleon bound-state functions were calculated by the TIMORA code [30] with the normalization factors $S$ relative to full occupancy of the IPSM orbitals. For carbon the average factor is $S \approx 89\%$, and for $^{40}$Ca and $^{40}$Ar the occupancy is $S \approx 87\%$ on average. These estimates of the depletion of the hole states follow from the RDWIA analyses of $^{12}$C$(e, e'p)$ [31,32] and $^{40}$Ca$(e, e'p)$ [15]. In this work we assume that the source of the reduction of the $(e, e'p)$ spectroscopic factors with respect to the mean-field values is the NN short-range and tensor correlations in the ground state, leading to the appearance of a high-momentum and high-energy component in the nucleon distribution in the target.
In the RDWIA, final state interaction effects for the outgoing nucleon are taken into account. The distorted-wave function of the knocked out nucleon Ψ is evaluated as a solution of a Dirac equation containing a phenomenological relativistic optical potential. This potential consists of a real part, which describes the rescattering of the ejected nucleon and an imaginary part for the absorption of it into unobserved channels. The EDAD1 parameterization [33] of the relativistic optical potential for carbon and calcium was used in this work.
A complex optical potential with a nonzero imaginary part generally produces an absorption of the flux. However, for the inclusive cross section, the total flux must be conserved. The inclusive responses, i.e., no flux lost, can be handled by simply removing the imaginary terms in the potential. This yields results that are almost identical when calculated via relativistic Green's function approach [34,35] and Green's function Monte Carlo method [36] in which the FSI effects are treated by means of complex potential and total flux is conserved.
The inclusive cross sections with the FSI effects, taking into account the NN correlations, were calculated using the method proposed in Ref. [8] with the nucleon high-momentum and high-energy distribution from Ref. [37] renormalized to a value of 11% for carbon and of 13% for calcium and argon. The contribution of the NN-correlated pairs is evaluated in the impulse approximation, i.e., the virtual photon couples to only one member of the NN pair. It is a one-body current process that leads to the emission of two nucleons (a $2p-2h$ excitation).
The evaluation of the $2p-2h$ MEC contributions is performed within the relativistic Fermi gas model [17,38]. The short-range NN correlations and FSI effects were not considered in this approach. The elementary hadronic tensor $W^{\mu\nu}_{2p-2h}$ is given by the bilinear product of the matrix elements of the two-body electromagnetic MEC. Only one-pion exchange is included. The two-body current operator is obtained from the electroweak pion production amplitudes for the nucleon [39], with a second nucleon coupled to the emitted pion.
The two-body electromagnetic current is the sum of seagull, pion-in-flight, and Delta-pole currents. The seagull terms are associated with the interaction of the virtual photon at the $NN\pi$ vertex, whereas the pion-in-flight operator refers to the direct interaction of the photon with the virtual pion. The $\Delta$ peak is the main contribution to the pion production cross section. However, inside the nucleus the $\Delta$ can also decay into a nucleon that rescatters, producing two-nucleon emission without pions. As a result, the MEC peak is located in the dip region between the QE and Delta peaks, i.e., in the corresponding region of the invariant mass of the pion-nucleon pair, where $m_\pi$ is the mass of the pion.
The exact evaluation of the $2p-2h$ hadronic tensor in a fully relativistic way, performed in Refs. [17,38], is highly non-trivial. In the present work we evaluate the electromagnetic MEC response functions $R_{i,MEC}$ for electron scattering on carbon using accurate parameterizations of the exact MEC calculations. The $2p-2h$ MEC contributions for $^{40}$Ca and $^{40}$Ar were calculated using the parameterization for $^{12}$C rescaled for calcium and argon according to Ref. [40]. The parameterization employed for the different electroweak responses is a function of $(\omega, |\mathbf{q}|)$ and is valid in the momentum-transfer range $|\mathbf{q}| = 200 \div 2000$ MeV. The expressions for the fitting parameters are described in detail in Refs. [19,20,41].
Finally, the inelastic response functions R i,in were calculated using the parameterization for the neutron [25] and proton [26] structure functions. This approach is based on an empirical fit to describe the measurements of inelastic electron-proton and electron-deuteron cross sections in the kinematic range of four-momentum transfer 0 < Q 2 < 8 GeV 2 and final state invariant mass 1.1 < W x < 3.1 GeV, thus starting from pion production region to the highly-inelastic region. These fits are constrained by the high precision longitudinal σ L and transverse σ T separated cross section measurements and provide a good description of the structures seen in inclusive (e, e ′ ) cross sections.
III. RESULTS AND ANALYSIS
Before providing reliable predictions for neutrino scattering, any model must be validated by confronting it with electron-scattering data. Agreement between the model's predictions and the data gives us confidence in the extension of this phenomenological approach and in its validity, at least in the vector sector of the electroweak interaction.
To test the RDWIA+MEC+RES approach we calculated the double-differential inclusive $(e, e')$ cross sections and compared them with data. The data shown in the corresponding figures are from Ref. [49] (filled squares) and Ref. [43] (filled triangles) and are compared with the RDWIA+MEC+RES predictions.
This behaviour should be expected, since this is the region where the impulse approximation conditions may not be satisfied and collective nuclear effects are important.
The agreement between theory and data in the inelastic region is also good within the experimental uncertainties. The inelastic part of the cross section is dominated by the $\Delta$ peak, which contributes to the transverse response function. In particular, $\omega_{QE} = \sqrt{|\mathbf{q}|^2 + m^2} - m$ corresponds roughly to the center of the quasielastic peak, and $\omega_\Delta = \sqrt{|\mathbf{q}|^2 + m_\Delta^2} - m$ to the $\Delta$-resonance [$m_\Delta$ is the mass of the $\Delta(1232)$]. When the momentum transfer is not too high, these regions are clearly separated in the data, allowing for a test of theoretical models for each specific process. On the other hand, for increasing values of the momentum transfer the peaks corresponding to the $\Delta$ and QE domains become closer, and their overlap increases significantly. In this case only the comparison with a complete model including inelastic processes is meaningful.
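A quick numerical check of these peak positions, using the formulas just quoted with the standard nucleon and Δ(1232) masses, shows how the QE and Δ regions move toward each other as |q| grows:

```python
# omega_QE = sqrt(|q|^2 + m^2) - m ; omega_Delta = sqrt(|q|^2 + m_Delta^2) - m
# All quantities in MeV.
from math import sqrt

M_N, M_DELTA = 938.9, 1232.0  # nucleon and Delta(1232) masses

def omega_qe(q):
    return sqrt(q**2 + M_N**2) - M_N

def omega_delta(q):
    return sqrt(q**2 + M_DELTA**2) - M_N

for q in (300.0, 500.0, 1000.0):   # momentum transfer |q|
    print(f"|q|={q:6.0f}  omega_QE={omega_qe(q):6.1f}"
          f"  omega_Delta={omega_delta(q):6.1f}"
          f"  separation={omega_delta(q) - omega_qe(q):6.1f}")
```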
In addition to the previous analysis, we have also tested the validity of the RDWIA+MEC+RES approach through the analysis of the recent JLab data [50,51] for inclusive electron scattering on carbon and argon at incident electron energy $E = 2.222$ GeV and scattering angle $\theta = 15.54°$. As observed in Fig. 3, the agreement between theory and data is very good over most of the energy spectrum, with some minor discrepancy seen only at the $\Delta$-resonance peak. For completeness, we also present in this figure the electron-argon scattering spectrum measured at the beam energy $E = 700$ MeV and scattering angle $\theta = 32°$ [52]. Note that the $2p-2h$ MEC response, peaked in the dip region between the QE and $\Delta$ peaks, is essential to reproduce the data.
In the SLAC experiment [43] the inclusive cross sections $d\sigma/d\varepsilon d\Omega$ for electron scattering on $^{12}$C and $^{40}$Ca were measured under the same kinematical conditions, i.e., at incident electron energy $E = 500$ MeV and $\theta = 32°$. Using the SLAC and JLab data we estimated the measured ratios $(\mathrm{Ca/C}) = (d\sigma_{Ca}/d\varepsilon d\Omega)_{nucl}/(d\sigma_{C}/d\varepsilon d\Omega)_{nucl}$ and $(\mathrm{Ar/C}) = (d\sigma_{Ar}/d\varepsilon d\Omega)_{nucl}/(d\sigma_{C}/d\varepsilon d\Omega)_{nucl}$, where the differential cross sections $(d\sigma_i/d\varepsilon d\Omega)_{nucl}$ are scaled with the number of nucleons in the targets. Figure 4 shows the measured ratios as functions of energy transfer, compared to the RDWIA+MEC+RES calculations in the QE peak region. The calculated (Ca/C) [15] and (Ar/C) ratios agree with the data; the observed effect of $\approx 15\%$ in the QE peak region is larger than the experimental errors. In Ref. [15] it was shown that the ground-state properties of these nuclei and the FSI effects give the dominant contributions to the difference between the $^{12}$C and $^{40}$Ca ($^{40}$Ar) differential cross sections per nucleon. The difference between the results for the carbon and argon targets is of a similar origin. We calculated the ratios $R^i_{dip} = (d\sigma_i/d\varepsilon d\Omega)_{cal}/(d\sigma_i/d\varepsilon d\Omega)_{data}$ at the momentum transfer $|\mathbf{q}|_{dip}$ that corresponds to the minimum of the measured cross section, where $(d\sigma_i/d\varepsilon d\Omega)_{cal}$ and $(d\sigma_i/d\varepsilon d\Omega)_{data}$ are the calculated and measured cross sections, respectively, for electron scattering off carbon ($i$ = C) and calcium ($i$ = Ca). The values of $|\mathbf{q}|_{dip}$ run from $\approx 250$ MeV to $\approx 1100$ MeV for carbon and $340 \le |\mathbf{q}|_{dip} \le 660$ MeV for calcium. We also calculated the $2p-2h$ MEC contributions to the $(e, e')$ differential cross sections, i.e., the ratios $\delta_{MEC} = (d\sigma/d\varepsilon d\Omega)_{MEC}/(d\sigma/d\varepsilon d\Omega)$, where $(d\sigma/d\varepsilon d\Omega)_{MEC}$ is the $2p-2h$ MEC differential cross section for electron scattering off nuclei. Figure 5 shows the ratios $R^i_{dip}$ and $\delta_{MEC}$ as functions of $|\mathbf{q}|_{dip}$. The result presented in Fig. 5(a) demonstrates that the $R^C_{dip}$ ratio increases with $|\mathbf{q}|_{dip}$ from 0.7 at $|\mathbf{q}|_{dip} \approx 250$ MeV to $\approx 1$ at $|\mathbf{q}|_{dip} \approx 500$ MeV, i.e., the discrepancy decreases from about 30% at $|\mathbf{q}|_{dip} \approx 250$ MeV ($|\mathbf{q}|_{dip} \approx k_F$) to 10% at $|\mathbf{q}|_{dip} \ge 500$ MeV ($|\mathbf{q}|_{dip} \ge 2k_F$), where $k_F$ is the Fermi momentum. We can use this estimate as a conservative estimate of the accuracy of the $2p-2h$ MEC response calculation in the vector sector of the electroweak interaction.
IV. CONCLUSIONS
In this article, we studied the quasielastic, $2p-2h$ MEC, and inelastic electron scattering on carbon, calcium, and argon targets in the RDWIA+MEC+RES approach. This approach was extended to the whole energy spectrum, incorporating the contributions coming from the QE, inelastic, and $2p-2h$ meson exchange current processes. In the calculation of the QE cross sections within the RDWIA, the effects of FSI and of short-range NN correlations in the target ground state were taken into account. An accurate parameterization of the exact MEC calculations of the nuclear response functions was used to evaluate the MEC response.
The inelastic response functions were calculated using the parameterization for the neutron and proton structure functions. These functions were obtained from the fit of the measured inelastic electron-proton and electron-deuteron cross sections.
The present approach is capable of successfully reproducing the whole energy spectrum of $(e, e')$ data at very different kinematics, including the recent JLab data for inclusive electron scattering on carbon and argon. It was shown that both the measured QE cross sections per target nucleon for electron scattering on $^{40}$Ca ($^{40}$Ar) and those calculated in the RDWIA model are lower than the corresponding cross sections for $^{12}$C. An effect of about 15% is observed in the QE region and is larger than the experimental errors.
For electron scattering on the carbon and calcium targets we evaluated the ratios of the calculated inclusive cross sections to the measured ones at the momentum transfer $|\mathbf{q}|_{dip}$ that corresponds to the minimum of the measured cross sections in the dip region. We also estimated the $2p-2h$ MEC contribution to the $(e, e')$ cross section at $|\mathbf{q}|_{dip}$. At $|\mathbf{q}|_{dip} < 250$ MeV the RDWIA+MEC+RES approach underestimates the measured cross sections by about 30%, while it agrees with the data within the experimental uncertainties at $|\mathbf{q}|_{dip} \ge 500$ MeV. The MEC contribution decreases with $|\mathbf{q}|_{dip}$ from 65% at $|\mathbf{q}|_{dip} = 250$ MeV to 20% at $|\mathbf{q}|_{dip} = 1000$ MeV. These results depend weakly on the electron beam energy. Thus, we validated the RDWIA+MEC+RES approach in the vector sector of the electroweak interaction by describing the $^{12}$C, $^{40}$Ca, and $^{40}$Ar data.
"Physics"
] |
Recent results on search for new physics at BaBar
We present some recent measurements for the search of New Physics using 514 fb−1 of e+e− collisions collected with the BABAR detector at the PEP-II e+e− collider at SLAC. First we present a search for the decay Υ(1S) → γA0, A0 → cc̄, where A0 is a candidate for the CP-odd Higgs boson of the next-to-minimal supersymmetric standard model. No significant signal is observed and we set 90% confidence-level upper limits on B(Υ(1S) → γA0) × B(A0 → cc̄). We report the search for a light non-Standard-Model gauge boson Z′ coupling only to the second and third lepton families. Our results significantly improve current limits and further constrain the remaining region of the allowed parameter space. Finally, we present a search for a long-lived particle L that is produced in e+e− annihilations and decays into two oppositely charged tracks. We do not observe a significant signal and we set 90% confidence-level upper limits on the product of the L production cross section, branching fraction, and reconstruction efficiency as a function of the L mass. In addition, upper limits are provided on the branching fraction B(B → XsL), where Xs is a hadronic system with strangeness −1.
Introduction
The Standard Model (SM) of particle physics has proven to be very successful in explaining the fundamental laws of nature, and most of the theoretical results of the SM agree with the experimental data. The SM, however, cannot be regarded as a complete theory of fundamental interactions, as it shows theoretical drawbacks and is unable to explain some experimental observations. The so-called "hierarchy problem" requires fine-tuning of the Higgs-boson mass so that the quadratic divergences occurring in loop corrections are systematically canceled, which is deemed unnatural by many theorists. Some extensions of the SM account for these problems and naturally stabilize the hierarchy between the electroweak and the Planck scale [1]. Experimentally, astrophysical and cosmological observations indicate that a fraction of the energy density in the universe is due to non-baryonic matter, usually called dark matter (DM). The nature of DM is currently unknown, and the SM does not have a viable candidate for DM particles. Many models of physics beyond the SM predict the existence of new gauge groups with gauge boson masses below 10 GeV, which can typically interact with other SM elementary particles through so-called portals [2].
We herein present three different searches for new physics candidates using data recorded by the BABAR detector at the PEP-II asymmetric-energy e+e− storage rings operated at the SLAC National Accelerator Laboratory. The data sample consists of 424 fb−1 of e+e− collisions recorded at the center-of-mass (CM) energy of the Υ(4S) resonance, 27.8 fb−1 of data recorded at the Υ(3S) resonance, and 13.6 fb−1 of data collected at the Υ(2S) resonance. A detailed description of the BABAR detector is given elsewhere [3].
Search for a light Higgs in radiative decays of the Υ(1S ) with a charm tag
An appealing extension of the Standard Model is represented by the next-to-minimal supersymmetric standard model (NMSSM), since it solves both the μ problem of the minimal supersymmetric standard model (MSSM) and the hierarchy problem of the SM [4,5]. The NMSSM has an extended Higgs sector composed of two charged, three neutral CP-even, and two neutral CP-odd bosons. The mass of the lightest NMSSM Higgs boson A0 could be low enough for it to be produced in the decay of the Υ(1S) [6]. The branching fractions of the A0 to SM particles depend on the A0 mass and the NMSSM parameter tan β [7]. BABAR has already performed searches below the charm mass threshold, searching for A0 → μ+μ− [8,9] and for A0 → gg or ss̄ [10], as well as above the charm mass threshold in A0 → τ+τ− searches [11]. None of these searches has observed a significant signal, and most of the NMSSM parameter space has been ruled out. Here we report a search for the decay Υ(1S) → γA0, A0 → cc̄, with A0 masses ranging from 4.00 to 9.25 GeV/c². For this search we use 13.6 fb−1 of data collected at the Υ(2S) resonance, corresponding to (98.3 ± 0.9) × 10⁶ Υ(2S) mesons [12], which includes an estimated (17.5 ± 0.3) × 10⁶ Υ(2S) → π+π−Υ(1S) decays [13]. The non-Υ(2S) backgrounds are studied using 1.4 fb−1 of "off-resonance" data collected 30 MeV below the Υ(2S) resonance. A Monte Carlo (MC) simulation is used to estimate the signal efficiency and optimize the search, as well as for background studies. The signal decay chain e+e− → Υ(2S) → π+π−Υ(1S), Υ(1S) → γA0, A0 → cc̄ is simulated with EVTGEN [14], for A0 masses between 4.0 and 9.0 GeV/c² in 0.5 GeV/c² steps and for A0 masses of 9.2, 9.3, and 9.4 GeV/c². The A0 decay width is assumed to be 1 MeV. The hadronization of the cc̄ system is simulated using the JETSET [15] program. The detector response is simulated with GEANT4 [16]. Υ(1S) decays are tagged by the presence of a pion pair from Υ(2S) → π+π−Υ(1S). To tag the A0 → cc̄ decay, at least one charmed meson such as a D0, a D+, or a D*(2010)+ is required. In this analysis the A0 is not fully reconstructed; instead, the search for the A0 is performed in the spectrum of the invariant mass of the system recoiling against the π+π−γ system. Events are required to contain at least one photon candidate with E > 30 MeV. Each photon candidate is taken in turn to represent the signal candidate in the Υ(1S) → γA0 decays, allowing multiple candidates per event. D meson candidates are reconstructed in five channels. Events are required to have at least one dipion candidate; m_R, the invariant mass of the system recoiling against the dipion in Υ(2S) → π+π−Υ(1S), is calculated as m_R² = M²_Υ(2S) + m²_ππ − 2 M_Υ(2S) E_ππ, where M_Υ(2S) is the nominal Υ(2S) mass and m_ππ and E_ππ are the measured dipion mass and energy in the center-of-mass (CM) frame. Signal candidates must satisfy 9.45 < m_R < 9.47 GeV/c². The mass of the A0 candidate, m_X, is determined from the mass of the system recoiling against the dipion and photon through m_X² = (P_e+e− − P_π+π− − P_γ)², where P denotes a four-momentum measured in the CM frame (a numerical sketch of these two recoil-mass definitions is given below). Simulated backgrounds include Υ(2S) and e+e− → qq̄ events, where q is a u, d, s, or c quark. For m_X greater than 7.50 GeV/c², events with low-energy photons constitute an important background source; for this reason, the analysis is divided into a low-mass region, for 4.00 < m_A0 < 8.00 GeV/c², and a high-mass region, for 7.50 < m_A0 < 9.25 GeV/c².
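The two recoil-mass definitions above can be evaluated as in the following sketch, which takes the Υ(2S) at rest in the CM frame; the four-vectors are invented toy values, not BABAR data, and the m_R expression simply follows from the definitions quoted in the text.

```python
# Toy recoil-mass computation. Four-vectors are (E, px, py, pz) in GeV.
import math

M_Y2S = 10.02326  # nominal Upsilon(2S) mass, GeV/c^2

def inv_mass(p):
    e, px, py, pz = p
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

def recoil_mass_dipion(m_pipi, e_pipi):
    """m_R against the dipion, with the Upsilon(2S) at rest in the CM frame."""
    return math.sqrt(M_Y2S**2 + m_pipi**2 - 2.0 * M_Y2S * e_pipi)

def recoil_mass_dipion_gamma(p_pipi, p_gamma):
    """m_X of the system recoiling against the dipion + photon."""
    p_y2s = (M_Y2S, 0.0, 0.0, 0.0)  # initial state at rest in the CM frame
    residual = tuple(a - b - c for a, b, c in zip(p_y2s, p_pipi, p_gamma))
    return inv_mass(residual)

p_pipi = (0.61, 0.10, -0.05, 0.20)   # toy dipion four-vector
p_gamma = (1.20, 0.40, 0.90, -0.60)  # toy photon four-vector
print(recoil_mass_dipion(inv_mass(p_pipi), p_pipi[0]))   # m_R, ~9.4 GeV/c^2
print(recoil_mass_dipion_gamma(p_pipi, p_gamma))         # m_X for this toy event
```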
The overlap of the two regions is motivated by the need to have sufficient statistical precision for the background determination in each region. We train one Boosted Decision Tree (BDT) classifier [17], taking 24 different variables as input, to separate background from signal candidates for each of the five D channels and for each mass region. To train the BDTs, samples of simulated signal events, simulated generic Υ(2S) events, and the off-resonance data are used. For each channel in the two mass intervals, we require the BDT output to exceed a value determined by maximizing the quantity S/(1.5 + √B) [18], where S is the expected number of signal events and B is the expected number of background events. After selection, 9.8 × 10³ and 7.4 × 10⁶ candidates satisfy the selection criteria in the low- and high-mass regions, respectively. The corresponding distributions of m_X are shown in Fig. 1.
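The cut optimization amounts to a one-dimensional scan of the figure of merit S/(1.5 + √B). The toy sketch below illustrates it with invented Gaussian BDT-output distributions; in the analysis S and B come from simulation and off-resonance data rather than from generated numbers.

```python
# Toy scan of a BDT threshold maximizing S / (1.5 + sqrt(B)).
import math
import random

random.seed(1)
signal = [random.gauss(0.6, 0.15) for _ in range(2000)]       # toy signal outputs
background = [random.gauss(0.0, 0.25) for _ in range(20000)]  # toy background outputs

def figure_of_merit(cut):
    s = sum(1 for x in signal if x > cut)       # expected signal surviving the cut
    b = sum(1 for x in background if x > cut)   # expected background surviving the cut
    return s / (1.5 + math.sqrt(b))

best_cut = max((0.01 * i for i in range(-50, 100)), key=figure_of_merit)
print(f"best BDT cut ~ {best_cut:.2f}, FOM = {figure_of_merit(best_cut):.1f}")
```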
The backgrounds consist of Υ(1S) → γgg decays, other Υ(1S) → X decays, Υ(2S) → X decays without a dipion transition, and e+e− → qq̄ events. The corresponding contributions are 35%, 34%, 15%, and 16%, respectively, in the low-mass region, and 1%, 66%, 18%, and 15%, respectively, in the high-mass region. The contribution from Υ(1S) → γgg decays reaches a maximum near 5.5 GeV/c² and decreases above 7 GeV/c². We search for the A0 resonance as a peak in the m_X distribution; the scan is performed by means of extended maximum likelihood fits as a function of test-mass values, denoted m_A0. For these fits the shape of the signal distribution is fixed, while the parameters of the background PDF, the number of signal events N_sig, and the number of background events are determined in the fit. The signal PDF is modeled with a Crystal Ball function [19]. The background PDF is modeled with a second-order polynomial. The fits are performed in steps of 10 and 2 MeV/c² in the low- and high-mass regions, respectively. We use a local fitting range of ±10σ_CB around the test-mass values, where σ_CB is the width of the Gaussian component of the Crystal Ball function. The σ_CB parameter varies between 120 and 8 MeV/c² for values of m_A0 between 4.00 and 9.25 GeV/c². In the high-mass region we exclude the interval 8.95 < m_A0 < 9.10 GeV/c² because of a large background from Υ(2S) → γχ_bJ(1P), χ_bJ(1P) → γΥ(1S) decays. The fitting procedure is validated with background-only pseudo-experiments; a fifth-order polynomial PDF is used to fit the data and generate the pseudo-experiments in the low-mass region, while a sum of four exponential functions and six Crystal Ball functions is used for the high-mass region. The Crystal Ball functions describe the Υ(2S) → γχ_bJ(1P) and χ_bJ(1P) → γΥ(1S) transitions, while the exponential terms describe the nonresonant background. The fitting procedure is found to require a correction to N_sig for values of m_A0 near 4.00 and 9.25 GeV/c². The corrections are determined from the average signal yield found in the fits to the background-only pseudo-experiments. The maximum correction reaches 50 candidates in the high-mass region; the uncertainty of the correction is taken to be half its value. The overall efficiencies range from 4.0% to 2.6% between 4.00 and 9.25 GeV/c². Potential bias introduced by the fitting procedure is evaluated using pseudo-experiments. Depending on m_A0, the extracted product branching fraction is found to be up to (4 ± 3)% higher than the value used to generate the events. The systematic uncertainties associated with the reconstruction efficiencies are dominated by the differences between data and simulation, including the BDT output modelling, cc̄ hadronization, D-candidate mass resolution, dipion recoil mass and likelihood modeling, and photon reconstruction. Other systematic uncertainties are associated with the fit bias, the dipion branching fraction, the finite size of the simulated signal sample, and the Υ(2S) counting. The BDT output distributions in off-resonance data and e+e− → qq̄ simulation are slightly shifted from one another; the associated systematic uncertainty is estimated by shifting the simulated distributions so that the mean values agree with the data, and then recalculating the efficiencies. The reconstruction efficiencies decrease by 7% and 2% in the low- and high-mass regions, respectively.
The uncertainty associated with cc hadronization is evaluated by comparing D meson production in off-resonance data and e+e− → cc simulation; the difference varies from 1% to 9% for the five D decay channels. We conservatively assign a global multiplicative uncertainty of 9% that includes effects due to the hadronization modeling, particle identification, tracking, π0 reconstruction, and the luminosity determination of the off-resonance data. The uncertainty due to the discrepancy between the reconstructed D mass resolution in data and simulation is estimated by Gaussian smearing of the D mass input in simulation to match the data and measuring the difference in the reconstruction efficiency. Further corrections to account for differences between data and simulation in the reconstruction efficiencies are estimated with similar methods. Corrections are applied to account for the dipion recoil-mass reconstruction, the dipion likelihood modeling, and the photon reconstruction [20].
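The resolution systematic can be pictured with a short sketch: the reconstructed D-candidate mass in simulation is smeared with a Gaussian until its width matches the data, and the selection efficiency is recomputed. The mass window, widths, and extra smearing below are illustrative assumptions, not the values used in the analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

def efficiency(masses, center=1.8648, half_width=0.015):
    """Fraction of candidates inside a fixed D-mass selection window (GeV/c^2)."""
    return np.mean(np.abs(masses - center) < half_width)

# Hypothetical simulated D-candidate masses (narrower than in data).
mc_mass = rng.normal(1.8648, 0.006, 100_000)

# Extra Gaussian smearing chosen so that the MC width matches the data width.
extra_smear = 0.002  # GeV/c^2, illustrative
smeared = mc_mass + rng.normal(0.0, extra_smear, mc_mass.size)

eff_nominal = efficiency(mc_mass)
eff_smeared = efficiency(smeared)
print(f"relative efficiency change: {(eff_smeared - eff_nominal) / eff_nominal:.3%}")
```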
The highest observed local significance is 2.3σ at 4.145 GeV/c² in the low-mass region and 2.0σ at 8.411 GeV/c² in the high-mass region. Such fluctuations occur in 54% and 80% of pseudo-experiments, respectively; hence we conclude that our data are consistent with the background-only hypothesis [21]. Bayesian upper limits on the product branching fraction B(Υ(1S) → γA0) × B(A0 → cc) at 90% confidence level (C.L.) are determined assuming a uniform prior, with the constraint that the product branching fraction be greater than zero. The likelihood function for N_sig is assumed to be Gaussian with a width equal to the total uncertainty on N_sig. The upper limits obtained from the low-mass region are combined with those from the high-mass region to define a continuous spectrum. The results are shown in
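With a Gaussian likelihood in N_sig, a flat prior, and the non-negativity constraint, the 90% CL upper limit is just the 90% quantile of the Gaussian truncated at zero. A minimal sketch is given below; the yield, its uncertainty, the efficiency, the Υ(2S) count, and the dipion branching fraction are made-up illustrative numbers, not the measured ones.

```python
import numpy as np
from scipy import stats

def bayesian_upper_limit(n_sig, sigma_n, cl=0.90):
    """CL upper limit on a yield with a Gaussian likelihood, flat prior, and the
    physical constraint yield >= 0 (the cl-quantile of the Gaussian truncated at 0)."""
    p_pos = 1.0 - stats.norm.cdf(0.0, loc=n_sig, scale=sigma_n)  # posterior mass above 0
    target = 1.0 - p_pos * (1.0 - cl)                            # untruncated CDF value at the limit
    return stats.norm.ppf(target, loc=n_sig, scale=sigma_n)

# Illustrative numbers only: fitted yield, its total uncertainty,
# efficiency, number of Upsilon(2S) decays, and the dipion branching fraction.
n_sig, sigma_n = 8.0, 12.0
eff, n_y2s, bf_pipi = 0.035, 98.3e6, 0.179

ul_yield = bayesian_upper_limit(n_sig, sigma_n)
ul_bf = ul_yield / (eff * n_y2s * bf_pipi)
print(f"UL on yield = {ul_yield:.1f} events, UL on product BF = {ul_bf:.2e}")
```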
Search for a muonic dark force
In the SM, particles and interactions are insufficient to explain cosmological and astrophysical observations of dark matter. A possible scenario is represented by new hidden sectors that are only feebly coupled to the SM. In the simplest case of a hidden U(1) gauge interaction, such a sector contains its own gauge boson Z′; SM fields may couple directly to the Z′, or alternatively the Z′ boson may mix with the SM hypercharge boson [22]. In the latter case, the Z′ couplings are proportional to the SM gauge couplings; however, due to large couplings to electrons and light-flavor quarks, such scenarios are strongly constrained by existing searches [23]. If SM fields are directly charged under the dark force instead, the Z′ may interact preferentially with heavy-flavor leptons, greatly reducing the sensitivity of current searches. Such interactions could account for the experimentally measured value of the muon anomalous magnetic dipole moment [24], as well as the discrepancy in the proton radius extracted from measurements of the Lamb shift in muonic hydrogen compared to observations in non-muonic atoms [25,26]. In the following we report a search for dark bosons Z′ with vector couplings only to the second and third generations of leptons [27] in e+e− → μ+μ−Z′, Z′ → μ+μ−. For this search we use the full Υ(4S) data sample plus about 28 fb⁻¹ of data at the Υ(3S), 14 fb⁻¹ at the Υ(2S), and 48 fb⁻¹ of off-resonance data. About 5% of the data set is used to validate and optimize the analysis method; the rest of the data was examined only after the analysis method was finalized. For the background study we use MC samples. Signal MC events are generated using MadGraph 5 [28] and hadronized in Pythia 6 [29] for Z′ mass hypotheses ranging from the dimuon mass threshold to 10.3 GeV/c². Background samples include the direct process e+e− → μ+μ−μ+μ−, generated with Diag36 [30], which includes the full set of lowest-order diagrams. Events of the process e+e− → e+e−(γ) are generated using BHWIDE [31], while e+e− → μ+μ−(γ) and e+e− → τ+τ−(γ) events are generated using KK2f [32]. The off-resonance processes e+e− → qq (q = u, d, s, c) are simulated using JETSET. Events of the process e+e− → ψ(2S)γ followed by ψ(2S) → π+π−J/ψ and J/ψ → μ+μ− were generated using EVTGEN with an appropriate phase-space model. Finally, the detector acceptance and reconstruction efficiency are determined using GEANT4.

Figure 3. The distribution of the four-muon invariant mass, m_4μ, for data taken at the Υ(4S) peak, together with Monte Carlo predictions of various processes normalized to the data luminosity. The e+e− → μ+μ−μ+μ− MC does not include ISR corrections.
We select events with exactly two pairs of oppositely charged tracks, consistent with the e+e− → μ+μ−Z′, Z′ → μ+μ− final state. Each track is identified as a muon by multivariate particle-identification algorithms. We require the sum of the energies of calorimeter deposits not associated with any track to be less than 200 MeV. For data taken at the Υ(2S) and Υ(3S), we reject Υ(2S,3S) → π+π−Υ(1S), Υ(1S) → μ+μ− decays by vetoing events in which a dimuon combination lies within 100 MeV/c² of the Υ(1S) mass. The distribution of the four-muon invariant mass after all selections is shown in Fig. 3. The lower part of the four-muon invariant-mass spectrum, m_4μ < 9 GeV/c², is well reproduced by the Monte Carlo simulation, while the MC simulation overestimates the full-energy peak by about 30% and fails to reproduce the radiative tail. This, however, is expected because Diag36 does not simulate initial-state radiation (ISR). We select e+e− → μ+μ−μ+μ− events by requiring the four-muon invariant mass to be within 500 MeV/c² of the nominal center-of-mass energy. We also require the tracks to originate from the interaction point within its uncertainty, and we constrain the center-of-mass energy of the system to be within the beam energy spread. We do not attempt to select a single Z′ → μ+μ− candidate per event, but instead consider all possible combinations. The most important contribution to the invariant-mass peak besides the QED process e+e− → μ+μ−μ+μ− comes from Υ(2S) → π+π−J/ψ, J/ψ → μ+μ−, as can be seen in Fig. 3. We extract the signal yield with a series of unbinned likelihood fits to the spectrum of the reduced dimuon mass, m_R = √(m²(μ+μ−) − 4m²_μ), within the range 0.212 < m_R < 10 GeV/c² for the Υ(4S) resonance data and 0.212 < m_R < 9 GeV/c² for the Υ(2S) and Υ(3S) resonance data. We exclude a region of ±30 MeV/c² around the nominal J/ψ mass. We probe a total of 2219 mass hypotheses. The signal efficiency is about 35% at low masses, rises to about 50% around m_R = 6-7 GeV/c², and drops again at higher values of the reduced dimuon mass. The signal efficiencies include a correction factor of 0.82, which accounts for the impact of ISR not included in the simulation, as well as for differences between data and simulation in trigger efficiency, charged-particle identification, and track and photon reconstruction efficiencies. This correction factor is obtained from the ratio of the m_R distribution in simulated e+e− → μ+μ−μ+μ− events to the observed distribution in the mass region 1-9 GeV/c². We also assign a systematic uncertainty of 5% to cover the small variations of this correction across the data-taking periods. The ISR contribution is calculated based on the quasi-real electron approximation [38].
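The reduced dimuon mass is better behaved than the raw dimuon mass near threshold, since it removes the kinematic offset from twice the muon mass. A minimal sketch of its computation from muon four-vectors is shown below; the four-vectors are hypothetical and only approximately on-shell.

```python
import numpy as np

M_MU = 0.1056583745  # muon mass in GeV/c^2

def reduced_dimuon_mass(p4_mu_plus, p4_mu_minus):
    """m_R = sqrt(m(mu+mu-)^2 - 4*m_mu^2), four-vectors given as (E, px, py, pz) in GeV."""
    p4 = np.asarray(p4_mu_plus) + np.asarray(p4_mu_minus)
    m2 = p4[0] ** 2 - np.dot(p4[1:], p4[1:])
    return np.sqrt(max(m2 - 4.0 * M_MU ** 2, 0.0))

# Hypothetical muon four-vectors (GeV); near threshold m_R approaches zero smoothly.
mu_plus = (1.195, 0.40, 0.30, 1.08)
mu_minus = (1.103, -0.35, -0.25, 1.01)
print(f"m_R = {reduced_dimuon_mass(mu_plus, mu_minus):.3f} GeV/c^2")
```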
The cross section of e+e− → μ+μ−Z′, Z′ → μ+μ− is extracted as a function of the Z′ mass. The gray band indicates the excluded region. The largest local significance is 4.3σ, near a Z′ mass of 0.82 GeV/c², corresponding to a global significance of 1.6σ; the data are therefore consistent with the null hypothesis [33]. We also derive 90% confidence level (CL) Bayesian upper limits on the cross section of e+e− → μ+μ−Z′, Z′ → μ+μ−, as shown in Fig. 4. We consider all uncertainties to be uncorrelated except for those on the luminosity and efficiency. We finally extract the corresponding 90% CL limits on the coupling parameter g′, assuming equal-magnitude vector couplings to muons, taus, and the corresponding neutrinos, together with the existing limits from Borexino and neutrino experiments, as shown in Fig. 5. We set limits down to 7 × 10⁻⁴ near the dimuon threshold.

Figure 5. The 90% CL upper limits on the new gauge coupling g′ as a function of the Z′ mass, together with the constraints derived from the production of μ+μ− pairs in ν_μ scattering ("Trident" production) [34]. The region consistent with the discrepancy between the calculated and measured anomalous magnetic moment of the muon within 2σ is shaded in red.
Search for long lived particles
Anomalous astrophysical observations [35-37] have generated interest in GeV-scale hidden-sector long-lived states [38]. BABAR offers an ideal laboratory to search for such a particle in the O(1 GeV/c²) mass range. We thus search for a neutral, long-lived particle L, which decays into one of the following final states: f = e+e−, μ+μ−, e±μ∓, π+π−, K+K−, or K±π∓. The signature for such a decay is a displaced vertex consistent with a two-body decay. The search is performed by fitting the L-candidate mass distribution. We provide "model-independent" limits on the product σ(e+e− → LX) B(L → f) ε(f) of the inclusive production cross section σ(e+e− → LX), the branching fraction B(L → f), and the efficiency ε(f) for each of the two-body final states f, where X is any set of particles. Detailed tables of the efficiency as a function of the L mass m, transverse momentum, and proper decay distance cτ are given in [39]. We also provide "model-dependent" limits on the branching fraction for the decay B → X_s L, where X_s is a hadronic system with strangeness −1. To determine the signal mass resolution and reconstruction efficiency we use MC simulations; signal events are produced with the EVTGEN generator, assuming the L spin to be zero, in two different ways. For the "model-independent" search the L is produced at 11 different masses, m_MC = 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, and 9.5 GeV/c²; for m_MC ≤ 4 GeV/c², the L is created in the process e+e− → BB, with one B meson decaying to L + Nπ (N = 1, 2, or 3) and the other B meson decaying generically, while at higher masses the production process is assumed to be Υ(4S) → L + Nπ. The production processes are chosen in order to populate the phase space properly and do not reflect any preferred production hypothesis; in both regions, the L is produced uniformly throughout the available phase space, with an average transverse decay distance of 20 cm. Efficiencies for other decay lengths are obtained by reweighting the events. For the "model-dependent" presentation we generate B → X_s L decays for the seven mass values m_MC = 0.5, 1, 2, 3, 3.5, 4, and 4.5 GeV/c². The composition of the X_s is taken to be 10% K, 25% K*(892), and 65% K*(1680), with the high-mass tail of the X_s spectrum suppressed by phase-space limitations. The other B meson in the event decays generically. In addition to the signal MC samples, we use the following background MC samples to optimize the event selection criteria and study the signal extraction method: e+e− → BB (produced with EVTGEN), τ+τ−, μ+μ− (KK2f), e+e− (BHWIDE), and qq events (JETSET), where q is a u, d, s, or c quark. The detector response is simulated with GEANT4.
Each track must satisfy d0/σ_d0 > 3, where d0 is the transverse distance of closest approach of the track to the e+e− interaction point (IP), and σ_d0 is its uncertainty. The two tracks must originate from a common vertex, and the χ² value of the vertex fit is required to be smaller than 10 for one degree of freedom. We require the distance r between the IP and the vertex in the transverse plane to be in the range 1 < r < 50 cm, and the uncertainty on r is required to satisfy σ_r < 0.2 cm. We also require the angle α between r and the L-candidate reconstructed transverse momentum vector to satisfy α < 0.01 rad. The uncertainty σ_m on the measured L-candidate mass m must be less than 0.2 GeV/c². The L candidate is discarded if either of the tracks has associated hits in the detector between the IP and the vertex, or if the vertex is within the material of the beampipe wall or the detector bulk material. Depending on the specific final state, the candidates must satisfy the following invariant-mass criteria: m(e+e−) > 0.44 GeV/c², m(μ+μ−) < 0.37 GeV/c² or m(μ+μ−) > 0.5 GeV/c², m(e±μ∓) > 0.48 GeV/c², m(π+π−) > 0.86 GeV/c², m(K+K−) > 1.35 GeV/c², and m(K±π∓) > 1.05 GeV/c². These criteria reject most background from K0_S → π+π− and Λ → pπ− decays. In addition, except for the μ+μ− mode, they exclude low-mass regions in which the mass distributions of background MC events are not smooth and are therefore incompatible with the background description method outlined below. We require at least one of the tracks of L → μ+μ− candidates with m ≥ 8 GeV/c² to have an SVT hit. This rejects candidates that decay into μ+μ− within the material of the final-focusing magnets and thus have poor mass resolution. These selection criteria are found to yield near-optimal signal sensitivity given the broad range of m and r values covered by this search. For each decay mode, we determine the full efficiency for different values of m_MC, cτ, and p_T, including the impact of detector acceptance, trigger, reconstruction, and selection criteria. The dominant source of background consists of hadronic events with high track multiplicity, in which large-d0 tracks originate mostly from K0_S, Λ, K±, and π± decays, as well as from particle interactions with detector material. Random overlaps of such tracks comprise the majority of the background candidates.
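For bookkeeping, the quoted vertex-quality criteria can be encoded as a simple candidate filter. The sketch below is illustrative only: the candidate is a plain dict whose field names are invented for this example and are not BABAR ntuple variables, and the per-mode invariant-mass windows are omitted.

```python
def passes_vertex_selection(cand):
    """Apply the displaced-vertex quality criteria quoted in the text; field names
    are illustrative placeholders, not the experiment's actual variable names."""
    return (
        cand["d0_over_sigma_min"] > 3.0          # both tracks displaced from the IP
        and cand["vertex_chi2"] < 10.0            # common-vertex fit quality (1 d.o.f.)
        and 1.0 < cand["r_transverse_cm"] < 50.0  # transverse flight distance
        and cand["sigma_r_cm"] < 0.2              # flight-distance uncertainty
        and cand["alpha_rad"] < 0.01              # pointing angle to the momentum
        and cand["sigma_mass_gev"] < 0.2          # L-candidate mass uncertainty
    )

example = {
    "d0_over_sigma_min": 5.2, "vertex_chi2": 2.1, "r_transverse_cm": 3.4,
    "sigma_r_cm": 0.05, "alpha_rad": 0.004, "sigma_mass_gev": 0.02,
}
print(passes_vertex_selection(example))  # True for this made-up candidate
```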
We extract the signal yield for each final state as a function of the L mass with unbinned extended maximum-likelihood fits of the m distribution. The procedure exploits the fact that signal MC events peak in m while the background distribution varies slowly. The fit probability density functions (PDFs) for signal and background are constructed separately for each mode and each data sample. We scan the data in search of an L signal, varying m0 in steps of 2 MeV/c². At each scan point, we fit the data in the full mass range using the PDF n_S P_S + n_B P_B, where the signal and background yields n_S and n_B are determined in the fit. The statistical significance S = sign(n_S) √(2 ln(L_S/L_B)), where L_S is the maximum likelihood for n_S signal events over the background and L_B is the likelihood for n_S = 0, is calculated for each scan point. We find two scan points with a significance S greater than 3, both in the μ+μ− mode in the Υ(4S) + off-resonance sample. The highest significance is S = 4.7, with a signal yield of 13 events at the low-mass threshold of m0 = 0.212 GeV/c². The second-highest significance of S = 4.2 occurs at m0 = 1.24 GeV/c², corresponding to a signal yield of 10 events. To obtain the p-values for these significances, we perform a large number of pseudo-experiments on m(μ+μ−) spectra generated according to the background PDF obtained from the data. We find that the probability for S ≥ 4.7 (4.2) anywhere in the μ+μ− spectrum with m(μ+μ−) < 0.37 GeV/c² (m(μ+μ−) > 0.5 GeV/c²) is 4 × 10⁻⁴ (8 × 10⁻³). Further study provides a strong indication of material-interaction background in the 0.212 GeV/c² region. Specifically, most of the 34 μ+μ− vertices with m(μ+μ−) < 0.215 GeV/c² occur inside or at the edge of detector-material regions, including 10 vertices that also pass the e+e− selection criteria and 10 that pass the π+π− criteria. Thus, the peak is consistent with misidentified photon conversions and hadronic interactions close to the mass threshold. We conclude that no significant signal is observed.
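The scan significance follows from the likelihood ratio of the two fits at each mass point. The conversion is sketched below; the negative log-likelihood values are made-up inputs, not results of the actual fits.

```python
import numpy as np

def scan_significance(n_sig, nll_sig, nll_bkg):
    """Signed significance S = sign(n_sig) * sqrt(2 * ln(L_S / L_B)), given the
    negative log-likelihoods of the signal-plus-background fit (nll_sig) and of
    the background-only fit with n_sig fixed to zero (nll_bkg)."""
    delta = 2.0 * (nll_bkg - nll_sig)  # equals 2 * ln(L_S / L_B)
    return np.sign(n_sig) * np.sqrt(max(delta, 0.0))

# Illustrative fit results, not taken from the actual scan.
print(f"S = {scan_significance(n_sig=13.0, nll_sig=1042.3, nll_bkg=1047.4):.1f}")
```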
Systematic uncertainties on the signal yields are calculated for each scan fit separately. The dominant uncertainty is due to the background PDF; this uncertainty amounts to a few signal events on average and generally decreases with mass. The uncertainty due to the signal mass resolution is evaluated by comparing the mass pull distributions of K0 mesons in data and MC, whose widths differ by 5%. A conservative uncertainty of 2% on the signal reconstruction efficiencies is estimated from the K0_S reconstruction efficiency in data and MC. Smaller uncertainties on the efficiency, of up to 1%, arise from particle identification and signal MC statistics.
The likelihood L_S, which is a nearly Gaussian function of the signal yield, is convolved with a Gaussian representing the systematic uncertainties on n_S to obtain the modified likelihood function L′_S. The 90% confidence level upper limit U_S on the signal yield is calculated from ∫₀^U_S L′_S dn_S / ∫₀^∞ L′_S dn_S = 0.9. Dividing U_S by the luminosity yields an upper limit on the product σ(e+e− → LX) B(L → f) ε(f). This limit is shown for each final state as a function of m0 in Fig. 6 [39]. Determining the efficiency from the B → X_s L signal MC sample, we obtain upper limits on the product of branching fractions B(B → X_s L) B(L → f) for each of the final states f. These limits are shown in Fig. 7.

| 7,445.2 | 2017-04-01T00:00:00.000 | [ "Physics" ] |
EVOLUTION OF TRADITIONAL BANKS’ BUSINESS MODELS
The article considers topical issues of how banks are developing in the new realities of the digital economy and presents the results of research on the theoretical and practical aspects of the evolution of banks' business models. The methodology uses such approaches as scientific abstraction, system and factor analysis, and methods of grouping, detailing, and synthesis. The topic is relevant for banks first of all from the point of view of economic science, since the conceptual apparatus remains controversial: not only the typology of bank business models but also the very definition of bank business models and their ecosystems is still debated. In practice, moreover, new strategic solutions are needed to solve a threefold task: to ensure the profitability of banks' business while maintaining liquidity and minimizing risks. The obtained results reveal the important role of the innovative transformation of banks' business models, as well as the tendency toward improvement of banks' intermediary function at the current stage of digital technology development. The article substantiates the need to develop new approaches to administering business processes and to expanding banking activities using digital platforms, and highlights a problem that requires the speediest solution: developing regulation standards for financial ecosystems.
INTRODUCTION. NEW TRENDS
Problems of the banks' evolution have occupied the minds of scientists and practitioners for centuries. In the 1970s and 1980s, scientists argued that banks were losing their influence and that their role in the economy was sharply declining under the influence of scientific and technological progress, the active automation of bankers' work, the saturation of bank branches with computer equipment, and the development of new technologies. At the beginning of the 21st century, discussions regarding promising bank business models intensified again under the influence of the internationalization and deglobalization of the world financial markets, the global financial crisis of 2008-2009, the oil crisis of 2020, and the Covid-19 coronavirus pandemic.
The increasing risks of inflation negatively affect the world economy. One of the new trends in the financial markets, shaped by macroeconomic factors, technological challenges, and the increasing regulatory burden on banks, is the transformation of the economy in the context of the Fourth Industrial Revolution [12,13,14]. According to the latest data, the penetration of information technology into the financial industry in a number of countries was as follows: China, 60%; the Russian Federation, 43%; Kazakhstan, 21%. China, Russia, Great Britain, Canada, and countries of the European Union and EAEU are actively implementing pilot projects to introduce state digital currencies on their territory and are discussing the consequences of these decisions for the world financial markets [4,5,9].
The global fintech industry is growing rapidly, using technological innovations to capture market share from incumbents in many financial services sectors. Whereas until recently fintech focused mainly on payments, direct (P2P) lending, and crowdfunding, in the past decade there has been an increase in capital markets operations. New fintech solutions have confronted banks around the world. In these conditions, banks have begun both to compete with fintech companies and to develop cooperation with them.
During the technological transformation, at each stage of value creation there are a significant number of suppliers of high-quality, personalized financial products and services offered to the market at low prices. Competition with such companies is difficult for traditional banks, and in order not to lose out to new market participants, they are forced to optimize the supply of their products, concentrating on areas where they have a competitive advantage or strategic importance.
The main innovative areas of financial engineering include: cloud technology; process and service externalization; robotic process automation (RPA); advanced analytics; digital transformation; blockchain; smart contracts; artificial intelligence (AI); and the Internet of Things [8,15].
Fintech companies aim to capture at least 33% of the traditional banking business. Citibank predicts that by 2025 the growing impact of financial technology could lead to job losses for 30% of bank employees [11,16,17].
The big banks offer huge opportunities for fintech companies by giving them access to global financial markets while transforming their own business models. These changes are confirmed by the rapid development of the so-called "shadow" (unregulated) banking system, whose participants perform intermediary and credit functions for households and companies, often faster and at lower cost than traditional banks. However, innovation, while supporting the efficiency of financial intermediaries, has at the same time led to an increase in systemic risks, which materialized during the global financial and banking crisis of 2008-2009 and the Covid-19 coronavirus pandemic [10,13].
Currently there is an active search for new business models and a transformation of traditional banks' business models on the basis of digitalization, as well as the formation of new platform-type companies built on a wide variety of partners and customers that did not exist before the ubiquitous spread of the Internet and smartphones.
In economic science, new terms are emerging to characterize the business models of banks, e.g., nonbanks, hybrid banks, aggregators, platforms, infrastructure providers, and ecosystems. Their advantages and disadvantages are discussed in comparison with traditional business models, but researchers emphasize the need for banks to perform their basic functions under the supervision of financial market regulators [9,11].
In these conditions, the transformation of the banking industry, the change of the classical bank business model (which is gradually acquiring new digital features), and the formation of new business models and ecosystems form the basis for discussions about the prospects of banks and national banking systems in the near and distant future.
METHODOLOGY
The methodology uses such approaches as scientific abstraction, system and factor analysis, methods of grouping, detailing, and synthesis; comparison methods, benchmarking of market practices and recommendations of international consultants [1,3,6,7] and other theoretical and empirical studies.
The analysis was based on reports and other materials of the World Bank, Bank for International Settlements, Bank of Russia, commercial banks and consulting companies.
The main objectives of the article are to reveal a modern problem of evolution of traditional banks' business models and to show the role of banks in the economy in future.
The analysis of international experience and Russian practice shows that there is currently no single approach to defining a bank's business model. For example, the International Standard for Integrated Reporting (p. 4B) defines an organization's business model as "a system of resource transformation through its commercial activities into products and results aimed at achieving the organization's strategic goals and creating value over the short, medium and long term", thus highlighting the key elements of a business model: resources, business, products, and results. IFRS 9 "Financial Instruments" characterizes the business model as the way in which an enterprise manages its financial assets in order to generate cash flows and distinguishes three types of business models: (a) holding an asset to collect cash flows (hold to collect); (b) holding an asset to collect cash flows and selling the asset (hold to collect and sell); and (c) other business models.
The Bank for International Settlements (BIS) uses only a range of products as a criterion for classifying banks' business models. McKinsey identifies two criteria for grouping banks' business models: the type of products/services and the way they are sold.
In the changing world of financial intermediation, the following promising business models for banks are highlighted: 1) the innovative, end-to-end ecosystem orchestrator; 2) the low-cost manufacturer; 3) the bank focused on specific business segments; 4) the traditional but fully optimized and digitized bank. We consider this approach the most complete and correct.
At the initial stage of transforming and adapting their business models to the new economic reality, banks begin to carry out daily operations in digital mode (for example, payments for mobile and Internet services, utilities, rent, etc.). Banks provide new products and services for their customers (opening accounts, accepting deposits and issuing loans, issuing plastic cards, etc.). Bank offices are functionally adapted to the global digitization of the banking industry. A transition to the paperless branch format is carried out, in which the client confirms the transaction and signs the documents in the bank's mobile application in the presence of the employee who prepares the document and its digital image. Information is provided to the customer remotely in a visual format. The client can then remotely apply for a bank loan, open a personal mortgage office, and obtain loan approval, after which the bank transfers funds to the borrower's account. Customers save time and money by receiving online information about banks' products and services and are able to purchase them remotely.

Banks now determine the type of their business model taking into account the level of market competition, the degree of its monopolization, the level of technical and technological equipment of the bank and its competitors, the professionalism of bank employees, and the ability of customers to use banking innovations. The nature and scope of banks' activities, profitability, specialization, competitive advantages, and the need to comply with regulatory requirements also strongly influence the choice of a promising business model. After the global financial crisis (2008-2009), regulators introduced new requirements aimed at maintaining the stability of banks, including capital adequacy and liquidity levels. This affected the profitability of banks and largely determined the range of products they offered, which in turn predetermined the emergence of new business models. The use of crowdfunding and crowdinvesting on the basis of P2P platforms offered an alternative to traditional banking.

As a result, all banks can now be divided into two groups: 1) traditional (classic) banks and 2) banks with new business models (among them the so-called hybrid and neobank models). Experts consider the latter promising, as these banks offer low tariffs, flexible and personalized customer service, and convenient mobile banking, which allows them to attract more customers and earn higher profits. The new models also include: infrastructure banks for fintech companies and/or banks; aggregator banks; and platform-based business models (remote customer identification platforms, fast payment platforms, financial products and services marketplace platforms, and new platforms based on distributed ledger and cloud technologies). The largest and most successful banks create so-called ecosystems [11,16].
RUSSIAN CASE
The traditional banking business model is to provide services for retail and corporate customers. Our analysis of the development of banks in Russia, with a focus on institutional and functional approaches, has shown that banks are the dominant element of the Russian financial market. The share of their assets in the total assets of financial intermediaries exceeds 80%. At the same time, the Russian financial market is actively consolidating the banking sector (source: Bank of Russia). The largest Russian banks, Sberbank and VTB, were ranked 67th and 116th, respectively, in the 2019 international rankings of global systemically important banks.
Sberbank is currently the largest banking group in Central and Eastern Europe. In Russia it serves 110 million individuals and 1 million companies; it operates in 22 countries and has more than 11 million customers abroad. Foreign assets account for 14% of the bank's total assets. The creation of a foreign network is driven by the need to provide banking services to Russian companies abroad, to diversify the business, and to increase profitability through operations in high-margin markets. Sberbank plans to resume expansion into the Western European markets of France, Germany, and Great Britain after the lifting of sanctions.
ECOSYSTEMS AS BUSINESS MODELS OF MODERN BANKS
The main approaches to defining ecosystems are the following: a) a business model with Bigtech companies as the core of the ecosystem; b) a business model in which fintech start-ups are at the center of the ecosystem; c) a business model in which the core of the ecosystem is the traditional financial intermediary. The last business model dominates in Russia [3]. Sberbank and VTB Bank are prominent representatives of this approach. Sberbank began to create its ecosystem in 2014. It now brings together dozens of partner companies, from retail supermarkets to logistics companies, cinemas, etc. The evolution of the traditional Sberbank business model included the following steps [7]. A large-scale technological transformation and the development of the foundations of the technology platform took place in 2014-2017. In that period Sberbank made the transition from centralized automated systems to a banking platform, which involved the integration of infrastructure and the centralization of databases. The evolution of the platform for banking and non-banking services (internal cloud, cloud-ready applications, and a single development environment) dates to 2018. In 2019 Sberbank created a platform for the ecosystem (launch of the platform into industrial operation; a platform with cloud-native components for the ecosystem). The priority for 2022 is to transfer 80% of customer operations to the platform. Currently, Sberbank believes that its AI transformation is already yielding the first results. In 2020 the effect of the introduction of AI amounted to about $1 billion. Approximately 40% of private client visits are handled by a chatbot. The volume of approved applications for fixed-term credit (the "7-minute loan") amounted to about 100 billion rubles.
The ecosystem uses targeted platforms (e.g., NLP, speech analysis, biometrics, etc.). During the coronavirus pandemic, the Sberbank ecosystem included both beneficiaries (Delivery Club, Okko, DocDoc, DomClick.ru, SBERMARKET, take!) and affected services (Citymobil, Rambler/cash, SBERFOOD). But overall, the financial situation of the Sberbank ecosystem improved in 2020 (the capital adequacy ratio rose to 13.31% and the ratio of loans to deposits was 93.5%).
Systemically important Russian banks (VTB, Alpha Bank, Gazprombank, Rosbank, Rosselkhozbank) are currently implementing their own ecosystem development projects [11]. However, this process carries many risks. The returns from nonbank businesses may be smaller, and arrive later, than the bank at the center of the ecosystem expects. At the same time, fintech companies that seek to expand their customer base, increase income, and improve the profitability of their business are entering the financial arena. For example, the company Yandex confirmed at the end of 2020 its intention to create its own financial ecosystem and applied to the financial markets regulator to register 17 trademarks in banking, investment, and insurance.
CONCLUSIONS / RECOMMENDATIONS
The speed of transformation of the banking sector often creates problems for regulators, who are forced to balance opposing goals: maintaining the stability of financial markets and stimulating the development of banks and other financial intermediaries. Analysis of Russian and foreign practice has shown the main challenge facing the traditional bank business model: many banks have realized that cooperation with fintech companies is the key to survival and prosperity under the conditions of the technological revolution, and they have begun to look for new approaches to developing their business and building new business models.
The Bank of Russia, as the financial regulator, develops market infrastructure and coordinates the activities of financial agents and state institutions. The key features of the Bank of Russia's strategy in this area are legal regulation, information infrastructure, and information security. "The main areas of financial technology development for the period 2018-2020" are synchronized with the national program "Digital Economy of the Russian Federation" and other projects in the field of financial technology development [2].
The implementation of these projects contributes to the digitization of financial markets and the availability of financial services to the people in all regions of the country. Bank of Russia's main objectives in implementing financial technology policy documents include promoting competition in the financial market, improving accessibility, quality and range of financial services, reducing financial risks and costs, and improving the competitiveness of Russian technologies in general. The Bank of Russia sees banks in the long term as a driver of economic growth and the development of financial services. Currently, the Bank of Russia is actively stimulating the introduction of new technologies, including the creation of the so-called end-to-end customer ID and platform for cloud services. Distributed registry technology is presented in the Bank of Russia's strategy primarily by the Master Chain platform, which the Bank is developing in conjunction with the FinTech Association. Pilot projects are under way to account for electronic invoices, digital letters of credit and digital bank guarantees.
The development of digital technologies is accompanied by the growth of cyber threats, which require the regulator and financial intermediaries to prevent or minimize them quickly. To this end, the Bank of Russia pays special attention to the legal regulation of breakthrough technologies and to their direct implementation and development in the financial market of Russia and the EAEU space. In particular, new digital tools and services are being tested in the regulatory sandbox (a RegTech test site), a tool for quickly checking the consequences of innovations in the financial market, analyzing risks, and forming proposals to change the current regulatory framework, which makes it possible to select projects that are important for the population and the national economy. The introduction of SupTech (supervisory technology) allows the Bank of Russia to analyze the affiliation of borrowers more actively, to determine the current and prospective demand for cash, to analyze and maintain the financial stability of banks, and to identify fraudulent transactions. At the same time, the Bank of Russia promotes electronic interaction between government agencies, financial intermediaries, and their clients, improves the quality of new products and services, and creates conditions for improving the financial literacy of the population.
At the international level, new regulatory approaches (e.g., the EU's so-called "open banking" initiative, the PSD2 directive, which obliges credit organizations to share customer information with service providers at the request of the consumer), initiatives to simplify customer authorization processes, favorable macroeconomic conditions, and the increased use of technology have led to the emergence of new financial intermediaries and increased the need to transform the bank's traditional business model. On the other hand, the international banking standards set out in the Basel agreements to ensure the stability of banks require them to create additional capital buffers, which leads to higher business costs and the displacement of highly risky assets outside the banking industry. Other directives (e.g., MiFID II, the financial instruments markets directive aimed at protecting investors, standardizing financial institutions in the EU, and restoring industry confidence) likewise require banks to change their approaches to business strategy in the new environment.
The future of banks is being shaped today. Under the influence of technological innovations, the business model of banks is changing dramatically. Banks with great technological potential use fundamentally new business models, analyze the activities of aggregators, marketplaces, and ecosystems, and on this basis create technological services for customers. Banks operating in the new paradigm of digital banking are changing their operations and transforming the landscape of financial markets as a whole. This study analyzes the current situation and the prospects for the development of banks' business models, taking into account the impact of systemic and individual risks.
Studying the evolution of traditional banks' business models allowed us to assess the current realities of the development of Russian banks and to identify barriers and promising areas for banks' business development in the era of the Fourth Industrial Revolution. An explanation for organizational changes in the banking industry, and confirmation of these changes, is the rapid development of so-called "digital platforms" and "ecosystems", whose participants provide intermediary financial services to private clients and companies much faster and more cheaply than traditional banks. However, while the benefits of innovation improve the efficiency of banks, significant increases in systemic and other risks must also be taken into account. The following conclusions can be drawn in this regard.
The development of financial technologies is becoming a fundamental direction for banks to change their business models and for the development of the financial market as a whole. The creation and development of ecosystems have both positive and negative aspects. The positive aspects include the increase in banks' income and value, the decrease in the cost of attracting funds and customers, and increased customer loyalty. The negative aspects are: the reduction of competition or even monopolization of the market, as large banks may restrict the access of other financial market participants to the distribution channels of products and services; the creation of non-market competitive advantages for those organizations that have gained access to the network under the bank's brand; the strengthening of the position of the major ecosystem administrator banks; and the creation of obstacles to the growth of financial service providers that have limited access to the ecosystem. The systemic risks of the ecosystem should not be passed on to the customers of the ecosystem-organizer bank. Obviously, the client should not suffer because individual members of the ecosystem (not banks) carry specific risks and/or low yields, etc.
The regulation of ecosystems has also not yet been unambiguously defined. Therefore, the problem of regulating the activities of banks that have chosen to develop their business in the ecosystem paradigm should, in our opinion, be considered from the point of view of differentiated supervision. Regulators need to introduce deeper consolidated supervision and regulation, as the actual merger under a single bank brand of companies from different sectors of the economy creates new risks that can have a significant impact on the financial stability of the group and of the financial market as a whole. The regulation of digital ecosystems (including bank-based and fintech-based ones) needs to be studied at the global and national levels, which is important to maintain competition and eliminate any discrimination against ecosystem users.
In general, it is almost impossible to draw an unambiguous conclusion in favor of a particular bank business model: the choice of business model is different for each bank, since each bank operates in the unique conditions of its market (in terms of competition, specialization, quality of the client base, structure and quality of the portfolio, the level of training of staff, etc.) and determines the prospects of its chosen business model accordingly. In the future, it is expected that the digital transformation of banks' business models will benefit the entire financial market, which will become more transparent, efficient, and stable.
At the same time it is obvious (at least in Russia) that the role of banks in the national economy is not declining but rising. Sberbank, for example, was highly involved in the development of the National Strategy for Artificial Intelligence Development, approved by the President of the Russian Federation.

| 5,213.6 | 2021-01-01T00:00:00.000 | [ "Economics", "Business" ] |
Sulforaphane prevents age‐associated cardiac and muscular dysfunction through Nrf2 signaling
Scheme depicts the hypothesized decline of heart and skeletal muscle (SKM) function during aging via ROS and its partial reversal by SFN activation of Nrf2, which results in significant restoration of function in both types of muscle. A decline in heart and skeletal muscle function was observed in aged mice, with altered mitochondrial structure and gene expression, accompanied by decreases in mitochondrial complex activity, Nrf2 binding to antioxidant-responsive DNA elements, and physical endurance. The addition of sulforaphane (SFN) to the diet improved these age-related changes in older mice to levels observed in younger ones. We demonstrated in this paper that SFN alleviates age-associated oxidative damage and improves mitochondrial and cardiac function as well as physical endurance in old mice.
| INTRODUCTION
Aging is a complex biological process regulated by both intrinsic and extrinsic factors. The mechanisms of aging include the actions of reactive oxygen species (ROS), telomere shortening, and hormonal changes. Age-associated mitochondrial dysfunction and oxidative damage are primary causes of multiple health problems, including sarcopenia and cardiovascular disease (CVD). Several chemically and functionally diverse scavengers of ROS and antioxidant products have been evaluated for the ability to mitigate age-induced muscle loss (Barrera et al., 2018), but with little success. The major reasons for their lack of benefit may include low bioavailability and/or low scavenging efficacy toward oxidants and electrophiles in a cellular system. Therefore, promoting the activity of intrinsic antioxidant and cytoprotective pathways may represent a more effective strategy to counter loss of muscle function in the elderly.
The transcriptional regulation of enzymes (and other proteins) that are critical for the adaptation of cells to oxidative and electrophilic stress is controlled by antioxidant-responsive DNA elements (AREs), also called electrophilic response elements (Suh et al., 2004).
A central regulator of cellular responses to electrophilic stress is the transcription factor Nrf2 (nuclear erythroid-2-p45-related factor-2), shown to be essential for detoxification gene activity (Itoh et al., 2004), including in mammalian cardiac cells and other components of the cardiovascular system (Dastani et al., 2012). Though the role of Nrf2 in myopathy remains poorly defined, its downregulation has been implicated in both sarcopenia and CVD . Under physiological conditions, Nrf2 is bound in the cytosol by its repressor, the Kelch-like ECH-associating protein 1 (Keap1).
Keap1 regulates the degradation of Nrf2 in response to electrophiles (Itoh et al., 2004). Electrophiles, including the natural compound sulforaphane (SFN), activate antioxidant and cytoprotective pathways through thiol modification of Keap1, causing it to dissociate from Nrf2 in the cytoplasm. Upon release, Nrf2 accumulates in nuclei, heterodimerizes with a small Maf protein, and activates the transcription of target genes through their AREs (Cao et al., 2006). Defective mitochondria that generate excessive ROS are detrimental to cells and are frequently cleared by the autophagy/mitophagy pathway, followed by induction of mitochondrial biogenesis in an attempt to restore mitochondrial mass and function (Dinkova-Kostova & Abramov, 2015). Several recent findings suggest a role of Nrf2 in preserving mitochondrial morphology and integrity during oxidative stress through a direct activation of protective mechanisms (Dinkova-Kostova & Abramov, 2015). SFN, a phytochemical in cruciferous vegetables, is a non-toxic compound recognized for its anti-aging, anti-cancer, anti-diabetic, antimicrobial, and chemopreventive activity in different animal models of disease (Kensler et al., 2013). Enhanced Nrf2 signaling and the consequent cytoprotective gene activation are considered the prime mechanism of SFN action (Bai et al., 2013). SFN has shown potential in reducing microglia-mediated neuroinflammation and can ameliorate neurobehavioral deficits and reduce the Aβ burden in Alzheimer's disease model mice (Uddin et al., 2020). SFN improves muscle function and pathology, protects dystrophic muscle, and alleviates muscle inflammation (Sun et al., 2015). SFN also inhibits dexamethasone-induced muscle atrophy in myotubes via Akt/Foxo signaling (Son et al., 2017). Moreover, SFN repairs vascular smooth muscle cell dysfunction in age-related cardiovascular diseases and protects against skin aging by promoting the antioxidant machinery (Sedlak et al., 2018). Currently, several ongoing preclinical and clinical trials are studying its effects on cancers, insulin resistance, schizophrenia, and autism (Kensler et al., 2013; K. Singh et al., 2014).
Because SFN is known to activate Nrf2 and alleviate oxidant injury, in this study we examined whether treatment with SFN can restore Nrf2 activity, mitochondrial function, and skeletal muscle and heart function of old mice. We also evaluated transcriptome alteration in skeletal muscle and heart of SFN-treated and control mice.
In this study, we established a basis for upregulating Nrf2 activity as a novel therapeutic strategy to mitigate age-induced muscle and cardiac dysfunction. The restoration of Nrf2 activity and endogenous cytoprotective mechanisms by SFN may therefore be a safe and effective strategy to protect against muscle and heart dysfunction due to aging and the development of age-related disease processes.

KEYWORDS

cardiac functions, mitochondrial dysfunction, Nrf2, oxidative stress, sarcopenia, sulforaphane

| METHODS

This study conformed to the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health, and the work was performed in accordance with a protocol (IACUC#646767-5) approved by the Central Arkansas Veterans Healthcare System Institutional Animal Care and Use Committee. Two-month-old (young) and 21- to 22-month-old (old) male C57BL/6 mice were obtained from aged rodent colonies of the National Institutes of Health (Bethesda, MD 20892). Mice were housed in the Veterinary Medical Unit at the Central Arkansas Veterans Healthcare System in Little Rock, Arkansas. A maximum of 4 mice per cage were housed together, and each mouse was individually identified by ear punch. The animals had free access to water and diet and were maintained on a 12-hour dark/12-hour light cycle. Food and water consumption and body weight of mice were recorded weekly.
| Animal treatment protocol for the study
All analyses were performed on 4 groups (n = 10/group). Starting at the age of 2 months and at the age of 21-22 months, mice were fed with either TD 96163 (Teklad, Madison, WI) control diet (n = 20/control groups) or TD 96163 diet supplemented with SFN (442.5 mg/kg body weight; n = 20/treated groups) for 12 weeks. D,L-sulforaphane (SFN) was obtained from Toronto Research Chemicals, North York, ON. The experiment was repeated with the same number of animals under similar conditions to test reproducibility of certain experiments.
| Intraperitoneal glucose tolerance test (IPGTT)
The test was performed at the completion of SFN diet administration, according to a procedure described by Carvalho et al. (2005).
Mice were fasted overnight for approximately 10-12 h by transferring them to clean cages with access to drinking water. The mice then received an intraperitoneal injection of glucose (2 g/kg body weight).
Immediately before and at 15, 30, 60, and 120 min after the glucose injection, mice were briefly anaesthetized using isoflurane, and ~5 µl of blood was sampled from the tail vein. Glucose concentration was determined using the hand-held glucometer Alpha-TRAK 2 Veterinary Blood Glucose Monitoring Meter Kit (Abbott Laboratories). At the end of the test, the mice were returned to their home cages with food and water available ad libitum.
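Glucose tolerance is commonly summarized as the area under the glucose-time curve obtained by numerical integration between 0 and 120 min for each mouse (as described in the Figure 1 legend). A minimal sketch with made-up glucose readings at the IPGTT sampling times is shown below.

```python
import numpy as np

# Sampling times (min) matching the IPGTT schedule and hypothetical glucose readings (mg/dl).
time_min = np.array([0, 15, 30, 60, 120], dtype=float)
glucose = np.array([90, 260, 310, 220, 130], dtype=float)

# Trapezoidal area under the curve between 0 and 120 min (mg/dl * min).
auc = np.sum((glucose[1:] + glucose[:-1]) / 2.0 * np.diff(time_min))
print(f"glucose AUC (0-120 min) = {auc:.0f} mg/dl*min")
```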
| Skeletal muscle (SKM) function test
The test was performed at the completion of the SFN diet administration, according to the procedure described by Handschin et al. (2007). Mice were trained on and allowed to run on a motorized, speed-controlled, modular treadmill system, Exer 3/6 (Columbus Instruments, OH). The treadmill was equipped with an electric shock stimulus (set at 1 Hz, 20% output) and an adjustable inclination angle. All mice were acclimatized to the treadmill (first with the belt unmoving, then with the shock grids off but the belt motor turned on) for 15 min/day for three consecutive days, followed by 10 min at 8 m/min for three consecutive days before an experiment. For the exercise tolerance test, mice were allowed to warm up at 8 m/min and 0% incline for 5 min. At 2-min intervals, the workload was increased by alternating increases in belt speed of 2 m/min or in incline of 10% grade until the mice reached exhaustion or a maximum speed of 46 m/min. Exhaustion was defined as the inability to avoid repetitive electrical shocks for 5 continuous seconds. The running time until exhaustion was measured, and the running distance, work, and power were calculated.
Distance is a function of time and the speed of the treadmill. Work (kJ) was calculated as the product of body weight (kg), gravity (9.81 m/s²), vertical speed (m/s × angle), and time (s).
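Following these definitions, running distance, work, and power can be computed as sketched below. The per-interval bookkeeping, the conversion of the percent grade to a vertical-speed fraction, the example numbers, and the use of power = work/time are illustrative assumptions not spelled out in the text.

```python
G = 9.81  # gravitational acceleration, m/s^2

def treadmill_metrics(body_mass_kg, intervals):
    """Compute distance (m), work (kJ), and mean power (W) from a list of
    (duration_s, speed_m_per_min, incline_percent) treadmill intervals."""
    distance = work_j = total_time = 0.0
    for duration_s, speed_m_per_min, incline_pct in intervals:
        speed = speed_m_per_min / 60.0                 # m/s
        vertical_speed = speed * incline_pct / 100.0   # m/s; grade treated as the "angle" factor
        distance += speed * duration_s
        work_j += body_mass_kg * G * vertical_speed * duration_s
        total_time += duration_s
    return distance, work_j / 1000.0, work_j / total_time

# Hypothetical run: warm-up, then alternating speed/incline increments every 2 min.
run = [(300, 8, 0), (120, 10, 0), (120, 10, 10), (120, 12, 10), (120, 12, 20)]
dist, work_kj, power_w = treadmill_metrics(body_mass_kg=0.030, intervals=run)
print(f"distance = {dist:.0f} m, work = {work_kj:.4f} kJ, power = {power_w:.4f} W")
```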
| Strength test
Kondziela's inverted screen test of forearm isometric muscle strength was performed at the completion of SFN diet administration, according to the procedure described by Deacon (2013). The mice were placed in the center of a wire mesh screen, and the screen was rotated to an inverted position over 2 s, with the mouse's head declining first. The screen was held steadily 40-50 cm above a padded surface. The time until the mouse fell off was noted, or the mouse was removed when the criterion time of 60 s was reached.
| Immunostaining of PAX7/MYOD
At the completion of 12 weeks of SFN treatment, all mice were euthanized and the extensor digitorum longus (EDL) (Rosenblatt et al., 1995) was obtained from the lower hindlimb. Myofiber clus-
| Echocardiography
Prior to euthanasia, control and SFN-treated mice were subjected to cardiac analysis by echocardiography at the completion of diet administration, using the Vevo 770 instrument (FUJIFILM VisualSonics). Each mouse was anesthetized with 1-2.5% isoflurane, and hair was removed from the thorax with depilatory cream.
Echocardiographs were obtained in the short-axis M-mode at the mid-left ventricular level to calculate common parameters of systolic function, such as ejection fraction and cardiac output (Boerma et al., 2016; Bose et al., 2018).
| Oxygraph
At the completion of the 12 weeks of SFN treatment, all mice were euthanized, and SKM and heart tissue were obtained for high-resolution respirometry (HRR) measurements using the Oxygraph-2k (Oroboros Instruments, Austria). Sections of myocardial tissue and SKM were sampled, and approximately 10 mg of tissue was used to prepare fiber bundles from each tissue. Tissue was teased apart longitudinally using a blade to increase its surface area. SKM and myocardial fibers were permeabilized by gentle agitation in relaxing solution supplemented with 50 µg/ml saponin for 20 min at 4°C, followed by washing in ice-cold respiration medium/mito-buffer MiR05 (0.5 mM EGTA, 3 mM MgCl₂, 60 mM potassium lactobionate, 20 mM taurine, 10 mM KH₂PO₄, 20 mM HEPES, 110 mM sucrose, and 1 mg/ml essential fatty acid-free bovine serum albumin (BSA); pH 7.1) with agitation for 10 min each at 4°C. Tissue samples weighing ~2 mg (after blotting excess solution on filter paper) were used in duplicate for the respirometric assay.
Mitochondrial complex activities were measured at 37°C by HRR using substrate-inhibitor titration protocols. When the chamber oxygenation reached 400 µM O₂, the titration procedure was initiated. The first measured step was fatty acid oxidation, assessed by the addition of 30 µl of 0.1 M octanoylcarnitine (final concentration 1.5 mM). Respiratory rates were expressed in pmol/s per mg of dry tissue weight (Rose et al., 2019).
| Immunostaining of electron transport chain (ETC) complex protein and nitrotyrosine adducts in the heart
At the completion of the 12 weeks of SFN treatment, all mice were euthanized, and left ventricular tissue was fixed in 10% phosphate-buffered formalin and embedded in paraffin. Sections (5 µm) were deparaffinized in xylene and rehydrated through graded ethanol (100-70%), permeabilized with 1% Triton X-100, and immersed in 3% H₂O₂ for 10 min to inactivate endogenous peroxidase activity. The sections were blocked with 5% goat serum/1% BSA in PBS for 30 min at room temperature and then immunostained overnight with primary antibodies against Ndufs3 (1:200; Abcam, ab110246) and CORE-2 (1:300; Invitrogen, 459220). Analysis of the stained tissue sections was performed with light microscopy (Nikon E1000, Nikon). Immunohistochemical detection of nitrotyrosine was performed using primary antibodies against nitrotyrosine (1:6000 dilution). Immunoreactivity was detected with the Dako EnVision+ System-HRP. Counterstaining was performed using Mayer's Hematoxylin (Electron Microscopy Science).
| Transmission electron microscopy
Left ventricular heart tissue samples were fixed overnight in 2.5% glutaraldehyde/0.05% malachite green in 0.1 M sodium cacodylate buffer. Samples were post-stained with 1% osmium tetroxide/0.8% potassium hexacyanoferrate(III), 1% tannic acid, and 0.5% uranyl acetate, followed by dehydration in a graded alcohol series and propylene oxide, and embedded in Araldite-Embed 812 (Electron Microscopy Sciences). Sections of 50 nm were collected on a Leica UC7 microtome and post-stained with uranyl acetate and lead citrate. Images were collected using a Tecnai F20 (FEI Company, Hillsboro, OR) transmission electron microscope at 80 kV (Bose et al., 2018).
| Real-time polymerase chain reaction
Total RNA was isolated from ventricular heart and SKM (gastrocnemius) tissue samples of control and SFN-treated mice. cDNA was prepared as described previously. Real-time polymerase chain reactions (qPCR) were performed on a DNA Engine
| Active Nrf2 binding assay
Nuclei were extracted from the left ventricle of the heart and SKM tissues using the Nuclear Extract Kit (Active Motif, Carlsbad, CA) as per manufacturer's instructions. Nrf2 activity was measured in nuclear extracts by an Nrf2-DNA-binding ELISA kit (Active Motif) as we have reported before .
| Statistical analyses
All data are presented as mean ±standard deviation (SD) and were analyzed using Prism 8 (GraphPad Software, San Diego, CA, USA). The unpaired Student's t test was used when two groups were compared and a one-way analysis of variance (ANOVA) followed by the Bonferroni or Tukey post hoc test was used when three or more groups were compared.
Survival curves were analyzed using the log-rank test. A p-value < .05 was considered a significant difference between groups.
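The statistical workflow above is straightforward to reproduce. The sketch below shows one way to run the same tests in Python; the file names, column names, and group labels are placeholders, not part of the study.

```python
# Hypothetical re-analysis sketch of the statistics described above.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from lifelines.statistics import logrank_test

df = pd.read_csv("ejection_fraction.csv")   # assumed columns: group, value

# Two groups: unpaired Student's t test
young = df[df.group == "young_ctrl"].value
old = df[df.group == "old_ctrl"].value
print(stats.ttest_ind(young, old, equal_var=True))

# Three or more groups: one-way ANOVA followed by Tukey post hoc test
print(stats.f_oneway(*[g.value.values for _, g in df.groupby("group")]))
print(pairwise_tukeyhsd(df.value, df.group, alpha=0.05))

# Survival: log-rank test between control- and SFN-fed old mice
surv = pd.read_csv("survival.csv")          # assumed columns: days, event, diet
ctrl, sfn = surv[surv.diet == "control"], surv[surv.diet == "SFN"]
print(logrank_test(ctrl.days, sfn.days,
                   event_observed_A=ctrl.event, event_observed_B=sfn.event))
```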
| SFN improved survival but did not affect animal body weight
At the age of 21-22 months, male C57BL/6 mice were placed on an SFN-containing diet or control diet (n = 20 mice per group) for a total of 12 weeks. The body weight of young mice increased more rapidly than that of old mice in both diet groups (Figure 1b). On the other hand, water intake was affected by supplementation with SFN (Figure 1c): water intake was significantly lower in young mice fed the SFN diet compared to young mice on the control diet, and the same was observed in the older animal groups. However, food intake was unaffected (Figure 1d).
FIGURE 1 Effects of the sulforaphane (SFN) diet on survival, body weight, food intake, water intake, fasting glucose, and glucose tolerance in mice. Old mice with SFN supplementation significantly improved their survival (a) compared with mice on control diet (p = .0087, n = 20 per group). After the start of the SFN or control diet, body weight (b), water intake (d), and food intake (e) of young and old mice were measured weekly. The data shown represent means ± SD (n = 10). Statistical significance between SFN- and control-diet-fed mice of the same group was determined using repeated-measures two-way ANOVA followed by the Bonferroni test or unpaired Student's t test (lower panel). *p < .05; ns = non-significant, compared with the same group. SFN also reduced fasting glucose and improved glucose tolerance in old mice challenged with glucose. (c) Fasting blood glucose levels of young and old mice on control or SFN diet were measured after an 8 h fasting period. Means ± SD (n = 10) are shown; the difference between control- and SFN-diet-fed old mice is statistically significant (*p < .05 and **p < .01 by t test). (f) Young and old mice on control or SFN diet were fasted for 4 h and given an intraperitoneal injection of glucose (2 g/kg of body weight). The area under the blood glucose versus time curve was calculated by numerical integration between 0 and 120 min for each individual mouse, and the mean ± SD (n = 10) of the areas is shown. Differences between young mice are not statistically significant, but glucose tolerance improves significantly in old mice on the SFN diet.
Furthermore, hematological parameters of young and old mice were not affected by the supplementation of SFN (Table S1).
| SFN reduced fasting glucose and improved glucose tolerance in old mice
Fasting blood glucose levels were no higher in young mice on the SFN diet than for the young control mice (Figure 1e). On the other hand, the old mice fed on the SFN diet showed significantly lower glucose levels (mean ±SD: 77 ± 16 mg/dl) compared to old control mice (102 ± 9 mg/dl). In both diet groups, the glucose levels were within or below the normal range for the C57BL/6 strain (Surwit et al., 1988). Because the difference in fasting blood glucose levels between control and treated animals was modest, an intraperitoneal glucose tolerance test was performed. SFN treatment did not affect the response to glucose administration in young animals ( Figure 1f).
However, in old mice on control diet, blood glucose levels increased more significantly, after glucose injection compared to young animals. Furthermore, the SFN treatment reversed this increase in glycaemia and improved glucose tolerance in the old animals ( Figure 1f).
| SFN improves exercise capacity in old mice
We tested whether SFN treatment would improve muscle strength and exercise capacity. For this purpose, we challenged young and old mice, fed with control or SFN diet, with involuntary physical exercise testing of muscle strength. First, we tested them for grip strength by making them hold on to a thick wire. Young and old mice fed with control diet were able to hold on for an average of 54 ± 5 and 39 ± 5 s, respectively. Young and old mice on SFN diet were able to hold on for longer periods of time, 64 ± 13 and 86 ± 12 s, respectively ( Figure 2a). Surprisingly, the old mice fed SFN were able to hold on longer than the young mice fed SFN. In addition, we compared the exercise capacity of control and SFN-treated young and old mice by having them run on a motorized, speed-controlled, modular treadmill system (Handschin et al., 2007). Old mice on control diet had a lower exercise capacity compared to their young counterparts ( Figure 2b). Feeding with an SFN-rich diet resulted in a significantly improved exercise capacity in the old mice. The SFNfed old mice performed similarly to young animals on the treadmill.
| SFN increased the numbers of skeletal muscle stem cells and their function
Myofibers isolated from the EDL muscles of the lower hindlimb
| SFN decreased markers of SKM aging, oxidation, and apoptosis
Histology revealed more of the cross-sectional area in slides was occupied by myofibers after 2 months on SFN diet ( Figure S9). Similarly, there was less staining for muscle myostatin, a negative modulator of SKM mass ( Figure S10). Finally, we found evidence that SFN reduced 8OHdG, a marker of oxidation ( Figure S11) and Tunel staining, a surrogate marker for apoptosis ( Figure S12).
| SFN treatment of old mice also improves cardiac function
A decrease in cardiac/respiratory function is known to limit exercise capacity in the elderly (Farkhooy et al., 2018). Cardiac aging is an intrinsic process with profound cellular and molecular changes that results in impaired cardiac function (Vigorito & Giallauria, 2014). We examined whether SFN restored cardiac function in old mice. Old mice on control diet showed reduced ejection fraction (61.0 ± 1.0%), fractional shortening (32.1 ± 0.7%), and stroke volume (32.8 ± 9.2 µl) compared to young mice on control diet (Figure 3a-c). On the other hand, SFN supplementation improved the ejection fraction (76.0 ± 1.4), fractional shortening (44.2 ± 1.3%), and the stroke volume (51.6 ± 11.3 µl) in the old mice. While SFN also significantly increased these parameters in young mice, the effect on ejection fraction and fractional shortening was larger in the old mice.
There was no significant effect of SFN supplementation on cardiac output in young mice, while cardiac output in old mice was significantly improved (Figure 3d). As a result, cardiac output in SFN-fed old mice was similar to that of young controls. We conclude that SFN-fed old mice developed resistance to age-associated loss of cardiac function.
| SFN protects ultrastructure of cardiac mitochondria in aging heart
Mitochondria, the primary energy source for high-energy-demanding cardiomyocytes, occupy over 40% of their cell volume (~5,000 mitochondria per cardiomyocyte). Cardiomyocyte function is critically dependent on the health of these organelles (Strom et al., 2016). Additionally, age-associated mitochondrial dysfunction can be caused by changes in mitochondrial ultrastructure due to oxidative damage (Vays et al., 2014). To determine whether SFN protects against age-associated changes in cardiac mitochondrial ultrastructure, we compared mitochondrial morphology between hearts from control and SFN-fed old mice using transmission electron microscopy. The results suggest that SFN protects mitochondria against age-associated cristae disarrangement, partial cristolysis, and reduced electron density of the matrix (Figure 4a). We therefore also examined ETC enzymes by immunostaining for complex I (Ndusf3) and complex III (CORE2, a mitochondrially encoded subunit). Mitochondrial complex I and III subunit protein expression was increased in the aging heart after SFN treatment compared to the hearts of age-matched untreated animals (Figure 4b). These data suggest that one mechanism by which SFN may maintain mitochondrial function is by inducing mitochondrial protein expression. Alternatively, the decreased protein expression found in cardiomyocyte mitochondria of old mice might be due to oxidative damage. Immunohistochemical analysis of left ventricular tissues revealed that SFN-fed old mice had lower levels of nitrotyrosine protein adducts (a marker of oxidative stress) (Figure 4c). These effects may have contributed to increased autophagy in hearts of old mice and to the mitigation of that autophagy by SFN, as illustrated by the appearance of LC3-II in old mice and its disappearance after treatment with SFN (Figure S13).
FIGURE 3 Sulforaphane (SFN) treatment protects mice from age-associated cardiomyopathy. Preservation of cardiac function was assessed by evaluating (a) ejection fraction, (b) fractional shortening, (c) stroke volume, and (d) cardiac output, all of which were significantly preserved in old mice fed the SFN-supplemented diet. Black bars represent animals fed the control diet and gray bars represent animals fed the SFN-supplemented diet. Statistical significance (***p < .001 and ****p < .0001) was determined by ANOVA followed by Tukey's test (n = 10).
| SFN improves mitochondrial function in the old mice
Heart tissue is particularly rich in mitochondria to meet its high metabolic demand. Disruption of mitochondrial respiratory complexes can lead to the generation of oxidants, ATP depletion, and cardiomyocyte malfunction. Mitochondrial function has long been recognized to decline during aging, resulting in increased oxidative stress (Hebert et al., 2015). We measured cardiac and SKM tissue mitochondrial function by high-resolution respirometry. ETC activity was ≈30% lower in the old compared to the young mouse hearts, and SFN protected the ETC from this age-associated decline in oxygen flux (Figure 5a). Additionally, the activities of the ETC complexes I, I+II, and maximum respiration were increased in SFN-fed animals, but this increase was significant only in the old mice. Similar trends were seen in mitochondria from the SKM of SFN-fed animals, but these increases did not reach the level of significance (Figure 5b).
| SFN restores Nrf2 activity in the heart and SKM of old mice
Aside from effect on mitochondria, an age-associated decline in Nrf2 function has been well documented , but it is not clear if SFN can restore Nrf2 activity in the older population. We recently reported that Nrf2 activity increases with SFN treatment and is key to protecting the heart from oxidant injuries such as doxorubicin and ionizing radiation (Boerma et al., 2015;. Therefore, we compared Nrf2 ARE-binding activity in the hearts and SKM of young and old mice. Nrf2 activity was upregulated in the nuclear extracts of both hearts and SKM of SFN-fed young and old mice compared to their age-matched controls ( Figure 6).
| Results on heart and skeletal muscle qRT-PCR to further delineate potential mechanisms by which SFN works
To further elucidate the mechanisms of SFN-mediated enhancement of aging cardiac and SKM functions in mice, we examined the expression of a number of genes related to oxidant and electrophile metabolism. The selected genes are involved in antioxidant, antielectrophile activity, mammalian longevity, and glutathione synthesis and loss (Tables 1 and 2), all known to be regulated by Nrf2 (Hayes & Dinkova-Kostova, 2014). We examined cardiac and SKM transcript levels of catalase, Sod1, Sod2, HO1, Pxdn, Gpx1, Gsta4,Akr3,Akr7,Akr 8,Sirt1,Pgc1,Gclc,Gclm,and Nrf2 (See Table 1 for full names of the genes). In silico or in vitro analysis of the Gclc, Gclm, Gsta4, Nqo-1, Ho-1, and Sod2 promoter regions have revealed consensus ARE-binding sites for Nrf2 (Tonelli et al., 2018).
Cardiac transcript levels of a number of antioxidant and anti-electrophile genes were significantly increased in hearts from SFN diet fed young and old mice, as compared to those fed with control diet (Table 1)
Although Gclc and Gclm were unaffected by aging, they too were upregulated by SFN in the diet of old mice. Together, these results suggest that the Nrf2 pathway was induced upon SFN treatment in old mice and enhanced protective mechanisms in SKM, possibly even more than in hearts.
Together, these results suggest that the Nrf2 pathway was induced upon SFN treatment in the young and old mice and enhanced protective mechanisms.
| DISCUSSION
SFN prolonged life in old mice, decreased their likelihood of becoming diabetic, and increased their exercise capacity. Part of the increase in exercise capacity was certainly due to increased SKM function, but some may have been due to preservation of cardiac function. In addition, comparison of the survival curves of old mice on SFN or control diet during the study period demonstrates significant protection from mortality of aged animals on the SFN diet (log-rank (Mantel-Cox) test, p = .0087; Gehan-Breslow-Wilcoxon test, p = .0089).
We performed this SFN diet study using young and old C57BL/6J mice because this strain is relatively long-lived and has variable causes of death but a low tumor incidence (Treuting et al., 2008).
Previous studies have shown that SFN treatment decreases body weight gain and food intake, and alters metabolic parameters, in obese mice fed a high-fat or high-fat, high-sucrose diet (Shawky et al., 2016; Shawky & Segar, 2018). Using a more normal diet, we found no effects of SFN on body weight or food intake, although we did find decreased water intake in the old mice fed SFN. We speculate that increased muscle mass may have made up the difference in body weight that the reduced water intake should have caused. The mice showed no statistically significant effects on hematological parameters (Table S1). Fasting blood glucose was slightly lower in old mice on the SFN diet than in old control animals but remained within the published normal range. As seen in Figure 1f, the IPGTT was normal in young mice, but insulin resistance became evident in old mice on the control diet compared with old mice on the SFN diet. The differences between the responses of individual older mice (control vs. SFN diet) to the glucose challenge suggest that old mice on the control diet had begun to develop metabolic syndrome and pathology. These results suggest that the SFN diet may have a beneficial effect on insulin resistance in these old mice.
Findings from the current study include that treadmill exercise capacity and forelimb grip strength significantly improved in old mice fed the SFN diet. Treadmill exercise capacity was approximately 1.5 times higher in old mice fed the SFN diet compared with mice fed the control diet. When comparing forelimb grip strength, old SFN-fed mice remained hanging on a wire more than twice as long as old mice on the control diet; the performance of old SFN-fed mice was even better than that of young mice. Increased Pax7- and MyoD-positive satellite cell progeny in EDL myofibers of SFN-fed mice suggests an improvement in muscle regeneration in aged mice. Overall, we have demonstrated that the SFN diet improves skeletal muscle function in aged mice.
FIGURE 5 Sulforaphane (SFN) improves ETC function in the aging heart. Respiration status of complex I, I+II, and maximum respiration of the ETC was evaluated in fresh heart (a) and SKM (b) biopsies from young and old mice (n = 10) fed with (i) control diet or (ii) SFN-supplemented diet (442.5 mg per kg diet; 3 months), using the substrate inhibitor titration protocol described in the Methods section. According to the oxygen flux measurements, old mice fed SFN show improved complex I, I+II, and maximum respiration compared to control-diet-fed old mice. Each bar represents mean ± SD (n = 10); statistical significance was evaluated by unpaired t test (ns: p > .05, **p < .01, and ***p < .001).
FIGURE 6 Sulforaphane (SFN)-supplemented diet improves Nrf2 activity. SFN supplementation improves Nrf2-ARE-binding activity in the old mouse heart (n = 5). Statistical significance (*p < .05, **p < .01, and ***p < .001) was determined by ANOVA followed by Tukey's multiple comparison test.
This study has several important limitations. It was performed in male mice only; thus, sex differences are not known. In addition, this study was performed on a relatively small sample size, only utilized two age groups, and was of relatively short duration (12 weeks). We have yet to fully delineate the differences in the mechanisms affected by SFN in SKM versus heart. Finally, the translation of these findings to humans is unknown, particularly with respect to heart failure. It is, however, clear that both skeletal and cardiac muscle function decline in elderly humans and contribute substantially to a decreased quality of life. In this setting, SFN may be a promising strategy to attenuate the aging process in these tissues.
| Heart
Increased ROS production in the aging heart can lead to necrosis and apoptosis of cardiomyocytes and proliferation of fibroblasts, with excess generation of collagen, development of fibrosis, and mitochondrial damage; these events subsequently lead to cardiac remodeling and dysfunction (Dai et al., 2014). The heart, unfortunately, expresses low levels of antioxidant and anti-electrophile protective enzymes, rendering it particularly vulnerable to free radical damage. Age-associated mitochondrial dysfunction further exposes the aging heart to oxidant toxicity and other related conditions (Chapple et al., 2012). We have recently demonstrated that SFN enhances Nrf2 signaling and cytoprotective gene activity in the mouse heart, conferring protection from oxidative damage.
TABLE 1 Relative abundance of Nrf2 and Nrf2 target gene transcripts from hearts of control young and old mice and those fed SFN. Relative levels of transcripts encoding Nrf2, Nrf2-driven antioxidant and anti-electrophile enzymes, enzymes associated with aging, and enzymes involved in glutathione synthesis and loss were measured by qRT-PCR in hearts of young and old control and treated animals. Gene expression levels were normalized to the S3 ribosomal protein transcript by calculating, for each individual animal, the difference (∆Ct) in the respective cycle numbers. The ratio of gene expression levels was calculated by the ∆∆Ct method as described in the Methods section, and values are shown as mean ± SD (n = 5). Asterisks denote statistical significance by Tukey's post hoc test after two-way ANOVA of all groups (*p < .05, **p < .01, ***p < .001, ****p < .0001, two-tailed t test) between control and treated (young or old) groups of animals. "x" denotes similar comparisons between untreated young and old groups of animals (xx p < .01, xxxx p < .0001, two-tailed t test). "ns" denotes p > .05 in all comparisons.
TABLE 2 Relative levels of transcripts encoding Nrf2, Nrf2-driven antioxidant and anti-electrophile enzymes, enzymes associated with aging, and enzymes involved in glutathione synthesis and loss, measured by qRT-PCR in skeletal muscle of young and old control and treated animals. Normalization, the ∆∆Ct calculation, and the significance notation are as described for Table 1 (mean ± SD, n = 5).
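The ∆∆Ct normalization described in the table legends can be summarized in a few lines. The sketch below assumes one Ct value per animal for the target gene and for the S3 reference transcript; the numbers are illustrative only, not data from the study.

```python
import numpy as np

def fold_change(ct_gene, ct_s3, ct_gene_ctrl, ct_s3_ctrl):
    """2^-ΔΔCt relative expression, normalized to the S3 ribosomal transcript."""
    d_ct_treated = np.asarray(ct_gene) - np.asarray(ct_s3)            # ΔCt, treated animals
    d_ct_control = np.asarray(ct_gene_ctrl) - np.asarray(ct_s3_ctrl)  # ΔCt, control animals
    dd_ct = d_ct_treated.mean() - d_ct_control.mean()                 # ΔΔCt
    return 2.0 ** (-dd_ct)

# Illustrative Ct values for n = 5 animals per group (placeholders):
print(fold_change([22.1, 21.8, 22.4, 22.0, 21.9],
                  [18.0, 17.9, 18.2, 18.1, 18.0],
                  [23.5, 23.2, 23.8, 23.4, 23.6],
                  [18.1, 18.0, 18.2, 18.0, 18.1]))
```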
In the present study, we examined whether dietary SFN stimulates Nrf2 activity and Nrf2-responsive genes in two functionally important tissues, SKM and heart. We determined the amount of active nuclear Nrf2 in the SKM and heart of young and old mice fed SFN or control diet. SKM and heart of SFN-diet-fed young and old mice showed a significant increase in ARE-binding activity, countering the significantly reduced activity observed in old mice on the control diet. The drop in Nrf2 activity in the old control group of mice clearly indicated an inability of the SKM and heart, which already express low levels of protective enzymes, to protect themselves from oxidative and electrophilic assault and to function normally. Thus, in this study, we have shown for the first time that increased oxidative stress in the aging heart and SKM correlates with failure of the protective response due to dysregulation of Nrf2, and that this process can be attenuated by the administration of SFN to old (21- to 22-month-old) mice. Restored antioxidant and anti-electrophilic defense mechanisms may contribute to lower lipid peroxidation products and 4-HNE concentrations and thereby restore function in the SKM of old mice. Increased capacity for enzymatic scavenging of superoxide radical could be a major protective adaptation against free radical damage in muscle and may therefore be a major protection against the development or onset of sarcopenia; Gpx1 and Gsta4 might additionally be involved. Our findings demonstrate that increased active Nrf2 in SKM and heart facilitates transcriptional activation of genes expressing antioxidant and anti-electrophile enzymes, which can play a critical role during aging.
Thus, SFN could effectively prevent heart and SKM tissue dysfunction by restoring multiple cellular defenses through other activities of the Keap1/Nrf2 pathway, such as autophagy, glutathione biosynthesis, and mitochondrial biogenesis during aging and aging-related diseases.
In conclusion, age-associated myopathy is an intrinsic process with profound cellular and molecular changes that results in impaired cardiac and SKM function. Using a mouse aging model, we demonstrated that SFN diet can restore SKM and heart functions in the aged mice. Our studies did not find any adverse effects of SFN containing diet on the mice. The protection of respiratory chain complex activity, increased mitochondrial complex proteins, and lowering of oxidative damage by SFN in the aging heart suggest that SFN enhances mitochondrial function. Further analysis of pathways activated by Gpx1 and Gsta4 warrant attention because dissection of the involved mechanisms could lead to new therapies for sarcopenia. Protecting and enhancing Nrf2-driven biological functions may prove to be a safe and effective strategy to counter the loss of SKM and heart functions in older adults. Our findings provide a mechanistic understanding and information about novel biomarkers and surrogate endpoints, which could be applied in clinical settings.
ACKNOWLEDGMENTS
This work was supported in part by a grant from the National
CONFLICT OF INTEREST
All authors declare no competing interests.
| 8,212.4 | 2020-10-17T00:00:00.000 | ["Biology", "Medicine"] |
Intelligent L2-L∞ Consensus of Multiagent Systems under Switching Topologies via Fuzzy Deep Q Learning
The problem of intelligent L2-L∞ consensus design for leader-followers multiagent systems (MASs) under switching topologies is investigated based on switched control theory and fuzzy deep Q learning. It is supposed that the communication topologies are time-varying, and the model of MASs under switching topologies is constructed based on switched systems. By employing linear transformation, the problem of consensus of MASs is converted into the issue of L2-L∞ control. The consensus protocol is composed of the dynamics-based protocol and learning-based protocol, where the robust control theory and deep Q learning are applied for the two parts to guarantee the prescribed performance and improve the transient performance. The multiple Lyapunov function (MLF) method and mode-dependent average dwell time (MDADT) method are combined to give the scheduling interval, which ensures stability and prescribed attenuation performance. The sufficient existing conditions of consensus protocol are given, and the solutions of the dynamics-based protocol are derived based on linear matrix inequalities (LMIs). Then, the online design of the learning-based protocol is formulated as a Markov decision process, where the fuzzy deep Q learning is utilized to compensate for the uncertainties and achieve optimal performance. The variation of the learning-based protocol is modeled as the external compensation on the dynamics-based protocol. Therefore, the convergence of the proposed protocol can be guaranteed by employing the nonfragile control theory. In the end, a numerical example is given to validate the effectiveness and superiority of the proposed method.
Introduction
In recent years, the coordination control of MASs has attracted considerable attention for their broad applications in many fields [1,2], such as formation control, cooperative attack, and attitude alignment. e MAS consists of a series of agents, which can communicate and interact with each other to realize multiple missions and adapt to the complex environment [3,4]. In particular, much attention has been paid to the problem of consensus of MASs because of their great potential applications in both economic and military. e purpose of MASs is to construct a relationship between the agents to achieve an agreement for the state/output. In the past decades, fruitful research studies have emerged to contribute to the development in theory and applications. To mention a few, the problem of distributed formation control for MASs is studied in [5], the time-varying formation design for MASs with disturbances is proposed in [6], and the problem of finite-time consensus for switched nonlinear MASs is investigated in [7].
In practical applications, it is well known that the communication topology among the agents may change dramatically over time to adjust to multiple missions and complex environments [8,9], such as the MASs can realize obstacle avoidance and higher flight efficiency by formation transformation [10,11]. e design flexibility, security, and performance of convergence will be improved, which motivated the studies on the switching topologies of MASs [1,12]. Recently, because of the broad potential applications of switching topologies, considerable significant research studies have been proposed by scholar at home and abroad. e communication topologies among interacting agents will change according to the flight conditions and missions, which can be modeled as switched systems. e switched systems consist of a series of continuous-time (or discretetime) subsystems and a switching signal, which determines the switching strategy between subsystems. It provides an efficient approach to deal with the problem of fast timevarying conditions. erefore, it can be inferred that the switching of topologies can be viewed as the switching between subsystems, and it is essential to study the problem of consensus protocol design to make sure the state/output can converge to the given value. In [13], the problem of timevarying formation control of MASs is investigated. e communication topologies switching among given connected topologies and the switching signal depend on the Markovian process.
e Lyapunov function method is utilized to analyze the convergence. In the work of [14], the problem of event-triggered leader-following consensus problem for multiagent systems with external disturbances is addressed under switching topologies. A novel distributed event-triggered protocol is proposed to realize disturbance rejection based on extended state observer. e average dwell time (ADT) method is utilized to ensure the stability of the event-triggered protocol. In [15], the time-varying practical formation problem is studied for spacecraft, where switching topologies and time-delays are taken into consideration. Sufficient conditions are provided to ensure that the error system is convergent, which are derived based on the ADT method. It is well known that the research studies mentioned above are proposed to deal with the problem of switching topologies. However, the convergence is guaranteed based on the ADT method. It can be inferred that the common parameters are applied for all subsystems in the ADT method, which will lead to conservativeness. To obtain tighter bounds on dwell time and improve the design flexibility of the algorithm, MDADT is applied during last decades. In [16], the MDADT method and multiple discontinuous Lyapunov function (MDLF) method are combined to analyze the stability of switched systems with unstable modes. e sufficient conditions are established, and the results in existing literature are covered as a special case. e fast switching and slow switching in the framework of MDADT are applied to unstable modes and stable modes. In [17], the global adaptive control algorithm for switched systems is proposed based on the MDADT method. e different properties of subsystems are taken into consideration. en, the adaptive tracking controller is applied to the nonlinear switched systems with external disturbance and unmodeled dynamics, which illustrates the effectiveness and superiority of the MDADT method. In the work of [18], the event-triggered sliding mode controller is proposed. By employing the MDADT method and event-triggered strategy, less conservative and more practical results are obtained. Sufficient conditions are given to ensure stochastically exponential stability by the aid of the LMI technique. e literature mentioned has provided fruitful results on consensus protocol design for MASs under switching topologies. However, stability and convergence are ensured by the traditional ADT method. e different properties of subsystems cannot be considered, which will lead to conservativeness. erefore, how to obtain less restrictive results is still an open and challenging problem, which has been fully investigated, and it has an important value and potential applications in practice.
Moreover, in practical environment, there always exist uncertainties and disturbances, which will lead to performance degradation and even instability [19,20]. erefore, it is essential to investigate the robust consensus problem to improve the performance in the uncertain environment [21][22][23]. In the work of [24], the problem of distributed H∞ containment control for MASs with switching topologies is studied. An observer-based containment control scheme is proposed. e external disturbance and time delay in the environment are taken into consideration, which is more applicable than the traditional method. By employing the Lyapunov function method and LMIs technique, the sufficient existing conditions and solutions of control protocol are given in the form of LMIs. In [25], the problem of timevarying formation of second-order discrete-time MASs under switching topologies and the time delay is investigated. e sufficient conditions are given to ensure MASs accomplish the mission of time-varying formation based on the state transformation method. e time delay and uncertainties are considered. Compared with the existing literature, the proposed can overcome the undesirable response caused by time delay and improve the transient performance. In the work of [26], the problem of formation control for tail-sitters in flight mode transitions is studied. e nonlinear dynamics and uncertainties are considered, and the robust time-varying formation control protocol is proposed. It is proven that the tracking errors can converge to the origin in finite time. e problem of L 2 -gain robust protocol for time-varying output formation-containment of MASs is addressed in [27]. e PID-based output-feedback control protocol is provided to ensure that all followers can track a time-varying formation reference, where communication delays and external disturbance are taken into consideration. e asymptotic stability of MASs is proved by the Lyapunov function method. However, as well known, the transient performance and robustness cannot achieve simultaneously. erefore, we need to make comprise of the transient performance and robustness, which still remains an open and challenging problem.
In addition, with the development of computing ability, the intelligent technique has been an attractable problem during the last decades [28][29][30]. It is widely applied in the areas of target recognition, machine vision, robotic systems, and controller design [31,32]. It provides an efficient method to improve the autonomy and design flexibility of the system [33]. e most widely used methods are the deep learning and reinforcement learning. As a combination of deep learning and reinforcement learning, the advantages of deep learning and reinforcement learning are adopted, which include the characteristics of self-fitting and selflearning. In the work of [34], the automatic completion of multiple peg-in-hole assemble tasks is realized. Because the traditional method requires an accurate contact model and complex analysis, the intelligent control method is formulated by constructing the task as a Markov decision process. e deep deterministic policy gradient (DDPG) algorithm is proposed to accomplish the task to achieve optimal policy and avoid risky actions. In [35], a noninteger PID controller is proposed based on the DDPG algorithm. e measurement noises and external disturbances are taken into consideration. e kinematic controller and dynamic controller are proposed to achieve optimal performance. e DDPG algorithm is given to compensate for the uncertainties and disturbances in the framework of actor-critic. A numerical example is given to illustrate the effectiveness of the proposed method. Cheng et al. [36] proposed the real-time controller for the problem of fuel-optimal moon landing. Because the traditional method cannot meet the demand of high requirements of real-time performance and autonomy, the deep reinforcement learning algorithm is proposed for the real-time optimal control based on actor-indirect method architecture. e deep neural networks are applied for initial guesses, and the efficiency of training data is guaranteed. e literature mentioned above has provided considerable meaningful results in the area of machine learning. However, to the best of the authors' knowledge, the intelligent consensus design for MASs with considerations of stability, robustness, and optimal transient performance has not been fully studied yet. It is essential and important to achieve optimal comprise of robustness and transient performance.
Based on the statement above, it can be inferred that the problem of the improvement of autonomy and design flexibility for the system needs to be studied. e problem of consensus protocol design for MASs under switching topologies has not been fully investigated yet. e design flexibility can be improved by employing tighter bounds on dwell time because less conservative results can be obtained, and it leaves more room to ensure the switching logic stays in the subsystems with better performance for long enough time. Moreover, it is of great importance to combine the advantages of the traditional method and intelligent technique, which can ensure convergence, robustness, and transient performance simultaneously. erefore, the problem of intelligent L 2 -L ∞ consensus design of MASs under switching topologies is investigated. e convergence and robustness are guaranteed by the Lyapunov function method and the MDADT method, which are more applicable. e transient performance is improved by fuzzy deep Q learning, in which the fuzzy reward function is proposed for the complex scheduling process.
e main contributions of this study can be summarized as follows: (1) e L 2 -L ∞ consensus protocol of MASs under switching topologies is designed. e problem of L 2 -L ∞ consensus of MASs is converted into the problem of stability analysis for switched systems, which is more applicable than the traditional method. e MDADT method and multiple Lyapunov function method are combined to guarantee the stability and prescribed attenuation performance index, which can obtain tighter bounds on dwell time and less conservative results.
(2) e consensus protocol is composed of the dynamics-based consensus protocol and learningbased consensus protocol. Compared with the traditional method, the proposed strategy can ensure the stability, robustness, and transient performance simultaneously. (3) e fuzzy reward function is utilized to improve the efficiency of the deep reinforcement learning algorithm. e design of reward function for the traditional method mainly depends on the experience of designer, which will lead to complexity. e fuzzy reward function can improve the data efficiency and ensure optimal performance. e rest of the study is organized as follows: the preliminaries and problem statement are provided in Section 2; in Section 3, the main results of the study are given; the numerical example is given in Section 4, which is followed by the conclusion in Section 5.
Preliminaries and Problem Statement
In this study, it is supposed that MASs are composed of a leader labelled as 0 and n followers labelled as 1, 2, . . ., n.
The connection topology among the n followers can be described as a time-varying model with N_f topologies. We define G_σ(k) ∈ {G_1, G_2, ..., G_{N_f}} as the active undirected connected graph. H = {1, 2, ..., n}, n > 1, represents the set of finite nodes. σ(k): [0, ∞) → R = {1, 2, ..., N_f} denotes the switching signal, which is a piecewise continuous function of time and takes values in this finite set. A_σ(k) = (a_ij^σ(k))_{n×n} and L_σ(k) = (l_ij^σ(k))_{n×n} are the adjacency matrix of the undirected graph G_σ(k) and the Laplacian matrix at time instant k, where a_ij^σ(k) stands for an element of the adjacency matrix, a_ij^σ(k) = 1 represents that node i can obtain information from node j, and l_ij^σ(k) is defined in the following equation.
Then, for a given node i ∈ H, we can define the neighbors of node i, and θ_i^σ(k) is used to indicate the information transmission between the leader and the n followers: θ_i^σ(k) = 1 means that node i ∈ H can obtain information from the leader; otherwise, θ_i^σ(k) = 0. Therefore, the leader-follower MAS can be described by the following equations, where A, B, C, and D are the system matrices with appropriate dimensions, z_i(k) stands for the output of the i-th follower, and ω_i(k) ∈ R^m denotes the external disturbance belonging to L_2[0, ∞). It is supposed that agent i can obtain information from its neighbors and the leader. Therefore, we define υ_i(k) as the relative state measurement of the i-th agent, which can be described as follows. In this study, the control input of the i-th agent that ensures consensus of the leader-follower MAS is proposed as follows:
where K_σ(k) is the control parameter to be determined by robust control theory, and K_c,σ(k) is the compensation parameter obtained by deep Q learning. In this study, the gain parameters K_c,σ(k) are supposed to vary in a finite set with given bounds. K_c,σ(k) can be viewed as an additional perturbation of K_σ(k), which can be described as follows, where M_σ(k) ∈ R^{l×l_Δ} and N_σ(k) ∈ R^{q_Δ×q} are known matrices with appropriate dimensions, and F_σ(k) ∈ R^{l_Δ×q_Δ} are unknown matrices with F_σ(k)^T F_σ(k) ≤ I. For the i-th agent, the state error is defined as e_i(k) = x_i(k) − x_0(k). Then, the closed-loop system can be rewritten accordingly. To facilitate the proof, the following definitions and lemmas are given.
Definition 1 (see [37]). For a given switching signal σ(k) and k_1 > 0, define N_{σ,s}(0, k_1) as the number of times the undirected graph G_s is activated over the time interval (0, k_1), and let T_{σ,s}(0, k_1) be the total activation time of G_s during (0, k_1). If there exist constant scalars N_0 ≥ 0 and τ_{as} > 0 such that N_{σ,s}(0, k_1) ≤ N_0 + T_{σ,s}(0, k_1)/τ_{as}, then τ_{as} is called the mode-dependent average dwell time and N_0 the mode-dependent chatter bound. In this study, we set N_0 = 0.
Definition 2 (see [37]). If, with the control protocol in equation (5), all agents asymptotically track the state trajectory of the leader such that the following holds, consensus is said to be achieved.

Definition 3 (see [38]). For given constant scalars 0 < δ < 1 and γ > 0, the prescribed L2-L∞ attenuation performance γ is satisfied if (1) the MASs in equations (2)-(3) are asymptotically stable when ω(k) = 0, and (2) the following inequality holds for all nonzero ω(k) ∈ L_2[0, ∞).

Lemma 1 (see [35]). The matrices L_σ(k) + Θ_σ(k) are symmetric and positive definite if and only if the graphs G_σ(k) are connected for t ≥ 0. Moreover, there exists a transformation matrix T_σ(k) such that the following equation holds,
where λ_i^σ(k), i ∈ H, are the nonzero eigenvalues of the matrices L_σ(k) + Θ_σ(k).

Lemma 2 (see [39]). For a given constant a > 0 and real matrices Θ, U, V, W, equation (12) is equivalent to equation (13).
Lemma 3 (see [39]). For a given symmetric matrix T and matrices M, N, if there exists a constant scalar ε > 0 such that the following condition holds, then the equation below holds for any appropriate F with F^T F ≤ I.
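Before turning to the protocol design, the sketch below makes the graph bookkeeping of this section concrete: it builds Laplacian and leader-link matrices for two hypothetical topologies and evaluates one common form of the relative measurement υ_i(k). The adjacency matrices and leader links are placeholders, not the topologies of the later numerical example.

```python
import numpy as np

def laplacian(adj):
    """L = diag(row sums) - A for an undirected graph."""
    adj = np.asarray(adj, dtype=float)
    return np.diag(adj.sum(axis=1)) - adj

# Two example topologies for n = 4 followers (placeholders):
A1 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
A2 = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]])
Theta1 = np.diag([1, 0, 0, 0])   # follower 1 hears the leader in topology 1
Theta2 = np.diag([0, 1, 0, 0])   # follower 2 hears the leader in topology 2

L = {1: laplacian(A1), 2: laplacian(A2)}
Theta = {1: Theta1, 2: Theta2}
# Lemma 1: L_s + Theta_s should be symmetric positive definite for connected graphs.
print(np.linalg.eigvalsh(L[1] + Theta[1]))

def relative_measurement(x, x0, adj, theta):
    """One common choice for υ_i: sum_j a_ij (x_i - x_j) + θ_i (x_i - x_0)."""
    n = len(x)
    return [sum(adj[i, j] * (x[i] - x[j]) for j in range(n))
            + theta[i, i] * (x[i] - x0) for i in range(n)]
```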
L2-L∞ Consensus Protocol Design.
In this section, the L2-L∞ consensus protocol is proposed, and the stability and prescribed performance are guaranteed.
Lemma 4.
For given constant scalars 0 < δ < 1 and γ > 0, the system in (7) with the control input in (5) can be transformed into the form of (16), where the transformed variables are defined as follows.

Proof. Substituting equation (17) into (7), one can obtain equation (16). It can be inferred that the transformation matrix T_σ(k) is unique; therefore, we have the following equations.
It is obvious that the problem of robust consensus protocol design can be converted to the controller design of (16). □ Remark 1.
The system in equation (16) consists of the independent subsystems in equation (20). Therefore, the stability of equation (7) is equivalent to the stability of the n subsystems in equation (20), and ensuring the prescribed attenuation performance of (7) can be converted into guaranteeing the attenuation performance of (16).
In Theorem 1, the sufficient conditions guaranteeing stability and the prescribed attenuation performance index are presented.
Theorem 1. For given constant scalars μ_s > 1, 0 < δ_s < 1, γ > 0, and class functions κ_1, κ_2, the switched systems in equation (20) with MDADT satisfying equation (25) are globally uniformly asymptotically stable with prescribed L2-L∞ attenuation performance γ, provided conditions (21)-(24) hold.

Proof. The entire proof can be divided into two steps.
Together with (22), we can conclude that Based on equations (26)- (27), the following equation can be obtained by iteration.
Combining with Definition 1, we have the following. Then, we can obtain (29) based on (21).
Together with equations (22)-(23), one has
Then, one can obtain the following equation by iteration.
Proof. The Lyapunov function V_{i,s}(e_i(k)), i ∈ H, is defined as follows. According to (20) and (39), we can conclude that (38) is equivalent to (24). Along the trajectory of V_{i,s}(e_i(k)), one has the following. Together with equations (40)-(41), we have the result below. According to Theorem 1, we can conclude that the system in (20) with MDADT satisfying (25) is globally uniformly asymptotically stable with prescribed L2-L∞ attenuation performance γ.
Based on Theorem 1 and Corollary 1, the solutions of the consensus protocol are given in Theorem 2. □

Theorem 2. For given constant scalars μ_s > 1, 0 < δ_s < 1, γ > 0, a_s > 0, and ε_s > 0, if there exist positive-definite matrices P_{i,s} ∈ R^{p×p} and matrices X_s ∈ R^{l×l}, Y_s ∈ R^{l×q} such that equation (43) holds, then the MASs in (2)-(3) with the control input in equation (5) are asymptotically stable with prescribed L2-L∞ attenuation performance γ. The parameters of the control protocol can be derived from (43), where the involved matrices are defined below.

Proof. According to the Schur complement, it is obvious that equation (43) is equivalent to equation (45).
Together with Lemma 2, we have Moreover, based on Lemma 3, one has According to Schur complement, it is obvious that (46) is equivalent to (37), which completes the proof.
Compensated Consensus Protocol Design Based on FDQL
In this section, the learning-based consensus protocol is proposed based on deep reinforcement learning, where fuzzy deep Q learning is utilized. The stability and prescribed attenuation performance are guaranteed by robust control theory, and the learning-based control protocol is introduced to improve the transient performance and realize an optimal control policy. The output of the learning-based control protocol can be viewed as an additional variation of the robust consensus protocol. The online scheduling of the control protocol is established as a Markovian process. Therefore, the advantages of robust control theory and deep reinforcement learning are combined.
It is well known that reinforcement learning is composed of state, action, agent, and environment. The state of the k-th step is defined as s_k, and the chosen action is denoted a_k; then, the reward r_k and the state s_{k+1} are generated based on the interaction with the environment. Therefore, the optimal control policy can be obtained by maximizing the reward function.
To improve the convergence of the consensus protocol, the state is defined as s_k = [e_i(k), z_i(k)] and the action is defined as a_k = [K_c,σ(k)].
In deep Q learning, a deep neural network is utilized to approximate the action-state value function Q*(s_k, a_k), which can be described as Q*(s_k, a_k) ≈ f(s_k, a_k, ω), where f(s_k, a_k, ω) denotes the function realized by the deep neural network. The action is chosen based on the maximum Q value. There exist two neural networks in the deep Q learning algorithm, whose structures are the same; they are called the critic neural network and the target neural network. The parameters of the critic neural network are updated based on temporal-difference learning. The output of the critic neural network is defined as Q(s_k, a_k, ω) and the output of the target neural network is defined as Q(s_k, a_k, ω−). Therefore, the parameters of the critic neural network are updated based on the following equation, where L_r is the learning rate, γ_s denotes the discount factor, R represents the reward of the state transition from s_k to s′ through action a_k, and max_a′(Q(s′, a′, ω−)) stands for the maximum Q value of the target neural network. It can be inferred that the reward function has an important influence on the final performance. The design of the reward function in traditional deep Q learning mainly depends on the experience of the designers, which cannot achieve optimal performance and increases the computational complexity. In this study, a fuzzy system is applied to design the reward function. The inputs of the fuzzy reward function are divided into five categories, described as VB, B, N, G, and VG, representing very bad, bad, normal, good, and very good. In this study, it is supposed that there are four followers. Therefore, the inputs of the fuzzy reward system are set to be |e_1|, |e_2|, |e_3|, and |e_4|. Each fuzzy set includes 25 rules, and the total number of fuzzy rules is 75. The output of the fuzzy reward function is limited to the interval [−1, 0), and the defuzzifier of the fuzzy reward function is defined as follows. Based on the statements above, the learning-based consensus protocol design algorithm can be summarized as follows. The FDQN algorithm proposed in this study can improve the transient convergence performance of MASs. The output of the deep Q network is supposed to be the variation of the parameters of the consensus protocol. As is well known, the design of the reward function in the traditional method depends on the experience of the designers. To overcome this problem, the fuzzy reward function is developed to improve the learning efficiency in this study.
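A minimal sketch of the two pieces described above, a weighted-average defuzzifier confined to [−1, 0) and the temporal-difference update toward r + γ max_a′ Q(s′, a′; ω−), is given below. Network sizes, membership strengths, and the optimizer are placeholder choices, not the configuration used in this study; s, a, r, s_next are assumed to be minibatch tensors sampled from the replay buffer.

```python
import numpy as np
import torch
import torch.nn as nn

state_dim, n_actions = 10, 9                       # placeholder sizes
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())     # ω- starts as a copy of ω
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.95                                       # discount factor

def fuzzy_reward(rule_strengths, rule_outputs):
    """Weighted-average defuzzifier; the output is confined to [-1, 0)."""
    r = np.dot(rule_strengths, rule_outputs) / (np.sum(rule_strengths) + 1e-9)
    return float(np.clip(r, -1.0, -1e-6))

def td_update(s, a, r, s_next):
    """One temporal-difference step toward y = r + gamma * max_a' Q(s', a'; ω-)."""
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)          # Q(s_k, a_k; ω)
    with torch.no_grad():
        y = r + gamma * target_net(s_next).max(dim=1).values   # frozen target network
    loss = nn.functional.mse_loss(q, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In a full run, the weights of the target network would be refreshed from the critic network at a slower rate, as in the algorithm listed later in this section.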
Numerical Example
In this section, an example is provided to illustrate the effectiveness of the method. The model of the MASs is constructed as follows. The external disturbance is given below, and the switching topologies are shown in Figure 1. Then, we can obtain the Laplacian matrices as follows. The parameters of the switching topologies are given as follows. Therefore, we can obtain the MDADT according to (25).
It is well known that the ADT method can be viewed as a special case of the MDADT method. Therefore, it can be inferred that τ_a = max τ_ai = 0.4266. It is obvious that tighter bounds on dwell time and less conservative results can be obtained. Then, we define the attenuation performance index γ = 0.9, and we can obtain the parameters of the consensus protocol based on Theorem 2. The switching logic is shown in Figure 2. In order to illustrate the effectiveness and superiority of the proposed method, the traditional ADT method and the MDADT method are compared. From the statements above, we have seen that MDADT can obtain tighter bounds and less conservative results. Moreover, the comparisons of the state responses of the ADT method and the MDADT method are shown in Figures 3-6: the state responses of the MASs with ADT switching topologies are shown in Figures 3-4, and the state responses of the MASs with MDADT switching topologies are shown in Figures 5-6. We can see that the transient performance of the MDADT method is better than that of the ADT method because the different characteristics of the subsystems are taken into consideration, which improves the design flexibility and makes it more applicable to practical conditions.
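As a check on the dwell-time bookkeeping, the sketch below verifies a candidate switching sequence against the MDADT condition of Definition 1, N_{σ,s}(0, k1) ≤ N_0 + T_{σ,s}(0, k1)/τ_{as}. The switching instants and per-mode dwell-time bounds are illustrative values, not the ones obtained from (25).

```python
def satisfies_mdadt(switch_times, modes, horizon, tau_as, n0=0.0):
    """switch_times[i] is the instant at which modes[i] becomes active;
    each mode stays active until the next switching instant (or the horizon)."""
    for s, tau in tau_as.items():
        activations = sum(1 for m in modes if m == s)              # N_{sigma,s}(0, k1)
        active_time = sum(
            (switch_times[i + 1] if i + 1 < len(switch_times) else horizon)
            - switch_times[i]
            for i, m in enumerate(modes) if m == s)                # T_{sigma,s}(0, k1)
        if activations > n0 + active_time / tau:
            return False
    return True

# Example with two topologies and illustrative MDADT bounds:
print(satisfies_mdadt([0.0, 0.6, 1.3, 2.0], [1, 2, 1, 2], 2.8,
                      tau_as={1: 0.43, 2: 0.40}))
```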
To further validate the superiority of the proposed method, its state responses are shown in Figures 7-11, with the state responses themselves in Figures 7-8. We can conclude that the transient performance can be improved with the aid of fuzzy deep Q learning. The advantages of the traditional method and the intelligent method are combined: compared with the traditional method, the transient performance is improved, and compared with the intelligent method, stability and training efficiency are guaranteed. The attenuation performance index is shown in Figure 9, from which we can see that the robustness of the proposed method is ensured. The episode reward response is shown in Figure 10; the reward of the fuzzy deep Q learning algorithm converges to a neighborhood of the origin, which demonstrates the effectiveness of the algorithm in this study. In addition, the response of the action is shown in Figure 11, from which we can see that the learning-based consensus protocol compensates for the additional input caused by the uncertainties.
Based on the statements above, we can conclude that convergence, robustness, and the prescribed attenuation performance index are guaranteed. Less conservative results and tighter bounds on dwell time can be obtained by the MDADT method. The transient performance of the system can be improved based on the fuzzy deep Q learning algorithm. It is worth mentioning that the traditional robust method cannot achieve a compromise between robustness and transient performance, and the intelligent method alone cannot always guarantee convergence. By employing the proposed method, convergence, robustness, and transient performance are guaranteed simultaneously.
Algorithm (learning-based consensus protocol design):
(1) Design the dynamics-based consensus protocol according to Theorem 2
(2) Define the bounds of the learning-based consensus protocol parameters
(3) Initialize the weights of the Q value network
(4) Initialize the weights of the target Q value network
(5) Initialize the replay buffer R, episode = 0
(6) for episode = 1 to M do
(7) Initialize a random state s_1 and receive the initial observation
(8) for t = 1 to K do
(9) Select an action based on the state and reward function
(10) Execute the action a_k; then obtain the reward r_k and the state s_{k+1}
(11) Store the pair (s_k, a_k, r_k, s_{k+1}) in the replay buffer
(12) Sample a random minibatch of transitions (s_m, a_m, r_m, s_{m+1}) from the replay buffer
(13) Update the target Q value function
(14) Update the weights of the target Q value network
(15) end for
(16) end for
Figure 4: The state response of (x)_2 under the ADT method.
Figure 7: The state response of (x)_1 with the proposed method.
Figure 11: The response of the action.
Conclusions
The problem of intelligent L2-L∞ consensus design for MASs under switching topologies is investigated in this study. The switching topologies of MASs are modeled within switched system theory by employing a linear transformation. Then, the problem of consensus protocol design can be converted into the problem of L2-L∞ control. To ensure convergence, robustness, and transient performance simultaneously, the proposed consensus protocol is composed of a dynamics-based consensus protocol and a learning-based consensus protocol, which provide the baseline and the compensation of uncertainties, respectively. The baseline of the consensus protocol is obtained from the dynamics-based design, which is derived using the MDADT method and the MLF method. The scheduling interval of the learning-based protocol is given by nonfragile control theory. Then, the learning-based consensus protocol is proposed based on the fuzzy deep Q learning algorithm to improve the transient performance and achieve an optimal policy, where the fuzzy reward function is introduced to improve the learning efficiency.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
| 7,081.2 | 2022-02-16T00:00:00.000 | ["Computer Science", "Engineering"] |
Engineering Topological Surface State of Cr-doped Bi2Se3 under external electric field
External electric field control of topological surface states (SSs) is significant for the next generation of condensed matter research and topological quantum devices. Here, we present a first-principles study of the SSs in the magnetic topological insulator (MTI) Cr-doped Bi2Se3 under external electric field. The charge transfer, electric potential, band structure and magnetism of the pure and Cr doped Bi2Se3 film have been investigated. It is found that the competition between charge transfer and spin-orbit coupling (SOC) will lead to an electrically tunable band gap in Bi2Se3 film under external electric field. As Cr atom doped, the charge transfer of Bi2Se3 film under external electric field obviously decreases. Remarkably, the band gap of Cr doped Bi2Se3 film can be greatly engineered by the external electric field due to its special band structure. Furthermore, magnetic coupling of Cr-doped Bi2Se3 could be even mediated via the control of electric field. It is demonstrated that external electric field plays an important role on the electronic and magnetic properties of Cr-doped Bi2Se3 film. Our results may promote the development of electronic and spintronic applications of magnetic topological insulator.
The ability to modulate the band gap of Cr-doped Bi 2 Se 3 could be significant for semiconductor spintronics and the application of TIs 12,29 . External electric field is revealed to have significant influence on bulk and SSs of Tis [30][31][32][33][34][35][36][37][38][39][40][41] . In particular, the magnetism of MTI can be external controlled by carrier-mediated Ruderman-Kittel-Kasuya-Yosida (RKKY) coupling 16,36,[42][43][44][45][46][47] . Recent study revealed that the ultrathin Bi 2 Se 3 films with a hybridization gap, which is induced by the coupling between two SSs, become gapless by the application of external electric field. This is due to the movement of electrons in the opposite direction of electric field 31 . Liu et al. also reports that external electric field can close or reopen a gap for surface states of Bi 2 Se 3 film 32 . Wang et al. even revealed that electric field can induce a quantum phase transition in the magnetic topological insulators 45 . However, the mechanism of electronic manipulation by external electric field in pure Bi 2 Se 3 is still inconsistent. The efficient electric-field control of the electronic and magnetic properties in insulating Cr-doped Bi 2 Se 3 remains elusive.
In this paper, we performed first-principles calculations within density functional theory (DFT) to investigate pure and Cr-doped Bi2Se3 films under an external electric field. We find that doping with Cr atoms can reduce the transfer of electrons in Bi2Se3, as well as the movement of atoms caused by the application of the electric field. The band gap of the pure Bi2Se3 film under an electric field is determined by the competition between the electrically induced charge transfer and the spin-orbit coupling.
Results
Crystalline Bi 2 Se 3 has a rhombohedral structure and is formed by stacking of weakly coupled quintuple layers (QLs) along its (001) direction. The order of layers in a QL is Se(1)-Bi-Se(2)-Bi-Se(1). It was previously revealed that the thicker the Bi 2 Se 3 film is, the more sensitive it is to the external electric field 30 . When the thickness of the Bi 2 Se 3 film becomes larger than 3QL, the band gap drastically reduces as the external electric field is applied. Accordingly, the thickness of 3QL, at which the SSs appear, is considered in this study. Figure 1(a) shows the slab structure of the pure 3QL Bi 2 Se 3 film. The lattice constants a = b = 4.138 Å, c = 28.64 Å, taken from the experimental data, are chosen for the hexagonal Bi 2 Se 3 (001) film of 3 QLs. In order to discuss the charge transfer and its response to the external electric field, QL1, QL2 and QL3 are labeled, representing the upper, middle and lower surfaces, respectively. The outermost Bi atom of the upper surface is substituted by a Cr atom, as illustrated in Fig. 1(b). We define the c-axis from QL1 to QL3 as the positive direction of the external electric field (see also Fig. 1). The Brillouin zone and the high symmetry points are also shown in Fig. 1(c).
Firstly, we study the difference between the atomic structures of the pure Bi 2 Se 3 SSs and the Cr-doped Bi 2 Se 3 SSs. It is well known that the radius of the Cr atom is smaller than that of the Bi atom. Hence, upon substitution of Cr, each layer of QL1 moves toward it. The thickness of QL1 decreases from 7.084 Å to 6.113 Å after doping (see Table 1). This result agrees with the bulk case of Bi 2 Se 3 22,23 . An additional calculation including the van der Waals correction shows that the relaxed interval between QLs changes only slightly, by about 0.046 Å. Subsequently, a series of external electric fields perpendicular to the vacuum layer are applied to the system. Table 1 shows the thickness (θ) of each QL and the distance (d) between QLs at electric fields of 0 and 0.1 V/Å. We can find that, under the same electric field of 0.1 V/Å, the atomic structure of the pure Bi 2 Se 3 film undergoes an obvious change, while that of the doped film remains almost constant. It suggests that the atomic structure of the doped system is more robust against the external electric field. As shown in Fig. 2(a) and (c), we calculate the average planar electrostatic potentials (APEPs) of the pure and Cr-doped Bi 2 Se 3 3QL films under electric fields of 0 and 0.1 V/Å, respectively. For the pure 3QL-Bi 2 Se 3 film, the APEPs of the two surfaces have the same value of 6.0 eV. As the external electric field is applied (E ext = 0.1 V/Å), the upper and the lower surfaces have different APEPs of 3.6 eV and 8.2 eV, respectively. In the Cr-doped case, the APEPs of the two surfaces slightly reduce to 5.6 eV when E ext = 0 V/Å, while for the electric field of 0.1 V/Å, the upper and the lower surfaces show a smaller difference than in the pure case. The calculated APEPs in Cr-doped Bi 2 Se 3 are 5.0 eV and 6.3 eV for the upper and the lower surfaces, respectively. This result reflects that the upper surface QL1 undergoes an accumulation of electrons, while the lower surface QL3 loses electrons when the external electric field is applied toward QL3. The electric potential difference between the two surfaces is about 4.6 eV in the pure case, which is larger than that in the doped case, with a value of 1.3 eV. It suggests that the doping of Cr in the Bi 2 Se 3 film will reduce the charge transfer caused by the external electric field. The main reason for the robust atomic structure and the suppression of charge transfer in the Cr-doped Bi 2 Se 3 under the electric field can be the movement of each atom in QL1 inward toward the Cr atom. As a result, the covalency of the chemical bonds of QL1 is enhanced, leading to a local redistribution of electrons 22,23,48 . This result is confirmed by the calculated electron density of the pure and Cr-doped Bi 2 Se 3 films, as shown in Fig. 2(b) and (d). Comparing the two figures, we can observe that as the Cr atom is introduced, the bonding of Cr with its neighboring Se atoms gets strengthened. Electrons in the Cr-doped material become more localized, resulting in the suppression of the charge transfer. This result suggests that the Cr dopant inhibits the effect of the external electric field. Figure 3 shows the calculated band structures of the pure 3QL-Bi 2 Se 3 film with different external electric fields. When E ext = 0, we get a band gap of 0.111 eV, which is close to the experimental value of 0.138 eV 49 . Due to the symmetric structure of pure Bi 2 Se 3 , the SSs of QL1 (red) and QL3 (blue) are almost overlapping. Both surfaces contribute to the valence band maximum (VBM) and conduction band minimum (CBM).
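The APEP values quoted above are planar averages of the local electrostatic potential along the surface normal. A minimal sketch of that averaging step, assuming the potential has already been read into a 3D array (for example from a VASP LOCPOT file; the parsing is not shown and the grid below is only a placeholder), could be:

import numpy as np

def planar_average_potential(v_grid, z_axis=2):
    # Average a 3D electrostatic potential (eV) over the two in-plane axes,
    # returning a 1D profile along the surface normal (the c axis).
    in_plane_axes = tuple(i for i in range(3) if i != z_axis)
    return v_grid.mean(axis=in_plane_axes)

v = np.random.rand(48, 48, 240)             # placeholder standing in for LOCPOT data
apep_profile = planar_average_potential(v)  # surface values give the quoted APEPs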
However, when a Cr atom is doped, the TRS of the Bi 2 Se 3 film is broken 20,22 . The introduction of the magnetic atom results in a removal of the degeneracy, opening a band gap of 0.099 eV in Bi 2 Se 3 , as can be seen in Fig. 4. We can see that the contributions of the upper and lower QLs in the Cr-doped case no longer overlap without an electric field, making it difficult for the two surfaces to hybridize. The major states of the VBM are contributed by QL1, which contains the transition metal Cr atom, while the bottom of the conduction band consists of the non-degenerate QL2 and QL3 states. Specifically, the CBM is mainly dominated by QL2, as shown in Fig. 4. In our calculation, with the increase of the electric field from the upper surface to the lower surface, the band gap of pure 3QL-Bi 2 Se 3 first decreases gradually and closes at a field of 0.026 V/Å (see Fig. 3). Furthermore, if the electric field continues to increase, the gap reopens and then closes again, which indicates that the band gap of Bi 2 Se 3 is electrically tunable. The reason is the competition between the electrically tunable charge transfer 28 and the Rashba SOC. The Rashba SOC is an important ingredient responsible for the nontrivial properties of topological insulators 11,12 . In the Bi 2 Se 3 system, the Rashba SOC induces the "M-shape" surface bands, as shown in Fig. 3. As the external electric field strengthens, the bands of QL1 (red) at the VBM rise up while the bands of QL3 (blue) at the CBM descend. The VBM and CBM come to touch at a critical gapless point, leading to the band gap closing at E ext = 0.026 V/Å, as shown in Fig. 3. Meanwhile, the band-closing point is not at the Gamma point. As the electric field continues to increase, the charge transfer of the system is enhanced. At the same time, the atomic positions of Bi 2 Se 3 are gradually changed, inducing a corresponding strengthening of the SOC-inverted bands. Thus, the band gap of Bi 2 Se 3 is reopened, as can be clearly seen in Fig. 3 when E ext = 0.04 V/Å. Actually, the interplay between the external electric field and the SOC may induce a quantum phase transition 32,45,50 at the critical field of 0.026 V/Å. Finally, as the external electric field strengthens further, the charge transfer gradually increases, making it dominant in the competition, and hence closes the band gap again. Figure 5 clearly shows the trend of the band gap as a function of the external electric field. The band gap of pure Bi 2 Se 3 can be engineered from 0.111 to 0 eV (as the shaded area highlights), a trend also predicted by Liu et al. 32 and Wang et al. 45 .
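The band gap values reported as a function of field are extracted from the calculated eigenvalue spectra. A small, generic sketch of that post-processing step, assuming eigenvalues and occupations are available as arrays per field value (names and the occupation tolerance are illustrative), is:

import numpy as np

def band_gap(eigenvalues, occupations, tol=1e-3):
    # Gap between the highest occupied and lowest unoccupied states;
    # both arrays have shape (n_kpoints, n_bands) for one field value.
    occupied = eigenvalues[occupations > tol]
    empty = eigenvalues[occupations <= tol]
    return max(empty.min() - occupied.max(), 0.0)

def gap_vs_field(results):
    # results: dict mapping E_ext (V/Angstrom) -> (eigenvalues, occupations).
    return {field: band_gap(*data) for field, data in sorted(results.items())}

# A gap of ~0 near 0.026 V/Angstrom would reproduce the closing described
# for the pure 3QL film.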
We then further investigate the mechanism by which the electric field directly impacts the band gap in the Cr-doped Bi 2 Se 3 film. Figure 4 shows the calculated band structures of the Cr-doped 3QL-Bi 2 Se 3 film under electric fields of 0, 0.1, 0.2 and 0.3 V/Å, respectively. We find that the Rashba-like band splitting observed in the bands of the pure 3QL-Bi 2 Se 3 film vanishes in the doped system. With increasing electric field, the CBM continuously rises up whereas the VBM is pinned below the Fermi level. To clarify this phenomenon, we calculate the decomposed band structures for QL1 (red), QL2 (green) and QL3 (blue) in Fig. 4. It is indicated that the CBM, especially the bands below 0.4 eV, is completely contributed by QL2 and QL3, while the VBM is dominated by the Cr-containing QL1. As the electric field increases, the bands of QL2 and QL3 apparently move up due to the loss of electrons. Remarkably, the bands of the upper surface QL1, which should theoretically descend as in the pure case, are almost unchanged. As discussed in Fig. 2, this is because the dopant Cr atom can suppress the charge transfer in QL1, making the VBM insusceptible to the external electric field. We have also studied the case in which the Cr atom substitutes a Bi atom in a deeper layer. The result shows that the impurity bands of the Cr atom slightly shift down, which has no impact on the band gap. Accordingly, the external electric field will continuously open the band gap of the Cr-doped Bi 2 Se 3 film. The trend is summarized in Fig. 5(b). The result shows that the band gap rises from 0.099 eV (under zero field) to 0.144 eV (under a field of 0.1 V/Å). As the external electric field is increased to 0.2 V/Å and 0.3 V/Å, the gap further rises to 0.178 eV and 0.235 eV. It suggests that the external electric field is an effective way of engineering the band gap of an MTI.
In addition, the magnetic order of the Cr atoms in the Bi 2 Se 3 film is also investigated. Results from a 2 × 2 × 1 supercell Bi 2 Se 3 surface doped by two Cr atoms show that ferromagnetic order is preferred (with the total energy lower than the antiferromagnetic case by 0.11 eV). This result is consistent with refs 22 and 23. Thus, ferromagnetic order of the Cr atoms is adopted in the study of magnetism. Figure 5(b) shows the calculated magnetic moment as a function of electric field (red line). It shows that, with the increase of the electric field, the magnetic moment has the same tendency as the band gap. Under the electric field of 0.3 V/Å, the magnetic moment rises to 2.991 μ B from 2.979 μ B (at zero electric field). That is, the external electric field will also enhance the magnetic moment due to the change of charge transfer, which was revealed to be important to the RKKY mechanism by Zhang et al. 43 . This result suggests that the magnetic coupling of an MTI can also be engineered by an external electric field.
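The preference for ferromagnetic order is decided by comparing supercell total energies of the two spin configurations. A minimal sketch of that comparison (the placeholder energies below are invented so as to reproduce the 0.11 eV difference quoted above) is:

def magnetic_ordering(e_fm, e_afm):
    # Compare total energies (eV) of the ferromagnetic and antiferromagnetic
    # configurations of the Cr pair in the supercell.
    delta = e_afm - e_fm                 # > 0 favours ferromagnetic order
    order = "ferromagnetic" if delta > 0 else "antiferromagnetic"
    return order, delta

order, de = magnetic_ordering(e_fm=-1234.56, e_afm=-1234.45)  # placeholder values
print(order, f"{de:.2f} eV")                                  # -> ferromagnetic 0.11 eV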
Discussion
In order to better understand the influence of the external electric field on the system, we describe a schematic diagram of the band structure evolution under an increasing electric field. As illustrated in Fig. 6, for the pure Bi 2 Se 3 film, the bands of the upper and lower surfaces are degenerate at zero electric field, as shown in Fig. 6(a). When the external electric field is applied, the bands of the upper (red) and lower (blue) surfaces split apart in Fig. 6(b). Because the electric field points from the top to the bottom surface (see Fig. 1), the band of the upper QL is pushed down while the band of the lower QL is lifted up, resulting in band gap closing, as shown in Fig. 6(c). Figure 6(d) shows that the bands are strongly inverted and a band gap is reopened because of the strong SOC. In this case, the electrically tunable charge transfer 31 and the Rashba SOC compete in pure Bi 2 Se 3 . As the electric field continues to increase, the enhanced charge transfer becomes dominant, resulting in the band gap closing again, as shown in Fig. 6(e).
For the Cr-doped case, as shown in Fig. 7, Cr doping leads to three non-degenerate bands of the upper (red), middle (green) and lower (blue) QLs. As the external electric field increases, the CBM, mainly contributed by QL2, is pushed up, whereas the VBM (contributed by the Cr-containing QL1) is almost fixed, as can be seen in Fig. 7(b) and (c). Thus, the band gap of the Cr-doped Bi 2 Se 3 film keeps enlarging as the external electric field increases.
In summary, we have investigated the effects of an external electric field on the pure and Cr-doped 3QL-Bi 2 Se 3 films using first-principles calculations based on DFT. For the pure Bi 2 Se 3 film, the external electric field closes the band gap at a critical value of 0.026 V/Å. As the electric field continues to increase, the gap reopens and then closes again, which indicates that the band gap of Bi 2 Se 3 is electrically tunable. The competition between the electrically tunable charge transfer and the Rashba SOC is responsible for this phenomenon. For the Cr-doped Bi 2 Se 3 film, investigation of the APEPs of the two surfaces suggests that Cr doping suppresses the charge transfer, resulting in an inhibition of the effect of the external electric field. The calculated band structures indicate that the VBM, which is dominated by the Cr-containing QL1 surface, is almost fixed under different external electric fields. As a result, the band gap of the Cr-doped case can be engineered by the external electric field from 0.099 to 0.235 eV. Furthermore, the magnetic coupling of the MTI is revealed to be tunable by the electric field as well. Our work provides a new route for engineering the SSs of MTIs, promoting the future development of quantum computation and spintronic devices.
Methods
All first-principles density functional theory calculations were performed with the Vienna ab initio simulation package (VASP) 51,52 using the Perdew-Burke-Ernzerhof generalized gradient approximation (GGA-PBE) 53,54 . The ion-electron interaction is described by projector augmented wave (PAW) potentials 55 . To describe the strong electron-electron correlation of the partially filled Cr states, GGA + U calculations with U = 3 eV and J = 0.87 eV are performed 22,23 . In particular, SOC is taken into account due to the strong relativistic effect of Bi. The cutoff energy for the plane-wave expansion of the electron wave function is set to 300 eV in all the calculations. An 11 × 11 × 1 k-point mesh is adopted to sample the surface Brillouin zone of the 3QL Bi 2 Se 3 . We decouple adjacent atomic slabs by using a vacuum layer of 20 Å. All the atoms are allowed to move under the electric field until the forces on each of them are less than 0.02 eV/Å. An illustrative sketch of these settings is given below. | 4,195.8 | 2017-03-08T00:00:00.000 | [
"Physics",
"Materials Science"
] |
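As referenced in the Methods above, a sketch of the corresponding INCAR tags written from Python is shown here. Only the values explicitly stated in the text are taken from it; the remaining tags and the ordering of species for the +U arrays are assumptions of this illustration, not necessarily the authors' input files.

incar_tags = {
    "ENCUT": 300,           # plane-wave cutoff (eV), as stated in the text
    "LSORBIT": ".TRUE.",    # include spin-orbit coupling
    "LDAU": ".TRUE.",       # GGA+U for the Cr states
    "LDAUU": "0 3 0",       # U = 3 eV on Cr (species ordering assumed)
    "LDAUJ": "0 0.87 0",    # J = 0.87 eV on Cr (species ordering assumed)
    "EDIFFG": -0.02,        # relax until forces < 0.02 eV/Angstrom
    "EFIELD": 0.1,          # external field along the slab normal, V/Angstrom
    "LDIPOL": ".TRUE.",     # dipole correction for the applied field (assumed)
    "IDIPOL": 3,            # field/dipole direction along the c axis (assumed)
}
with open("INCAR", "w") as f:
    for tag, value in incar_tags.items():
        f.write(f"{tag} = {value}\n")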
A Small Molecule Inhibitor of Isoprenylcysteine Carboxymethyltransferase Induces Autophagic Cell Death in PC3 Prostate Cancer Cells*♦
A number of proteins involved in cell growth control, including members of the Ras family of GTPases, are modified at their C terminus by a three-step posttranslational process termed prenylation. The enzyme isoprenylcysteine carboxylmethyl-transferase (Icmt) catalyzes the last step in this process, and genetic and pharmacological suppression of Icmt activity significantly impacts on cell growth and oncogenesis. Screening of a diverse chemical library led to the identification of a specific small molecule inhibitor of Icmt, cysmethynil, that inhibited growth factor signaling and tumorigenesis in an in vitro cancer cell model (Winter-Vann, A. M., Baron, R. A., Wong, W., dela Cruz, J., York, J. D., Gooden, D. M., Bergo, M. O., Young, S. G., Toone, E. J., and Casey, P. J. (2005) Proc. Natl. Acad. Sci. U. S. A. 102, 4336–4341). To further evaluate the mechanisms through which this Icmt inhibitor impacts on cancer cells, we developed both in vitro and in vivo models utilizing PC3 prostate cancer cells. Treatment of these cells with cysmethynil resulted in both an accumulation of cells in the G1 phase and cell death. Treatment of mice harboring PC3 cell-derived xenograft tumors with cysmethynil resulted in markedly reduced tumor size. Analysis of cell death pathways unexpectedly showed minimal impact of cysmethynil treatment on apoptosis; rather, drug treatment significantly enhanced autophagy and autophagic cell death. Cysmethynil-treated cells displayed reduced mammalian target of rapamycin (mTOR) signaling, providing a potential mechanism for the excessive autophagy as well as G1 cell cycle arrest observed. These results identify a novel mechanism for the antitumor activity of Icmt inhibition. Further, the dual effects of cell death and cell cycle arrest by cysmethynil treatment strengthen the rationale for targeting Icmt in cancer chemotherapy.
Posttranslational processing of so-called CaaX proteins has received much attention in the past two decades due to the important roles these proteins play in biological regulation and disease (1,2). This processing is initiated by isoprenoid modification of the cysteine residue of the C-terminal CaaX motif of the protein, followed by proteolytic removal of the three C-terminal amino acids, i.e. the -AAX residues, and methylation of the newly exposed carboxyl group of the prenylated cysteine residue. The overall process, termed protein prenylation, has been shown to be important for the localization, stability, and ultimate functions of a broad array of CaaX proteins (3).
Most members of the Ras superfamily of GTPases are CaaX proteins, and Ras proteins themselves, which are farnesylated, have been extensively studied due to the high prevalence of dysregulated Ras signaling in human cancers (4). Inhibitors of protein farnesyltransferase (FTase) 2 have been under development as anticancer agents for over a decade, but their efficacy, especially in solid tumors, has been disappointing (5,6). The realization that some CaaX proteins, including forms of Ras in which mutations are prevalent in human tumors, are subject to alternative prenylation by protein geranylgeranyltransferase I when FTase is inhibited (7) spurred efforts to target the postprenylation processing steps of proteolysis and methylation since each of these steps is catalyzed by a single enzyme that acts on both farnesylated and geranylgeranylated proteins (8,9). In particular, targeting of CaaX protein methylation via inhibition of the enzyme responsible, isoprenylcysteine carboxylmethyltransferase (Icmt), through both genetic and pharmacological approaches, has been shown to dramatically impair oncogenesis in several tumor cell models (10,11).
The mechanism(s) through which inhibition of Icmt impacts on cell proliferation and oncogenesis are still far from clear. Interference with cell cycle progression, however, is a cornerstone of many chemotherapeutic agents, and both FTase inhibition and geranylgeranyltransferase I inhibition have been demonstrated to arrest many types of tumor cells at the G 1 phase of the cell cycle; FTase inhibitors also trigger a G 2 /M arrest in certain cell types (5,6,12). Another important property of cancer chemotherapeutic agents is the ability to induce cell death. The process of apoptosis in particular has been widely studied in this regard, and many current anticancer agents, including CaaX prenylation inhibitors, enhance apoptosis in cells (6,13). Quite recently, autophagic cell death has stepped into the spotlight as a type of programmed cell death with the potential to be enhanced by cancer therapeutics (14,15). As with many biological regulatory processes, autophagy seems to be a double-edged sword in terms of its impact on cell functions. Most cell types display a baseline level of autophagy to clear damaged organelles and unwanted proteins, but dysregulation of the autophagy process can be detrimental to cell survival. Consequently, manipulation of autophagy is now considered to present therapeutic opportunities in several disease states, including cancer (16).
We recently reported the identification of a specific small molecule inhibitor of Icmt, cysmethynil, and demonstrated a mechanism-based impact on tumorigenesis in an in vitro cancer cell model (11). We now report that treatment of PC3 prostate cancer cells with cysmethynil impairs cell cycle progression and, unexpectedly, activates an autophagic process in the cells and promotes autophagy-dependent cell death. Further, treatment of mice harboring xenograft tumors with cysmethynil results in dramatic impairment of tumor growth. These results identify a novel mechanism for the antitumor effects of Icmt inhibition and strengthen the support for targeting Icmt in cancer chemotherapy.
EXPERIMENTAL PROCEDURES
Materials-Cysmethynil and biotin S-farnesylcysteine were synthesized by the Duke Small Molecule Synthesis Facility via established methods (11,17). The cysmethynil analog 1-octyl-m-tolyl-1H-indole (J3) was synthesized via standard chemical procedures and characterized to confirm identity and purity (see the supplemental text and scheme). Stock solutions were prepared at 10 mM in DMSO and stored at -20 °C. Antibodies recognizing cyclins D1 and B1, p27, poly(ADP-ribose) polymerase-2, caspase 3, eukaryotic initiation factor 4E-binding protein 1 (4EBP1), and GAPDH were all from Cell Signaling. The LC3 antibody was from Abgent.
Cell Culture and Proliferation Assays-The PC3 human prostate cell line was obtained from American Type Culture Collection (Rockville, MD). Cells were maintained at 37 °C with 5% CO 2 in DMEM (Sigma) supplemented with 10% fetal bovine serum (Hyclone), 50 units/ml penicillin (Invitrogen), and 50 µg/ml streptomycin (Invitrogen). For proliferation assays, cells were seeded at 15-20% confluency in DMEM containing 5% fetal bovine serum in 96-well plates for 24 h prior to treatment with specific agents (e.g. cysmethynil) or vehicle at the concentrations and lengths of time indicated in the legends for Figs. 1-2. The relative number of live cells was determined using the CellTiter 96 AQueous One Solution cell proliferation assay (Promega). Each condition was performed in quadruplicate, and the data presented were obtained from at least three separate experiments. For proliferation studies performed with 3-methyladenine (3-MA) in addition to cysmethynil, cells were seeded as above and incubated with 0.5 mM 3-MA or vehicle at 37 °C overnight. Cells were then treated with the concentrations of cysmethynil indicated in the legend for Fig. 1 in the continued presence of 0.5 mM 3-MA for the indicated durations.
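The proliferation readout from such a CellTiter assay is typically expressed relative to the vehicle control; a minimal sketch of that normalization (the absorbance values below are invented placeholders, not data from the study) is:

import numpy as np

def relative_viability(treated_od, vehicle_od, blank_od=0.0):
    # Express absorbance readings of quadruplicate wells as percent of vehicle.
    treated = np.asarray(treated_od, dtype=float) - blank_od
    vehicle = np.asarray(vehicle_od, dtype=float) - blank_od
    return 100.0 * treated.mean() / vehicle.mean()

print(relative_viability([0.41, 0.39, 0.43, 0.40], [0.82, 0.85, 0.80, 0.83]))  # ~49%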
Flow Cytometry Analysis-PC3 cells (5 × 10 5 ) were seeded in 100-mm dishes in DMEM containing 5% fetal bovine serum and incubated for 24 h, whereupon the cells were exposed to cysmethynil, the J3 analog, or vehicle at the concentration and for the time indicated in the appropriate figure legend. Cells were harvested by centrifugation at 300 × g for 5 min, whereupon the cells were then fixed in ice cold 70% ethanol overnight before being resuspended in phosphate-buffered saline containing 40 µg/ml propidium iodide and 0.1 mg/ml Ribonuclease A (both from Sigma) for 1 h at 37 °C. Fluorescence was measured by flow cytometry analysis using an Excalibur Instrument (BD Biosciences).
Immunofluorescence of LC3-Cells subjected to the indicated treatments were fixed with 4% paraformaldehyde and permeabilized with 0.3% Triton using a standard protocol (18). Incubation with primary antibody to LC3 (MAP1LC3B, Abgent) was performed at 4°C overnight before incubation with Rhodamine Red-X secondary antibody (Jackson ImmunoResearch Laboratories). Analysis was performed using an Olympus fluorescent microscope fitted with the appropriate excitation and emission filters.
Acridine Orange Staining for Autophagy Detection-Acridine orange (Sigma) staining was performed according to a published protocol (19). Briefly, cells were washed twice with phosphate-buffered saline and then stained with 1 µM/ml acridine orange for 15 min at 37 °C. Analysis was performed via fluorescence microscopy using 490-nm band-pass blue excitation filters and a 515-nm long-pass barrier filter. In the study of the effect of bafilomycin co-treatment, cells were treated with 200 nM bafilomycin A1 for 40 min prior to the addition of acridine orange.
Atg5 Knockdown-Stealth siRNA duplex oligoribonucleotides targeting Atg5 (Invitrogen) were resuspended to make a 20 µM solution following the manufacturer's instructions. Transfections were carried out using the Lipofectamine 2000 protocol provided by Invitrogen. Conditions were optimized with varying ratios of Lipofectamine and RNA, as well as different time intervals after the transfections, as determined by immunoblot analysis of atg5 protein levels. Cell proliferation of both the atg5 siRNA-treated cells and the mock-transfected cells was then assessed using the assay described above.
Cysmethynil Treatment of Xenograft-harboring Mice-PC3 cells were grown in DMEM and 10% fetal calf serum until near confluence and then harvested with a standard method using trypsin. Cells (5 × 10 6 ) were then mixed with Matrigel (BD Biosciences) to achieve 40% Matrigel in the final solution. The cell preparation was then injected subcutaneously into the flanks of 6-8-week-old SCID mice. For treatment, cysmethynil was prepared in 4% DMSO, 1.4% Tween 80, and 1% sodium carboxymethyl cellulose (Sigma) in normal saline solution; the vehicle control was the same mixture lacking cysmethynil. In a preliminary acute toxicity study, mice were injected intraperitoneally with vehicle control or 0.1, 0.2, 0.3, 0.6, and 1.0 mg/g of cysmethynil. The control, 0.1 mg/g-, and 0.2 mg/g-injected mice showed few signs of distress after the injection. The 0.3 mg/g and higher dosings resulted in sluggishness and diarrhea in the first 24 h after injection, although all mice recovered 2 days after injection. Based on the acute toxicity study, cysmethynil was dosed at 0.1 and 0.2 mg/g in two groups, together with the control group, at 48-h intervals. The animals were monitored for their general appearance and health, as well as body weight. Subcutaneous tumors were measured with the standard caliper ruler method. Final tumor weight at the end of the study was documented after animal sacrifice and dissection of the tumor tissue.
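The text does not specify the volume formula behind the caliper measurements; a commonly used approximation for subcutaneous xenografts (an assumption of this sketch, not necessarily the authors' choice) is the ellipsoid formula V = L × W² / 2, which can be applied as follows:

def tumor_volume_mm3(length_mm, width_mm):
    # Common caliper approximation for subcutaneous xenografts (assumed here).
    return length_mm * width_mm ** 2 / 2.0

def percent_growth_inhibition(treated_volumes, control_volumes):
    # Tumor growth inhibition of the treated group relative to vehicle controls.
    t = sum(treated_volumes) / len(treated_volumes)
    c = sum(control_volumes) / len(control_volumes)
    return 100.0 * (1.0 - t / c)

print(tumor_volume_mm3(10.0, 6.0))                                   # 180.0 mm^3
print(percent_growth_inhibition([180, 150, 200], [400, 450, 380]))   # ~57% (toy numbers)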
Miscellaneous Procedures-Icmt activity was determined by the in vitro assay described previously (17,20). Briefly, the assay was carried out using recombinant Icmt produced in Sf9 cells, biotin S-farnesylcysteine as the prenylcysteine substrate, and [3H]S-adenosylmethionine. Apoptosis was assessed by determination of activation of caspase 3 and poly(ADP-ribose) polymerase-2 through immunoblot analysis, DNA fragmentation assay, and microscopic observation of the abnormalities of nuclei stained with 4',6-diamidino-2-phenylindole as described (21). To perform immunoblot analysis, cells subjected to the indicated treatment were harvested and lysed, and protein concentration was determined by Micro BCA protein assay (Pierce). Proteins were separated by 14% SDS-PAGE, and subsequent immunoblot procedures were performed using an enhanced chemiluminescence procedure (GE Healthcare) per the manufacturer's instructions.
Cysmethynil Exhibits Mechanism-based Antiproliferative Activity toward Prostate Cancer Cells-To facilitate a rapid evaluation of the mechanism-based consequences of cysmethynil treatment on cells, we sought a structural analog of the compound that lacked activity toward the enzyme. Molecular modeling of available structure-activity data on the chemical series from which cysmethynil was identified (22) suggested that the amide portion of the indole ring might be important in this activity. An analog lacking only the amide portion was synthesized, termed J3 (Fig. 1A), and evaluated for in vitro activity toward Icmt. This analysis, shown in Fig. 1B, revealed that the J3 analog was essentially devoid of Icmt inhibitory activity despite its chemical similarity to cysmethynil.
The impact of treatment by cysmethynil and the inactive J3 analog on the prostate cancer-derived cell line PC3 was evaluated using a cell viability assay. Cysmethynil treatment resulted in a dose- and time-dependent reduction in the number of viable PC3 cells, whereas the J3 analog at the highest corresponding dose was ineffective (Fig. 1C). These data, along with a previous study using a colon cancer cell line in which the antiproliferative activity of cysmethynil was markedly diminished by overexpression of Icmt in the cells (11), provide compelling evidence that the impact of cysmethynil treatment on cells is directly due to inhibition of Icmt. An additional finding from this study is that at moderate dosage, cysmethynil appears to exhibit primarily cytostatic activity, whereas at higher concentrations, both cytostatic and cytotoxic activity are observed (Fig. 1C).
Cysmethynil Is Efficacious in Controlling Tumor Growth in a Xenograft Mouse Model of Prostate Cancer-To investigate the ability of cysmethynil to inhibit tumor growth in vivo, we first conducted a dose escalation toxicity trial in Balb/C mice and found that an intraperitoneal dose of less than 0.3 mg/g was well tolerated. We then established a xenograft model of PC3 cells in SCID mice. Tumor cells were implanted subcutaneously in the flank. When tumors started to increase in size (usually 100-200 mm 3 ), confirming the successful grafting of the tumor cells, mice were randomly assigned to control (vehicle) and cysmethynil treatment groups. Mice thus bearing established PC3 tumors were given intraperitoneal injections of vehicle only, 0.1 or 0.2 mg/g of cysmethynil every 48 h for 28 days. The duration of the experiments was dictated by the tumor burden of the control mice. As shown in Fig. 2A, both doses of cysmethynil significantly suppressed the growth of PC3 tumors when compared with the vehicle control group. Neither dose was accompanied by any appreciable toxicity as assessed by body weight determinations of the treated mice when compared with the control group (Fig. 2B). These data indicate that pharmacological inhibition of Icmt in vivo significantly impacts on tumor growth and further reinforces the notion of Icmt as an anticancer drug target.
Cysmethynil Treatment Induces Cell Cycle Arrest in PC3 Cells-As noted in the Introduction, there is considerable evidence that inhibition of CaaX protein processing by either of the protein prenyltransferases can impact on cell cycle progression. This information, coupled with the finding that moderate doses of cysmethynil have apparently predominant cytostatic activity on PC3 cells (Fig. 1C), prompted us to examine more closely cell cycle parameters in cells treated with this Icmt inhibitor. Flow cytometry analysis of PC3 cells treated for 48 h with 20 µM cysmethynil showed a significantly increased population of cells in G 1 (Fig. 3A). Further examination of molecular markers associated with G 1 arrest showed remarkable changes, including increased p27 levels, reduced cyclin D1 levels, and an almost complete loss of phospho-Rb (Fig. 3B). These data are all consistent with the G 1 arrest observed in the flow cytometry analysis.
Cysmethynil Treatment Induces Autophagic Cell Death-Although the ability of cysmethynil treatment to trigger a G 1 arrest in PC3 cells could account for the cytostatic capacity of this compound, the reduction in cell count following longer term and higher dose treatments in vitro suggested that an increase in cell death was also being triggered by Icmt inhibition. Our initial set of experiments to examine this potential outcome of cysmethynil treatment was focused on apoptotic pathways since, as noted above, inhibition of CaaX protein processing had been linked to elevated apoptosis in many cell types (6). However, no consistent impact on apoptotic markers such as caspase 3 and poly(ADP-ribose) polymerase-2 cleavage was observed in PC3 cells treated at the concentration of cysmethynil that decreased the number of viable cells (data not shown). Additionally, neither DNA fragmentation nor abnormal nuclear morphology was observed in PC3 cells following cysmethynil treatment at the concentration that resulted in cell death (not shown), arguing against a significant contribution of apoptosis to diminished cell viability.
The inability to link apoptosis to the cell death observed in the PC3 cells following cysmethynil treatment prompted us to consider whether the drug induced a non-apoptotic form of cell death. Specifically, we examined autophagy, a process that involves the degradation of cellular components through an autophagosome-lysosome pathway, as this process has recently become appreciated as important in cell growth and survival (14). Indeed, cysmethynil treatment of PC3 cells resulted in a dramatic elevation of the LC3-II protein (Fig. 4A), which is characteristic of activation of the autophagic pathway (23). Further cell-based analysis employing anti-LC3 immunofluorescence revealed that the LC3 protein in drug-treated cells both increased in total abundance and also aggregated into granular/vesicular structures (Fig. 4B), consistent with the expected changes that signal autophagosome formation (23).
Since different cell types exhibit varying degrees of autophagocytosis at baseline (14), we quantified the autophagosomes in the cysmethynil- and vehicle-treated cells by determination of the fraction of cells exhibiting an elevated level of LC3 aggregation; this quantitation showed that cysmethynil treatment significantly elevated the cellular abundance of autophagosomes (35% versus 2%, Fig. 4B).
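A simple way to express such a quantitation is the percentage of scored cells whose LC3 puncta count exceeds a cutoff; the sketch below assumes per-cell puncta counts have already been tallied from the images, and the cutoff of 5 puncta is purely illustrative, not the authors' criterion:

def punctate_fraction(puncta_counts_per_cell, threshold=5):
    # Percent of scored cells showing elevated LC3 aggregation.
    positive = sum(1 for n in puncta_counts_per_cell if n >= threshold)
    return 100.0 * positive / len(puncta_counts_per_cell)

# Fractions of ~35% (cysmethynil) versus ~2% (vehicle) would correspond to the
# quantitation reported above.
print(punctate_fraction([12, 0, 8, 1, 0, 9, 0, 7, 0, 0]))  # 40% in this toy example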
The process of autophagy starts with the autophagosome formation and then progresses to autophagolysosomes through the fusion of acidic lysosomes with autophagosomes (15). Therefore, acridine orange staining of the live cells was also employed to visualize acidic autophagolysosomes in control and cysmethynil-treated PC3 cells. As shown in Fig. 4C, cysmethynil treatment markedly elevated the amount of autophagolysosomes in the cells, providing further evidence that the autophagic process was being activated by drug treatment and that the autophagosome formation is not the result of the inhibition of lysosomal fusion.
Bafilomycin A1, an inhibitor of vacuolar H+ ATPase, prevents the transition of autophagosomes to autophagolysosomes by disrupting the fusion of autophagosomes with lysosomes (24). Hence, bafilomycin A1 provides a useful way of studying the autophagy process. Treatment of the PC3 cells with bafilomycin A1 markedly reduced the quantity of acridine orange-positive vesicles (Fig. 4C), confirming that the autophagosomes produced by the treatment with cysmethynil undergo the same maturation process of fusion with lysosomes. Indeed, in the bafilomycin A1-treated cells, LC3-II levels remained elevated despite a marked reduction of acidic vesicles (not shown), indicating that the earlier stages of autophagy prior to lysosomal fusion were not affected by this lysosome fusion inhibitor.
Inhibition of Autophagy Protects PC3 Cells from Cell Death Elicited by Cysmethynil Treatment-To assess whether the induction of autophagy observed in cysmethynil-treated PC3 cells actually contributed to the cell death elicited by treatment with the compound, we first assessed whether 3-MA, a known inhibitor of autophagy that acts through inhibition of type 3 PI 3-kinase (25), could alleviate cysmethynil-induced cell death. PC3 cells were treated with vehicle alone, cysmethynil, 3-MA alone, or 3-MA plus cysmethynil, and viability of the cells was assessed 72 h later. As seen in Fig. 5A, 3-MA treatment alone had little impact on cell viability; the viability of cysmethynil-treated cells was markedly increased if 3-MA was present during the course of the treatment. Immunoblot analysis showed that the cells treated with both 3-MA and cysmethynil had much lower levels of the autophagy marker LC3-II when compared with the cells treated with cysmethynil alone, suggesting that the autophagic process stimulated by cysmethynil is subject to regulation by type 3 PI 3-kinase (Fig. 5A).
We also employed a knockdown strategy to impair autophagy to further assess its impact on cysmethynil-induced cell death. RNA interference-mediated knockdown of atg5, a crucial component of the autophagy cascade (14), markedly reduced cell death elicited by cysmethynil treatment (Fig. 5B). This provides additional evidence supporting a crucial role for autophagy-dependent cell death in the efficacy of cysmethynil. Knockdown of atg5 also resulted in the reduction of LC3-II production in cysmethynil-treated cells (Fig. 5B), confirming a direct impact of the knockdown on the autophagic process in the cells. Taken together, the survival benefits achieved with both 3-MA treatment and atg5 RNA knockdown in the cysmethynil-treated PC3 cells provide compelling evidence that cysmethynil not only induces autophagy but that the accompanying autophagy-dependent cell death contributes significantly to the efficacy of cysmethynil in inducing cancer cell death.
Cysmethynil Treatment Impacts on mTOR Signaling in PC3 Cells-The data presented above indicate that cysmethynil treatment induces both G 1 cell cycle arrest and autophagy; a common link of these two processes is that they can be modulated by mTOR signaling. Two types of CaaX proteins are known to be important in regulating mTOR signaling, Ras GTPases and the Rheb GTPase. Ras activates PI 3-kinase, leading to the activation of Akt, which in turn activates Rheb by inhibiting tuberous sclerosis complex, a negative regulator of Rheb. Rheb positively regulates mTOR kinase, with a resultant positive impact on cell cycle progression and negative impact on autophagy (26). Inhibition of Ras methylation has been shown to impair Ras activity (11); Rheb could also be affected by Icmt inhibition, and this could further impact mTOR signaling. Indeed, cysmethynil treatment of PC3 cells resulted in a marked reduction of phosphorylation of Akt (Fig. 6A), as would be expected from inhibition of Ras. In addition, phosphorylation of the mTOR downstream effector ribosomal protein S6 was markedly reduced in cysmethynil-treated cells, and another effector, 4EBP1, showed the characteristic shift in phosphorylation pattern consistent with inhibition of the mTOR kinase (Fig. 6A). Furthermore, Rheb levels in PC3 cells were markedly reduced following 48 h of cysmethynil treatment, providing evidence that the general availability and activity of Rheb GTPase were reduced. Based on these data, we propose a model whereby inhibition of Icmt leads to a reduction of Ras and Rheb activity and consequent inhibition of Akt and mTOR signaling, contributing to the effects on cell cycle progression and autophagy (Fig. 6B).
DISCUSSION
Autophagy as a means of induction of cancer cell death has attracted increasing attention recently, especially in circumstances of apoptosis-resistant cancer cells. For example, the mTOR inhibitor rapamycin has been reported to induce cell death through autophagy in glioblastoma cell lines that are resistant to many therapies that induce apoptosis (27). In this regard as well, a recent study showed that temozolomide treatment alone of glioblastoma cell lines causes autophagic cell death (28). Although the realization that autophagic cell death may be an important component of the action of certain cancer drugs is relatively new, the connection between the PI3K/Akt/ mTOR inhibition and autophagy induction has been well established (14).
Activation of the PI3K/Akt/mTOR signaling pathway enhances cell proliferation, survival, and growth and inhibits autophagy in many cells, and aberrant up-regulation of the PI3K/Akt/mTOR axis has been frequently linked to oncogenesis in many cancers (29). In many cases, up-regulation of the pathway has been linked to loss of the phosphatase and tensin homolog (PTEN) phosphatase. Although these aberrancies do contribute to resistance to therapies in many cancers, they might also render the cancers more sensitive to mTOR inhibition, as the sustained activation would render the cells more dependent on this pathway for proliferation (21,30). On this note, the PC3 prostate cancer cell line used in the current studies is PTEN-null and exhibits elevated intrinsic PI3K/Akt/mTOR signaling, which likely contributes to the resistance of this cell line to a variety of therapeutic interventions. The efficient induction of autophagic cell death by cysmethynil suggests that inhibition of CaaX protein processing in this group of more resistant cancers could provide an effective alternative to, or enhancement of, existing chemotherapies.
Targeting mTOR signaling by cysmethynil makes use of two fundamental mechanisms of treating cancer, induction of cell death (through autophagy) and inhibition of cell cycle progression. Inhibiting mTOR blocks the phosphorylation of two key downstream effectors, p70S6 kinase and 4EBP1. Both proteins play important roles in translational regulation; in particular, inhibition of expression of the G 1 cell cycle regulatory protein cyclin D1 leads to G 1 arrest in cells in which mTOR is inhibited (30). The ability of cysmethynil to induce G 1 arrest as well as autophagy could make it broadly applicable as an anticancer agent.
The identification of cysmethynil as a specific and bioavailable Icmt inhibitor provides a selective tool to probe the involvement of Icmt in both normal and pathological cellular processes. The current study not only strengthens the rationale for targeting Icmt as an anticancer strategy from a mechanistic standpoint but also demonstrates significant in vivo efficacy of cysmethynil through its administration to mice bearing xenograft prostate cancer-derived tumors. Our data clearly indicate that an induction of autophagy by cysmethynil is a major contributor to the cell death that accompanies pharmacological inhibition of Icmt. Although the specific CaaX protein(s) underlying this phenomenon have not yet been unambiguously identified, the Ras and Rheb GTPases are potential players due to their abilities to control PI 3-kinase and mTOR signaling. | 5,675.6 | 2008-07-04T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Synthesis, Characterization, and Antibacterial Evaluation of a Cost-Effective Endodontic Sealer Based on Tricalcium Silicate-White Portland Cement
Mineral trioxide aggregate (MTA) is an ideal yet costly endodontic sealer material. Tricalcium silicate-white Portland cement (TS-WPC) seems to have similar characteristics to those of MTA. This work aims to characterize a modified TS-WPC and evaluate its antibacterial properties as a potential endodontic sealer material. The modified TS-WPC was synthesized from a 4:1 mixture of sterilized Indocement TS-WPC and bismuth trioxide using a simple solution method with 99.9% isopropanol. The mixture was stirred until it was homogenous, centrifuged, and dried. The material was then characterized using infrared spectroscopy, X-ray diffraction, and electron microscopy and subjected to antibacterial evaluation against Enterococcus faecalis using a Mueller–Hinton agar inhibition test. The results showed that the material was characterized by main functional groups of hydroxyls, silicate, bismuth trioxide, and tricalcium silicate, like those of a commercial MTA-based sealer, both tested after hydration. The modified TS-WPC before hydration showed similar powder morphology and size to the commercial one, indicating ease of manipulation. Both materials exhibited antibacterial activity due to calcium hydroxide's ability to absorb carbon dioxide, which is essential for the anaerobic E. faecalis, with minimum inhibitory and bactericidal concentrations of 12,500 ppm and 25,000 ppm, respectively. The modified TS-WPC has the potential to become a cost-effective alternative endodontic sealer material.
Introduction
Endodontic treatment is regularly performed to eliminate necrotic tissue and pathogenic bacteria and to assist the healing process of the tissue surrounding the damaged tooth while preventing bacterial reinfection. The treatment consists of three principal biomechanical preparation processes, including cleaning and shaping, followed by the sterilization and obturation of the root canal, known as the endodontic triad, which is the key to a successful root canal treatment [1,2]. During this treatment, the root canal cavity must be sterilized from bacterial infection, including the opportunistic Enterococcus, an abnormal flora in the oral cavity habitat that generally colonizes temporarily in individuals with a weak immune system. One of these antibiotic-resistant bacteria, Enterococcus faecalis, has been isolated from various oral conditions, including carious lesions and chronic periodontitis, and is associated with persistent apical periodontitis [3,4]. Successful endodontic treatment depends on the selection and application of ingredients or materials for completing obturation, where the root canal is filled with synthetic materials that cover the canal and inhibit the development of the remaining microorganisms [5,6]. The root canal filling material is a combination of a core material, with gutta-percha as the most widely used one and the gold standard, and a sealer, which is a paste that fills the irregular space between the walls of the gutta-percha and dentin up to the lateral channel [6,7].
An ideal sealer has many features, including being bactericidal, having the capacity to form a hermetic seal and adhere to the dentin wall, being easy to apply to the root canal without shrinkage after hardening (nor expanding during hardening), and having an adequate working time and consistency [7,8]. One of the most used sealer materials is mineral trioxide aggregate (MTA), a bioceramic-based material consisting of various calcium compounds such as calcium silicate (Ca 3 SiO 5 , Ca 2 SiO 4 , CaSiO 3 ), calcium aluminate (CaAl 2 O 4 ), and calcium sulfate (CaSO 4 ·2H 2 O) [9,10]. Calcium silicate is known to have excellent broad-spectrum antibacterial power, induces apatite formation in hard tissues, and is thus osteoconductive [6,11,12]. Dicalcium silicate and tricalcium silicate produce calcium hydroxide when reacting with water, providing a strong alkalinity and absorbing carbon dioxide, which is essential for the survival of anaerobic bacteria [10,13,14]. MTA-based sealers offer advantages in their usability for teeth obturation with an open apex, perforated lesions, and resorption damage due to their ability to harden under humid conditions and induce cementogenesis and dentinogenesis with a strong bond with the dentin wall [15,16]. However, their usage is still rare because of their expensive price.
A commercial MTA generally consists of about 75 wt.% Portland cement, 20 wt.% bismuth trioxide, and 5 wt.% calcium sulfate/gypsum as a setting modifier in the powder mixture [17]. Bismuth trioxide (Bi 2 O 3 ) has been widely used for more than a decade as a radiopacifier in the MTA mixture [18][19][20]. Many investigations have revealed that inert Bi 2 O 3 is not involved in the setting reaction and has no effect on Portland cement during the hydration reaction. However, the incorporation of Bi 2 O 3 as a radiopacifier will produce flaws, increase porosity by leaving more unreacted water in the hydration reaction of the Portland cement, and may reduce the mechanical stability of the sealer [21]. Some preliminary studies showed a compositional similarity of MTA with tricalcium silicate-white Portland cement (TS-WPC), except for the absence of bismuth trioxide in the latter [22][23][24]. This compositional similarity may give it similar material characteristics that could make TS-WPC an interesting cost-effective alternative to commercial MTA. Therefore, this work aims to evaluate the potential application of a modified TS-WPC as an endodontic sealer material by synthesizing and characterizing the material and evaluating its antibacterial properties.
Materials and Methods
The materials used in this work were TS-WPC powders (Indocement Ltd., Cirebon, Indonesia), bismuth trioxide (Bi 2 O 3 , Shanghai Xinglu Chemical Technology Co. Ltd., Shanghai, China), and a commercial ProRoot ® MTA sealer package (Dentsply Sirona, Ballaigues, Switzerland). All the materials were sterilized under ultraviolet light for 1 h before use. The synthesis of the modified TS-WPC (hereafter called TS-WPC for simplicity) sealer material was conducted by dissolving 80 mg of TS-WPC in 100 mL of 99.9% isopropanol, then 20 mg of Bi 2 O 3 was mixed into the dissolved TS-WPC. The mixture was stirred for 30 min using a magnetic stirrer at a rotation speed of 400 rpm until it was homogenous, then transferred into tubes for centrifugation at 1000 rpm for 10 min. After the supernatant was discharged, the pellet was dried under a vacuum for 60 min to evaporate the isopropanol. The TS-WPC powder samples and the ProRoot ® MTA (hereafter called MTA) for comparison were then characterized by Fourier transform infrared spectroscopy (FT-IR, Spectrum 100, PerkinElmer Inc., Shelton, WA, USA) to obtain an infrared spectrum of absorption, emission, and photoconductivity for different functional groups; X-ray diffraction (XRD, D2 Phaser, Bruker Corp., Santa Barbara, CA, USA) to obtain detailed information about the crystallographic structures; and scanning electron microscopy and energy dispersive X-ray spectroscopy (SEM-EDS, JSM-6510A, Jeol Ltd., Tokyo, Japan) to observe the surface morphology at high magnification and to analyze the chemical composition.
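The powder ratio used above is simply 4:1 by mass (80 mg TS-WPC to 20 mg Bi2O3); a tiny helper for scaling a batch to any total mass, included purely as an illustration, could be:

def batch_masses(total_mg):
    # Split a batch into the 4:1 TS-WPC : Bi2O3 mass ratio used in the synthesis.
    ts_wpc = total_mg * 4 / 5
    bi2o3 = total_mg / 5
    return ts_wpc, bi2o3

print(batch_masses(100))   # (80.0, 20.0) mg, matching the reported amounts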
The powder samples were also subjected to antibacterial evaluation against E. faecalis (ATCC 29212) using the Mueller-Hinton Agar method to determine the inhibition zone, the minimum inhibitory concentration (MIC), and the minimum bactericidal concentration (MBC). The inhibition zone was determined by spreading 100 µL of E. faecalis suspension onto the MHA surface with 6 mm diameter wells, then samples of 20 mg of the TS-WPC and MTA powders were applied onto the wells and incubated for 24 h at 37 °C in an anaerobic incubator.
The tests were performed in triplicate, and the inhibition zone diameter was then measured using a millimeter-scale caliper as the average of the three tests. To evaluate the MIC, both samples were diluted in 9% NaCl to obtain a range of concentrations up to 400,000 ppm. Then, 100 µL of each sample was inserted into the designated 96-well microplates, where 10 µL of Mueller-Hinton broth containing bacteria was then added, followed by incubation at 37 °C for 24 h, and ended by measuring the turbidity in a microplate reader. The MIC was indicated by the wells containing the lowest concentration that was still able to inhibit bacterial growth. The MBC was evaluated by preparing two solutions more concentrated than the MIC and two solutions more dilute than the MIC, which were sub-cultured (planted) on the MHA media and incubated at 37 °C for 24 h. This sub-culturing was performed because the microplate reader measures the dilutions' turbidity by optical density alone and cannot separate turbidity caused by bacterial growth from the turbidity caused by the two sample materials' different densities. The MBC was determined from the agar media with no bacterial colony.
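The MIC and MBC read-outs described above reduce to two simple selections over the dilution series; the sketch below uses invented OD and colony-count values (and an arbitrary OD cutoff) purely to illustrate the logic:

def find_mic(concentrations_ppm, od_values, od_cutoff=0.05):
    # Lowest concentration whose well shows no measurable growth in the reader.
    inhibitory = [c for c, od in zip(concentrations_ppm, od_values) if od < od_cutoff]
    return min(inhibitory) if inhibitory else None

def find_mbc(concentrations_ppm, colony_counts):
    # Lowest concentration yielding no colonies after sub-culture on agar.
    sterile = [c for c, n in zip(concentrations_ppm, colony_counts) if n == 0]
    return min(sterile) if sterile else None

concs = [6250, 12500, 25000, 50000]
print(find_mic(concs, [0.30, 0.04, 0.03, 0.02]))   # -> 12500 ppm
print(find_mbc(concs, [40, 7, 0, 0]))              # -> 25000 ppm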
Results
The TS-WPC and MTA samples presented some similarities in their IR spectra before and after hydration (Figure 1a). Hydration was performed by reacting the samples with distilled water to simulate the conditions under which a sealer is applied to a root canal. The identified absorption bands are summarized in Table 1. Further characterization by XRD identified the presence of four compounds in both samples before hydration (Figure 1b), which are tricalcium silicate (Ca 3 SiO 5 ), bismuth trioxide (Bi 2 O 3 ), dicalcium silicate (Ca 2 SiO 4 ), and tricalcium aluminate (Ca 3 Al 2 O 6 ). After hydration, the peaks of the four compounds were still present with the new appearance of calcium hydroxide (Ca(OH) 2 ) peaks. The peaks appeared very weak due to the relatively short contact time (30 min) between the samples and water during hydration. Both the XRD and FT-IR results confirmed a chemical similarity between the TS-WPC and MTA samples, while SEM observation showed their morphological similarity (Figure 2). Both powders had an octahedral shape, with an average size of 5 µm, and elemental bismuth was present on the powders as Bi 2 O 3 .
The bacterial inhibition zones of the two tested samples indicated a similar measured diameter (Figure 3). The average diameter of the formed clear area was 15.33 mm for the TS-WPC and 15.36 mm for the commercial MTA. Table 2 shows that some E. faecalis colonies still existed at the concentration of 12,500 ppm. The bacteria were killed and declared to be depleted at the concentration of 25,000 ppm.
Discussion
The TS-WPC powder samples were characterized using FT-IR, XRD, and SEM-EDS, along with samples of the MTA as a comparison. The characterization results showed some similarities between the lab-made TS-WPC samples and the commercial MTA sealer in terms of chemistry, crystallinity, and granulometry. On the IR spectra, both samples presented strong peaks of the Si-C/C=C bond, weaker peaks of the Si-O and C-H/C-O bonds, and, beyond the fingerprint band, peaks of the Si-H bond. Once hydrated, new peaks of Ca(OH) 2 appeared while preserving the high-intensity peaks of the C-H/C-O and Si-C/C=C bonds. The appearance of Ca(OH) 2 was again identified in the XRD patterns of the hydrated samples, while before hydration four main compounds were identified as Ca 3 SiO 5 , Bi 2 O 3 , Ca 2 SiO 4 , and Ca 3 Al 2 O 6 . These results were in line with the previous research that confirmed the presence of Ca(OH) 2 [25]. Calcium hydroxide (Ca(OH) 2 ) has long been used as one of the most effective root canal medicaments. Calcium hydroxide works by releasing Ca 2+ ions, which play a role in the mineralization process of tissues, and OH − ions, which can provide an antimicrobial effect by increasing the pH so that an alkaline environment is formed, which causes most of the microorganisms in the root canals to be unable to survive [26]. The characterization results clearly showed that TS-WPC with 20 wt.% Bi 2 O 3 added as a radiopacifier had the same configuration of peaks as that of the commercial MTA, and both materials were compositionally similar. These similar material characteristics should provide similar properties, i.e., antibacterial capabilities. In terms of granulometry, both samples have octahedral-shaped powders with an average size of 5 µm, and elemental bismuth was evidently present on the powders as Bi 2 O 3 , as observed in the work of Bedoya-Hincapie et al. [27].
The usage of endodontic sealers to perform root canal fillings in obturation procedures is an established mainstay in endodontics and plays a crucial role in the success of the treatment. Therefore, these materials should exhibit a set of characteristics that allow successful root canal filling with the resolution of periapical inflammatory and/or infectious processes and prevent further microbial contamination [28]. Based on the results of the antibacterial evaluation against E. faecalis using the Mueller-Hinton Agar method, the two samples presented similar diameters of bacterial inhibition zones of around 15 mm, falling within the classification of intermediate sensitivity, with a diameter between 14 and 22 mm [29]. The dense consistency of the sealer after its setting time of within 24 h is one of the factors limiting the diffusion of the antibacterial agent into the more distant agar area. Therefore, this TS-WPC sealer's application will require direct contact in the root canal cavity to eliminate bacteria optimally [29][30][31]. The TS-WPC sample achieved its minimum inhibitory concentration (MIC) at 12,500 ppm, while its minimum bactericidal concentration (MBC) was reached at 25,000 ppm. This MBC value is interestingly far below the concentration commonly used in root canal sealer applications.
The antibacterial capability of both samples can be attributed to the presence of the Ca(OH) 2 compound, which created a strongly alkaline condition and provided hydroxyl ions, which is consistent with the finding that the application of intracanal medicaments containing Ca(OH) 2 compounds was capable of producing negative cultures of E. faecalis [32,33]. Most pathogenic bacteria in the root canal cannot survive the strongly alkaline conditions, with a pH of around 12.5, produced by Ca(OH) 2 . Generally, bacteria will be eliminated some time after contact with Ca(OH) 2 , as they can only tolerate a pH range between 6 and 9 [34]. The high pH conditions created by Ca(OH) 2 inhibit enzyme activity during bacterial cell metabolism [35]. Alkalinity induces the breakdown of the ionic bonds that maintain the tertiary structure of proteins, so that only covalent bonds are maintained, leading to the formation of irregularly bonded polypeptide chains. These changes eventually cause the loss of the enzymes' biological activity, thus damaging cell metabolism [30].
The second antibacterial capability of Ca(OH) 2 comes from the hydroxyl ions, which are highly reactive free-radical oxidants with strong reactivity toward the bacterial cytoplasmic membrane [31]. Bacterial cytoplasmic membranes have essential functions in cell defense mechanisms, such as selective permeability and fluid transport; secretion of hydrolytic exoenzymes; and delivering enzymes and molecules that function in the biosynthesis of DNA, cell wall polymers, and membrane lipids [30]. Hydroxyl ions induce lipid peroxidation, which damages phospholipids and the cell membrane structure. Hydroxyl ions abstract hydrogen atoms from unsaturated fatty acids, producing lipid free radicals. These free radicals react with oxygen and form lipid peroxide radicals that abstract further hydrogen atoms from other fatty acids, producing more lipid peroxides. The peroxide acts as a free radical that automatically initiates a catalytic chain reaction, consuming more fatty acids and sustaining more severe membrane damage. Hydroxyl ions also react with bacterial DNA and induce splitting of the DNA strands, which can cause the loss of the bacterial genetic code, leading to a failure in DNA replication, so that cell activity does not occur [30]. Another contributing factor to the antibacterial capability of Ca(OH) 2 is its ability to absorb the CO 2 stored in the periradicular tissue of the root canal cavity, eliminating an atmosphere essential for the survival of anaerobic bacteria such as E. faecalis [34,35].
This modified TS-WPC could become a cost-effective alternative endodontic sealer material once its biocompatibility is confirmed. It may become one of many alternative therapies for treating root canal problems, alongside the easily accessible stem cells that can be isolated from various tissues of the oral cavity and serve as sources for bone and dental regeneration [36].
Conclusions
A modified tricalcium silicate-white Portland cement (TS-WPC) was synthesized using a simple solution method and possessed similar material characteristics and antibacterial properties to those of a commercial mineral trioxide aggregate (MTA)-based sealer material. The TS-WPC was characterized by powder morphology, hydroxyl and silicate groups, and bismuth trioxide and tricalcium silicate phases similar to those of the commercial MTA-based sealer. The TS-WPC and MTA samples exhibited antibacterial activity against the anaerobic E. faecalis bacteria, with minimum inhibitory and bactericidal concentrations of 12,500 ppm and 25,000 ppm, respectively. Although the TS-WPC and the commercial MTA were shown to be compositionally similar, further studies of TS-WPC are required to improve its mechanical stability and confirm its biocompatibility. The modified TS-WPC has the potential to become a cost-effective alternative endodontic sealer material.
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest:
The authors declare no conflict of interest. | 3,823.4 | 2021-01-01T00:00:00.000 | [
"Materials Science"
] |
Distributed Stochastic Model Predictive Control for Scheduling Deterministic Peer-to-Peer Energy Transactions Among Networked Microgrids With Hybrid Energy Storage Systems
The current tendency toward increases in energy prices makes it necessary to discover new ways in which to provide electricity to end consumers. Cooperation among the various self-consumption facilities that form energy communities based on networked microgrids could be a more efficient means of managing the renewable resources that are available. However, the complexity of the associated control problem is leading to unresolved challenges from the point of view of its formulation. The optimization of energy exchanges among microgrids in the day-ahead electricity market requires the generation of an optimal profile for the purchase of energy from and sale of energy to the main grid, while any deviation from the proposed schedule is charged to the community in the regulation service market. Microgrids based on renewable generation are systems that are subject to inherent uncertainties in their energy forecast, and their interconnection generates a distributed control problem of stochastic systems. Microgrids are systems of subsystems that can integrate various components, such as hybrid energy storage systems (ESS), generating multiple terms to be included in the associated cost function for their optimization. In this work, the problem of solving complex distributed stochastic systems in the Mixed Logic Dynamic (MLD) framework is addressed, as is the generation of a tractable formulation with which to produce deterministic values for both exchange and output variables in interconnected systems subject to uncertainties, using hybrid, stochastic and distributed Model Predictive Control (MPC) techniques.
P Electric power (W).
P̂ Predicted electric power (W).
P(S) Given probability for a certain scenario.
I. INTRODUCTION
The majority of developed countries are currently adopting new energy policies based on commitments to the Paris Agreement, with the aim of reducing greenhouse gas emissions by transitioning from fossil fuels to other energy sources. In the face of the challenge of creating low/neutral carbon-based energy systems, microgrid technology may be a key solution by which to update traditional electric power systems to intelligent smart grids with a high degree of penetration by renewable energy systems. The lack of dispatchability of the new renewable generation schemes can be solved by structuring the power system components into smaller management units. In this challenging paradigm, microgrids are a key technology with which to solve system deficiencies. Microgrids pave the way toward the deployment of an electricity market that is completely based on renewable generation, providing the flexibility required in order to balance the stochastic behavior of generation sources and consumption loads for both market and system operators. Microgrids could also empower end users by allowing them to become active prosumers. As stated in [1] and [2], the combination of different energy storage technologies provides a high degree of flexibility and competitiveness to microgrids, since each Energy Storage System (ESS) has its own limitations and operational costs, which can be improved if an appropriate control system is developed. The inclusion of these advanced controllers increases the number of constraints and variables to be optimized, along with the complexity of the control problem and the necessary computational cost. The networked operation of microgrids adds a degree of flexibility to their optimization, leading to better operation results in the electricity market, as shown in recent studies [3], [4], [5]. Different prosumers can share their energy in local markets while participating in the day-ahead electricity market. This joint operation could achieve lower final costs for the electricity consumption required. However, the networked operation of microgrids must confront the complexity of optimizing interconnected stochastic systems subject to penalties for deviation if the commitment made to the day-ahead market is not fulfilled. The incorrect and/or uncertain management of one microgrid could, therefore, seriously affect the whole energy community. The optimization algorithm for energy communities based on microgrids should therefore be formulated as a distributed and stochastic control problem of a system (the network) based on interconnected subsystems (the microgrids). Aspects related to increases in the execution time required by the solver to discover the optimal solution must also be considered when several scenarios are included in order to integrate uncertainty into the forecasting of the energy produced by the microgrid [6].
It is consequently recommended that the networked operation or the uncertainty management be included only at the tertiary control level, where sample periods of 1 hour are taken, while the secondary control level is applied solely to a single microgrid that follows the references obtained in the tertiary control [7]. Energy exchange among renewable-energy-based microgrids will provide the possibility of dispatching their production through electricity pools, not only as single systems, but also acting as a network of microgrids that achieves better results in liberalized electricity markets. The main feature of these markets is that the different actors have to make their offers in advance and will be charged for any difference between real-time production and energy bidding [2], [6]. In this context, the sale and purchase of energy among the different microgrids and the main grid must be subject to a common deterministic energy exchange, despite the stochastic behavior of renewable generation and consumption loads. As interconnected systems, those microgrids that decide to exchange energy with each other will have to incur economic penalties if the neighboring microgrid does not achieve the energy scheduled in the day-ahead market. It is difficult to solve a deviation of this nature at the secondary control level, at which execution times close to real time are required. It is, therefore, necessary to obtain deterministic profiles not only at the tertiary level for the energy exchange with the main grid, but also for the energy that has to be exported from/imported to the neighboring microgrids. In order to solve these issues, procedures with which to obtain both a deterministic exchange profile among microgrids and a deterministic optimization of the buying and selling of energy with the main grid in the day-ahead market, despite the uncertainty in energy forecasting, are required.
A. LITERATURE REVIEW
The distributed and stochastic formulation of control problems, when applied to systems with a large number of optimization variables, such as microgrids with hybrid ESS, requires the development of customized algorithms that can exploit the special features of their associated control problem, such as the limitation in the dimension of the matrices that current solvers can handle. As detailed in [8] and [9], MPC techniques are a powerful framework with which to solve the complexity of optimizing microgrids [10]. Their hybrid formulation makes it possible to integrate logic and continuous decision variables [11], and stochastic MPC (SMPC) has, therefore, recently emerged with the aim of incorporating the probabilistic descriptions of uncertainties into a constrained optimal control problem [12]. In a similar direction, Distributed Model Predictive Control (DMPC) [13] is being established as an advanced technique by which to optimally solve distributed control problems. A complete review of both SMPC and DMPC can be found in the aforementioned references [12], [13]. The stochasticity of systems is being satisfactorily resolved using SMPC in several studies applied to a wide variety of systems. Theoretical analyses related to distributed stochastic MPC (DSMPC) have recently been carried out in [14], in which the problem of large systems composed of many coupled subsystems interacting with each other is analyzed, showing that the propagation and perturbation of uncertainty make the control design of such systems a complex problem. A theoretical framework with which to solve this kind of control problem is proposed. Firstly, the study establishes a centralized MPC scheme that integrates the overall system dynamics and chance constraints as a whole. Rather than solving a non-convex and large-dimension optimization problem at each moment, a semidefinite programming problem is stated. The computational cost and the amount of communication derived from a centralized framework are reduced by developing a DSMPC based on a sequential update scheme. This signifies that only one subsystem updates its plan by solving the optimization problem at each instant in time.
With regard to microgrids, recent reviews concerning the application of MPC techniques to this kind of system can be found in [8], [15], and [16], in which no solutions are provided for a common distributed and stochastic formulation of complex optimization problems. It is particularly notable that aspects concerning deterministic exchanges among agents in distributed solutions are not addressed. SMPC and DMPC are recent and timely techniques that are being satisfactorily applied by the scientific community in order to manage possible errors in the energy forecast of microgrids and to deal with the formulation of control problems associated with interconnected microgrids, as shown in [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], and [17]. In [18], the authors carry out a review of networked microgrids from fundamental to advanced research topics, while in [19] a review of the proposed solutions for P2P energy exchanges among microgrids is carried out. Three common gaps in the research reviewed in [18] and [19] can be highlighted: i) the non-inclusion of uncertainties in the energy forecasts for microgrids, ii) the fact that the cost functions developed do not integrate a large number of terms, as occurs when hybrid ESS are included in the networked operation of microgrids, and iii) the fact that they do not establish deterministic outputs and exchanges among the different subsystems, despite their inherently stochastic nature. In [3] and [20], algorithms based on Distributed Model Predictive Control (DMPC) techniques are applied without considering uncertainty in the energy forecast. Solutions considering uncertainties in the Energy Management System (EMS) of networked microgrids can be found in [21] and [22]. The authors of [21] propose a model in which peers negotiate together in order to trade energy and flexibility by considering renewable generation uncertainty. In [23], the authors propose a P2P local electricity market for the joint trading of energy and uncertainty using flexible loads. A new P2P model in which both energy and uncertainty can be traded is proposed in [24], while aspects related to cybersecurity in P2P-based energy management are studied in [25]. A consensus-based approach for the day-ahead market in conjunction with a local energy-reserve market design considering the uncertainties of renewable energy systems is studied in [26]. The authors of [27] developed a two-stage robust stochastic scheduling model for transactive energy-based renewable microgrids. In the first-stage problem, all the microgrids attempted to maximize their profits by adopting the optimal bidding strategy in the day-ahead market, while minimizing the imbalance cost in the second stage. In [28], the uncertainty of the electricity market is managed using a robust data MPC framework for multi-microgrid energy dispatch. In [29], the operation management of cooperative microgrids was formulated in the chance-constrained MPC framework, while in [30] the degradation cost of batteries was also included in the EMS, highlighting the importance of this term. The authors of [31] developed an optimal stochastic day-ahead scheduling problem. The stochastic analysis of the problem includes the day-ahead energy price as an uncertain parameter, while aspects of the operational cost of the ESS are not included. In [32], the authors introduce distributed microgrids integrated with buildings by taking advantage of their peak-load-limiting capability. The proposed algorithm is formulated as a two-stage stochastic
problem: in the first stage, the temperature setpoints of the buildings for the next time step in each microgrid are determined, while in the second, the power exchange decisions made in order to limit the peak load in the microgrid network are defined. The work carried out in [33] is focused on the stochasticity of the multi-microgrid environment, proposing a distributed power management algorithm with which to minimize an objective function consisting of the sum of generation costs subject to generator constraints, including the following: the supply-demand balance constraint, individual constraint, capacity constraint and ramp-rate constraint. Finally, case studies based on IEEE 30-bus, IEEE 57-bus and IEEE 300-bus systems show the effectiveness of the proposed distributed primal-dual consensus strategy. In [34], a distributed demand side management (DSM) approach for smart grids that takes uncertainty in wind power forecasting into account is developed. A two-stage stochastic optimization with which to operate a renewable-based microgrid with batteries is developed in [22], but the problem of the interconnection of microgrids is not addressed. Stochastic methodologies with which to solve resilience problems in single microgrids are also proposed in [35], [36], and [37].
Distributed stochastic approaches applied to systems other than microgrids are additionally found in the existing literature, as can be observed in [38] and [39]. In [38], the authors investigate the distributed output-feedback tracking control of stochastic nonlinear multi-agent systems with time-varying delays, and propose a new distributed stochastic homogeneous domination method. The authors specifically design distributed output-feedback controllers for the corresponding nominal systems. The proposed methodology simultaneously considers time-varying delays, unmeasurable states, and Hessian terms. The authors of [40] focus their research on enabling multiple agents to cooperatively solve a global optimization problem without a central coordinator by using decentralized stochastic optimization in which aspects of sensitive information are considered. A decentralized stochastic optimization algorithm that is able to guarantee provable convergence accuracy, even in the presence of aggressive quantization errors that are proportional to the amplitude of quantization inputs, is proposed. In [41], an innovative data-driven robust model predictive control for irrigation systems is proposed. The paper integrates both first-principle models, in order to describe the dynamics of soil moisture variations, and data-driven models with which to characterize the uncertainty in forecasting errors from historical data. The precipitation forecast errors are analyzed, along with the dependence of their distribution on forecast values. In [39], a DSMPC framework is proposed using a stochastic cooperative game-based assistant fault-tolerant control for distributed drive electric vehicles, considering the uncertainty in driver behavior. The control algorithm considers the interaction among the driver, automatic steering, and in-wheel motors. SMPC techniques are also applied to HVAC systems for energy-efficient buildings in [42]. A common gap in the aforementioned references related to DSMPC concerns the formulation of DSMPC problems with cost functions that integrate a large number of terms. A framework with which to obtain deterministic behavior of the exchange variables is not addressed either, despite being an important aspect in the common optimization of networked microgrids in the day-ahead electricity market, as explained previously. The gap regarding the development of optimization methods for complex interconnected stochastic subsystems is again found in [38] and [39].
New control schemes are therefore required to confront the computational burden produced by the interconnection of complex stochastic subsystems (as occurs in microgrids with hybrid ESS), in which decomposition steps are defined in the optimization problem in order to make it tractable on standard computing devices.
B. MAIN CONTRIBUTIONS
As discussed in [6], the flexibility of the participation of microgrids in electricity markets can be enhanced by the use of hybrid ESS. The aforementioned authors confront the optimization problem of integrating different types of ESS subject to the inclusion of different economic criteria, such as degradation and lifetime issues for each ESS, start-up costs, etc., and also that of considering uncertainties in the energy forecast. However, the methodology developed is applied only to one microgrid, without considering the case of energy exchange among different microgrids subject to uncertainties in the energy forecast.
The distributed optimization of day-ahead market participation for interconnected microgrids should confront a distributed formulation of complex single problems, integrating the operation cost of each microgrid component subject to the inherent stochasticity of the energy forecast, and thus confronting the problem of avoiding penalties for deviations in the regulation service market. According to the literature review, while the stochasticity of energy generation within microgrids, along with their common participation in the electricity markets, are topics that have been considered in previous work related to energy trading schemes, the coupled problem of not achieving the energy schedule of the day-ahead market in the common operation of two microgrids, owing to the connection of two stochastic systems, has not been studied. The same can be said of the networked operation of microgrids with hybrid ESS when considering stochastic energy forecast scenarios.
The work described herein expands on the methodology introduced in [3], in which the networked operation of microgrids was solved using DMPC techniques but considering a deterministic profile in the energy prediction. It also advances the state of the art with respect to [6] by considering energy exchanges among microgrids despite uncertainties. As indicated in [3], the high number of constraints to be introduced into the controller makes it unfeasible (using standard computing hardware) to solve the network optimization problem in a centralized manner when more than two microgrids are involved. A problem related to the computational burden is similarly found in [6] when more than two scenarios are considered in the stochastic optimization problem of a microgrid with hybrid ESS. The aim of this work is to propose a tractable methodology with which to manage two scenarios and two microgrids in the same optimization problem. The principal innovative result obtained is that, despite the uncertainty considered in the energy forecast, a deterministic energy schedule is obtained for both the purchase/sale of energy with the main grid and the energy exchange with the neighboring microgrid. The algorithm is developed using stochastic and distributed MPC techniques and mixed-integer programming.
The following features of the proposed methodology are considered to be the main contributions of the present work:
• The development of a framework with which to optimize energy trading processes among networked microgrids, considering the stochasticity of both energy generation and load consumption, thus achieving deterministic energy exchanges among microgrids and enhancing their operation when compared to acting as individual microgrids.
• The generation of deterministic outputs in interconnected stochastic subsystems.
Fig. 1 shows a schematic overview of the kind of energy community on which this work is focused. As can be seen, each microgrid can be composed of internal loads and different renewable generators. Both loads and generators are drawn inside a cloud so as to highlight the inherent uncertainty in the energy forecast of these components of the microgrid. Each microgrid also integrates batteries and hydrogen as ESS, which are not subject to uncertainties in the forecast of their behavior.
C. OUTLINE OF THE PAPER
The remainder of this paper is organized as follows: the controller, formulated as a Stochastic Distributed Model Predictive Control (SDMPC) in order to include the uncertainty of the energy forecast, is developed in Section II, which also describes and justifies the operation cost associated with each storage technology used in the microgrid. The results obtained are discussed in Section III and the main conclusions are summarized in Section IV.
II. P2P STOCHASTIC OPTIMIZATION OF DAY-AHEAD MARKET PARTICIPATION
The microgrid controllers are designed in order to optimize the day-ahead participation of a network of microgrids such as that shown in Fig. 1 in the electricity market through P2P energy exchanges, according to the following criteria: 1) Economic optimization: the microgrid controllers integrate the operational costs of the microgrid components into the model simultaneously with the electricity market prices. 2) Uncertainty management: the controller is formulated to include the stochasticity of renewable generators and consumers' behavior. 3) Deterministic energy exchanges: it is assumed that, independently of the stochastic nature of the energy forecast of each microgrid, the engagement of energy exchange must follow a deterministic profile that is completely independent of uncertainties as regards energy exchange with either the main grid or the neighboring microgrid. The block diagram of the proposed controller is shown in Fig. 2. Each block is detailed in the following sections.
A. GENERIC FORMULATION OF THE DSMPC CONTROLLER
The optimization problem for a system of interconnected stochastic subsystems, considering deterministic output variables and exchange variables among the different subsystems, can be generically formulated as indicated in expressions (1)-(13). The first expression (1) corresponds to the cost function of a distributed and stochastic system using a multi-scenario formulation [3], [8] as the methodology with which to consider the uncertainties in the energy forecast. As will be noted, it is expressed in such a way that all the sample instants of a scheduling horizon SH are added together. The subindex i is utilized in order to refer to each microgrid inside the network N. The upper index [S_i] is used to reference each of the scenarios considered. As can be seen in (1), the nomenclature global is used for the global optimization problem derived from the network of interconnected subsystems, while the nomenclature local is employed to refer to each local optimization problem for each of the subsystems. The logic control signals are expressed as δ_i^[S_i](t), while the continuous signals are integrated with u_i^[S_i](t). The state variables are denoted by x_i^[S_i](t) and correspond to those model variables whose value at each sample instant depends on the previous one. The nomenclature z_i^[S_i](t) is used for the mixed product [11] of logic and continuous variables. Finally, v_{i→j}(t) represents the exchange variables between a generic subsystem i and a generic neighbor subsystem j. Expressions (2)-(7) represent the corresponding constraints related to the upper and lower limits of the variables that integrate the model of the plant, while expressions (11)-(13) concern the plant model constraints among variables using its state-space representation in the MLD framework [11]. As will be noted, the model of the plant also includes its output variables (see expression (12)), which are labeled y_i(t). Note that, in order to achieve a deterministic value for both the output variables y_i^[S_i](t) and the exchange variables v_{i→j}^[S_i](t), the constraints (9) and (10) are introduced so that these kinds of variables do not depend on the scenario S_i considered. The matrices A_i, B_i, C_i, D_i and E_i represent the relationships among the different variables that integrate the plant model. Finally, P(S_i) denotes the probability of each given forecast scenario. As introduced in [3], the first term of the cost function (1) penalizes the exchange variable values so as to consider the transport losses resulting from power flux among microgrids. Assumption 1: As mentioned in Section I, the execution time required by the solver to find the optimal solution increases with the number of decision variables.
Assumption 2: The number of decision variables increases with the number of subsystems and scenarios considered.
Assumption 3: Both subsystems can act as single, non-interconnected subsystems, in which case v_{i→j}^[S_i] = 0. Assumption 4: If v_{i→j}^[S_i] ≠ 0 is obtained, the value of J_global is lower than the value of J_global obtained when the problem is constrained with v_{i→j}^[S_i] = 0. As in [6], in order to reduce the number of scenarios, an uncertainty band is used to introduce the stochastic behavior of the system by applying a mean deviation to a deterministic mean scenario S_i = 0, thus generating a positive and a negative scenario S_i = +, −. The method of [3] for forming the best couple of subsystems at each iteration is similarly followed. In both methodologies, the problem is decomposed into the following steps:
Step 0. Peer-to-peer optimization for the selected subsystems a and b, considering all the combinations of the possible deterministic scenarios.
For a number of possible scenarios N_{S_a} for subsystem a, and a number of scenarios N_{S_b} for subsystem b, the problem defined by expressions (1)-(13) is solved by considering all the possible combinations of scenarios S_a = 1, ..., N_{S_a} and S_b = 1, ..., N_{S_b}, as specified in (14). Note that this simplification makes it possible to follow the procedure explained in [3], since the scenario is known at each iteration, signifying that the problem can be solved as a deterministic DMPC problem. In this step, all the constraints defined in expressions (2)-(13) are considered, with the exception of those defined in (9) and (10). The corresponding set of optimal variables is obtained after executing this step. After solving all the combinations of scenarios, the average profile for the exchange variables is obtained.
Step 1. Solving the problem for all the scenarios considered, independently for each subsystem. This step calculates the value of the expectation of the local cost function, taking into account its value for all the considered scenarios and constraining v_{i→j}^[S_i](t) = 0.
In this step, all the constraints defined in expressions (2)-(13) are considered. Note that in this step, the constraint (9) is included in order to achieve a deterministic value of the output variables for all the possible scenarios. After solving the problem defined in this step, the value of the local cost expectation at the optimal operation point of each subsystem working as a single system, C_{i,local}^{<1>}, is obtained. The upper index <k> refers to the iteration step.
Step 2. Calculation of the expectation of the cost function for every single subsystem, considering exchange possibilities.
This step solves the problem defined in (20). In this step, all the constraints defined in expressions (2)-(13) are also taken into account. Note that, although both microgrids are optimized independently, the exchange variables v_{i→j}^[S_i](t) are considered and deterministic behavior is imposed on them through (10). After solving this step, as occurred at Step 1, C_{i,local}^{<2>} is again obtained (note that this term evaluates only the corresponding value of the local cost expectation in expression (20)).
Step 3. Calculation of the expectation of the cost function for every single subsystem, considering exchange possibilities and constraining the local cost.
The problem defined in Step 2 is again solved subject to an additional constraint on the local cost.
Step k. Calculation of the expectation of the cost function for every single subsystem, considering exchange possibilities and constraining the local cost, taking into account the previous result for the neighboring subsystem.
This step solves the problem defined in (22), subject to the corresponding constraints. This step is carried out iteratively until the condition (24) is satisfied.
Note that if condition (24) is satisfied, the constraint related to the deterministic behavior of the exchange variables introduced in (10) for the stated problem before being decomposed into the proposed steps is also satisfied. The same is true of the constraint (8), which is related to the complementary behavior of the exchange variables between the subsystems involved.
Remark 1: The method is introduced for P2P optimization between two interconnected systems. In the case of a greater number of them, the procedure introduced in [3] to form the best couple at each iteration can be followed.
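To make the structure of the multi-scenario problem (1)-(13) concrete, the following minimal sketch optimizes a toy two-microgrid, two-scenario problem in Python with CVXPY (not the paper's implementation, which uses the full MLD model, degradation terms and the iterative decomposition). Battery powers are scenario-indexed, while the exchange with the neighbour and with the main grid is a single scenario-independent profile, which is the role played by the deterministic-behaviour constraints (9)-(10). All numerical values are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

# Toy sketch (assumed values): two microgrids "a" and "b", a 24 h horizon, and two
# forecast scenarios per microgrid.  Battery powers are scenario-indexed, whereas
# the exchange with the neighbour (p_ab) and with the main grid (p_grid) are single
# scenario-independent profiles, mirroring constraints (9)-(10).
T = 24
S = ["+", "-"]                               # optimistic / pessimistic scenarios
prob_s = {"+": 0.5, "-": 0.5}                # scenario probabilities P(S_i)
t = np.linspace(0.0, 2.0 * np.pi, T)
rem = {                                      # hypothetical remaining-power forecasts (kW)
    "a": {"+": 3.0 * np.sin(t) + 1.0, "-": 3.0 * np.sin(t) - 1.0},
    "b": {"+": -2.0 * np.sin(t) + 1.0, "-": -2.0 * np.sin(t) - 1.0},
}

p_ab = cp.Variable(T)                        # deterministic exchange a -> b
p_grid = {m: cp.Variable(T) for m in "ab"}   # deterministic exchange with the main grid
p_bat = {m: {s: cp.Variable(T) for s in S} for m in "ab"}  # scenario-dependent battery power

sign = {"a": -1.0, "b": 1.0}                 # what a exports is what b imports
constraints, cost = [], 0
for m in "ab":
    for s in S:
        # per-scenario power balance: forecast + battery + grid + neighbour exchange = 0
        constraints += [rem[m][s] + p_bat[m][s] + p_grid[m] + sign[m] * p_ab == 0]
        constraints += [cp.abs(p_bat[m][s]) <= 5.0]        # battery power limit (kW)
        # expected cost: grid usage plus a battery-usage term as a degradation proxy
        cost += prob_s[s] * (cp.sum_squares(p_grid[m]) + 0.2 * cp.sum_squares(p_bat[m][s]))
cost += 0.05 * cp.sum_squares(p_ab)          # transport-loss penalty on the exchange

cp.Problem(cp.Minimize(cost), constraints).solve()
print("deterministic exchange profile a->b (kW):", np.round(p_ab.value, 2))
```

Unlike the proposed method, this toy solves the coupled problem centrally and omits the logic variables; its only purpose is to show how scenario-indexed decisions coexist with deterministic exchange and output profiles.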
B. APPLICATION OF THE METHOD TO A CASE STUDY CONCERNING INTERCONNECTED MICROGRIDS WITH HYBRID ESS
The method explained above was applied to a network of microgrids with hybrid ESS composed of renewable generation, local loads, batteries, an electrolyzer, a fuel cell and a hydrogen tank. The block diagram of the case study for a network of just two microgrids is represented in Fig. 2.
The analog inputs of the plant u_i^[S_i] are defined in (25), where P_{i,ch}^[S_i] and P_{i,dis}^[S_i] are the setpoints provided by the microgrid Energy Management System (EMS) to the local controllers of the Battery Management System for the charging or discharging of the batteries. P_{i,elz}^[S_i] and P_{i,fc}^[S_i] are similarly the control signals sent by the EMS to the internal controllers of the electrolyzer and the fuel cell in order to set their power. The energy exchange with the main grid, purchasing or selling energy in the day-ahead market, is represented by P_{i,pur} and P_{i,sale}, which do not depend on the scenario S_i considered owing to the deterministic behavior required for these variables.
The logic inputs of the plant δ_i^[S_i] are represented in (26),
where δ_{i,ch}^[S_i] and δ_{i,dis}^[S_i] are logic variables related to the charge and discharge states of the batteries. The electrolyzer and fuel cell have digital inputs related to their on/off state (δ_{i,elz}^[S_i] and δ_{i,fc}^[S_i]). As the start-up and shutdown processes lead to degradation issues in the electrolyzer and the fuel cell, the auxiliary logic variables σ_{i,elz}^[S_i] and σ_{i,fc}^[S_i] are included in order to penalize these processes. The logic variables χ_{i,elz}^[S_i] and χ_{i,fc}^[S_i] are auxiliary variables employed to represent the instants at which the electrolyzer and the fuel cell are in the on-state, with the exception of those at which these devices are started up or shut down. These logic variables are used to penalize fluctuating operation of the electrolyzer and the fuel cell, which also leads to degradation processes.
δ_{i,pur}^[S_i] and δ_{i,sale}^[S_i] are logic variables associated with the purchasing and selling of energy with the main grid. The lower limit for all the logic variables is "0" and the upper limit is "1". The vector of mixed product variables of the plant is represented by (27), where z_{i,α}^[S_i] are the mixed products for the charging/discharging of the batteries, the electrolyzer, the fuel cell and the purchasing/selling of energy, respectively. The auxiliary mixed products ϑ_{i,elz}^[S_i] and ϑ_{i,fc}^[S_i] are obtained in order to represent power increments in the electrolyzer and the fuel cell at all their working instants, with the exception of those at which they are started up or shut down.
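The mixed products z = δ·u listed above are handled in the MLD framework [11] by replacing each bilinear product of a logic and a continuous variable with four linear inequalities. The sketch below (illustrative bounds only, not the paper's actual constraint set (37)-(42)) checks this standard linearisation numerically: for a fixed continuous value u, the inequalities leave a single feasible z, namely δ·u.

```python
import numpy as np

# Standard MLD (big-M) linearisation of the mixed product z = delta * u,
# e.g. z_pur = delta_pur * P_pur.  u_min/u_max bound the continuous variable;
# the numbers below are illustrative, not taken from the paper.
u_min, u_max = 0.0, 10_000.0

def mld_feasible(u, delta, z, tol=1e-9):
    """True if (u, delta, z) satisfies the four MLD inequalities for z = delta*u."""
    return (
        z <= u_max * delta + tol and
        z >= u_min * delta - tol and
        z <= u - u_min * (1 - delta) + tol and
        z >= u - u_max * (1 - delta) - tol
    )

u = 4_200.0
for delta in (0, 1):
    # scan candidate z values: only z == delta*u survives the four inequalities
    feasible_z = [z for z in np.linspace(u_min, u_max, 101) if mld_feasible(u, delta, z)]
    print(f"delta={delta}: feasible z values -> {feasible_z}")
```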
The dynamic state variables of the different microgrids are the energy level stored in the batteries, using their state of charge SOC_i^[S_i], and the level of hydrogen available in the hydrogen vessel, LOH_i^[S_i], as shown in (28).
The exchange variables v_{i→j} between microgrid i and microgrid j represent the exchange of energy P_{i→j} at each sampling instant (29).
Finally, the output variables of the microgrids (y_i) are defined through the energy transactions with the main grid P_grid, as shown in (29). The cost function defined in (1) can be obtained using expressions (30) and (31). Expression (31) corresponds to the case study of just one microgrid [6]. In the aforementioned cost function, CC represents the capital cost of acquisition of each component of the microgrid. The term Cycles corresponds to the number of cycles of the batteries. As indicated in [2], high charge and discharge power ratios produce degradation processes which have to be penalized, as occurs in the terms associated with battery degradation, and these are quantified by Cost_degr,α. The electrolyzer and fuel cell lifetimes depend on the number of working hours, Hours. Fuel cells and electrolyzers are also degraded by start-up cycles and power fluctuations. These degradation mechanisms are penalized in the terms concerning the hydrogen ESS degradation. The last two terms in (31) are included in order to maintain the energy stored in each ESS at the end of the
schedule horizon at a reference value. Note that only those values whose difference with the reference is negative are penalized. In the cost function, J_grid denotes the grid exchange revenue and cost term. The state-space representation of the plant (11) can be specifically defined for this case study by following the mathematical model introduced in [2], with (32) and (33), where C_bat stands for the capacity of the battery, and η_ch and η_dis signify the performance factors for the charging and discharging processes of the batteries. η_fc and η_elz are similarly the performance factors of the fuel cell and the electrolyzer in the conversions between power and hydrogen. As can be seen in Fig. 3, the optimization of two interconnected microgrids is subject to different possible energy scenarios in each microgrid. The forecast module is based on the methodology described in [2]. It employs the historical data of a meteorological station to obtain the array of forecast variables composed of the hourly prediction of the energy generated by the photovoltaic and wind turbine generators, along with the load consumption (d_i = [P̂_i,pv, P̂_i,wt, P̂_i,load], where the subscript i refers to microgrid i belonging to the network N). As already stated in [6], the stochasticity of these variables is defined by including an uncertainty band in their predicted values. A positive and a negative uncertainty band (P_un) is therefore applied to an initial deterministic scenario (S_i = 0) of the remaining energy prediction P̂_rem in the microgrid, which is defined as P̂_i,rem = P̂_i,pv + P̂_i,wt − P̂_i,load + P_un for the optimistic energy scenario considered (S_i = [+]) and P̂_i,rem = P̂_i,pv + P̂_i,wt − P̂_i,load − P_un for the pessimistic scenario considered (S_i = [−]) for each microgrid.
As in [6], the uncertainty band value P_i,un is obtained using expression (34), which is based on the average standard deviation between the predicted and the measured remaining power of the microgrid, computed for each hour of each day over a complete year, although other methods could also be applied within the proposed algorithm [6]. The terms day and h refer to the day and the hour for which the standard deviation is calculated, P̂_i,rem(day, h) being the predicted value of the remaining power in the microgrid and P_i,rem^meas(day, h) the measured value. The forecast algorithm also calculates the energy prices for purchasing and selling power in the day-ahead market.
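A hedged sketch of how such an uncertainty band could be computed from historical data is given below; the synthetic arrays stand in for the meteorological-station records used in the paper, and the exact averaging in expression (34) may differ in detail.

```python
import numpy as np

# For each hour of the day, take the standard deviation of the forecast error
# (predicted minus measured remaining power) over all days of a historical year,
# then use its mean as the +/- band around the deterministic scenario.
rng = np.random.default_rng(42)
days, hours = 365, 24
p_rem_pred = rng.normal(4000.0, 2000.0, size=(days, hours))        # synthetic forecasts (W)
p_rem_meas = p_rem_pred + rng.normal(0.0, 900.0, size=(days, hours))  # synthetic measurements (W)

err = p_rem_pred - p_rem_meas
band_per_hour = err.std(axis=0)          # one value per hour of the day (W)
p_un = band_per_hour.mean()              # single average band P_un (W)

scenario_0 = p_rem_pred.mean(axis=0)     # deterministic scenario S_i = 0
scenario_plus = scenario_0 + p_un        # optimistic scenario S_i = +
scenario_minus = scenario_0 - p_un       # pessimistic scenario S_i = -
print(f"average uncertainty band: +/- {p_un:.0f} W")
```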
The expression for the plant model output variables (12) can be particularized to the case of the microgrids that are the object of this study by means of the difference between the purchased and the sold energy in the day-ahead market with the main grid. The energy exchange with the main grid is the result of the power balance obtained for each scenario at each sample instant (36), using the values K^[+] = 1, K^[−] = −1, K^[0] = 0, as done in [6].
P_{i,grid}(t) = z_{i,pur}(t) − z_{i,sale}(t). Following the methodology introduced in [2], expression (13) can be obtained from the linear constraints resulting from the logic relationships between the variables u, δ and z (expressions (37)-(42)).
The symbols ∧ and ∼ stand for the logic operators AND and NOT, respectively. As introduced with the constraints (9) and (10), both the exchange variables and the output variables of each subsystem have to behave in a deterministic manner. These constraints can be particularized to our case study by inserting the corresponding expressions. The problem for Step 0 can be particularized to our case study as the optimization problem (45), in which the scenarios for each microgrid adopt the values S_i = +, − and S_j = +, −. After solving all the combinations of scenarios, the average profile for the exchange of power among microgrids is obtained (46). In the case study described herein, Step 1 can be expressed by defining the expectation of the local cost function, considering its value for all the scenarios considered and constraining P_{i→j}^[+](t) = P_{i→j}^[−](t) = 0.
After solving the problem defined in this step, the value of the local cost expectation at the optimal operation point of each subsystem working as a single system, C_{i,local}^{<k>}, is obtained, as described in Section II-A. After obtaining C_{i,local}^{<1>}, Step 2 can be defined for the case study regarding P2P energy scheduling among networked microgrids with the expectation defined in (48), subject to the corresponding constraints. This step is carried out iteratively until the condition (51) is satisfied.
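The following toy sketch mirrors only the structure of the iterative part: each microgrid repeatedly updates its preferred exchange profile given the current common profile, and the loop stops when the common profile no longer changes, in the spirit of conditions (24)/(51). The local problems are deliberately reduced to closed-form quadratic costs; in the actual method each update is a full mixed-integer stochastic MPC problem, and all numbers here are assumptions.

```python
import numpy as np

T = 24
rng = np.random.default_rng(0)
surplus_a = rng.normal(2.0, 1.0, T)      # hypothetical hourly surplus of microgrid a (kW)
surplus_b = rng.normal(-2.0, 1.0, T)     # hypothetical hourly deficit of microgrid b (kW)
w = 0.5                                  # weight pulling each microgrid toward the request

def preferred_export(surplus, requested):
    """Minimiser of 0.5*(p - surplus)**2 + 0.5*w*(p - requested)**2."""
    return (surplus + w * requested) / (1.0 + w)

p_common = np.zeros(T)                   # common exchange profile a -> b
for k in range(100):
    prop_a = preferred_export(surplus_a, p_common)    # a's view of the exchange
    prop_b = -preferred_export(surplus_b, -p_common)  # b's view, sign-flipped to a->b
    new_common = 0.5 * (prop_a + prop_b)
    if np.max(np.abs(new_common - p_common)) < 1e-4:  # convergence condition
        break
    p_common = new_common

print(f"agreed after {k + 1} iterations; mean exchange a->b = {p_common.mean():.2f} kW")
```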
III. RESULTS
The algorithm was programmed in a MATLAB environment using the TOMLAB® toolbox as optimization software.
The execution time required for all the steps of the controller was 43.93 s, using a PC with an Intel® Core™ i7-9750H @ 2.60 GHz and 16 GB of RAM installed. The different values integrated into the controller are shown in Table 2. The sample period selected was T_s = 1 hour and the schedule horizon was 24 hours, as is usual for day-ahead market operation. Fig. 4 shows the results of the price prediction carried out by the controller following the methodology described in [2]. The purchase price is considered to be three times the sale price. The different energy forecast scenarios, considering an uncertainty band of ±5000 W for each microgrid, are shown in the left-hand graph of Fig. 5. The procedure explained in Step 0 of Section II was followed, and the results obtained for the energy exchanges P_{1→2}, considering the deterministic profiles P̂_{1,rem} and P̂_{2,rem} based on the different combinations of scenarios considered, are displayed in the right-hand graph of Fig. 5. For simplicity, the same probability is assigned to each of the scenarios considered, P(S_i = +) = P(S_i = −) = 0.5.
The simulations for the SMPC controller applied to the microgrids working as single systems (Step 1 of the algorithm) were carried out first. The subsequent steps find an energy exchange consensus for the day-ahead market which has deterministic behavior, independently of the scenario considered for each microgrid. The algorithm also obtains a deterministic energy exchange with the main grid. The final results of the algorithm can be observed in the graphs in Fig. 7. One goal of the algorithm is to achieve, in networked operation, a lower value of the sum of local operational costs defined in expression (47) than that obtained when the microgrids act as single systems. The optimization results of Step 1, in which the microgrids act as single systems, can be found in Fig. 6, while the optimization results of the networked operation are displayed in Fig. 7. The legends of both figures include the term P_req, which indicates the exchange power required in order to satisfy the given constraint in expression (51), with P_req = 0 for the case of single microgrids, as occurs in Fig. 6. As can be seen in Fig. 7, despite the uncertainties, a common profile for the exchanged power for both microgrids and scenarios is found after several iterations. A common profile for the energy exchange with the main grid is also obtained for each microgrid, independently of the scenario considered. These can be considered the main achievements of this work. Note that if the most advantageous energy forecasts S_1 = + and S_2 = + were scheduled, and the worst possible scenario combination S_1 = − and S_2 = − later arose in the real-time operation of both microgrids, the scheduled energy exchange with the main grid could not be achieved, and the corresponding penalties for deviations would be applied in the regulation service market. This provides an additional feature with respect to the P2P optimization of microgrids presented in [3].
The values obtained for the local cost functions of the microgrids are shown in Table 3 for both cases: 1) single or independent operation of each microgrid without energy exchange, and 2) cooperative P2P operation, while Table 4 shows the sum of costs for both microgrids when considering the possible scenario combinations. As can be seen, despite the stochastic nature of the energy forecast, a deterministic energy exchange profile that achieves a better interaction with the main grid and reduces the operation cost of the ESS can be obtained between both microgrids. These cost reductions make the sum of the local cost functions evaluated as a network with P2P energy exchange lower than in the case of working as single systems (see Table 4).
IV. CONCLUSION
This work presents a distributed stochastic MPC approach for interconnected systems that include a large number of terms in their cost function and require a deterministic schedule for both exchange and output variables.
The developments are applied to an energy community based on networked microgrids with hybrid ESS. The results obtained show that the energy community achieves a lower cost for its optimization in the day-ahead market as a network of microgrids than in the case of participating as separate microgrids, despite considering uncertainties in the energy forecast of both microgrids.
Two of the main challenges related to the large-scale deployment of energy communities are confronted and resolved. The first is that of large-scale energy storage, which is achieved by introducing an advanced formulation specifically developed for the management of microgrids with hybrid ESS composed of hydrogen and batteries, in spite of the large number of terms required in the cost function of the associated optimization problem. The use of both technologies achieves high rates of power and energy density in the renewable power plant. The second challenge concerns the integration of uncertainties into the energy forecasting of interconnected microgrids. This aspect is achieved by using an advanced formulation of the energy optimization problem based on distributed stochastic MPC techniques.
As can be seen from the results, despite considering a band of uncertainty in the energy forecast of both microgrids, they can acquire a deterministic commitment to exchanging energy with the main power grid and with the neighboring microgrid.The proposed methodology paves the way toward a massive deployment of energy communities with large energy storage facilities based on hybrid ESSs.
Peer-to-peer energy transactions involve many entities, each with its own generation and consumption profiles. As the number of market participants increases, the computational burden grows. The objective of this algorithm is to solve the schedule of interconnected microgrids, and it is, therefore, an off-line optimization method to be used before the day-ahead market closes. The computational burden can consequently be handled simply by anticipating the day-ahead market session closure sufficiently, depending on the number of market participants involved.
Moreover, although the paper is focused on networked microgrids, the proposed methodology can be applied in order to solve the problem of coupled uncertainties in interconnected systems.Future developments will address the problem of including different scenarios with different probabilities so as to create a more generalized distributed stochastic framework.
FIGURE 1. An example of four-bus networked microgrids with hybrid ESS, considering uncertainties in the energy forecast.
FIGURE 2. Block diagram of the proposed stochastic P2P optimization of microgrids.
FIGURE 3. Possible energy scenarios in a P2P optimization of microgrids.
FIGURE 5. (a) Different energy forecast scenarios considered for both microgrids. (b) Power exchange profiles using deterministic P2P optimization of microgrids.
FIGURE 6. Optimization results for each microgrid working as a single system.
FIGURE 7. Optimization results for the stochastic P2P optimization of the interconnected microgrids.
TABLE 4. Comparison of results for the single and cooperative P2P optimization.
z Mixed product of electric power (W).
δ On/off state.
ε Minimum tolerance provided to the controller.
η Efficiency (p.u.).
χ Logical degradation state.
ϑ MLD power variation in degradation state (W).
σ Start-up state logical variable.
Cost of energy (€).
Expectation of the cost function.
TABLE 1. Literature related to the optimization of networked microgrids considering forecasting uncertainties.
TABLE 2. Values of the controller.
TABLE 3. Controller results for each microgrid. | 10,054.8 | 2024-01-01T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Guillain-Barré Syndrome as a Complication of COVID-19
Coronavirus disease 2019 (COVID-19) is associated with multiple neurological complications, including Guillain-Barre syndrome (GBS). While there are reports of COVID-19-related GBS cases, much remains unknown. We report two cases of COVID-19-associated GBS, with onset about eight weeks after the initial COVID-19 infection. Such a long interval between infection and GBS symptom onset is unusual for post-infectious GBS. Moreover, severely ill patients with COVID-19 may have prolonged hospital stays leading to critical illness myoneuropathy. Diagnosing superimposed GBS can be challenging in such cases. Clinical suspicion, nerve conduction studies with electromyography, and cerebrospinal fluid analysis can help in making the correct diagnosis. Both presented cases responded to intravenous immunoglobulin therapy.
Previous reports have shown that several viral (cytomegalovirus, Epstein-Barr virus, hepatitis E virus, and Zika virus) and bacterial (Campylobacter jejuni and Mycoplasma pneumoniae) infections can trigger an aberrant immune response attacking the peripheral nerves, leading to GBS [5,6]. Additionally, GBS has been reported in patients with Middle East respiratory syndrome (MERS) [7]. While there are some case reports and case series of COVID-19-related GBS, there is a knowledge gap and the entire clinical spectrum of COVID-19-related GBS remains unknown [1,8]. Here we report two cases of GBS associated with COVID-19 to highlight some critical clinical features and diagnostic challenges.
Case 1
A 32-year-old man with no relevant medical history presented with hypoxia, tachypnea, and fever from COVID-19, which was confirmed by RNA PCR testing. He was admitted to the medical ICU and developed acute respiratory distress syndrome (ARDS). He received tocilizumab (single dose), hydroxychloroquine (three-day course), and remdesivir (100 mg, 14-day course). His ventilation requirements continued to increase and ultimately he required veno-venous extracorporeal membrane oxygenation. He spent 60 days in the ICU, a stay further complicated by a gastrointestinal bleed.
After the prolonged ICU stay, he was deconditioned and had significant generalized weakness. He began to work with physical therapy and was able to walk with an assistive device. Over the next five days, he noted paresthesia in his lower extremities along with a progression in weakness, eventually requiring two-person assistance to ambulate. Neurological examination revealed atrophy of the anterior and posterior compartment leg muscles and 1/5 strength in ankle dorsiflexion and plantar flexion. Proximal strength was preserved, and he did not have any facial weakness. Sensation to light touch was intact and vibration sensation was mildly decreased. Reflexes were 2+ in the upper extremities, brisk at the patella bilaterally, and absent at the ankles. While there was a concern for critical illness myoneuropathy, the clinical presentation, including the lack of proximal weakness, sudden deterioration in motor strength, and new-onset sensory neuropathy while recovering, was atypical [9][10][11]. Electromyography (EMG) and nerve conduction studies (NCSs) showed severe axonal sensorimotor polyneuropathy with ongoing denervation changes consistent with a diagnosis of acute motor sensory axonal neuropathy, a variant of GBS (Figure 1). Comprehensive workup revealed normal B1, B6, and B12, ANA <1:80, negative Hepatitis B and HIV, normal SPEP and UPEP, and negative GM1 and GD1a/b antibodies. CSF analysis showed albumino-cytological dissociation with four nucleated cells and a protein of 127.6 mg/dL, further confirming the diagnosis.
FIGURE 1: Nerve conduction studies (NCSs) waveforms for patient 1.
Motor responses of the left peroneal nerve (recording from extensor digitorum brevis and tibialis anterior) and tibial nerve (recording from abductor hallucis) were absent. Motor response of the left median nerve was normal. Sensory response of the left sural nerve was absent. Sensory NCS of the left radial nerve was borderline normal. [Of note, this was a bedside study].
He responded to intravenous immunoglobulin (IVIg) therapy. Prior to discharge from the hospital, he was able to stand up with assistance. He was discharged to an acute rehabilitation center where he continued to improve and was able to walk with a walker, with an exam notable for improved strength in dorsiflexion and plantar flexion (2/5).
Case 2
A 61-year-old man with known diabetes without pre-existing neuropathy, severe lumbar stenosis (L4-L5), and a right foot drop at baseline presented with two days of generalized weakness and diarrhea and was positive for COVID-19 (RNA PCR). Shortly after admission, he went into acute respiratory failure requiring intubation. He also received a dose of tocilizumab for COVID-19 after intubation. He had a prolonged ICU course of over seven weeks, which was complicated by aspiration pneumonia, urinary tract infection, and critical illness myoneuropathy. He was extubated and his medical issues stabilized. He was discharged to an acute rehabilitation facility 60 days after initial admission, but there he developed new-onset leg weakness and gait instability, and soon after his hands were involved. He was readmitted with increasing weakness. Neurological examination revealed atrophy of the intrinsic hand and foot muscles. He had mild proximal upper extremity weakness, with 2/5 strength in ankle dorsiflexion and plantar flexion. Vibration sensation was absent at the great toes and patella, and he was areflexic. His EMG/NCSs showed a severe axonal sensorimotor polyneuropathy (Figure 2). CSF analysis showed no nucleated cells and a mild elevation of protein to 54 mg/dL (normal: 15-45 mg/dL). He was also negative for Hepatitis B and HIV and had a normal SPEP, normal vitamin B12, and no ganglioside antibodies. He responded to a course of IVIg therapy, with improvement in proximal upper extremity strength (4+) as well as distal strength, including the ankles. He was eventually discharged to a rehabilitation facility where he continued therapy to improve his strength (Table 1).
Motor responses of the right peroneal nerve (recording from extensor digitorum brevis) and tibial nerve (recording from abductor hallucis) were absent. Motor response of the left peroneal nerve recording from tibialis anterior showed reduced response amplitude. Motor NCSs of the right median and ulnar nerves showed prolonged distal latencies and slowing of conduction velocities. Response amplitudes were reduced for right ulnar nerve. Sensory responses of the right sural, median, ulnar and radial nerves were absent.
Discussion
GBS can be associated with COVID-19 [1-4,8,9]. Many patients with GBS present with an antecedent infection. The majority of GBS cases associated with COVID-19 had a typical acute inflammatory demyelinating polyneuropathy (AIDP) pattern, consisting of a sensorimotor, primarily demyelinating GBS. AMSAN variants are relatively rare (<15% of all reported cases). Most patients present with symptoms roughly nine days after the onset of COVID-19 [1,8]. However, our patients developed GBS about eight weeks after the initial symptoms of COVID-19, which is atypical. Moreover, both of them were affected by critical illness myopathy/neuropathy, further making the diagnosis challenging. Interestingly, one of the patients retained reflexes, which is unusual for GBS but can be seen in a small percentage of patients [5,6].
It is important to differentiate the causes of acute to subacute new-onset weakness in patients with COVID-19. In severe cases of the disease, patients may have prolonged hospital courses with significant intensive care unit stays. This can result in critical illness myopathy (CIM) and critical illness polyneuropathy (CIPN), and these conditions may present together. The pathophysiology of CIM is thought to be secondary to an inflammatory cascade due to microvascular, metabolic, and electrical alterations, with atrophy due to increased muscle proteolysis and decreased muscle protein synthesis [10]. Creatine kinase can be elevated in these patients, and they often present with symmetric proximal muscle weakness, which was not seen in these cases. CIPN can present with diminished reflexes, weakness, or paresthesia. Systemic inflammatory response syndrome is a frequent underlying factor for CIPN [11]. The pathophysiology of CIPN is likely injury to the microcirculation of distal nerves, resulting in ischemia and axonal degeneration. CSF analysis in CIPN usually does not show albumino-cytological dissociation.
Differentiating between CIPN/CIM and GBS can be nuanced and complicated, and as discussed above, CSF studies and EMG/NCS can be used to provide diagnostic evidence. However, sometimes, NCS/EMG alone may not be able to confidently differentiate between CIPN and axonal variant of GBS [5,[11][12][13][14]. In a small percentage of cases reflexes can be intact in axonal variants of GBS. In such cases, a critical review of the clinical presentation, and CSF analysis can be helpful [12][13][14]. It is critically important to try and parse out | 1,877.4 | 2021-01-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
COVRECON: Combining Genome-scale Metabolic Network Reconstruction and Data-driven Inverse Modeling to Reveal Changes in Metabolic Interaction Networks
One central goal of systems biology is to infer biochemical regulations from large-scale OMICS data. Many aspects of cellular physiology and organism phenotypes can be understood as the result of metabolic interaction network dynamics. Previously, we derived a mathematical method addressing this problem that uses metabolomics data for the inverse calculation of a biochemical Jacobian network. However, these inference algorithms are limited by two issues: they rely on structural network information that needs to be assembled manually, and they are numerically unstable due to ill-conditioned regression problems, which makes them inadequate for large-scale metabolic networks. In this work, we present a novel regression-loss-based inverse Jacobian algorithm and the related workflow COVRECON. It consists of two parts: (a) Sim-Network and (b) inverse differential Jacobian evaluation. Sim-Network automatically generates an organism-specific enzyme and reaction dataset from the BiGG and KEGG databases, which is then used to reconstruct the Jacobian's structure for a specific metabolomics dataset. Instead of directly solving a regression problem, the new inverse differential Jacobian part is based on a more robust approach and rates the biochemical interactions according to their relevance from large-scale metabolomics data. The approach is illustrated by in silico stochastic analysis with different-sized metabolic networks from the BioModels database. The advantages of COVRECON are that (1) it automatically reconstructs a data-driven superpathway metabolic interaction model; (2) more general network structures can be considered; and (3) the new inverse algorithms improve stability, decrease computation time, and extend to large-scale models.
Introduction
Recent studies in systems biology generate large datasets of molecular, genomic, and physiological variables, with the aim of understanding complex diseases and regulatory interactions in biochemical networks from clinical studies (Elgendy, et al., 2019; Linke, et al., 2017). However, the functional interpretation of such datasets, and the inference of how regulatory mechanisms in the underlying biochemical networks change in disease conditions, relies on developing proper mathematical analysis and inference methods (Weckwerth, 2010; Weckwerth, 2011; Weckwerth, 2019).
The primary approach in genome-scale metabolomics data processing is statistics. In recent studies, both conventional and machine learning methods have been applied to metabolomics data. Conventional statistical methods such as the t-test, clustering, and principal component analysis (PCA) are widely used but are not able to reveal the underlying biochemical regulatory interactions. Many recently developed statistical and computer science techniques have substantially enhanced the statistical power of metabolomics data analysis, such as deep learning (LeCun, et al., 2015), genetic algorithms (Mitchell, 1998), and boosting machine learning methods (Chen and Guestrin, 2016). Yet these methods provide limited insight into how information in a biochemical network is transferred, what the critical regulatory steps are, and how a regulatory mechanism changes under different conditions. Here, a metabolic interaction network is defined by biochemical or regulatory interactions between identified metabolites. Interactions can be direct or involve several connected reactions (superpathways) (Nägele, et al., 2014; Weckwerth, 2019). As shown in Figure 1a, many aspects of cellular physiology and organism phenotype can be understood as a result of the metabolic interaction network dynamics (De Martino, et al., 2018). Thus, analyzing the changes in a metabolic interaction network for different phenotypes can give insight into changes of the underlying regulatory mechanisms. These interaction differences are influenced by many factors, among which differences in enzyme activity play a key role. Consequently, there is also a link between the differential metabolic interaction network and differences in the transcriptomic or proteomic profiles of the different phenotypes.
In this respect, kinetic models can be constructed to improve systemic insight into a metabolic network. Over the last two decades, extensive biological studies have developed manually curated or optimized models and offered open-source files in databases such as the BioModels database (Malik-Sheriff, et al., 2020). However, comprehensive biological modeling requires time-series experimental data; with only steady-state data, the modeling approach is largely limited. Moreover, for large-scale kinetic models, model construction and parameter estimation are challenging tasks (Lamichhane, et al., 2018).
Assuming that the network dynamics are near steady state, biochemical interactions are represented by the system's Jacobian matrix evaluated at steady state. Starting from metabolic networks, this paper presents a new mathematical method for analyzing the variation in the steady-state Jacobian matrix of a biological system (e.g., a metabolic network) between two conditions, e.g., a health condition 'h' and a disease condition 'd'. The method is based on a Jacobian reconstruction from the Lyapunov equation (Sun and Weckwerth, 2012), requiring only data for the computation of the covariance matrix C and an estimate of the fluctuation matrix D.
In recent years, several works have developed inverse differential Jacobian algorithms, which provide a convenient way to infer the dynamics of metabolic networks from metabolomics data (Kügler and Yang, 2014; Nägele, et al., 2014; Steuer, et al., 2003; Sun, et al., 2015; Sun and Weckwerth, 2012; Weckwerth, et al., 2004; Wilson, et al., 2020). This methodology is still at an early stage and requires further improvements and unification of methods (Weckwerth, 2019). This work develops a new inverse differential Jacobian algorithm and a more extensive, automated workflow termed "COVRECON". The method combines the covariance matrix of metabolomics data with automatic metabolic interaction network modeling drawing from genome-scale metabolic reconstructions and biochemical reaction databases. As shown in Figure 1b, COVRECON consists of three sub-modules: building an organism-specific database, constructing a superpathway-based metabolic interaction model (Sim-Network), and computing the inverse Jacobian.
Figure 1. a, Various phenotypes of an organism and differences in cellular physiology can be understood as the result of differences in metabolic interaction network dynamics, which can be further uncovered with the differential Jacobian matrix. b, Work scheme of COVRECON: the covariance (COV) based differential Jacobian calculation implementing an automated metabolic network reconstruction (RECON).
The Differential Jacobian
Consider a biological system that consists of n compounds (metabolites, proteins) denoted by {x_i}, i = 1…n. The system dynamics can be modeled with a set of ordinary differential equations (ODEs), dx/dt = f(x), (1) where x = {x_i} are the concentrations of the n compounds and f = f(x) collects the reaction rates for these compounds (e.g., Michaelis-Menten kinetics or mass action).
The steady-state Jacobian matrix of the model is defined as the n × n matrix J whose entries J_ij are the first-order derivatives of the rate f_i with respect to the concentration x_j, evaluated at steady state: J_ij = ∂f_i/∂x_j |_ss. (2) Even if only evaluated at steady state, the Jacobian matrix of a system contains useful information about its dynamics, such as regulatory interactions between the different compounds. In a previous study, Steuer et al. (Steuer, et al., 2003) established a relation between the covariance matrix C of the metabolic data in the network and the steady-state Jacobian matrix J of the system, given by the Lyapunov equation J C + C J^T = −2 D. (3) Thereby, C ∈ R^(n×n) is the covariance matrix of the compounds' concentrations near the steady-state value, and the fluctuation matrix D is the covariance of the noise sources acting on the system.
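As a quick numerical illustration of this relation (not taken from the paper), the Lyapunov equation can be checked with SciPy; the toy Jacobian and noise level below are arbitrary assumptions.

```python
import numpy as np
from scipy.linalg import solve_lyapunov

# Toy 2-compound example: given a stable Jacobian J and a diagonal
# fluctuation matrix D, Eq. (3)  J C + C J^T = -2 D  fixes the stationary
# covariance C of the concentration fluctuations.
J = np.array([[-1.0, 0.5],
              [0.2, -0.8]])
D = 0.1 * np.eye(2)                          # independent noise on each compound
C = solve_lyapunov(J, -2.0 * D)              # solves J C + C J^T = -2 D
print(np.allclose(J @ C + C @ J.T, -2.0 * D))   # True
```

In the inverse setting used below, the problem is reversed: C is estimated from data and the (differential) Jacobian is the unknown.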
Here, we focus on the differences in Jacobian matrices for two biological conditions, for example a health and disease condition, abbreviated as 'h' and 'd'. Using steady-state metabolomics data of a biological network, the objective is to evaluate the differences in the Jacobian matrices and thus the changes in biochemical interactions between the two conditions.
The differences between the two conditions are quantified by the differential Jacobian ΔJ, whose elements are defined element-wise from the Jacobian J^h in the first (e.g., health) condition 'h' and the Jacobian J^d in the second (e.g., disease) condition 'd' (Eq. 4). In the Lyapunov equation Eq. (3), the Jacobian matrix J has n·n unknown variables, while the covariance matrix C is symmetric and thus provides only n(n+1)/2 independent entries. However, for realistic biological networks the Jacobian matrix is commonly sparse (Nägele, et al., 2014; Sun and Weckwerth, 2012). Thus, the first step of COVRECON, Sim-Network, automatically constructs a metabolic interaction network model for the measured metabolites, which is used to constrain the Jacobian matrix structure. In most cases, the resulting structure leaves fewer non-zero entries than independent variables in the covariance matrix, making the Lyapunov equation over-determined and requiring the use of regression methods.
The COVRECON method consists of two major steps: 1. Determine the Jacobian structure. The key objective is to reduce the network structure from a genome-scale network to the specific biochemical species that are included in the considered dataset.
2. Analyze the differential Jacobian matrix. We establish a new method that focuses on identifying the major components in the differential Jacobian, instead of pursuing a full quantitative reconstruction.
The following sections will describe these two steps in more detail.
Determination of the Jacobian structure with the software Sim-Network
The Jacobian matrix structure is determined by the software tool Sim-Network, which we implemented in Matlab. It constructs a conceptualized metabolic model for the metabolites in the considered dataset using a pathway search approach. Methods for pathway design and prediction have a long history since the emergence of genomics, proteomics, and metabolomics databases (e.g., BiGG models (King, et al., 2016), KEGG (Ogata, et al., 1999), MetaCyc (Caspi, et al., 2020), and ModelSEED (Seaver, et al., 2021)).
To construct the conceptualized metabolic model, Sim-Network makes use of reaction data from BiGG genome-scale models (King, et al., 2016) and the KEGG database (Ogata, et al., 1999). For thermodynamic data on the relevant reactions (reaction irreversibility, direction, and estimated Gibbs free energy change), we utilize the ModelSEED dataset (Seaver, et al., 2021). Figure 1b illustrates a scheme of the Sim-Network tool; the aim of Sim-Network is to reduce the genome-scale model (with more than 1000 metabolites) to a superpathway-based metabolic interaction network for the set of measured metabolites (with fewer than 100 metabolites). In general, Sim-Network comprises three main steps: network information gathering, path search, and pruning.
We first generate a reaction database from a BiGG model or from all the reactions related to a specific organism in KEGG. Sim-Network then gathers the relevant metabolites and assembles a directional, weighted network representation. Next, using a shortest-path search algorithm, Sim-Network computes the shortest paths (in both directions) for all pairs of metabolites in a specific metabolomics dataset. Finally, based on the predefined cost threshold, Sim-Network prunes the network (assuming long-distance interactions are negligible) and constructs a metabolic interaction network for the considered dataset. The detailed workflow of Sim-Network consists of the following steps:
Step 1: Initially, one needs to choose an organism-based genome-scale metabolic model as a database. If transcriptomic data is available, the model is modified and trimmed by discarding reactions whose enzyme is not activated, using the GIMME method (Becker and Palsson, 2008).
When no transcriptomic data is available, the KEGG database offers a broader alternative, including more enzyme-specific reactions. By choosing the organism in the COVRECON toolbox according to the experimental dataset (e.g., hsa for Homo sapiens, mmu for Mus musculus), Sim-Network searches for and includes all the enzymes and reactions of that organism in a database model.
Step 2: The model stoichiometric matrix is transformed into its network representation.
Meanwhile, a weighting strategy is applied to the reactions. Firstly, the side-metabolites (e.g., H2O, H+, ...) are excluded during the pathway search, because the concentrations of these side-metabolites are influenced by many other metabolites, which makes the influence from each specific metabolite negligible. The predefined side-metabolites are listed in the supplemental material. For each reaction, the reaction direction and irreversibility information is obtained from the BiGG model or from the ModelSEED dataset. The forward direction is given weight 1. If the reaction is reversible, the reverse reaction is given a user-defined weight (default 2). If the Gibbs free energy difference ΔG of the reaction is available in the ModelSEED database, we adjust the path cost for the reverse reaction according to the strategy of the MRE web tool (Kuwahara, et al., 2016).
Generally speaking, for a reverse reaction whose Gibbs free energy difference is larger than 100 kcal/mol, we add the log value of the Gibbs free energy difference to the user-defined reverse-reaction weight. Based on this weighting strategy, Sim-Network connects every reactant-product pair in each forward or reverse reaction with a directional connection and assigns the related reaction weight to that connection. Since the reaction rate is influenced by all reactants, two metabolites are also connected in Sim-Network when they are both reactants in a reaction, or both products in a reversible reaction.
Step 3: Sim-Network computes the shortest paths between all pairs of metabolites in the dataset. If the shortest route from metabolite x_i to x_j contains another metabolite x_k of the selected metabolite set, we discard this connection, assuming that the influence from x_i to x_j is reflected by the combined influences of (x_i to x_k) and (x_k to x_j). The resulting network is saved in SBML format (Hucka, et al., 2003) with the SimBiology Matlab toolbox.
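The path-search and pruning logic of Steps 2-3 can be illustrated with a small Python/networkx sketch (the actual tool is a Matlab implementation; the reaction-list format, function name, and default weights below are assumptions):

```python
import networkx as nx

def sim_network(reactions, measured, cost_threshold=2.0, reverse_weight=2.0):
    """Toy Sim-Network sketch: build a weighted directed metabolite graph from a
    reaction list and keep only short metabolite-to-metabolite routes whose
    shortest path does not pass through another measured metabolite.
    `reactions` is a list of (substrates, products, reversible) tuples."""
    g = nx.DiGraph()
    for subs, prods, reversible in reactions:
        for s in subs:
            for p in prods:
                g.add_edge(s, p, weight=1.0)                  # forward direction, weight 1
                if reversible:
                    g.add_edge(p, s, weight=reverse_weight)   # reverse direction, user-defined weight
    kept = set()
    for a in measured:
        for b in measured:
            if a == b or a not in g or b not in g or not nx.has_path(g, a, b):
                continue
            cost, path = nx.single_source_dijkstra(g, a, b, weight="weight")
            # prune long routes and routes passing through another measured metabolite
            if cost <= cost_threshold and not any(m in measured for m in path[1:-1]):
                kept.add((a, b))
    return kept
```

Each kept pair (a, b) corresponds to a non-zero off-diagonal position of the Jacobian structure discussed next.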
With the conceptualized network, one can introduce the Jacobian structure. The diagonal components of the related Jacobian matrix are non-zero components, and each connection in the conceptualized network represents a non-zero off-diagonal component. In Supplemental material S3, we provide a toolbox manual and case studies.
The regression loss matrix as a new inverse differential Jacobian evaluation
This section introduces a new algorithm to determine the major components of the differential Jacobian matrix, using the Jacobian structure information as determined by Sim-Network and a covariance estimation from metabolomics data.
In the inverse Jacobian approach, the Lyapunov equation (3) is solved for the Jacobian matrix with given covariance and fluctuation matrices C and D. This is a linear equation, which, as outlined in previous studies (Kügler and Yang, 2014; Sun and Weckwerth, 2012), can be rewritten as A·j = b, (7) where the vector j consists of all the non-zero components of the Jacobian matrix as unknown variables. The matrix A is calculated from the values of the covariance matrix C; its dimension is n(n+1)/2 × m, where m is the dimension of the vector j. Since the Jacobian structure is sparse, m < n(n+1)/2, and equation (7) is overdetermined. The vector b is constructed from the fluctuation matrix D; its dimension is n(n+1)/2. Similarly, to evaluate the differential Jacobian in the two conditions denoted by 'h' and 'd', the corresponding Lyapunov equations can be rewritten as A^h·j^h = b^h and A^d·j^d = b^d. (8) Previous inverse Jacobian algorithms (Kügler and Yang, 2014; Nägele, et al., 2014; Steuer, et al., 2003; Sun, et al., 2015; Sun and Weckwerth, 2012; Weckwerth, et al., 2004; Wilson, et al., 2020) have assumed that independent stochastic noise affects each metabolite individually, giving rise to a diagonal fluctuation matrix in the Lyapunov equation (3). These studies therefore used an averaged result over randomly sampled diagonal perturbation matrices D (Nägele, et al., 2014; Sun and Weckwerth, 2012), or applied an L-p optimization of the two conditional fluctuation matrices D^h and D^d (Kügler and Yang, 2014) to calculate the differential Jacobian matrix. These methods work by directly solving the linear equation (7). However, when the condition number of the matrix A is large, the solution of the linear equations is unstable against small perturbations in the data. This makes previous methods (Kügler and Yang, 2014; Nägele, et al., 2014; Sun, et al., 2015; Sun and Weckwerth, 2012) numerically sensitive and not adequate for large-scale models.
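The vectorization of the Lyapunov equation into the overdetermined system of Eq. (7) can be sketched as follows in Python (a simplified stand-in for the Matlab implementation; the `pattern` argument listing the non-zero Jacobian positions is an assumed interface):

```python
import numpy as np

def lyapunov_system(C, D, pattern):
    """Rewrite J C + C J^T = -2 D as A j = b for the non-zero Jacobian
    entries listed in `pattern` (list of (i, j) index pairs)."""
    n = C.shape[0]
    eqs = [(k, l) for k in range(n) for l in range(k, n)]   # independent upper-triangular equations
    A = np.zeros((len(eqs), len(pattern)))
    for col, (i, j) in enumerate(pattern):
        M = np.zeros((n, n))
        M[i, :] += C[j, :]          # contribution of E_ij C  (unit matrix E_ij at position (i, j))
        M[:, i] += C[:, j]          # contribution of C E_ji = C (E_ij)^T
        A[:, col] = [M[k, l] for k, l in eqs]
    b = np.array([-2.0 * D[k, l] for k, l in eqs])
    return A, b

# A least-squares solve recovers the non-zero Jacobian entries when A is well conditioned:
# j_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```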
As an alternative to directly solving the linear equations (8) and subsequently calculating the differential Jacobian according to (4), we introduce the regression loss as a measure of the relevance of individual components in the differential Jacobian. When an overdetermined linear equation of the general form A·x = b is solved in the least-squares sense, x* = (A^T A)^{-1} A^T b, the remaining residual L* = min_x ||A·x − b|| quantifies how well the assumed structure can explain the data (Freedman, 2009). (9) This property can be used to make the determination of large elements in the differential Jacobian more robust against fluctuations in the data. To this end, we construct a regression loss matrix L* that aims to capture the relative importance of individual elements in the differential Jacobian. The regression loss matrix has the same dimension and sparsity structure as the Jacobian, determined by Sim-Network. Each non-zero element of the regression loss matrix is computed as in Equation (9), where A is obtained by combining A^h and A^d from Equation (8) with the additional constraint that only that single element may differ between the Jacobians J^h and J^d, while all other elements are equal. To solve the regression problem, one needs specific values for the fluctuation matrices D^h and D^d. However, in practice these are not known. Therefore, similarly to previous studies (Nägele, et al., 2014; Sun and Weckwerth, 2012), we sample over possible values of the fluctuation matrices (diagonal elements distributed between 0 and 1) and take the minimum regression loss as the overall result for the regression loss matrix, as in Equation (9). A more comprehensive description of the new algorithm is presented in Supplemental Material S1.
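A minimal sketch of the per-element regression loss, assuming the two stacked systems of Eq. (8) have already been assembled (e.g., with the helper above) and that the fluctuation matrices are sampled outside this function; variable names and the least-squares details are assumptions rather than the toolbox's exact procedure:

```python
import numpy as np

def regression_loss(A_h, b_h, A_d, b_d, p):
    """Residual of the constrained least-squares fit in which only the
    candidate Jacobian entry with column index p may differ between the
    conditions 'h' and 'd' (all other entries are shared), cf. Eq. (9)."""
    rows_h = A_h.shape[0]
    A_d_shared = A_d.copy()
    A_d_shared[:, p] = 0.0                               # entry p in condition 'd' becomes a separate unknown
    A = np.vstack([np.hstack([A_h, np.zeros((rows_h, 1))]),
                   np.hstack([A_d_shared, A_d[:, [p]]])])
    b = np.concatenate([b_h, b_d])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(A @ x - b)                     # regression loss for candidate entry p

# Filling one loss value per non-zero Jacobian position, minimised over sampled
# diagonal fluctuation matrices, yields the regression loss matrix of Eq. (9).
```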
For comparison, we replicated the L-p optimization used in previous work [19]. The general idea is that the L-p cost drives most components of the optimized differential Jacobian matrix (refer to (DJ-1) in Equation 4) close to zero.
Furthermore, previous studies performed the L-p optimization based on a diagonal D matrix; we extended it to a diagonally dominant matrix optimization and applied an improved algorithm compared to [19], which integrates several global optimization approaches. The details of the original and improved L-p optimization approaches are presented in the Supplemental Material.
The inverse differential metabolic interaction network
The differential metabolic interaction network illustrates the change of the metabolic network between two phenotypes and is visualized by circular interaction plots. Here, each node i represents a metabolite in the dataset; the thickness of the line j->i is proportional to ΔJ_ij in Eq. (4) or to the corresponding entry of the regression loss matrix (9); the size of node i is proportional to ΔJ_ii in Eq. (4) or to the corresponding diagonal entry of the regression loss matrix (9). The generation of these circular plots is included in the COVRECON toolbox, with different settings for the resulting differential metabolic interaction network, as shown in the Supplemental material S3 (toolbox manual). The detailed enzyme and gene information of each superpathway can be checked interactively in the circular plot.
Model case studies
To evaluate the new regression loss Jacobian algorithm, we utilize an abstract test model and several published models obtained from the EBI BioModels database (Malik-Sheriff, et al., 2020). The following models are used in this evaluation, with reaction perturbations as described to obtain the two conditions ('h' and 'd'): 1. Abstract test model with six components (see supplemental material for details).
2. Model of the upper glycolysis pathway from Klipp et al. (Klipp, et al., 2016): Similar to the previous work of Kügler et al. (Kügler and Yang, 2014), in order to mimic a second network condition we introduced a twofold increase of the phosphorylation rate parameter k4 of reaction 4 (phosphorylation of fructose 6-phosphate to fructose 1,6-bisphosphate), from its nominal value k4^h = 1 to k4^d = 2. Figure 2a shows the exact differential Jacobian matrix between these two conditions at the steady state.
3. Model of the EGFR/ERK signaling pathway from Orton et al. (Orton, et al., 2009): This work studied the mutation of SOS feedback reactions. The differential Jacobian matrix of the wild-type and mutated models is shown in Figure 3a. These two models were also used as evaluation models in (Kügler and Yang, 2014). 4. Mathematical model of carbohydrate energy metabolism (Nazaret and Mazat, 2008): We increased the rate parameter of reaction 2 five-fold for the second conditional Jacobian matrix. Figure 2c shows the exact differential Jacobian matrix for this model. 5. Wild-type and mTOR knockout models fitted to time-series experimental data, which represent our two conditional Jacobian models; the exact differential Jacobian matrix is shown in Figure 3b.
6. Hepatic glucose metabolism model (Bulik, et al., 2016). We applied a two-fold parameter change for the reaction rate of the second reaction R2 to generate the second conditional model. The related differential Jacobian matrix is shown in Figure 3c.
7. Large-scale blood cell metabolism model (Holzhütter, 2004). We introduced a fivefold increase to several components of the Jacobian matrix directly; the exact differential Jacobian matrix is shown in Figure 3d.
The correlation matrix does not recover the Jacobian
Correlation network topology analysis is one of the most frequently used multivariate statistical methods to study metabolic interactions (Kitagawa, et al., 2019; Poldrack, et al., 2015; Tofte, et al., 2019; Weckwerth, 2010; Weckwerth, 2011; Weckwerth, et al., 2004). However, correlation network analysis alone is not suitable for studying the differential metabolic regulation between two conditions. Using models 2, 4, 6 and 7, we generated the differential correlation matrices under the two conditions. Then, we compared the differential Jacobian matrices and the differential correlation matrices (supplementary Figure 1). The results show that the differential correlation matrix cannot recover the differential Jacobian matrix. We note that a correlation matrix or network can still be used to indicate potential interactions when the Jacobian structure information is not clear (Kitagawa, et al., 2019; Poldrack, et al., 2015; Tofte, et al., 2019).
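For completeness, the differential correlation matrix used as a baseline here is straightforward to compute (Python sketch; function name assumed):

```python
import numpy as np

def differential_correlation(samples_h, samples_d):
    """Element-wise absolute difference of the Pearson correlation matrices of
    the two conditions (rows = replicates, columns = metabolites).  As noted in
    the text, this statistic does not recover the differential Jacobian."""
    corr_h = np.corrcoef(samples_h, rowvar=False)
    corr_d = np.corrcoef(samples_d, rowvar=False)
    return np.abs(corr_d - corr_h)
```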
The regression loss Jacobian is more reliable than the L-p based Jacobian
In the following, we describe the results of applying either the regression loss Jacobian algorithm or the improved L-p optimization algorithm for the differential Jacobian reconstruction to the different models described in Section 2.4. Since previous studies (Kügler and Yang, 2014; Nägele, et al., 2014; Steuer, et al., 2003; Sun, et al., 2015; Sun and Weckwerth, 2012; Weckwerth, et al., 2004; Wilson, et al., 2020) assumed the fluctuation matrix D to be diagonal, we restrict this comparison to a diagonal structure of D.
In this evaluation, artificial data is generated using Gaussian perturbations acting only on individual compounds for all the evaluation models in Section 2.3.3. The covariance matrices for models 1 and 2 are computed through SDE simulation with 100 and 300 samples, respectively, while for the other models the covariance matrices are determined from the Lyapunov equation with a noise level of 0.4 (Section 2.4). The Jacobian reconstruction made use of structure information derived directly from the underlying model. Both the L-p optimization method and the new regression loss Jacobian algorithm were applied to these data. As shown in supplementary Table 1, the new inverse method achieves a more accurate result with less computation time than the L-p optimization. The table shows the accuracy of three inverse differential Jacobian approaches: Kügler et al. (Kügler and Yang, 2014) (details in the Supplementary Material), an improved L-p optimization approach, and our new inverse Jacobian algorithm. The indicated cost is the L-p optimization loss; the target cost is the L-p loss calculated with the real J^h and J^d (details in the Supplemental Material). From the table, we can see that the improved L-p optimization approach obtains a better result than Kügler et al. (Kügler and Yang, 2014), but requires more computation time. Our new inverse Jacobian algorithm needs fewer samples to reach a similar accuracy (100 compared to 1000) while also cutting down the computation time.
The inverse differential metabolic interaction networks obtained with both the L-p algorithm and our new regression loss Jacobian algorithm for all models are shown in Figure 2. The results show that, because of the instability of the regression solution with ill-conditioned matrices, the L-p optimization becomes inadequate when either the condition number of the Jacobian or the model dimension is large. The regression loss Jacobian algorithm achieves better accuracy and numerical stability by utilizing the regression loss matrix to recover the relevant components of the differential Jacobian. The actual matrices (in place of the circular interaction plots) are shown in supplementary Figure 2. Furthermore, supplementary Figure 3 shows a scatter plot between the real differential Jacobian component values, scaled to the interval (0, 1), and the regression losses calculated by our algorithm for all models.
This confirms that with enough samples, the regression loss Jacobian algorithm can find most differential Jacobian components of the two conditional Jacobian matrices, except for some false negative components in the larger models 6 and 7.
Figure 2. In each sub-graph, the left two subplots give the real differential Jacobian matrix and differential interaction network; the right two subplots show the differential interaction network calculated with the L-p optimization approach and with the regression loss Jacobian algorithm, respectively (refer to Section 2.3).
To test the numerical stability of the new algorithm, we calculated the statistical accuracy using different noise levels in the D matrix (0.2, 0.3, 0.4 and 0.5). For each model and noise level, we repeated the evaluation 100 times and calculated the replicability of the top 1, 3 and 5 differential Jacobian components in the regression loss matrix (refer to Eq. (S4*)). The results are listed in Table 2, where we observe that the new algorithm identifies the top 1, 3, and 5 components with high accuracy. Of note, even in the larger models 6 and 7, which have several false negatives, these false negatives are highly consistent, indicating that the noise level in the D matrix does not significantly impact the results. Finally, we tested the regression loss Jacobian method with covariance data generated from stochastic simulations of models 1, 3, 4 and 5 with a stochastic second/third-order implicit Runge-Kutta method. For each model, we added stochastic Gaussian noise perturbations to every component at each time step (Higham, 2008), which corresponds to the nominal D matrix being diagonal. For each model, we repeated the computation with covariances computed from 100 and from 1000 samples to evaluate the effect of sample size on the results.
Supplementary Figure 4 demonstrates the results of the new inverse Jacobian algorithm. As shown in Supplementary Figures 4a and 4b, for small models (about 10 variables) the algorithm correctly finds the large differential Jacobian components through large values in the regression loss using only 100 samples. For larger models, the algorithm is only able to find some of the relevant differential Jacobian components with 100 or 1000 samples, as shown in Supplementary Figures 4c and 4d. Even though detection improves with 1000 compared to 100 samples, some false negatives remain even with 1000 samples. We conclude that this new inverse Jacobian algorithm gives reliable results using on the order of 100 samples for metabolic models with about 10 variables. For larger models, we expect the required sample number to grow with the square of the model size, presenting a practical challenge. However, even with an insufficient number of samples, we can still find several relevant differential Jacobian components through large values of the regression loss.
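The covariance-from-stochastic-simulation step can be mimicked with a basic Euler-Maruyama integrator (the paper uses a stochastic second/third-order implicit Runge-Kutta scheme, so this Python sketch is only a rough stand-in; all parameter values are illustrative):

```python
import numpy as np

def sde_covariance(f, x0, sigma=0.1, n_samples=100, t_end=50.0, dt=1e-2, seed=0):
    """Estimate a near-steady-state covariance matrix by integrating
    dx = f(x) dt + sigma dW with Euler-Maruyama and sampling the end state
    of each independent run."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    samples = np.empty((n_samples, n))
    steps = int(t_end / dt)
    for s in range(n_samples):
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            x = x + f(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        samples[s] = x
    return np.cov(samples, rowvar=False)
```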
COVRECON workflow case study
Since the first step of COVRECON, Sim-Network, reconstructs a reduced model, we need to test the influence of this network reduction (see "5 Effect of the network reduction on the Jacobian reconstruction" in the Supplemental material S1). The result verifies the feasibility of this first step (Sim-Network) of the COVRECON approach: even when the reduced network structure is used for the Jacobian reconstruction, the algorithm is able to detect most of the relevant interactions in the original network.
In the next step, the complete COVRECON workflow is tested starting from a data covariance matrix, without additional structure information. The test is done with models 6 and 7 (the hepatic glucose metabolism model and the blood cell metabolism model). With these models, we generate the covariance matrices C^h and C^d for the conditions 'h' and 'd' with the Lyapunov equation using a noise level of 0.5. In the reconstruction, we first use Sim-Network to determine the Jacobian structure, relying on biochemical pathway information in the KEGG database for Homo sapiens (hsa). As algorithm parameters, we use a cost threshold of 2, discard side-metabolites (listed in the supplementary material), apply a reverse-reaction weight of 2, and use thermodynamics to modify the reverse-reaction weight. The reconstructed metabolic interaction networks and Jacobian structure matrices of the two models are shown in Figure 3. In conclusion, compared with manually built models, Sim-Network has several advantages.
First, the resulting model is all-inclusive, since Sim-Network explores all the reactions of a specific organism and locates all possible routes without relying on a modeler's domain knowledge. On the other hand, the automatically constructed model is more complex than a manually built one and might include rare reactions and routes. Domain knowledge can thus still be helpful to remove individual interactions, starting from the detailed information given by the Sim-Network reconstruction.
Moreover, the enzyme regulatory interactions are not included in the current tool. In fact, these interactions are still rarely known except for several widely studied enzyme activators or inhibitors such as fructose bisphosphate for pyruvate kinase. Future versions of COVRECON could make use of an enzyme regulation database to also take these interactions into account.
With the reconstructed network information, we then apply the regression loss Jacobian algorithm to the covariance matrices. As shown in Figure 4 and Supplemental Figure 5, the COVRECON approach gives results similar to those obtained with the exact Jacobian structure information from the model. As a further evaluation, we also consider different Sim-Network settings for reconstructing the network (cost threshold 1, reverse-reaction weight 1, no thermodynamic strategy; see supplementary material "6 Sim-Network Matlab Interface and default settings"). This results in more connections because the forward and reverse reaction directions are treated in the same way. As shown in supplementary Figure 6, with this setting the reconstructed model misses fewer components present in the literature model but contains even more components that are not present in the literature model. The resulting regression loss matrix remains similar in each case. Overall, this analysis verifies that the COVRECON workflow can recover relevant interactions in a differential Jacobian from metabolite covariance data alone.
Application to an experimental dataset from literature
Finally, we applied COVRECON to a real experimental dataset from a breast cancer study (Di Filippo, et al., 2022). We analyzed the differential Jacobian matrix between two datasets: a nontumorigenic breast epithelial cell line (MCF102A) and a pleural effusion metastasis of a breast adenocarcinoma (MCF7). For the reconstruction, we used the KEGG dataset with organism Homo sapiens (KEGG code: hsa), the Sim-Network settings were left at their default values, and the transcriptomic dataset was used to discard inactive reactions with the GIMME method (refer to Method Section 2.2). The discovered superpathways with the relevant reaction, enzyme and gene information are listed in the Supplemental file S2. A COVRECON toolbox manual for the case study is presented in Supplemental file S3. Figure 5 illustrates the inverse differential metabolic interaction network, where highlighted components indicate major differences in metabolic interactions between the two datasets. Here, different from Section 2.4, the node size represents the Variable Importance (calculated as -log(p), where p is the p-value of a t-test comparing the metabolomics datasets). The corresponding matrix is presented in Supplemental Figure 7. Moreover, the detailed enzyme and gene information of each metabolic interaction line is provided in Supplemental material S8, where one can interactively check detailed information about each metabolic interaction; the information is also presented in Supplemental material S2. As shown in Figure 5, we found several metabolic interactions with a major difference between the datasets, for which the transcript levels of the underlying enzymes also showed significantly different expression. In Supplemental Figure 8, we list all the t-test results of the transcriptomic profile for interactions with a highlighted value above 0.5. Figure 5. The differential metabolic interaction network plot, where the node size represents the Variable Importance (calculated as -log(p), p being the p-value of the t-test from the metabolomics dataset comparison) and the line widths describe the metabolic interaction difference. We chose three highlighted metabolic interactions, list their superpathway details, and performed a t-test on the relevant transcriptomic data; the t-test results are significant (p < 0.05). The Matlab-format circular plot with all superpathway information is in Supplemental material S8.
Conclusion
In this paper, we have developed a new approach to the inverse differential Jacobian problem: COVRECON. Unlike the widely used constraint-based analyses for reaction-flux optimization, this approach endeavors to discover the causal biochemical interactions that change between two conditions of the system. It offers an alternative mathematical approach to process and interpret large-scale metabolomics data. The open-source Matlab tool is available at https://bitbucket.org/mosys-univie/covrecon. Supplemental file S3 gives a toolbox manual.
The main subject of this new approach is large-scale metabolomics data. In addition, we also aimed to integrate different OMICS datasets: in Sim-Network, transcriptomic data can be used to exclude reactions with low activity, and important enzyme regulations can be added to the pathway search for a better network reconstruction.
To our knowledge, COVRECON is the first method to integrate different OMICS data and automatically construct a metabolic interaction model, providing more general network structures and perturbations thereof. As for the inverse Jacobian part, this work introduces a novel algorithm that improves accuracy and stability, reduces computation time, and extends the method to large-scale models. | 7,639.8 | 2023-03-21T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Investigations on Strong-Tuned Magnetocaloric Effect in La0.5Ca0.1Ag0.4MnO3
The magnetocaloric effect (MCE) of La0.5Ca0.1Ag0.4MnO3 (LCAMO) is simulated using a phenomenological model (PM). The LCAMO MCE parameters are calculated from simulations of magnetization vs. temperature at different values of the external magnetic field (H ext). The temperature range of the MCE in LCAMO grows as the variation in H ext increases, eventually covering room temperature at high H ext values. The MCE of LCAMO is tunable with the variation of H ext, showing that LCAMO is practically useful as a magnetocaloric (MC) material for the development of magnetic refrigerators over an extensive temperature range, including room temperature as well as lower and higher temperatures. The MCE parameters of LCAMO are larger than those of several MC samples reported in earlier works.
INTRODUCTION
The need to solve the problem of hazardous gas emissions from conventional vapor refrigerators has led to increased interest in magnetic refrigerators (MRs), whose operation relies on the magnetocaloric effect (MCE) (Dhahri et al., 2014;El-Sayed and Hamad, 2019a;El-Sayed and Hamad, 2019b;Ahmed et al., 2021a, Ahmed et al., 2021b;Hamad et al., 2021;Jebari et al., 2021), because the MR provides high cooling efficiency without any negative impact on the environment and offers low energy consumption, mechanical stability, and less noise during cooling operation (Dhahri et al., 2015;Hamad, 2015a;ErchidiElyacoubi et al., 2018a, ErchidiElyacoubi et al., 2018b;Hamad et al., 2020;Sharma et al., 2020;Belhamra et al., 2021). The MCE is described as a change in magnetic entropy (ΔS M ) with a variation in the external magnetic field (H ext ) exerted on the material, causing a change in temperature (Masrour et al., 2016;ErchidiElyacoubi et al., 2018c;Kadim et al., 2020, Kadim et al., 2021a, Kadim et al., 2021b). Numerous studies over the decades have examined various magnetic materials to assess their suitability as magnetocaloric (MC) materials for the MR industry (Hamad, 2015b;Masrour et al., 2017;Jebari et al., 2021;Labidi et al., 2021). It is preferable to use MC materials that exhibit a second-order magnetic transition with a suitable Curie temperature (θ C ), as these are appropriate for use in a wide temperature range, including room temperature (Choura-Maatar et al., 2020;Henchiri et al., 2020;Laajimi et al., 2020). Current efforts are directed towards the use of manganites as effective substances in MRs due to their great chemical stability during frequent use, lack of eddy currents, ease of preparation, high electrical resistance, and the possibility of improving their properties through doping and changing the oxygen content (Alzahrani et al., 2020;Choura-Maatar et al., 2020;Henchiri et al., 2020;Laajimi et al., 2020). Felhi et al. prepared La 0.5 Ca 0.1 Ag 0.4 MnO 3 (LCAMO) via the ceramic method and reported that increasing H ext broadens the ferromagnetic (FM) phase transition of LCAMO, covering room temperature under high H ext (Jeddi et al., 2020).
These results motivate us to investigate the MCE of LCAMO, expecting that the MCE of LCAMO covers a large range of temperatures, especially cryogenic temperature and room temperature. Furthermore, it is believed that LCAMO, as a manganite, has low material processing costs, high chemical stability, and high resistivity, which are advantageous for reducing the overall eddy current heating. In this research, the MCE of LCAMO is studied using a phenomenological model (PM) to simulate the isofield magnetization vs. temperature curves, concluding with simulated ΔS M , heat capacity change (Δ C P,H ), and relative cooling power (RCP).
THEORETICAL CONSIDERATIONS
According to the PM, as described in Hamad (2012, 2015c, 2015d), the magnetization (M) vs. temperature is simulated by Eq. 1, a hyperbolic-tangent step between M_i and M_f centered at the Curie temperature, where M_i and M_f are the values of magnetization at the onset and at the completion of the FM-paramagnetic transition, respectively, as indicated in Figure 1.
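A minimal Python sketch of a Hamad-type tanh magnetization step consistent with the parameter set listed later (M_i, M_f, θ_C, β, α); the exact roles assigned to β and α here (dM/dT in the FM region and at θ_C) and the derived constants A and C are assumptions, not a transcription of the paper's Eq. 1.

```python
import numpy as np

def magnetization(T, M_i, M_f, theta_c, beta, alpha):
    """Phenomenological tanh step between M_i and M_f centred at theta_c
    (sketch of a Hamad-type model; parameter roles are assumed)."""
    A = 2.0 * (beta - alpha) / (M_i - M_f)      # transition sharpness
    C = (M_i + M_f) / 2.0 - beta * theta_c      # offset fixing M(theta_c) = (M_i + M_f)/2
    return (M_i - M_f) / 2.0 * np.tanh(A * (theta_c - T)) + beta * T + C
```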
The numerical evaluation of ΔS M of LCAMO under a field variation ΔH can be derived from Maxwell's relation, ΔS M (T, ΔH) = ∫ (∂M/∂T) dH, applied to Eq. 1 (Eq. 2). The full-width at half-maximum (δT FWHM ) of the resulting ΔS M (T) peak follows from Eq. 3. The magnetic cooling efficiency of LCAMO is estimated from the magnitude of |ΔS Max (T, H max )| and δT FWHM (Hamad, 2012); the relative cooling power is calculated as RCP = |ΔS Max | × δT FWHM (Eq. 4). The heat capacity change Δ C P,H of LCAMO is obtained as Δ C P,H = T ∂(ΔS M )/∂T (Eq. 5) (Hamad, 2012).
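The Maxwell-relation entropy change and the derived figures of merit can also be evaluated numerically from a simulated M(T) curve; in this Python sketch (not the paper's closed-form Eqs. 2-5), ΔS M is approximated as (dM/dT)·ΔH and Δ C P,H as T·d(ΔS M )/dT.

```python
import numpy as np

def mce_parameters(T, M, dH):
    """Numerical MCE estimates from an isofield M(T) curve: entropy change,
    its full width at half maximum, RCP and the heat-capacity change."""
    dS = np.gradient(M, T) * dH                  # Maxwell relation, linear-in-field approximation
    peak = dS.min()                              # dS_M is negative at the FM transition
    in_peak = T[dS <= peak / 2.0]
    dT_fwhm = in_peak.max() - in_peak.min() if in_peak.size else 0.0
    rcp = abs(peak) * dT_fwhm                    # relative cooling power
    dC = T * np.gradient(dS, T)                  # heat-capacity change
    return dS, abs(peak), dT_fwhm, rcp, dC
```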
RESULTS AND DISCUSSION
At values of H ext < 5 T, two magnetic transitions of LCAMO can be observed in Figure 2, at two different temperatures. This is possibly due to the presence of a canted FM phase in the FM matrix, which can be attributed to the additional Ag content (Jeddi et al., 2020), so two peaks are expected in the ΔS M curves. However, at H ext = 5 T, LCAMO appears to show a single magnetic transition, so a single peak is expected in the ΔS M curve; this is possibly due to a strong interatomic double-exchange interaction at H ext = 5 T. To simulate the MCE of LCAMO, the PM parameters (M i , M f , θ c , β, and α) of LCAMO for each magnetic transition were determined directly from the experimental data (isofield magnetization vs. temperature) of Jeddi et al. (2020). Figure 2 shows good agreement between the experimental and theoretical M(T) results, confirming the suitability of this model for simulating the MCE of LCAMO. This work demonstrates the good coincidence between the experimental data and the continuous curves given by the PM, indicating that this model allows us to predict the MCE of LCAMO under different magnetic fields. The M(T) curves of LCAMO demonstrate the magnetic transition from the FM phase to a paramagnetic one under different magnetic fields. The θ C increases as H ext increases due to the increased alignment of the local spins, resulting in a stronger interatomic double-exchange interaction. As shown in Figure 3A, there are two peaks in the ΔS M (T) curves when H ext < 5 T. However, at H ext = 5 T, there is a single peak in the ΔS M curve due to the large interatomic double exchange. ΔS M reaches a peak of 2.75 J/kg K. Although the maximum ΔS M is 2.75 J/kg K upon a 5 T applied field variation, which is about 57% of the corresponding value of the compound belonging to the same system, La 0.5 Ca 0.2 Ag 0.3 MnO 3 (ΔS Max = 4.8 J/kg K upon 5 T), the value of RCP (273.5 J/kg upon 5 T) is larger, and the ΔS M distribution of LCAMO is much broader than that of La 0.5 Ca 0.2 Ag 0.3 MnO 3 (RCP = 168 J/kg, δT FWHM = 35 K upon 5 T), covering a wider range of temperature (Felhi et al., 2019). Figure 3B shows ΔS M (T) calculated via the Maxwell relation from the experimental isothermal magnetization as a function of H in Ref. 31, and ΔS M (T) calculated by the PM, ranging between 240 and 270 K and covering the highest-temperature transition. There is good agreement between the results calculated with the Maxwell relation and with the PM. Therefore, these results confirm that Eq. 4 still holds at ΔH of 0.5, 1, 3, and 5 T. Figure 4 shows that Δ C P,H (T) changes from negative to positive at around θ C for each magnetic transition, causing a modification in the total specific heat. This oscillating temperature dependence of Δ C P,H (T) reflects the ΔS M (T) behavior. The behavior of the |ΔS M | and Δ C P,H (T) curves suggests how the temperature range over which LCAMO can function in an MR can be expanded. It is clear that the |ΔS M | and Δ C P,H peaks of LCAMO extend over a large temperature range. This temperature range of |ΔS M | and Δ C P,H expands with increasing variation in H ext , i.e., the peaks broaden, covering room temperature at high values of ΔH.
This indicates that larger |ΔS M | and Δ C P,H are expected at higher values of ΔH. Moreover, the variation of H ext allows tuning of the θ C of LCAMO. This tunable θ C makes LCAMO practically more useful for the development of MRs. Figures 5-8 show the values of |ΔS Max |, δT FWHM , RCP, and Δ C P,H(Max) (the maximum value of Δ C P,H ) for LCAMO, respectively. It is clear that |ΔS Max |, RCP, and Δ C P,H(Max) show a general increase with increasing ΔH, due to the enhanced variation of the alignment of the local spins, which enhances the MC properties.
These large values of |ΔS Max |, δT FWHM , RCP, and Δ C P,H(Max) in LCAMO, as in other perovskite manganites, arise from the strong coupling between spin and lattice (Dhahri et al., 2008). Since the lattice change is associated with the magnetic transition in the manganite, it causes a further change in the magnetism of the manganite (Dhahri et al., 2008). Furthermore, the <Mn-O> bond distance and the <Mn-O-Mn> bond angle change to favor spin ordering at high values of H ext , leading to enhanced |ΔS Max |, δT FWHM , RCP, and Δ C P,H(Max) in LCAMO (Radaelli et al., 1995;Hamad, 2015b). Table 1 compares the MCE parameters of LCAMO with those of various materials at high values of ΔH from previous works (Álvarez-Alonso et al., 2013;Hamad, 2013;Saadaoui et al., 2013;Ho et al., 2014;Bhumireddi et al., 2015;Boutahar et al., 2015;Jerbi et al., 2015;Gupta and Poddar, 2016;Mansouri et al., 2016;Oubla et al., 2016;Long et al., 2018;Biswal et al., 2019;El Boubekri et al., 2020). The MCE parameters of LCAMO are significantly larger than those of some MC samples at the corresponding and higher values of ΔH. From this comparison, we conclude that LCAMO can serve as a favorable MC magnet for the MR.
CONCLUSION
Based on thermodynamic calculations via the PM, the MCE of LCAMO was simulated under different values of the variation in H ext . The MCE of LCAMO is strongly tunable with the variation of H ext . Therefore, LCAMO can be used as an effective material for MRs over a wide temperature range, including room temperature as well as lower and higher temperatures, making it practically useful for the development of MRs. The values of the MCE parameters of LCAMO are larger than those of some MC samples reported in earlier works.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication. | 2,623.6 | 2022-02-08T00:00:00.000 | [
"Materials Science"
] |
Frequency Domain Interleaving for Dense WDM Passive Optical Network
In this work, we introduce a new concept of frequency interleaving applied to passive optical networks based on Dense Wavelength Division Multiplexing (DWDM). In order to explore the colorless ONU approach, a single, centralized optical source is used to create the interleaved wavelengths for the downstream and upstream paths. In this case, frequency comb generation implemented by the recirculating frequency shifting technique creates all the wavelengths used in each path. Reception is accomplished by introducing, at the ONU side, a combination of passive optical elements. The system, with a 4 × 12.5 Gbit/s NRZ-OOK (Non-Return-to-Zero On-Off Keying) modulation format, was simulated over 20 km of standard single-mode fiber in both transmission paths. In order to guarantee a high-quality optical frequency comb source, the optical carrier-to-noise ratio (OCNR) was evaluated, as well as the BER (bit error rate), to analyze the system performance. All device models and simulations were implemented in the Matlab environment.
I. INTRODUCTION
Wavelength division multiplexed passive optical networks (WDM-PONs) have been extensively studied over the last years for applications that involve large bandwidth. This strong growth in WDM-PONs is mainly driven by the demand for communication services from end-users such as residential and business customers. Applying WDM-PONs to meet this growing demand for bandwidth can lower deployment costs and strongly encourage their adoption.
The absence of active devices connecting the Optical Line Terminal (OLT) to the Optical Network Units (ONUs) is one of the advantages of using PONs. It means enhanced end-to-end transparency, reduced hardware processing and, consequently, the prevention of effects produced by electrical noise sources. Furthermore, the high bandwidth combined with an infrastructure that can be shared among end-users reduces costs and simplifies maintenance as well as operation [1].
On the other hand, when it comes to WDM networks, issues related to wavelength assignment should be considered. Essentially, the channel bandwidth of traditional WDM channels strictly follows the ITU-T wavelength grids and spacing, e.g., 50 GHz or 100 GHz. Moreover, in the context of WDM-PONs, the concept of spectrum efficiency should be incorporated; it requires optimized channel allocation and related improvements such as elastic optical networks (EONs) with elevated granularity (DWDM with 12.5 GHz frequency spacing) [2]. However, the need for one wavelength region for downstream and another for upstream represents an inefficient use of the available spectrum. We should consider a future access system design assuming that existing PONs should coexist [3][4]. These features will allow the progressive migration of existing subscribers as they move to the new technology, without disrupting services for customers on the legacy PON [4]. In order to improve the spectral efficiency and provide coexistence with previous generations of PONs, the frequency interleaving of downstream and upstream signals can be considered as a solution for the wavelength plan of the next generation of optical access networks.
In this work, a high-capacity optical DWDM access network based on the concept of frequency interleaving is introduced and numerically simulated, as an efficient method to use the available spectrum and provide coexistence with legacy PON standards, since the downstream and upstream wavelengths are close to each other and share the same frequency band. All the device models were created and simulated in Matlab. The methodology used in this work is similar to the one utilized by the authors in [5], who created several optical device models for backbone network planning simulations.
The paper is organized as follows. Section II explains the concept of frequency interleaving for DWDM access networks. Section III describes the principle of the optical multi-carrier generator used in this work. Sections IV and V describe, respectively, the demultiplexing operation and the architecture for optical access networks with frequency interleaving. Section VI presents the simulation results and their discussion. Finally, the conclusions are given in Section VII.
II. FREQUENCY DOMAIN INTERLEAVING CONCEPT
A single, centralized optical comb source is responsible for generating multiple equally spaced and frequency-locked wavelength channels, which are modulated, with a symbol rate equal to the frequency spacing between the wavelengths, for downstream and upstream transmission. Due to the nature of the comb source, any oscillation in the laser seed power is transferred to all useful wavelengths in the same proportion; i.e., the only power difference between the wavelengths is defined by the flatness of the comb source. The even wavelengths generated will be used in the downstream path, while the odd ones will be used in the upstream path, creating an interleaving pattern between consecutive wavelengths.
The demultiplexing operation is introduced at the ONU side and is performed by an all-optical passive element (Mach-Zehnder interferometer, MZI). For practical purposes, this processing can be implemented with silicon-on-insulator (SOI) technology in photonic integrated circuits (PICs) [6][7]. In its most basic form, this technology allows the implementation of passive optical components, such as splitters, filters, (de)multiplexers, polarization handling components, interferometers and resonators, and their combination with coupling structures to optical fibers or to free-space optical elements [7].
In this concept, four wavelengths are used in the downstream path and another four in the upstream path. In this context, each dynamic group of 64 ONUs shares one of the four wavelengths in the time domain.
III. RECIRCULATING FREQUENCY SHIFTING (RFS)
A multitude of methods is reported in the literature for generating a series of discrete, equally spaced frequency lines. However, for optical communications and RF (radio-frequency) photonics applications, techniques using opto-electronic devices, such as optical modulators, are preferable due to their spectral flatness, robustness and tunability [8]. A technique based on an ultra-dense parametric comb is described in [9] that generates spectral lines spaced by 6.25 GHz with 3 dB spectral flatness. Although this technique creates a huge number of carrier lines within 100 nm of optical bandwidth (C + L band), the setup is quite intricate, requiring elements such as dispersion-flattened highly nonlinear fibers and high CW laser powers. A method based on a gain-switched comb, which produces channels spaced by 12.5 GHz over the C-band, is reported in [10]. The experimental setup is practical, but this technique generates a limited number of frequency lines (six comb lines) with 3 dB flatness and requires a high RF sinusoidal power (around 24 dBm) in the Fabry-Perot laser diode used as slave. Nevertheless, an interesting method that uses cascaded intensity and dual-parallel modulators is presented in [11]. In this case, however, the complexity increases with the number of generated optical carriers, since they require high RF frequency and power control of each optical modulator [11].
The RFS technique, as sketched in Fig. 1, presents very good stability, flatness, flexibility in the control of the wavelength spacing, and low driving voltages [12]. Although this optical multi-carrier generator has a relatively complex configuration, it is an effective optical multi-carrier source and can generate hundreds of carriers using commercially available opto-electronic devices. For this reason, the RFS technique was chosen as the optical comb source implemented in our proposal.
In the RFS technique, a continuous wave (CW) laser is utilized as a seed, which determines the starting wavelength of the generated set of optical multi-carriers. The optical amplifier (OA) in the ring, as illustrated in Fig. 1, compensates any transmission loss in the ring as well as the insertion loss of the coupler and optical modulator (IQ-MZM). An optical band-pass filter (OF) defines the wavelength range of the generated optical subcarriers.
Every time the optical signal passes through the complex modulator (IQ-MZM), which is adjusted to generate a single-sideband suppressed-carrier optical signal (SSB-SC), the frequency of the signal is shifted according to the frequency (fc) of the sine wave input. After each round trip, the input CW occupies the previously empty position. As a result, new optical subcarriers are created on each round trip in the ring. The frequency of the IQ-MZM sine wave defines the spacing between the optical subcarriers. In the simulated setup, the subcarrier spacing is 12.5 GHz, which results in eight useful optical subcarriers (1550.2 nm to 1550.9 nm), as shown in Fig. 1.
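A toy, frequency-domain view of this line-generation process is sketched below in Python (not the Matlab device models of this work; the seed frequency and the idealized loop gain are simplifying assumptions).

```python
# Each round trip the circulating lines are shifted by fc (SSB-SC modulation),
# band-pass filtered, and the CW seed is re-injected, so one new line appears
# per trip; the amplifier is assumed to exactly restore the line amplitudes.
f_seed = 193.4e12        # Hz, hypothetical CW seed frequency
fc = 12.5e9              # Hz, RF drive = subcarrier spacing
n_lines = 8              # useful subcarriers kept by the band-pass filter

lines = {f_seed: 1.0}    # frequency -> normalized amplitude
for _ in range(n_lines - 1):
    lines = {f + fc: a for f, a in lines.items()}                 # SSB frequency shift
    lines = {f: a for f, a in lines.items()
             if f <= f_seed + (n_lines - 1) * fc}                  # optical band-pass filter
    lines[f_seed] = 1.0                                            # re-inject the CW seed
print(sorted((f - f_seed) / 1e9 for f in lines))   # offsets in GHz: 0.0, 12.5, ..., 87.5
```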
To guarantee a high-quality optical frequency comb for high-bit-rate transmission, the optical carrier-to-noise ratio (OCNR) was evaluated as a function of the optical subcarrier number [13]. Fig. 2 shows the OCNR obtained for each subcarrier generated in the simulated RFS model.
As reported by R. Essiambre et al. [14], the OCNR needed to transmit 12.5 Gb/s in the OOK modulation format with a bit error ratio of 10^-3 is 10 dB. As shown in Fig. 2, our frequency comb generator provides multiple wavelengths with a quality sufficient for OOK transmission. The demultiplexing operation, for the downstream and upstream wavelengths, is based on the optical fast Fourier transform (OFFT) method [15][16]. It consists of 3 cascaded MZIs for each path with different delays (τ) and phase shifts (ɸ), as illustrated in Fig. 3. The Ts parameter in Fig. 3 is the symbol period used in the system transmission. The MZI device model, created in Matlab, with two inputs (Ein1 and Ein2) and two outputs (Eout1 and Eout2), can be described by the 2 × 2 transfer matrix formed by the input coupler, the differential delay/phase arm, and the output coupler (Eq. 1), where k is the coupling coefficient (0.5) of the MZI optical couplers and f is the frequency vector used in the simulations [17]. The amplitude of the frequency response of each output port of the MZI cascade of Fig. 3 can be obtained by applying the Fourier transform of the impulse response, in Matlab, to each port, and is shown in Fig. 4. The MZI cascade acts as a multiband optical filter where each port passively extracts a single wavelength of the transmitted optical signal. Furthermore, the MZI cascade demultiplexer suppresses crosstalk noise at the carrier frequency by 20 dB [18]. The symbol period used in the implementation of the demultiplexer model was 80 ps.
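A compact numpy sketch of the single-MZI frequency response implied by Eq. 1 (two couplers with coupling coefficient k around a differential delay τ and phase shift ɸ); this is a generic textbook form written in Python rather than the authors' Matlab model, and the cascade of three such stages per output port of Fig. 3 would use stage-specific τ and ɸ values not specified here.

```python
import numpy as np

def mzi_response(f, tau, phi, k=0.5):
    """2x2 transfer matrices of a Mach-Zehnder interferometer (input coupler,
    differential delay/phase arm, output coupler) evaluated on a frequency grid f."""
    coupler = np.array([[np.sqrt(1.0 - k), 1j * np.sqrt(k)],
                        [1j * np.sqrt(k), np.sqrt(1.0 - k)]])
    H = np.empty((len(f), 2, 2), dtype=complex)
    for idx, fn in enumerate(f):
        arm = np.array([[np.exp(-1j * (2.0 * np.pi * fn * tau + phi)), 0.0],
                        [0.0, 1.0]])
        H[idx] = coupler @ arm @ coupler
    return H

# Example: port-1 amplitude response of a single stage with tau = Ts/2 = 40 ps
f = np.linspace(-50e9, 50e9, 1001)
H = mzi_response(f, tau=40e-12, phi=0.0)
port1 = np.abs(H[:, 0, 0])        # |Eout1/Ein1| versus frequency
```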
The advantage of this approach is that a single wavelength component of the signal can be easily extracted without implementing the complete structure [16]. In this case, all MZIs that are not part of the optical path to the corresponding output port can be removed, leaving only one MZI per stage [16]. Another possibility for practical implementation relies, as aforementioned, on implementation with silicon-on-insulator technology, yielding much more compact structures [6][7]. At the ONU side, the MZI demultiplex device of Fig. 4 is responsible for separating the wavelengths for downstream and upstream operations. After the wavelength separation, each modulated downstream wavelength is direct-detected (DD).
In the upstream path, after the demultiplex operation, the wavelength is modulated by the upstream data, amplified and transmitted to the OLT. The main controller element in Fig. 4 performs the wavelength allocation procedure and synchronization, based on information from each ONU.
VI. SIMULATION AND RESULTS
We have numerically simulated, in Matlab, the proposed architecture setup shown in Fig. 5 for the downstream and upstream paths.
The comb source employs the RFS technique, as previously described, to generate eight frequency-locked optical subcarriers spaced by 12.5 GHz (0.1 nm). The generated subcarriers were separated, using the demultiplex device of Fig. 3, into four downstream subcarriers and four upstream subcarriers.
The odd subcarriers were chosen for the downstream path and the even ones for the upstream path, i.e., the upstream and downstream subcarriers are interleaved and share the same optical wavelength range. Fig. 6 shows the spectrum of the transmitted signal, illustrating the modulated odd optical subcarriers (downstream) and the unmodulated even subcarriers (upstream). Fig. 6. Spectrum of the transmitted signal. λdown denotes the modulated odd downstream optical subcarriers (wavelengths 1, 3, 5 and 7). λup denotes the unmodulated even upstream optical subcarriers (wavelengths 2, 4, 6 and 8).
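A minimal sketch of the odd/even allocation described above (wavelength values taken from the 0.1 nm grid quoted earlier):

```python
# Subcarriers indexed 1..8 on a 0.1 nm grid; odd indices carry downstream
# data, even indices stay unmodulated at the OLT and are reused upstream.
subcarriers = {n: round(1550.2 + 0.1 * (n - 1), 1) for n in range(1, 9)}   # nm

downstream = {n: lam for n, lam in subcarriers.items() if n % 2 == 1}
upstream   = {n: lam for n, lam in subcarriers.items() if n % 2 == 0}

print("downstream:", downstream)   # wavelengths 1, 3, 5, 7
print("upstream:  ", upstream)     # wavelengths 2, 4, 6, 8
```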
In the simulated 4 x 12.5 Gbit/s NRZ-OOK system, we used independent pseudorandom bit sequences (PRBS) of length 2^19 - 1 per wavelength. The BER curves were obtained for the maximum ONU distance from the OLT of 20 km of SMF in the downstream/upstream paths without dispersion compensation, as can be seen in Fig. 7 and Fig. 8. The 7% forward-error-correction (FEC) limit, around 3.8 x 10^-3, was also inserted for performance assessment.
As observed in Fig. 7 and Fig. 8, the BER performance remains below the FEC limit for received optical powers higher than -21 dBm for the downstream path after 20 km of propagation in SMF. For the upstream path, the performance remains below the FEC limit for received optical powers higher than -20 dBm. In Fig. 7 and Fig. 8, we observe that the wavelengths in the middle of the spectrum (wavelengths 3 and 5 for downstream; wavelengths 4 and 6 for upstream) present worse performance than the other channels. This can be explained by the higher level of crosstalk present in the middle wavelengths compared with the others (wavelengths 1 and 7 for downstream; wavelengths 2 and 8 for upstream).
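For reference, the textbook Q-factor/BER relation for OOK, BER = 0.5 erfc(Q/sqrt(2)), shows what the 3.8 x 10^-3 FEC threshold corresponds to; this relation is standard background and is not taken from the paper:

```python
import math

def ber_from_q(q):
    # Standard OOK relation: BER = 0.5 * erfc(Q / sqrt(2))
    return 0.5 * math.erfc(q / math.sqrt(2))

def q_from_ber(ber, lo=0.0, hi=10.0):
    """Invert ber_from_q by bisection (BER decreases monotonically with Q)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if ber_from_q(mid) > ber:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

fec_limit = 3.8e-3                     # 7% overhead hard-decision FEC threshold
print(f"Q at the FEC limit ~ {q_from_ber(fec_limit):.2f}")   # about 2.7

measured_ber = 1.0e-4                  # hypothetical post-transmission value
print("meets FEC limit" if measured_ber <= fec_limit else "fails FEC limit")
```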
Frequency Domain Interleaving for Dense WDM Passive Optical Network
Diogo V N Coelho 1,2, Pablo Marciano 2, Thiago V N Coelho 1, Marcelo Segatto 2, Maria Jose Pontes 2
... strictly follow the ITU-T wavelength grids and spacing, e.g., 50 GHz, 100 GHz. Moreover, in the context of WDM-PONs the concept of spectrum efficiency should be incorporated. It requires optimized channel allocation and related improvements such as elastic optical networks (EONs) with elevated granularity (DWDM with 12.5 GHz frequency spacing) [2].
Figure 1 shows the result of the RFS model obtained by simulation. As aforementioned, all the device models and simulations used in this work were performed in Matlab. The CW laser model emits at 1550.2 nm with 10 dBm of optical power, 40 dB of optical carrier-to-noise ratio (OCNR) and a narrow optical linewidth (< 1 kHz) to guarantee low phase noise in the multiple generated wavelengths [13]. The optical amplifier (OA) model inside the ring is an erbium-doped fiber amplifier (EDFA) with 6.2 dB of gain and a noise figure of 4.5 dB. The amplified spontaneous emission (ASE) noise of the EDFA was also considered in the model to obtain the expected noise figure values. The frequency (fc) of the RF signal is 12.5 GHz, generating optical subcarriers with 0.1 nm of wavelength spacing and 2.37 dB of flatness. The optical bandpass filter (OF) has 0.7 nm of bandwidth (≈ 87.5 GHz), which results in eight useful optical subcarriers (1550.2 nm to 1550.9 nm), as shown in Fig. 1.
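The quoted spacings can be cross-checked with the standard narrowband conversion Δλ ≈ λ² Δf / c:

```python
# Quick numerical check of the quoted values near 1550 nm.
C = 299_792_458.0                       # m/s
lam = 1550.2e-9                         # centre wavelength, m

df = 12.5e9                             # subcarrier spacing, Hz
d_lambda = lam**2 * df / C              # ~1.0e-10 m, i.e. about 0.1 nm
print(f"12.5 GHz  ->  {d_lambda*1e9:.3f} nm spacing")

bw_nm = 0.7e-9                          # optical filter bandwidth, m
bw_hz = C * bw_nm / lam**2              # ~87 GHz, roughly seven 12.5 GHz slots
print(f"0.7 nm    ->  {bw_hz/1e9:.1f} GHz bandwidth")
```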
Fig. 2. Optical subcarriers generated in our simulations.
IV. DEMULTIPLEX OPERATION FOR FREQUENCY INTERLEAVED DWDM-PON
This section describes the demultiplex device for optical access networks that applies the concept of frequency interleaving, which allows downstream and upstream channels to coexist in the same wavelength region. It may represent an efficient use of the available spectrum and coexistence with legacy PON systems on the same optical distribution network (ODN) [3].
Fig. 4. MZI cascade amplitude response.
V. ARCHITECTURE FOR FREQUENCY INTERLEAVED DWDM-PON
Figure 5 shows the architecture based on the concept of frequency interleaved DWDM proposed in this work. At the transmitter side, an optical multi-carrier generator creates downstream and upstream wavelengths spaced by fc given in Hz units. The multiple wavelengths generated by the comb source are separated by the MZI demultiplex device illustrated in Fig. 3. The odd wavelengths are modulated in the NRZ-OOK modulation format, while the even wavelengths pass through the transmitter without any modulation process. After the modulation of the downstream signals, all the wavelengths (odd and even) are multiplexed by an optical combiner, creating an interleaving pattern between them. We used a standard single-mode fiber (SMF) model with attenuation factor α = 0.2 dB/km, fiber core effective area Aeff = 80 μm², fiber dispersion coefficient D = 17 ps/(nm·km) and dispersion slope S = 0.09 ps/(nm²·km) at the 1550 nm wavelength. The splitter ratio used to share the optical signals among all the ONUs was 1:256, i.e., each subcarrier will be used by 64 ONUs in a TDM (Time Division | 3,525.2 | 2019-04-17T00:00:00.000 | [
"Engineering",
"Physics"
] |
Protection by Recombinant Newcastle Disease Viruses (NDV) Expressing the Glycoprotein (G) of Avian Metapneumovirus (aMPV) Subtype A or B against Challenge with Virulent NDV and aMPV
Avian metapneumovirus (aMPV) and Newcastle disease virus (NDV) are threatening avian pathogens that can cause serious respiratory diseases in poultry worldwide. Vaccination, combined with strict biosecurity practices, has been the recommendation for controlling these diseases in the field. In the present study, we generated NDV LaSota vaccine strain-based recombinant viruses expressing the glycoprotein (G) of aMPV, subtype A or B, using reverse genetics technology. These recombinant viruses, rLS/aMPV-A G and rLS/aMPV-B G, were characterized in cell cultures and evaluated in turkeys as bivalent, next-generation vaccines. The results showed that these recombinant vaccine candidates were slightly attenuated in vivo, yet maintained similar growth dynamics, cytopathic effects, and virus titers in vitro when compared to the parental LaSota virus. The expression of the aMPV G protein in recombinant virus-infected cells was detected by immunofluorescence. Vaccination of turkeys with rLS/aMPV-A G or rLS/aMPV-B G conferred complete protection against velogenic NDV, CA02 strain challenge and partial protection against homologous pathogenic aMPV challenge. These results suggest that the LaSota recombinant virus is a safe and effective vaccine vector and expression of the G protein alone is not sufficient to provide full protection against aMPV-A or -B infections. Expression of other aMPV-A or -B virus immunogenic protein(s) individually or in conjunction with the G protein may be necessary to induce stronger and more protective immunity against aMPV diseases.
Introduction
Avian metapneumovirus (aMPV) is the causative agent for turkey rhinotracheitis (TRT) and is associated with "swollen head syndrome (SHS)" in chickens, resulting in substantial economic losses to the poultry industry worldwide [1,2]. Isolates of aMPV have been classified into four subtypes, A, B, C, and D, based on the level of genetic variations and antigenic differences [2]. The aMPV subtypes A and B are present worldwide, excluding the USA; C is present mainly in the USA, France, and Korea [3,4]; and D has only been reported in France [5].
In European and South American countries, cell culture-attenuated or inactivated vaccines are currently being used to control the diseases caused by the subtypes A and B of aMPV [6][7][8]. Although these live, attenuated vaccines have been approved and appear to be effective in most countries where the disease is prevalent, several reports have suggested that the stability and safety of some of these live vaccines are of concern [9][10][11][12]. Recently in Italy [12] and Brazil [13], field evidence has suggested that the existing vaccines may not fully protect against the circulating field strains of aMPV in these countries. To overcome the problems associated with vaccine safety and stability, efforts have been made to develop inactivated, subunit, virosomal, vectored, or genetically engineered vaccines [14][15][16][17][18][19][20][21]. In contrast to live attenuated vaccines, inactivated vaccines are potentially safer, but their protective efficacy remains controversial [14,17]. Experimental subunit or vectored vaccines induced varying degrees of protective immunity during clinical trials [15,16,18,21]. However, the administration of these non-conventional vaccines may not be practical for large commercial poultry operations. Newcastle disease virus (NDV) is the etiological agent of Newcastle disease, one of the most serious infectious diseases in poultry. All known strains of NDV are of a single serotype, but have been classified into three different virus pathotypes: velogenic (highly virulent), mesogenic (moderately virulent), and lentogenic (low virulence) [22]. Naturally-occurring lentogenic NDV strains, such as the B1, VG/GA, and LaSota strains, are routinely used as live vaccines throughout the world to prevent Newcastle disease [22,23]. These live vaccines induce both strong local and systemic responses and can be readily administered through drinking water supplies or by directly spraying the birds. During the past decade, recombinant NDV viruses have been developed as shuttle-vectors that express foreign antigens, such as the avian influenza hemagglutinin (HA) protein, infectious bursal disease virus VP2 protein, and aMPV-C G protein, to protect poultry against NDV and the targeted avian pathogen [24][25][26][27][28].
In this study, we generated LaSota vaccine strain-based recombinant NDV viruses expressing the major surface attachment glycoprotein (G) of aMPV-A or -B using reverse genetics techniques. We evaluated these recombinant viruses in vitro and in vivo for safety, stability, and expression of the G protein for their potential use as bivalent vaccines against NDV and aMPV-A or -B diseases.
Cells, Viruses and RNA Preparation
HEp-2 (CCL-81; ATCC) and DF-1 (CRL-12203; ATCC) cell lines were grown in Dulbecco's Modified Eagle Medium (DMEM, Invitrogen, Carlsbad, CA) supplemented with 10% fetal bovine serum (FBS, Invitrogen) and antibiotics. The DF-1 cells were maintained at 37˚C and 5% CO2 in DMEM supplemented with 10% allantoic fluid (AF) from 10-day-old specific-pathogen-free (SPF) chicken embryos for all subsequent infections unless otherwise indicated. The NDV LaSota strain was obtained from ATCC and propagated in 9-day-old SPF chicken embryos. The cell culture-adapted strains of aMPV-A (UK, CVL 14/1) and aMPV-B (Hungary, 657/4) and the velogenic strain of NDV, California 2002 (NDV/CA02; game chicken/US(CA)/S0212676/02), were obtained from the pathogen repository bank at the Southeast Poultry Research Laboratory (SEPRL, USDA-ARS, Athens, GA, USA). The pathogenic aMPV-A and -B viruses were obtained from Dr. Kannan Ganapathy (University of Liverpool, UK) and the viruses were prepared from tracheal tissue of virus-infected SPF turkeys as challenge virus stocks and titrated in SPF turkeys for 50% infective dose (ID50) as described previously [29].
Viral RNA was extracted from either AF from NDV-infected chicken embryos or DF-1 cells using the TRIzol-LS reagent according to the manufacturer's instructions (Invitrogen). Total cellular RNA from tracheal tissues was extracted using the MagMAX™ AI/ND Viral RNA Isolation kit (ABI, Austin, TX) following the manufacturer's procedures.
Construction of Recombinant LaSota cDNA Clones Containing the G Gene of aMPV-A or -B
The infectious LaSota clone (pFLC-LaSota) and subclone (pT-LS MF) were previously generated [30] and used as backbones to construct recombinant cDNA clones containing the G gene of aMPV-A or -B (Figure 1). The open reading frame (ORF) of the G gene of aMPV-A (UK, 14/1) or -B (Hungary, 567/4) was generated by RT-PCR amplification from genomic RNA with paired specific primers using a Superscript™ III One Step RT-PCR system with Platinum Taq Hi-Fi kit (Invitrogen). Subsequently, the ORF of the aMPV-A or -B G gene was cloned into the intergenic region between the fusion (F) and hemagglutinin-neuraminidase (HN) genes in the pFLC-LaSota vector through a two-step subcloning process using the In-Fusion® PCR cloning kit (Invitrogen). The resulting recombinant clones, designated as pLS/aMPV-A G and pLS/aMPV-B G, respectively, were amplified in Stbl2 cells at 30˚C for 24 hours and purified using a QIAprep Spin Miniprep kit (Qiagen). The sequences of primers used in the In-Fusion® PCR cloning and G gene amplification are provided in Table 1.
Virus Rescue and Propagation
Rescue of the recombinant LaSota/aMPV-A or -B G virus was performed by transfection of the full-length cDNA clones and supporting plasmids into HEp-2 cells as described previously [31]. The rescued viruses, which were confirmed by a positive hemagglutination assay (HA) [32], were plaque-purified three times in DF-1 cells and finally amplified in SPF chicken embryos three times. The AF was harvested, aliquoted, and stored at −80˚C as a stock. The complete genomic sequences of the rescued viruses were determined by direct sequencing of the RT-PCR products amplified from the viral genomic RNA as described previously [30].
Virus Titration, Pathogenicity, and Growth Dynamics Assays
Analysis of the recombinant viral stock titers, rLS/aMPV-A G and rLS/aMPV-B G, was completed using the standard HA test in a 96-well microplate, the 50% tissue infectious dose (TCID50) assay on DF-1 cells, and the 50% egg infective dose (EID50) assay in 9-day-old SPF chicken embryos, and compared to the parental LaSota virus [32]. Pathogenicity of the recombinant viruses was assessed by performing the standard mean death time (MDT) and intracerebral pathogenicity index (ICPI) tests and also compared to the parental LaSota virus [32]. Cytopathic effects (CPE) and growth dynamics of the recombinant viruses were examined in DF-1 cells and compared to the parental virus as described previously [30].
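The paper cites [32] for the titration procedures; as background only, one common way to estimate a 50% endpoint titer (TCID50 or EID50) from a dilution series is the Reed-Muench method, sketched here with hypothetical well counts:

```python
def reed_muench_log10_titer(log10_dilutions, infected, total):
    """Reed-Muench 50% endpoint; dilutions ordered from most to least concentrated."""
    # cumulative infected: this dilution plus all more dilute ones
    cum_inf = [sum(infected[i:]) for i in range(len(infected))]
    # cumulative uninfected: this dilution plus all more concentrated ones
    cum_uninf = [sum(t - x for t, x in zip(total[:i + 1], infected[:i + 1]))
                 for i in range(len(infected))]
    pct = [ci / (ci + cu) * 100 for ci, cu in zip(cum_inf, cum_uninf)]
    for i in range(len(pct) - 1):                 # bracket 50% and interpolate
        if pct[i] >= 50 >= pct[i + 1]:
            prop = (pct[i] - 50) / (pct[i] - pct[i + 1])
            step = abs(log10_dilutions[i] - log10_dilutions[i + 1])
            return log10_dilutions[i] - prop * step
    raise ValueError("50% endpoint not bracketed by the tested dilutions")

# hypothetical plate: 8 wells per 10-fold dilution
print(reed_muench_log10_titer([-3, -4, -5, -6], [8, 6, 2, 0], [8, 8, 8, 8]))  # -4.5
```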
Immunofluorescence Assay (IFA)
Expression of the G protein from DF-1 cells infected with the rLS/aMPV-B G recombinant virus was examined by IFA with anti-aMPV-B chicken serum (kindly provided by Dr. Silke Rautenschlein, University of Vet. Med. Hannover) as described previously [30]. Fluorescence was examined and digitally photographed using an inverted fluorescence microscope at 100× magnification with matching excitation/emission filters for FITC or Alexa Fluor® 568 (Nikon, Eclipse Ti, Melville, NY).
Immunization and Challenge Experiments
Seventy one-day-old SPF turkey poults were randomly divided into seven groups of 10 birds each and housed in Horsfal isolators (Federal Designs, Inc., Comer, GA) with ad libitum access to feed and water in the SEPRL BLS-3E animal facility. Each bird in groups 1, 2 and 3 was inoculated with 100 µl PBS via intranasal (IN) and intraocular (IO) routes as controls. Birds in groups 4 and 5 were vaccinated with 100 µl of rLS/aMPV-A G (1.0 × 10^7 TCID50/ml), and birds in groups 6 and 7 were vaccinated with 100 µl of rLS/aMPV-B G (1.0 × 10^7 TCID50/ml) per bird via IN/IO routes. At 14 days post-vaccination (DPV), blood samples were collected from each bird to detect serum antibody responses against NDV and aMPV-A or -B. Immediately after blood collection, the birds in groups 1, 4, and 6 were challenged with the velogenic NDV/CA02 virus at a dose of 10^5 EID50/bird via IN/IO routes as described previously [33]. Mortality of the NDV/CA02-challenged birds was monitored and recorded daily for two weeks. Birds in groups 2, 3, 5, and 7 were challenged with homologous pathogenic aMPV through transmission infection by direct contact with infected birds. Two-week-old SPF turkeys were infected with pathogenic aMPV-A or aMPV-B at a dose of 10^2 ID50/bird via IN/IO routes. Five of the aMPV-A or -B virus-infected turkeys were then placed into each corresponding group for the homologous aMPV challenge through direct-contact transmission. The co-mingled birds were monitored daily for clinical signs of aMPV disease for 14 days. Typical clinical signs of the aMPV disease were scored as follows: nasal exudates when squeezed (Score 1), nasal discharge (Score 2), and/or frothy eyes (Score 3), according to the scoring system of Cook et al. [34]. The clinical sign scores post-challenge were statistically analyzed using a two-factor ANOVA with a 1% level of significance between each vaccine treatment and the corresponding control group (Microsoft Excel). Tracheal swabs were collected from each aMPV-A or -B virus-challenged bird at 5, 7, and 9 days post-challenge (DPC) for detection of virus shedding.
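The clinical-sign scores were analyzed in Microsoft Excel; purely as an illustration of the two-factor ANOVA described (treatment by day post-challenge), the sketch below uses hypothetical scores, not the study data:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical clinical-sign scores for two factors: treatment and DPC.
data = pd.DataFrame({
    "treatment": ["control"] * 6 + ["vaccinated"] * 6,
    "day":       [5, 5, 7, 7, 9, 9] * 2,
    "score":     [2, 3, 3, 3, 2, 2,      # made-up control-group scores
                  1, 1, 2, 1, 0, 1],     # made-up vaccinated-group scores
})

model = ols("score ~ C(treatment) + C(day) + C(treatment):C(day)", data=data).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)        # compare p-values against the 1% significance level
```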
Detection of Immunoresponse and Challenged Virus Shedding
The NDV-specific serum antibody response was determined using the standard hemagglutination inhibition (HI) test [32], and aMPV subtype-specific serum antibodies were determined by an enzyme-linked immunosorbent assay (ELISA) as described previously, except using sucrose-gradient-purified aMPV-A or aMPV-B as the antigen [29,30]. Virus replication or viral RNA shedding from turkey tracheal tissues following challenge with the aMPV-A or -B virus was detected by RT-PCR using aMPV-A or -B N gene-specific primers (Table 1) as described previously [29,35].
Generation of the rLS/aMPV-A and -B G Virus
Two full-length cDNA clones encoding the complete anti-sense genome of the NDV LaSota vaccine strain and the G gene of aMPV-A or -B were constructed through RT-PCR and In-Fusion PCR cloning (Figure 1). The insertion of the transcription "cassettes" containing NDV LaSota intergenic regions and the G gene ORF of aMPV-A or -B increased the length of the recombinant clones by 1338 and 1410 nts, respectively. Thus, the total length of pLS/aMPV-A G and pLS/aMPV-B G is 16,524 and 16,596 nts, respectively, and is divisible by 6, abiding by the "Rule of Six" [36]. After co-transfection of the pLS/aMPV-A or -B G clone and supporting plasmids in HEp-2 cells and subsequent amplification in SPF chicken embryonated eggs, the LaSota strain-based recombinant viruses vectoring the G gene of aMPV-A or -B were rescued, purified and propagated. The fidelity of the rescued viruses was confirmed by sequence analysis
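A quick arithmetic check of the quoted "Rule of Six" figures (insert lengths, clone totals, and the implied common backbone length):

```python
# Verify that the quoted lengths are divisible by 6 and mutually consistent.
inserts = {"pLS/aMPV-A G": 1338, "pLS/aMPV-B G": 1410}   # added nts
totals  = {"pLS/aMPV-A G": 16524, "pLS/aMPV-B G": 16596} # full clone lengths

for clone, insert_len in inserts.items():
    backbone = totals[clone] - insert_len                # implied LaSota backbone
    print(clone,
          "total % 6 ==", totals[clone] % 6,
          "| insert % 6 ==", insert_len % 6,
          "| backbone =", backbone)                      # same backbone for both
```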
Biological Characterization of the rLS/aMPV-A G and -B G Viruses
To determine if the additional foreign G gene affects virus replication of the recombinant rLS/aMPV-A and -B G viruses, pathogenicity and growth dynamics were examined in vitro and in vivo by conducting MDT and ICPI tests and titration assays. As shown in Table 2, the recombinant viruses appeared to be slightly attenuated in day-old chickens, with a lower ICPI (0.0) than the parental LaSota strain. The titers of the recombinant viruses grown in either embryonated eggs or DF-1 cells, as measured by EID50, TCID50 and HA, were comparable to the titers of the parental LaSota strain (Table 2). They were stable and did not show any apparent changes in MDT and virus titers after 10 passages in SPF chicken embryos (data not shown). In addition, the cytopathic effects induced by the rLS/aMPV-A G virus infection were indistinguishable from those seen with the parental LaSota virus in infected DF-1 cells (Figure 2). Finally, no significant differences in the growth kinetics between the rLS/aMPV-A G, rLS/aMPV-B G and parental LaSota viruses were detected (Figure 3).
Expression of the G Protein by rLS/aMPV -B G
Expression of the G protein in rLS/aMPV-B G-infected DF-1 cells was examined by IFA using chicken anti-aMPV-B serum and FITC-labeled goat anti-chicken IgG.
Immune Response and Protection against Challenge
All turkeys that were immunized with either the rLS/aMPV-A G or -B G virus produced high NDV-specific HI antibody titers (Table 3) and were completely protected against NDV challenge without showing any clinical signs of disease. In contrast, birds in the aMPV-challenged control groups showed clinical signs of the disease from 4 DPC, showing nasal exudates when squeezed (Score 1), nasal discharge (Score 2), and/or frothy eyes (Score 3) (Figure 5). The infected birds showed peak clinical signs between 7 - 9 DPC, which gradually decreased in severity thereafter, but at 14 DPC 20% - 30% of infected birds still showed some clinical signs. In contrast, turkeys vaccinated with the rLS/aMPV-A G or -B G virus showed significantly less severe clinical signs than those in the corresponding control groups (Figure 5, p < 0.01). Most vaccinated birds showed nasal exudates when squeezed or nasal discharge; however, these milder clinical signs of the disease disappeared after 11 DPC (Figure 5). Presence of aMPV subtype-specific antibodies in vaccinated turkey sera was not detected by ELISA (data not shown).
Viral RNA shedding, or the presence of the challenge virus (aMPV-A or -B) in the tracheal lumen, was detected in 100% of the control birds at 5, 7, and 9 DPC (Table 4). Viral shedding of the challenge viruses from the corresponding rLS/aMPV-A G or -B G vaccinated birds was somewhat less at 9 DPC, when 50% and 70% of the birds were negative for viral RNA, respectively (Table 4).
Discussion
In the present study, we generated and evaluated LaSota strain-based recombinant NDV viruses expressing the G protein of aMPV-A or -B as next-generation, bivalent vaccine candidates. The G protein of aMPV is thought to be responsible for attachment of the virus particles to the host cell surface receptors to initiate infection. However, the G deletion or truncation mutants of aMPV were viable in cell cultures, but attenuated in SPF turkeys, and induced a weaker immune response than the wild-type virus [29,37], implying that the G protein may play a role in immunogenicity to the natural host. Thus, we selected the G protein to be expressed by the recombinant NDV vector to investigate the role of the G protein in inducing protective immunity against aMPV challenge, as well as the protective efficacy conferred by the NDV vector against an NDV challenge in turkeys.
Our results showed the safety, stability, and possible application of these recombinant vaccine candidates for use in young turkeys, the most vulnerable population to NDV and aMPV diseases [2,22,38]. Turkeys vaccinated with either the rLS/aMPV-A G or -B G virus had comparable levels of NDV-specific HI antibody response and survived the lethal-dose NDV challenge without any clinical signs of disease. To properly evaluate the protective efficacy of these vaccine candidates against homologous aMPV challenge, the vaccinated and control birds were challenged with pathogenic aMPV-A or -B through transmission infection to mimic natural infection. It appeared that the birds infected through transmission showed clinical signs two days later than birds challenged directly via IN/IO routes, indicating aMPV had a two-day incubation period while spreading through the environment. At 9 DPC with pathogenic aMPV-A or -B, the recombinant virus-vaccinated turkeys showed milder clinical signs and less virus shedding than the birds in the control groups. The lack of a detectable aMPV G gene-specific antibody response and the partial protection conferred by the recombinant viruses against homologous aMPV challenge suggest that the aMPV G protein is a weak antigen. Our data on the aMPV G protein inducing partial protective immunity, together with the findings by others on the immunogenicity of individual aMPV structural proteins [18,21], demonstrate that a single aMPV protein may not have the capability to induce a strong enough immune response to provide complete protection against aMPV disease. It is reasonable to speculate that co-expression of two or more major structural proteins of the aMPV virus, i.e. the F, G and/or M proteins, perhaps by the NDV vector, may be necessary to induce an enhanced protective immunity against aMPV infection.
In summary, in the present study we successfully generated NDV/aMPV-A G and -B G recombinant viruses. Turkeys vaccinated with these recombinant viruses were completely protected against velogenic NDV challenge and partially protected against homologous pathogenic aMPV challenge. The results suggest that the aMPV G protein is a weak antigen and that other immunogenic components of the virus, most likely the F protein, may need to be added to the recombinant LaSota vaccine vector in the future to improve the bivalent vaccine's protective efficacy against aMPV infections.
Figure 1. Schematic representation of pLS/aMPV-A G and -B G construction. The open reading frame of the G gene of aMPV-A or -B, amplified from virus genomic RNA, was cloned into the intergenic region between the F and HN genes in the pFLC-LaSota vector through a two-step process using the In-Fusion® PCR cloning kit (Invitrogen). The NDV Gene Start and Gene End signal sequences and the aMPV-A or -B G open reading frame are boxed. The direction of the T7 promoter is indicated by a bold black arrow. HDVRz and T7Φ represent the site of the Hepatitis delta virus ribozyme and the T7 terminator sequences, respectively.
Table 1. Primer sequences used in the study (Primer; Primer Sequence e; Primer Name):
1 a: 5'tccaggtgcaagatgGGGTCCAAACTATATATGGCT (aMPV-A NI G F)
2 a: 5'ctggaattcgcccttACTAGTGCAACACCACTCA (aMPV-A NI G R)
3 a: 5'tccaggtgcaagatgGGGTCAGAGCTCTACATCAT (aMPV-B NI G F)
4 a: 5'ctggaattcgcccttAGCTTATTGACTAGTACAGCACCAC (aMPV-B NI G R)
5 b: 5'actacaaaaatgtgaGCTGCGTCTCTGAGATTGCG (LS F-M F)
6 b: 5'gttcctcatctgtgtTCATTAACTAGTGCAACACCACTCA (LS-aMPV-A G RE)
7 b: 5'gttcctcatctgtgtTTATTGACTAGTACAGCACCA (LS-aMPV-B G RE)
8 c: 5'CATCTTGCACCTGGAGGGCGCCAAC (pM-F up)
9 c: 5'AAGGGCGAATTCCAGCACACTGGC (pM-F down)
10 c: 5'TCACATTTTTGTAGTGGCTCTCATC (LS vec F-M up)
11 c: 5'ACACAGATGAGGAACGAAGGTTTCCCTAATAG (LS vec F down)
12 d: 5'AGACTCAGTGACTTGGAGTAC (aMPV-A N F19)
13 d: 5'TACCGTGATATGGCATCGCT (aMPV-A N R565)
14 d: 5'TAAGCTCGCATCCACGGTAGA (aMPV-B N F501)
15 d: 5'CTGCATTCCCCAAAACAACACTT (aMPV-B N R979)
a Primers 1 to 4 were used to RT-PCR amplify the G gene of the aMPV-A or -B strain. b Primers 5 to 7 were used to amplify the cDNA fragments containing the G gene of aMPV-A or -B and the GE and GS sequences of NDV from subclones. c Primers 8 to 11 were used to amplify or linearize the pFLC-LaSota or subclone vectors. d Primers 12 to 15 were used to detect virus replication or viral RNA shedding in tracheal tissues by RT-PCR. e Nucleotides shown in lower case letters represent homology sequences with a vector backbone, which were used to facilitate the RE independent cloning using the In-Fusion® PCR cloning kit (Clontech).
from RT-PCR products of the viral genome (data not shown).
In addition, to pinpoint the location of the expressed G protein in relation to recombinant virus-infected DF-1 cells, a mouse anti-NDV HN monoclonal antibody (Mab) and Alexa Fluor® 568-conjugated goat anti-mouse IgG were also used. As shown in Figure 4, NDV LaSota-infected cells were positively stained with the mouse anti-NDV HN Mab and Alexa conjugates, but not with chicken anti-aMPV-B serum and FITC conjugate (Figures 4(a) and (b)), demonstrating the specificity of the antibodies and conjugates. When examining rLS/aMPV-B G-infected DF-1 cells stained with a mixture of anti-aMPV-B/FITC and anti-NDV HN/Alexa 568 antibodies, both green (Figure 4(c)) and red (Figure 4(d)) fluorescence were observed by fluorescence microscopy.
Table 2. Biological assessments of the NDV/aMPV recombinant viruses. a MDT: Mean death time assay in embryonated chicken eggs. b ICPI: Intracerebral pathogenicity index assay in day-old chickens. c HA: Hemagglutination assay. d EID50: 50% egg infective dose assay in embryonated chicken eggs. e TCID50: 50% tissue infectious dose assay in DF-1 cells.
Figure 2. Cytopathic effects induced by the recombinant viruses. Monolayers of DF-1 cells were infected with rLS/aMPV-A G, rLS/aMPV-B G, or LaSota virus at an MOI of 0.001. Mock infection was included as a control. At days 1, 2, and 3 post-infection, infected cells were digitally photographed using an inverted microscope at 100× magnification (Nikon, Eclipse Ti, Melville, NY).
Figure 4. Detection of aMPV-B G protein expression by IFA. DF-1 cells were infected with LaSota ((a) and (b)) or rLS/aMPV-B G ((c)-(f)) at an MOI of 0.01. At 24 h post-infection, the infected cells were fixed and stained with a mixture of chicken anti-aMPV-B and mouse anti-NDV Mab followed by a mixture of FITC- and Alexa Fluor® 568-conjugated antibodies. Fluorescence was examined and digitally photographed using an inverted fluorescence microscope at 100× magnification under UV light with matching excitation/emission filters for FITC or Alexa Fluor® 568 (Nikon, Eclipse Ti, Melville, NY). Green and red fluorescent images ((c) and (d)) were photographed from the same field of rLS/aMPV-B G-infected cells and merged into one image (f). In addition, the viral CPE induced by rLS/aMPV-B G was also photographed from the same field of infected DF-1 cells as the fluorescent images under bright light (e).
After merging both fluorescent images, green and red fluorescence co-localized to the same cells (Figure 4(f)), which corresponded to the viral CPE observed in the same field (Figure 4(e)). This result confirms that the aMPV-B G protein is co-expressed with the NDV HN protein from the recombinant virus in the infected cells.
"Biology",
"Medicine"
] |
FRAMEWORK SUSTAINABILITY STRATEGY, INHOTIM, BRAZIL
I will give you the overview of the presentation. The start is the introduction of myself as President of ICAMT, what we do and organise, and, just as important, as owner and director of ToornendPartners and the projects we are involved in at this moment. Next follows the introduction of Inhotim: where the museum is, in Brazil, and what kind of buildings there are in the Park; it gives a global idea. This all comes together with a lot of beautiful and bright pictures of the landscape, the surrounding nature, activities and Belo Horizonte. Then the step in the direction of the Framework for a sustainability strategy is made. First with the big question: Why? There are probably a thousand reasons why sustainability gives value to Inhotim; although reasons in the sense of resources, future and children are good to name, it is also about efficiency and money. The Framework has three dimensions: museum, sustainability and strategy. In the presentation these dimensions are explained. The first dimension is the museum. The museum contains the elements of collection, people and building. These three affect each other all the time, and it is, whatever you want, plan or do, the task of the museum staff to keep these three elements in balance.
FRAMEWORK SUSTAINABILITY STRATEGY, INHOTIM, BRAZIL
Jean Hilgersom
I will give you the overview of the presentation. The start is the introduction of myself as President of ICAMT, what we do and organise, and, just as important, as owner and director of ToornendPartners and the projects we are involved in at this moment.
Next follows the introduction of Inhotim: where the museum is, in Brazil, and what kind of buildings there are in the Park; it gives a global idea. This all comes together with a lot of beautiful and bright pictures of the landscape, the surrounding nature, activities and Belo Horizonte.
Then the step in the direction of the Framework for a sustainability strategy is made.
First with the big question: Why? There are probably a thousand reasons why sustainability gives value to Inhotim; although reasons in the sense of resources, future and children are good to name, it is also about efficiency and money. The Framework has three dimensions: museum, sustainability and strategy. In the presentation these dimensions are explained.
The first dimension is the museum. The museum contains the elements of collection, people and building. These three affect each other all the time, and it is, whatever you want, plan or do, the task of the museum staff to keep these three elements in balance. This is the daily job of the Inhotim organisation: find out what is best for the collection, what is best for the gardens and buildings, and what is best for all the visitors. The awareness of this is essential for the approach we will follow.
Sustainability is the second dimension, and we thought it important to make a connection with the Sustainable Development Goals of the United Nations. These are very well conceived, agreed upon and written down on the website of the UN, and there is also a very nice working app available for your iPhone and iPad. Most of the UN countries agreed on all these themes and future goals. We propose to bring these global goals down to the local goals of Inhotim. And that is why we have to make a connection with the first dimension, the museum.
This results in the UN goals: Healthy Lives, Water and Sanitation, Energy for All, Consumption and Production, and lastly Implementation and Partnership. Translated to the museum, these could be, successively: Indoor Air Quality, Water Use, Energy Use, Material Use and Management.
The third dimension is Strategy. This dimension is what gets everything moving, and of course moving in the direction you want to achieve.
Here we pick up the "good old" and proven method of Mr Deming, the Deming Circle or PDCA circle: Plan, Do, Check and Act. This is a circular movement that never stops and improves all the time. Elements of the Japanese method Kaizen could also be taken into account. This is the essence of sustainability: it never stops, it improves all the time.
Knowing that sustainability never stops, and that every step, even a small one, is important, gives Inhotim the power for improvement. The condition is that you have to know where the start is, that you measure what you do, and that you analyse the results to make a step upwards, raising the standards. The key of Strategy, this third dimension, is a process, and the phases in this process are Visionary thinking, Defining, Designing, Preparing, Implementing, Monitoring, Analysing and Improving. And this leads again to Visionary thinking. It is even better to say that this gives the Inhotim organisation the drive to improve.
And Inhotim is already on the spot, it is not necessary to start from scratch, all the keys are there and can be turned, if Inhotim likes.
We can conclude that all three dimensions of the framework are there: Museum - Sustainability - Strategy. The museum that needs the balance, the goals for sustainability and the strategy to get there, or even better, to raise the standard. Now it is time to build the framework for Inhotim. Building is always an adventure; building is unique, finding new paths and insights. Don't make it difficult, keep all involved informed, otherwise you lose commitment, and that is the start of losing control.
You must be aware that nothing is difficult, nothing is complicated, although it is sometimes hard to explain; the framework is the instrument to find and have the appropriate arguments you need for explanation, communication and commitment.
When we start with building the structure, whatever you do, find a reason why you do it, why it is important for Inhotim, what Inhotim wants to accomplish. This is the vision, the vision of Inhotim: the vision of Inhotim for the sustainability goals. Vision is about something Inhotim wants to reach in the long term.
What could be the vision of how to use energy, water and materials? What are the needs for these in your museum, for your collection, building or the people, visitors and staff? Is the collection growing? How long are the visitors staying in the museum, is the capacity of the public facilities adequate, what is the impact for the organisation, for the museum, etc.? There are lots of ideas to think about, and think about these for the long run, the future. What are the needs over 5 years, over 10 years?
It is not necessary to solve everything at the same time. The framework gives the room and opportunity to plan the visions and make steps. The steps won't have the same size for each measure; there are larger steps and smaller steps.
And don't forget, small steps are also steps, aren't they? The output of this phase is the vision of Inhotim with all the sustainable goals for the short and long term.
You will understand that the output of each phase is the input for the next phase. Communication is a major aspect in the process. It should be clear what is expected in each phase and what will be delivered; when there are doubts, communication is the only way to avoid misunderstandings.
**Vision**
Every project starts with a vision, a bright view of the future: a long-term idea about sustainability for Inhotim. Where will Inhotim be in 20 years? What will Inhotim mean for its surroundings? How efficiently will the organisation develop, and how will it be an attractive place for visitors in the future?
This vision could be big, and it should have commitment to make next steps in the project, next steps for the development of sustainability.
**Defining**
In this phase of defining, the vision will be translated into possible sustainability measures: what will be the impact of these measures, and how to choose? For this, several tools are available, like risk and opportunity assessments and analysing the strengths and weaknesses of Inhotim. With these tools you are able to develop the measures.
With the tool Life Cycle Cost Analysis, measures can be compared, and Inhotim is able to decide which measure is most effective. This is a decision driven by money; in the end it is mostly a matter of money, but other criteria and impacts can be taken into account as well.
**Designing**
This is the phase in which the measures named in the definition phase are designed; actually, designing the result of the definition phase. The design starts with deepening the definition results, which could be done by developing scenarios.
**Preparing**
Preparing is the technical design; it brings the conceptual design further, into technical details. The quality of the results of this detailing makes it possible to work on the planning of the execution, and then it is possible to find out where the possible pitfalls are and who could be responsible for these pitfalls. When the execution takes place, it will affect several organisational aspects, and probably the visitors. Policy has to be developed about this, as well as about health and safety for the people, staff and visitors, and the buildings. Also important to think about are permits from the local government, which could be needed. And what about procurement: the contractors should have a sustainable way of working and should be able to guarantee this by certification.
**Implementing**
This is the phase of executing the measures; actually that is quite simple, the execution of everything that was thought out in the previous phases. You will find out if all the aspects were designed well, and it is obvious that you will need some flexibility, and probably budget, to solve upcoming problems.
But time is also needed to prepare aspects for the development of the operation strategy: again a risk assessment, with the focus on the operation; the change and control procedures; and what will be the impact for the visitors. It is wise to develop your way of communicating the results of sustainability in the period of operation. Even when the results show only small improvements, it is wise to communicate the way you implement sustainability in Inhotim; everyone understands that it is a long road on which you improve constantly.
**Monitoring**
The next phase is the phase we were all waiting for: it is the start of the in-use period. Inhotim is able to experience what the effects of the sustainable measures are. What does it bring, and what is the impact for the operation?
This phase is the Monitoring phase. In the monitoring phase of this sustainability project, measurements are made: measurements of everything that is important, collecting all the data to measure the results of all the implemented sustainability items. This starts with writing a plan of measurements: what will be counted and what can you do with the results, so that the analysing in the next phase will be effective. Measuring performance is possible by using sensors, simple counting, and log instruments. A lot of values could be measured, like the weather, temperature inside and outside, humidity, and the number of people inside the building. Important is that commitment with staff members and with the visitors develops as well, and that the communication with all the people keeps this commitment up to date.
**New Vision**
The improvement of ideas, policy and commitment leads to a new vision, the development of a new sustainable horizon, and the whole process starts again; it keeps rolling. Rolling the standard to a new level, raising the quality of the visitor experience and the quality of the organisation to a higher level.
This new sustainability horizon gives the opportunity to set new long-term goals.
**Framework**
This is the framework with the three dimensions: museum, sustainability and strategy.
The museum needs to be in balance; the three elements collection, building and visitors need to be in balance. And although these needs are constantly moving and changing, the framework of sustainability and strategy gives a solid base to develop improvements for the needs and balance.
How long are they staying, what is the energy use during daytime and nighttime, what is the CO2 level inside the building, where do the deliveries come from, what certificates do the deliveries need, etc.
**Analysing**
All the data we have collected can be analysed; all kinds of methods are available to analyse it, such as counting averages. And what about feedback from the visitors: how do they experience the way sustainability plays a role in Inhotim, and what do they expect from sustainability in Inhotim? Tools like prediction and disruption could help to analyse the data and figures. This analysis gives the input for the development of new policy.
**Improving**
The process leads to new ideas and innovation: the redefinition or development of new policy, what could be possible for the near future. The development of sustainability makes steps, and small steps are also steps. This development goes hand in hand with the development of the organisation and the efficiency of the internal processes.
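As a purely illustrative sketch of the monitoring-and-averaging step described above (hypothetical readings, not Inhotim data):

```python
import pandas as pd

# Daily averages of a few of the monitored values named in the text.
readings = pd.DataFrame({
    "timestamp":   pd.to_datetime(["2019-01-01 10:00", "2019-01-01 14:00",
                                   "2019-01-02 10:00", "2019-01-02 14:00"]),
    "temp_inside": [22.1, 23.4, 21.8, 22.9],   # degrees C
    "humidity":    [52, 55, 50, 53],           # percent RH
    "visitors":    [120, 340, 90, 410],        # people counted in the building
    "energy_kwh":  [310, 280, 295, 305],       # energy use per logging interval
})

daily = readings.set_index("timestamp").resample("D").mean()
print(daily)    # one row per day: averages to compare against the targets
```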
"Economics"
] |
Novel Insights on Dietary Polyphenols for Prevention in Early-Life Origins of Hypertension: A Review Focusing on Preclinical Animal Models
Polyphenols are the largest group of phytochemicals with health benefits. Early life appears to offer a critical window of opportunity for launching interventions focused on preventing hypertension, as increasing evidence supports the supposition that hypertension can originate in early life. Although polyphenols have antihypertensive actions, knowledge of the potential beneficial action of the early use of polyphenols to avert the development of hypertension is limited. Thus, in this review, we first provide a brief summary of the chemistry and biological function of polyphenols. Then, we present the current epidemiological and experimental evidence supporting the early-life origins of hypertension. We also document animal data on the use of specific polyphenols as an early-life intervention to protect offspring against hypertension in adulthood and discuss underlying mechanisms. Continued research into the use of polyphenols to prevent hypertension from starting early in life will have far-reaching implications for future health.
Introduction
Polyphenols are the largest group of phytochemicals, all of which are natural compounds synthesized entirely by plants [1]. Polyphenols are generally categorized as flavonoids and nonflavonoids. Flavonoids have a chemical structure of 15 carbons constituted by a common skeleton with a C6-C3-C6 structure. Polyphenols are potent antioxidants and have been linked to many health benefits [2][3][4]. Polyphenols display a large range of biological effects, including antioxidant properties, anti-inflammatory effects, anticancer activity, improvement of endothelial function, antiobesity activity, antidiabetic activity, antiatherosclerotic properties, restoration of NO bioavailability, etc. [2][3][4][5]. However, further trials are required to provide recommendations on the dietary reference intake of polyphenols for health and disease prevention, and to fully assess the molecular mechanisms of action [5].
Increasing evidence has demonstrated the beneficial role of polyphenols in the treatment of hypertension [6][7][8]. Hypertension is one of the most important risk factors for cardiovascular disease (CVD), which is the primary cause of mortality worldwide [9]. The WHO estimates that more than a billion people have hypertension, upwards of 1 in 4 men and 1 in 5 women [10]. Even though pharmacological and interventional strategies have advanced in the past decades, the worldwide prevalence of hypertension is still high and continues to grow [11]. As the scope of the condition expands, greater attention should be focused on preventing and not just treating hypertension.
Although hypertension is an inheritable condition, genetic variants explain only a tiny fraction of phenotypic variations and disease risk [12]. Prior work suggested that missing heritability in hypertension can be a result of adverse events during prenatal, perinatal, or early postnatal life. Indeed, growing evidence supports the supposition that hypertension can originate in early life, resulting from a complex interplay of genetic, epigenetic, and environmental factors [13][14][15].
The link between one's environment in early life and disease as an adult is summed up in the concept of developmental origins of health and disease (DOHaD) [16]. Particularly, adverse programming processes can be avoided or postponed by early intervention, that is, through reprogramming, to avoid the development of chronic diseases across the lifespan [14,17]. A broad spectrum of environmental stimuli can induce the early-life origins of hypertension, including maternal malnutrition, illness, substance abuse, toxin/chemical exposure, medication use during pregnancy, etc. [14,15,[18][19][20][21][22].
During pregnancy and lactation, a plant-based diet can effectively meet energy and nutrient needs [23]. It is known that plant-based diets are rich in polyphenols; however, the protective or deleterious effects of polyphenol-rich foods on chronic diseases in pregnant women remain unclear [24]. Published data support the idea that early-life treatment with certain polyphenols can counteract the adverse processes behind developmental programming and thereby prevent the development of chronic diseases later in life [25]. Although polyphenols have been shown to have benefits for hypertension, the literature focusing on maternal polyphenol supplementation to avoid the early-life origins of hypertension remains limited.
Polyphenol: Chemistry and Biological Function
The word "polyphenol" is a generic term derived from Greek: "poly" means many, and "phenol" is an aromatic ring with a hydroxyl group attached. Phenolic compounds are secondary metabolites broadly spread in the plant kingdom that can be categorized as flavonoids and nonflavonoids. So far, more than 8000 phenolic structures are known, and among them, around 5000 flavonoids have been discovered [2]. Phenolic compounds comprise one (phenolic acid) or more (polyphenol) aromatic rings with attached hydroxyl groups. Polyphenols are found in plant-based foods and beverages, notably fruits, vegetables, whole grains, chocolate, wine, and tea.
Polyphenols have been classified by their chemical structure, biological function, and source of origin [2]. In the interest of brevity, classification of polyphenols in this review is done based on the chemical structure. As illustrated in Figure 1, the flavonoids mainly present in foods are flavonols, flavanones, isoflavones, flavones, flavan-3-ols, and anthocyanins. Among the nonflavonoid phenolic compounds are xanthones, stilbenes, lignans, and tannins. Here, for the sake of brevity, we provide only a concise overview as an introduction to the chemistry of polyphenols. For more in-depth information, please refer to reviews published elsewhere [1,2].
Flavonoids
One of the most extensively studied groups of polyphenols is the flavonoids. Daily intake of flavonoids constitutes about two-thirds of the total intake of dietary polyphenols. Flavonoids have the C6-C3-C6 general structural backbone, in which the two C6 units are of phenolic nature. Flavonoids can be further classified into different subgroups based on the hydroxylation pattern and variations in the chromane ring, such as flavones, flavanones, isoflavones, flavanols, flavonols, and anthocyanins.
A diverse range of pharmacological activities, including antioxidant, anti-inflammatory, antibacterial, antihyperlipidemic, and cardioprotective effects, are attributed to flavonoids [26]. Quercetin and kaempferol are the main representative flavonol molecules. Quercetin is mostly present in apples, onions, and berries, and has shown antihypertensive action [7]. Flavanones include naringenin, hesperetin, and eriodictyol. Flavanone intake has been linked to a reduced risk of obesity and diabetes [27].
The presence of isoflavones is almost entirely restricted to the leguminous family of plants. Isoflavones include biochanin A, genistein, daidzein, and glycitein [28]. The leading dietary source of isoflavones is soybean, which contains mainly genistein and daidzein. The chemical structure of isoflavones enables their attachment to and activation of estrogen receptors. Accordingly, isoflavones can exert estrogenic or antiestrogenic effects [28].
The basic chemical structure of flavones is in the form of two benzene rings united by a heterocyclic pyrone ring [29]. The main flavones in food are luteolin, apigenin, and tangeritin. Although flavones have demonstrated many potentially beneficial activities, they are not well absorbed compared to other polyphenols.
Flavanols, or flavan-3-ols, are usually termed catechins [30]. The main sources of flavanols are cocoa, dark chocolate, and berries. Unlike most flavonoids, flavanols have no C4 carbonyl in ring C and no double bond between C2 and C3. They can also form gallic acid conjugates such as epigallocatechin, epicatechin gallate, and epigallocatechin gallate [30]. Cocoa and chocolate are rich in flavanols, which has attracted attention as an option for the prevention of CVD and hypertension [31].
Represented by over 600 structures identified to date, anthocyanins are naturally occurring plant pigments [32]. Specifically, cyanidin, delphinidin, malvidin, and pelargonidin are widely distributed in plants [33]. Similar to other flavonoids, anthocyanins also have a number of health benefits [32,33].
Tannins are water-soluble, high-molecular-weight polyphenolic compounds that are categorized into two major groups: hydrolyzable and nonhydrolyzable. Hydrolyzable tannins are further classified into gallotannins and ellagitannins. Proanthocyanidins, better known as condensed tannins, are flavonoid polymers that exist widely in common foods [34]. Tannins provide protection against a broad range of biotic and abiotic stressors and have several pharmacological effects involving antihypertension [34,35].
Nonflavonoids
As shown in Figure 1, nonflavonoid phenolic compounds include xanthones, stilbenes, lignans, and diarylheptanoids [1,2]. Xanthones comprise a family of O-heterocycle symmetrical compounds with a dibenzo-γ-pyrone scaffold. Their distinctive tricyclic aromatic ring gives them cardioprotective potential and a broad spectrum of physiological properties [36]. Stilbenes are a small family of phenylpropanoids produced in a number of plant species. The basic chemical structure of stilbenes consists of a C6-C2-C6 skeleton, usually with two isomeric forms [37]. Resveratrol, from grapes and red wine, is one of the best-studied stilbenes [38].
Lignans form a group of phenolic compounds with a backbone of two phenylpropanoid (C6-C3) units [39]. Plant lignans occur in the form of glycosides. Compared to other phenolic compounds, lignans are relatively less studied even though they are widely distributed.
Diarylheptanoids are phenolic compounds with a skeletal structure of two aromatic rings conjugated with seven carbon chains [40]. Diarylheptanoids have been used as nutraceuticals due to their broad array of health-promoting properties [41]. Among nutraceuticals, curcumin is an important diarylheptanoid compound, which has been studied widely for its role in protection against many diseases [42]. The antihypertensive effect of curcumin has been reported in spontaneously hypertensive rats [43].
Biotransformation and Bioavailability of Polyphenols
The metabolic fate of dietary polyphenols in the body is schematically displayed in Figure 2. Only a minor portion of dietary polyphenols (5-10% of total polyphenol intake) can be directly absorbed in the small intestine, generally after deconjugation reactions such as deglycosylation [44]. After absorption, these less complex polyphenolic compounds undergo phase I and II reactions in the liver and enterocytes, giving rise to a series of water-soluble conjugate metabolites that are rapidly released into the systemic circulation for further organ distribution and urinary excretion. The remaining unabsorbed polyphenols (90-95% of total polyphenol intake) are known to be metabolized by gut microbes.
The biological characteristics of polyphenols are determined by intestinal absorption and bioavailability. Bioaccessibility, which determines the release and solubility of bioactive compounds during digestion for further absorption, is a crucial factor in bioavailability. Most polyphenolic compounds show low bioavailability, which is mainly linked to their poor bioaccessibility [45]. Importantly, gut-microbiota-derived metabolism and intestinal absorption affect the bioaccessibility of polyphenols [46].
The gut microbiota is responsible for the extensive degradation of the original polyphenolic structures into multiple low-molecular-weight phenolic metabolites. Polyphenol metabolites have attracted great interest, as many of them have shown similar biological effects compared to the parent compounds. There is a two-way mutual reaction between polyphenolic compounds and the gut microbiota that has an impact on human health. First, the gut microbiota mediates the biotransformation of polyphenols into their microbial metabolites, helping to increase their bioavailability. Second, polyphenols can act as prebiotics to shape gut microbiota composition and enhance beneficial bacteria [47]. As an example, the catabolic transformation of resveratrol has been extensively studied in recent years. In humans, resveratrol is mainly absorbed orally (approximately 70%) [47]. Resveratrol absorption occurs by diffusion or by forming complexes with membrane transporters. In the liver, sulfation and glucuronidation are the principal phase II metabolic pathways of resveratrol. As a result, the free form of resveratrol is at very low levels in the circulation [48]. In the circulation and target organs, the major forms of resveratrol are sulfate (e.g., trans-resveratrol-3-sulfate) and glucuronide (e.g., trans-resveratrol-3-glucuronide) conjugate metabolites. Other resveratrol derivatives, such as dihydroresveratrol and piceatannol, are also detectable in target organs [49,50]. Once metabolized, resveratrol can be rapidly excreted, with an elimination half-life of 130-180 min [47].
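To put the quoted clearance figures in perspective, the short sketch below applies simple first-order elimination kinetics to the 130-180 min half-life range cited above; the time points are arbitrary, and no specific dose or peak concentration from the cited studies is assumed.

```python
import numpy as np

# First-order elimination: fraction of the peak circulating level remaining
# after t minutes, for the 130-180 min half-life range quoted in the text.
def remaining_fraction(t_min: float, half_life_min: float) -> float:
    k = np.log(2) / half_life_min        # elimination rate constant (1/min)
    return float(np.exp(-k * t_min))

for t in (60, 180, 360):                 # 1 h, 3 h and 6 h after the peak
    fast = remaining_fraction(t, 130.0)  # faster end of the quoted range
    slow = remaining_fraction(t, 180.0)  # slower end of the quoted range
    print(f"t = {t:3d} min: {100*fast:5.1f}% to {100*slow:5.1f}% of the peak remains")
```

Under this simple model, even at the slower end of the range only about a quarter of the peak level remains after six hours, consistent with the rapid excretion described above.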
In addition, the gut microbiota is involved in resveratrol catabolism by increasing its availability from resveratrol precursors and producing resveratrol derivatives [49]. Showing high inter-individual variation, absorption of orally ingested resveratrol in humans and rats has been reported at approximately 20-70% and 15-50%, respectively [51,52]. These data indicate that the bioavailability of resveratrol largely differs from one person to another, depending mainly on the administration route and dose, as well as the gut microbial environment.
Beneficial Effects of Polyphenols in Hypertension
Many polyphenol-containing foods and beverages, such as grapes, tea, cocoa, and soy products, have been studied as antihypertensive agents [6]. The basic chemical aspects of flavonols, flavanols, isoflavones, anthocyanins, and stilbenes, as agents possibly responsible for the observed effects of polyphenol-rich foods on BP, are addressed. The reported mechanisms mediating the protective effects of polyphenols in hypertension, mainly supported by experimental data in animals, include inhibition of oxidative stress, enhancement of nitric oxide (NO) bioavailability, improvement of endothelial function, inhibition of vasoconstrictor endothelin-1 synthesis, and regulation of the renin-angiotensin-aldosterone system (RAAS) [6,7,53,54].
Although several systematic reviews indicated that dietary flavonoid intake reduces CVD risk [39,55,56], some data did not suggest that flavonoid-rich fruits can affect systolic and diastolic BP [57]. In addition, one meta-analysis that examined 45,732 cases of hypertension from 20 studies demonstrated that flavonoid intake showed a nonsignificant association with decreased risk of hypertension, while dietary anthocyanin intake was associated with an 8% reduction in hypertension risk [58]. Although the data are inconclusive and many questions remain open, the evidence is, on the whole, encouraging enough to consider polyphenol intake as a potential source of benefit for hypertensive subjects.
Epidemiological Evidence
There are several lines of evidence to support the idea that early-life environmental stimuli are closely linked to the risk of hypertension later in life. The first is observations from famine. Children born to women exposed to famine develop multiple chronic diseases involving hypertension in later life [59][60][61]. Another line of evidence comes from mother-child cohorts. Prior work found several risk factors related to the early-life origins of hypertension, including maternal malnutrition [62], maternal obesity [63], gestational hypertension [64], short-term breastfeeding [65], low maternal vitamin D levels [66], maternal smoking [67], and environmental chemical exposure [22].
The third line of evidence comes from many studies indicating that preterm birth and low birth weight (LBW) are key risk factors for hypertension later in life [13,[68][69][70]. A meta-analysis of 1342 preterm babies reported that preterm or very LBW babies had higher systolic BP in adulthood [70]. Further, in twin studies, associations have been reported between LBW and hypertension [71][72][73].
However, such epidemiological studies are unable to test direct cause-and-effect relationships or provide the molecular mechanisms that underlie the programming processes in order to develop efficient early-life interventions. Hence, animal models have been created to establish the biological plausibility of the associations observed in epidemiological studies, providing proof of causality.
Experimental Evidence
A wide range of early-life insults has been used in animal models to study the early-life origins of hypertension, including maternal malnutrition, maternal illness, pregnancy complications, environmental chemical exposure, and medication use in pregnancy [14,15,18,19]. Several small (e.g., rat and mouse) and large (e.g., ewe and cow) animal models have been used to assess the early-life origins of hypertension, with rats being the most commonly used species [15,[74][75][76]. So far, animal models have provided significant insights into the pathophysiological mechanisms involved in the early-life origins of hypertension. These molecular mechanisms include but are not limited to oxidative stress [20], dysregulated NO signaling [77], aberrant activation of the RAAS [78], dysfunctional nutrient-sensing pathways [79], dysbiotic gut microbiota [80], and epigenetic regulation [81]. As detailed descriptions of these mechanisms are beyond the scope of this review, readers are referred to reviews elsewhere for more in-depth information.
While the mechanisms underlying the early-life origins of hypertension remain to be fully elucidated, our knowledge of potential molecular mechanisms has advanced in recent years through animal experiments, which aid in developing efficient early intervention measures, specifically reprogramming, to prevent hypertension from happening [14,17]. Given that polyphenols regulate many biological functions, we might presume that using them as an early-life intervention could reprogram adverse programming processes and prevent the development of hypertension throughout life. A summary of the links between polyphenols and protective mechanisms implicated in the early-life origins of hypertension is given in Figure 3.
Polyphenols as a Reprogramming Strategy
So far, no information is available from human clinical studies with regard to the effects of perinatal polyphenol supplementation on the offspring's BP. Given that polyphenols appear to offer many promising health benefits, and that many polyphenols are claimed as nutraceuticals, it is no wonder supplementation with polyphenols during gestation and/or lactation has been examined in animal models to improve maternal and fetal outcomes [24,[82][83][84].
Among the animal studies that analyzed polyphenol compounds in the context of DOHaD-related disorders, many focused on the impact of resveratrol. Our understanding of the potential beneficial effects of early polyphenol supplementation to prevent hypertension of developmental origins is limited. Thus, in this review, we summarize experimental evidence documenting the use of polyphenols to prevent hypertension through early-life interventions administered during pregnancy and lactation, as presented in Table 1 [85][86][87][88][89][90][91][92][93][94][95][96][97]. The studies are limited to those that evaluated offspring outcomes starting post-weaning.
In the current review, rats were found to be the most widely used animal species. Only a mouse model was reported with regard to the protective effects of quercetin treatment against maternal high-fat diet-induced hypertension [85]. The reprogramming effects of polyphenol supplementation in rats have been assessed in offspring from 12 weeks to 6 months of age, which equates to human ages from adolescence to adulthood [98], while there is a paucity of information on the long-term effects of early polyphenol intervention on offspring in old age.
A previous study revealed that a soy isoflavone-deficient diet during gestation results in elevated BP in male adult offspring, which can be prevented by switching to a soy isoflavone-rich diet for 6 months in adulthood [102]. Additionally, prior research has shown the antihypertensive effects of flavones, flavanones, anthocyanins, and xanthones [6,103,104]. Accordingly, whether these polyphenols also have their own protective effects in the early-life origins of hypertension should be determined by further research.
All of these observations provide insight into several core mechanisms behind the protective effects of polyphenols, including oxidative stress, dysregulated NO pathway, aberrant RAAS activation, dysfunctional nutrient-sensing signals, dysbiotic gut microbiota, and inflammation. The interconnection between polyphenols and the proposed protective mechanisms underlying hypertension programming in response to adverse early-life insults is depicted in Figure 3. These will be discussed in detail in the following sections.
Oxidative Stress
One of the protective mechanisms of polyphenols and their metabolites is their action against oxidative stress [105]. The antioxidant activities of polyphenols are interrelated with their capacity to scavenge reactive oxygen species (ROS), upregulate antioxidant defenses, inhibit NADPH oxidase, increase glutathione (GSH) levels, and increase NO bioavailability [105,106].
Since the fetus has low antioxidant capacity, overproduction of ROS under suboptimal intrauterine conditions prevails over antioxidant defenses, giving rise to oxidative stress damage and consequently fetal programming [107]. As illustrated in Table 1, several early-life insults link oxidative stress to hypertension of developmental origins, including high-fat diet [85,95,97], antenatal dexamethasone exposure [86], maternal CKD [88], prenatal TCDD and dexamethasone exposure [93], and maternal bisphenol A and high-fat exposure [94].
As an antioxidant, quercetin has been used as a nutraceutical to offer protection against various diseases [106]. In a mouse model, adult offspring of dams fed a high-fat diet during pregnancy exhibited hypertension, which was prevented by quercetin supplementation in the pregnant dams [85]. Another study revealed that maternal treatment with epigallocatechin gallate attenuated the developmental programming of hypertension induced by antenatal dexamethasone administration [86].
Resveratrol, a stilbene, has been widely explored in many diseases [36]. In a maternal CKD model, the protection against hypertension afforded by perinatal resveratrol supplementation was related to reduced renal expression of 8-hydroxy-2′-deoxyguanosine (8-OHdG, a biomarker for assessing oxidative DNA damage) [88]. Additionally, the effect of perinatal resveratrol therapy in reducing oxidative stress is evidenced by the protection against hypertension in adult progeny of dams exposed to TCDD and dexamethasone [93], and to bisphenol A and a high-fat diet [94]. Moreover, supplementation with grape skin tannins in pregnancy and lactation protects against hypertension induced by a maternal high-fat diet, accompanied by restoration of decreased superoxide dismutase, catalase, and glutathione peroxidase activity [97]. These observations indicate that the interplay between polyphenols and oxidative stress is implicated in the early-life origins of hypertension.
Dysregulated NO Pathway
NO, a potent vasodilator, plays a crucial role in pregnancy and fetal development. Ample evidence indicates that a dysregulated NO pathway contributes to the pathogenesis of hypertension developing in early life [77]. Asymmetric dimethylarginine (ADMA) is an NOS inhibitor [108]. As a reprogramming strategy, restoring the ADMA-related ROS/NO imbalance has been proposed to avert developmental programming and avoid the resulting hypertension [109]. Table 1 shows the reprogramming effects of polyphenols targeting the ADMA/NO pathway to avert hypertension of developmental origins reported in various animal models, including maternal high-fat diet [87,95], maternal CKD [88], maternal L-NAME administration [90], prenatal ADMA and TMAO exposure [91], prenatal 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) exposure [92], prenatal TCDD plus dexamethasone exposure [93], maternal prenatal bisphenol A and high-fat diet exposure [94], and maternal hypertension [96].
Garlic is a polyphenolic and organosulfur-enriched nutraceutical [110]. Garlic oil supplementation during gestation and lactation was reported to protect against maternal high-fat diet-induced hypertension in adult rat offspring, coinciding with decreased ADMA levels and increased NO bioavailability [87].
Prior research reveals that resveratrol can stimulate NO production via upregulating endothelial NOS expression, stimulating NOS activity, reducing oxidative stress, and reversing eNOS uncoupling [111]. Our previous study revealed that perinatal resveratrol supplementation reduced plasma ADMA levels and restored NO bioavailability, providing protection against hypertension in offspring programmed by a high-fat diet [95].
Aberrant Activation of the RAAS
The RAAS is a major hormone cascade involved in the regulation of BP [112]. It contains two opposite pathways: the classic angiotensin-converting enzyme (ACE)-angiotensin (Ang) II-angiotensin type 1 receptor (AT1R) pathway, mediated primarily by Ang II, and the nonclassic ACE2-angiotensin-(1-7)-Mas receptor axis, mediated mainly by angiotensin-(1-7). It is well known that aberrant activation of the classic RAAS leads to hypertension. Conversely, inhibition of the classic RAAS or activation of the nonclassic RAAS can prevent the development of hypertension [112].
In line with previous studies showing the antihypertensive actions of several polyphenols in hypertensive models [113][114][115], maternal resveratrol supplementation was shown to protect adult offspring against hypertension in rat models of prenatal ADMA and TMAO exposure [91], prenatal TCDD plus dexamethasone exposure [93], and a high-fat diet [95]. Hypertension in offspring programmed by a maternal high-fat diet was associated with increased Ang I levels and reduced Ang (1-7) levels in the plasma [95]. Resveratrol treatment reversed these changes but decreased plasma Ang II levels. Together, the RAAS signals affected by polyphenol resveratrol appear to be in favor of vasodilatation. Still, the detailed protective mechanisms behind the modulation of RAAS components by different polyphenols involved in the early-life origins of hypertension await further exploration.
Dysfunctional Nutrient-Sensing Signals
Nutrient-sensing signals have a decisive role in fetal development and are mainly determined by maternal nutrition [116]. Resveratrol has been well-studied for its role in regulating nutrient-sensing signals. Several signals, such as AMP-activated protein kinase (AMPK), sirtuin 1 (SIRT1), and peroxisome proliferator-activated receptors (PPARs), are molecular targets of resveratrol [117]. Resveratrol is an AMPK or SIRT-1 activator [118]. Given that AMPK and SIRT-1 can mediate the expression of PPAR target genes, and that several PPAR target genes contribute to the pathogenesis of hypertension [119], dysfunctional nutrient-sensing signals appear to be a core mechanism behind hypertension of developmental origins. Accordingly, the use of early-life interventions targeting AMPK signaling has been proposed to prevent the early-life origins of hypertension [120].
Supplementation with resveratrol during rat pregnancy and lactation protected against the rise in BP programmed by maternal L-NAME and a high-fat diet [90]. Sixteen-week-old offspring of dams treated with resveratrol presented activation of the AMPK/SIRT1 pathway. The same maternal intervention with resveratrol also showed beneficial effects against hypertension programmed by a high-fat diet coinciding with activation of nutrient-sensing signals [89]. These observations highlight the need to better elucidate preventive aspects concerning the interconnection between polyphenols and nutrient-sensing signals in early life implicated in hypertension of developmental origins.
Dysbiotic Gut Microbiota
Adverse maternal conditions can alter the offspring's gut microbiota composition, resulting in adverse offspring outcomes [121]. Considering that polyphenols are biotransformed into their metabolites by gut bacteria and polyphenols can act like prebiotics to shape gut microbiota, it is speculated that maternal polyphenol supplementation has potential benefits in preventing hypertension of developmental origins. Indeed, flavanols and stilbenes have shown benefits in the early-life origins of hypertension in models of maternal high-fat diet, maternal CKD, and L-NAME plus high-fat diet [87,88,90].
In a high-fat diet model [87], maternal garlic oil therapy protected adult offspring against programmed hypertension associated with shifts in gut microbiota, with remarkable increases in the genera Bifidobacterium and Lactobacillus, two well-known probiotic strains. Additionally, garlic oil treatment increased plasma levels of acetate, propionate, and butyrate, which are the main microbiota-derived metabolites involved in BP control [122]. Given that the type and amount of active polyphenols were not determined in this study, the extent of the beneficial effect of garlic oil attributed to polyphenols deserves to be explored more fully.
Similarly, perinatal resveratrol supplementation protected against maternal CKD-induced hypertension in adult rat offspring, which was related to increased proportions of Lactobacillus and Bifidobacterium, as well as increased microbial richness and diversity [88]. In a maternal L-NAME plus high-fat diet model [90], the beneficial actions of resveratrol against hypertension of developmental origins are likely related to its ability to reduce the ratio of Firmicutes to Bacteroidetes, a microbial marker for hypertension [122]. It is an important proof of concept that polyphenols used early may act as prebiotics by reshaping the offspring's gut microbiome and reprogramming the early-life origins of hypertension.
Of note, the low bioavailability of polyphenols limits their clinical translation [45]. In this regard, we improved the efficacy of resveratrol via esterification to form resveratrol butyrate ester [123]. Our data show that low-dose resveratrol butyrate ester (25 mg/L) is as effective as resveratrol (50 mg/L) in preventing CKD-induced hypertension [124]. Considering that polyphenol bioavailability is mainly determined by gut microbiota, it would be important to further evaluate how gut microbiota affects polyphenol bioavailability involved in protecting against hypertension of developmental origins.
Inflammation
Pregnancy is considered to be a systemic physiologic inflammatory response, and inflammatory pathways are involved in compromised pregnancies and associated complications [125]. Polyphenols have been proposed to be useful as therapy for many diseases because of their anti-inflammatory actions [105]. Moreover, polyphenols can regulate immunity by interfering with immune cell regulation, gene expression, and proinflammatory cytokine synthesis [126].
The accumulation of T cells, macrophages, and their derived proinflammatory cytokines is involved in the pathogenesis of hypertension [127]. In addition, an imbalance of T helper 17 (TH17) and T regulatory (Treg) cells has been connected to hypertension [127]. The dysregulated Treg/TH17 balance and inflammation can be triggered via the aryl hydrocarbon receptor (AhR) signaling pathway [128]. The activation of AhR signaling can initiate inflammation by increasing monocyte adhesion, upregulating proinflammatory cytokine expression, inducing endothelial adhesion molecules, and reducing NO bioavailability [129].
A previous study showed that TCDD-induced hypertension coincided with TH17-induced renal inflammation, as well as AhR signaling activation [92]. Conversely, TCDD-induced activation of AhR signaling and TH17 responses can be reversed by resveratrol supplementation during gestation and lactation. In addition, resveratrol was reported to act as an AhR antagonist, showing benefits in preventing offspring hypertension in other models of the early-life origins of hypertension [93,94].
Though a vast number of published studies have proved the anti-inflammatory role of various types of polyphenols in the prevention and therapy of many diseases [105], only resveratrol has been examined for its anti-inflammatory action in the early-life origins of hypertension. More work is required to gain a comprehensive insight into the role of polyphenols in modulating inflammatory cellular pathways in order to develop inflammation-targeted therapies for the prevention of hypertension of developmental origins.
Others
With regard to the multifaceted biological role of polyphenols, other possible mechanisms might be involved, for example, epigenetic regulation or regulation of H2S. Several polyphenols have epigenetic action [82]. Epigenetic deregulation has been identified as a molecular mechanism underlying developmental programming in the context of DOHaD [82]. Although one report showed that resveratrol therapy prevents obesity in adult progeny, accompanied by epigenetic regulation of leptin and its receptor through DNA methylation [130], the data are insufficient to conclude that the reprogramming effects of resveratrol on programmed hypertension are directly through epigenetic regulation. Additionally, several polyphenols have been reported to regulate H2S oxidation [131,132]. Notably, the protective effect of garlic oil on maternal high-fat diet-induced programmed hypertension is relevant to the enhanced H2S signaling pathway [87]. These findings reveal that an interaction between polyphenols and H2S might be behind the early-life origins of hypertension, although this remains speculative.
Although several core molecular mechanisms were outlined above, additional work will need to be carried out to explore other potential mechanisms. A greater understanding of the interactions between individual polyphenols and the mechanisms implicated in their differential protective action will be the key to identifying proper implementation of polyphenols in early life for further clinical translation.
Conclusions and Perspectives
Accumulating evidence in support of the beneficial role of early-life polyphenol supplementation in preventing hypertension of developmental origins is robust, but still incomplete. The biggest unsolved problem is the lack of evidence in humans that maternal dietary polyphenol consumption protects against the programming of hypertension. Although more than 750 clinical trials have been performed on polyphenol-rich foods, polyphenol extracts, or their pure compounds to study their impact on health [133], there is presently no information on how polyphenol supplementation in pregnant women will influence their children later in life.
Another factor limiting the clinical translation of polyphenols is their low bioavailability in vivo [51]. In view of the complexity and inter-individual variability of polyphenol pharmacokinetics, additional research is needed to better explore the differential impact of various polyphenols on the early-life origins of hypertension.
Another important aspect to consider is that substantial progress has been made in clarifying the benefits of different polyphenols in established hypertension, while little attention has been paid to their reprogramming effects in hypertension of developmental origins. In this review, only flavonols, flavanols, stilbenes, and tannins were investigated. Further examination will be required to get a fuller view of the reprogramming mechanisms of various polyphenols and test their dose-dependency using developmental programming models.
In summary, polyphenols have a meaningful role in the prevention of hypertension. After gaining a better understanding of the mechanisms behind hypertension of developmental origins and the latest advances in the early use of polyphenols, further research in humans will be needed to provide important insights into clinical translation and reduce global hypertension rates.
Conflicts of Interest:
The authors declare no conflict of interest.
Phase-dependent bistable transitions in a weakly-coupled GaAs/AlAs superlattice
The bistability between a stable fixed point (SFP) state and a stable limit cycle (SLC) state is observed at the saddle-node bifurcation of cycles occurring in a weakly-coupled GaAs/AlAs superlattice. Controlled transitions between SLC and SFP are induced by external voltage pulses. Intrinsic phase dependence of the transition from SLC to SFP is clearly demonstrated. Using a discrete drift model, the experimental observations can be reproduced in simulation.
Introduction
Vertical transport of semiconductor superlattices (SLs) has been studied extensively in the past few decades. Many interesting transport phenomena have been revealed both experimentally and theoretically in SLs, such as saw-tooth-like I-V characteristics [1]-[4], self-sustained current oscillations (SSCOs) [5]-[8], chaos [9]-[11], U-sequence [12] and coherence resonance [13]. Bistability has also been shown to be inherent in the vertical transport properties of SLs [3,4,6,14]. It can exist between some stable fixed points (SFPs) [3,4], or between an SFP and a stable limit cycle (SLC) [6,14]. Here, the SFP and SLC correspond to static currents and SSCOs at a given dc bias, respectively [15,16]. The bistability between SFPs arises from the location of the domain boundary in different quantum wells at the same dc bias [3,4]. On the other hand, Kastrup et al [6] theoretically predicted the existence of bistability between SFP and SLC in SLs and then Luo et al [14] observed this bistability experimentally in an undoped photoexcited GaAs/AlAs SL within a certain range of laser intensities. In both cases SLs were in a parameter region where SSCOs, i.e. SLCs, were observed throughout the first tunneling plateau and the bistable region only appeared near the edge of the plateau.

In the present work, bistability between SFP and SLC is observed in a doped GaAs/AlAs SL without photoexcitation and the SL is tuned in another parameter region where dynamic voltage bands (DVBs) and static voltage bands appear alternately on the first tunneling plateau [17]. A DVB corresponds to a voltage interval in each saw-tooth-like current branch where SSCOs are observed. By sweeping the I-V curve on the plateau both in sweep-up and sweep-down directions, a hysteresis is clearly observed at the right boundary of each DVB, which corresponds to a bistability between SFP and SLC. The bistable region is shown to end via a subcritical Hopf bifurcation at the left side and a saddle-node bifurcation of cycles at the right side. We further investigate the transition between SFP and SLC in this bistable region by using a rectangular voltage pulse as a short perturbation to the SL. It reveals an intrinsic phase dependence in the bistable transition from SLC to SFP. A numerical simulation based on a discrete drift model [18,19] reproduces all the experimental observations. Our work also indicates that the pulse-induced transition scheme can be a possible way to explore the attracting basins of SLC and SFP in the phase space.
Experimental
The GaAs/AlAs SL sample used in this work was grown by molecular beam epitaxy. It consists of 30 periods of 14 nm GaAs well and 4 nm AlAs barrier and is sandwiched between two n⁺-GaAs layers. The central 10 nm of each GaAs well is doped with Si (n = 2 × 10¹⁷ cm⁻³). The sample is fabricated into 0.2 × 0.2 mm² mesas. The time-averaged I-V curve is measured by an HP 4155A semiconductor analyzer. In order to investigate the phase dependence of the transition from SLC to SFP, the experimental set-up shown in figure 1 is utilized. The HP 4155A semiconductor analyzer functions as a dc voltage source. The real-time current signal through the SL sample flows into a Stanford SR570 low noise current preamplifier whose input is virtually grounded. The SR570 amplifies and converts the current signal into a voltage signal, which is then simultaneously monitored by an Agilent Infiniium 54830B digital oscilloscope and fed into a home-made comparator (ADC) with adjustable reference level. The output of the comparator, which is a transistor-transistor logic (TTL) signal with the high level at 5 V and low level at 0 V, is connected to the trigger input of an Agilent 33220A function generator. At the rising edge of the TTL signal, the function generator is triggered to output a specific rectangular voltage pulse to the SL sample. By tuning the reference level of the comparator, a voltage pulse can be generated at a well-defined phase position of the SSCO signal. In the study of the transition from SFP to SLC, the function generator is triggered manually and the comparator is disconnected from the circuit shown in figure 1.

Thus weakly-coupled semiconductor SLs constitute a class of bistable nonlinear systems between SFP and SLC in the DVB regime. Although the bistability between SFP and SLC has been observed in an undoped photoexcited GaAs/AlAs SL, the bistability only appears near the edge of the first plateau and the upper and lower photocurrent branches in the hysteresis correspond to an SLC and an SFP solution, respectively [14]. But here the bistability is observed in each DVB on the plateau and the upper and lower current branches correspond to an SFP and an SLC solution, respectively. The observed bistability between an SFP and an SLC at the right boundary of a DVB can be reproduced using the discrete drift model, which has been widely adopted to study the transport properties of weakly-coupled SLs [18,19]. Since the bistability can also be observed in DVBs on the second tunneling plateau (data not shown), the diffusion component of the total current which is generally considered on the first tunneling plateau has been neglected in the simulation for simplicity. For detailed information about the model and parameters we used in the simulation, please refer to our recent work [20]. In order to determine the nature of the bifurcation occurring at the right boundary of the DVB, numerical simulations have been performed to show the dependences of the peak-to-peak current density, J pp, and the frequency, f, of the calculated SSCO on the applied dc bias, V dc, while V dc increases towards the bifurcation point V 0 (indicated by a vertical arrow in figure 3(b)). The obtained results are shown in figure 4. As V dc approaches V 0, J pp decreases and f increases. Both J pp and f obey the same scaling law: J pp ∼ O(1) and f ∼ O(1). This is a generic scaling law for the saddle-node bifurcation of cycles [21], i.e. an unstable limit cycle and a stable limit cycle collide at the bifurcation point V 0.
Furthermore, it can also be expected that the unstable limit cycle would shrink with decreasing dc bias and eventually engulf the SFP at the left boundary of the hysteresis, giving rise to a subcritical Hopf bifurcation. Therefore, the bistability disappears via a subcritical Hopf bifurcation and a saddle-node bifurcation of cycles at the left and right boundaries, respectively. This bifurcation scenario is the same as that in [14], but different from that in [6], where supercritical Hopf bifurcation and homoclinic bifurcation were predicted theoretically.
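The coexistence of an SFP and an SLC bounded by a subcritical Hopf bifurcation on one side and a saddle-node bifurcation of cycles on the other can be illustrated with a one-dimensional radial normal form. The sketch below is only a generic toy model with arbitrary parameters, not the discrete drift model of [18,19].

```python
import numpy as np

# Radial normal form r' = mu*r + r**3 - r**5: the origin (SFP) is stable for
# mu < 0, and for -0.25 < mu < 0 a stable limit cycle (SLC) of radius
# sqrt((1 + sqrt(1 + 4*mu)) / 2) coexists with it, separated by an unstable
# cycle. The cycles merge in a saddle-node bifurcation of cycles at mu = -0.25,
# and the SFP loses stability in a subcritical Hopf bifurcation at mu = 0.
def final_radius(r0: float, mu: float, dt: float = 1e-3, steps: int = 200_000) -> float:
    r = r0
    for _ in range(steps):
        r += dt * (mu * r + r**3 - r**5)   # explicit Euler step of the radial equation
    return r

for mu in (-0.35, -0.15, -0.05):
    print(f"mu = {mu:+.2f}: "
          f"r0 = 0.05 -> {final_radius(0.05, mu):.3f}, "
          f"r0 = 1.20 -> {final_radius(1.20, mu):.3f}")
# Outside the bistable window (mu = -0.35) both initial conditions decay to the
# SFP; inside it, the larger initial condition settles on the SLC instead.
```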
Pulse-induced transition from SLC to SFP
The transition from SLC to SFP is studied utilizing the phase-dependent pulse generation scheme as shown in figure 1. The SL system with V dc = 473.2 mV is initially at the SLC state as marked by a red dot on the lower current branch in figure 2(b), and then a rectangular voltage pulse is applied as a perturbation to the SL system, with the pulse width and height denoted by and h, respectively. The results for two different phase positions A and B, marked by the red dots in figure 2(c), are shown here. Figures 5(a) and (b) show the transition efficiency (η) obtained under different pulse conditions for phases A and B, respectively. η is defined as the fraction of applied pulses that induce a successful transition from SLC to SFP (figure 5(a)). We have also investigated the transition from SLC to SFP with the same pulses of opposite polarity, i.e. h < 0, but no successful transitions are observed (data not shown).
SFP and SLC correspond to zero-dimensional and one-dimensional (1D) attractors, respectively, in the phase space. There are attracting basins for SFP and SLC in the phase space. Any phase flow falling into an attractor's basin will be attracted and finally settle down on that attractor. Generally speaking, in steady state of a bistable system the system settles down on one of the two attractors depending on the initial conditions. A dynamic perturbation can drive the system away from the initial attractor to a transient intermediate state. After the perturbation the system will return to a steady state. But depending on the basin of attraction where the transient intermediate state is located, the system can settle down on either its initial attractor or the other attractor. In the latter case, the transition between the bistable states of the system is achieved. Indeed for the bistable SL system studied here, the voltage pulse-induced transitions from SLC to SFP (as shown in figure 5) and the transitions from SFP to SLC (discussed in the next section) are clearly observed. Most interestingly, we note that the SLC is a 1D attractor in the phase space and it consists of an infinite number of phase points. Starting from different phase points the system will evolve to different intermediate states even when the same perturbation is applied. As a result, the transition from SLC to SFP through a transient process is phase-dependent. The difference between figures 5(a) and (b) as discussed above clearly demonstrates this inherent phase dependence. Furthermore, it is possible that the perturbation-induced transient intermediate state is located in the vicinity of the phase boundary of the two attracting basins of SLC and SFP. Under this condition, whether the system will finally evolve to the new target state is mainly determined by random noise present in the experiments. This leads to stochastic transitions (0 < η < 1) as observed in figure 5(a). As for the pulse polarity-dependent features mentioned above, it is believed that they are closely associated with the detailed phase structures of the SL system in the multidimensional phase space. Different relative distribution of SLC and SFP and anisotropic attracting basins of SLC and SFP are possible reasons for this polarity dependence.
To confirm the phase-dependent transition from SLC to SFP, we also performed a numerical simulation using the discrete drift model. The transitions from SLC to SFP at V dc = 195.8 mV for two phase positions C and D, as indicated by red dots in figure 3(c), are investigated with a Gaussian voltage pulse defined by a·exp(−t²/b²), where a is the pulse height and b specifies the pulse width. Note that using a Gaussian pulse instead of a rectangular pulse in the simulation does not change the essence of the problem.
Figures 6(a) and (b) show the η obtained under different pulse conditions for phase points C and D, respectively. One can see that these numerical results are qualitatively the same as the experimental ones shown in figures 5(a) and (b). Successful transitions from SLC to SFP for phases C and D by applying the Gaussian pulse to the SL system are achieved and the difference between figures 6(a) and (b) confirms the inherent phase dependence of the transition from SLC to SFP. Since noise is negligible in the numerical simulation, stochastic transitions are not observed in figures 6(a) and (b).
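As a purely illustrative complement, the toy normal form introduced above (written in Cartesian coordinates, with an added rotation) already reproduces the qualitative picture: a Gaussian kick of fixed height and width applied while the orbit sits on the stable cycle sends the system to the fixed point only for some initial phases. The kick parameters and the rotation rate are arbitrary; this is not the discrete drift model and no quantitative comparison with figures 5 and 6 is implied.

```python
import numpy as np

# Phase-dependent pulse-induced transition in the toy model:
#   x' = (mu + r^2 - r^4)*x - omega*y + kick(t),  y' = (mu + r^2 - r^4)*y + omega*x,
# with a Gaussian kick a*exp(-((t - t0)/b)**2) applied along x only.
mu, omega = -0.15, 1.0
r_slc = np.sqrt((1 + np.sqrt(1 + 4 * mu)) / 2)    # stable-cycle radius
r_uns = np.sqrt((1 - np.sqrt(1 + 4 * mu)) / 2)    # unstable-cycle radius (basin boundary)

def ends_at_sfp(phase, a=-10.0, b=0.05, t0=0.1, dt=1e-3, t_end=80.0):
    x, y = r_slc * np.cos(phase), r_slc * np.sin(phase)
    for n in range(int(t_end / dt)):
        t = n * dt
        r2 = x * x + y * y
        radial = mu + r2 - r2 * r2
        kick = a * np.exp(-((t - t0) / b) ** 2)
        x, y = (x + dt * (radial * x - omega * y + kick),
                y + dt * (radial * y + omega * x))
    return np.hypot(x, y) < r_uns / 2             # settled well inside the SFP basin

phases = np.linspace(0.0, 2 * np.pi, 12, endpoint=False)
hits = [ends_at_sfp(p) for p in phases]
print("phases (rad) giving SLC -> SFP:", [round(p, 2) for p, ok in zip(phases, hits) if ok])
print("fraction of sampled phases that switch:", sum(hits) / len(hits))
```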
Pulse-induced transition from SFP to SLC
The transition from SFP to SLC is also investigated, but this time the pulse generation is triggered manually instead of phase-dependently. Figure 7 shows the η obtained experimentally under different pulse conditions. Note that the pulse polarity is opposite to that used in SLC to SFP transitions (see figure 5). Blue colored squares in figure 7 clearly demonstrate the successful transitions from SFP to SLC. Since an SFP corresponds to a single phase point in the phase space, no phase dependence exists in the transition from SFP to SLC. This is a striking difference compared to the transition from SLC to SFP. It is also noted that no successful transitions from SFP to SLC are observed when the pulse polarity is changed (data not shown). Besides these, the same minimum absolute value of h in figures 5 and 7 suggests comparable size of the attracting basins of SLC and SFP at V dc = 473.2 mV. This is consistent with the fact that the applied dc bias is right in the middle of the hysteresis region, as shown in figure 2(b), since one could expect that the closer to the left (or right) boundary of the bistable region the applied dc bias is, the larger the attracting basin of SLC (or SFP) is.
In the above, we have investigated the bistable transitions between SFP and SLC induced by a voltage pulse. The pulse widths used are of the order of the intrinsic SSCO period. So the perturbation is a fast transient process and the response of the system is dynamic. This is completely different from achieving the transitions between SFP and SLC by controlling the dc voltage sweep (see figures 2(a) and (b)) as in the dc voltage sweep the system reaches a steady state at each applied dc voltage and the response of the system is static.
Conclusion
A bistability is observed in each DVB of a weakly-coupled GaAs/AlAs SL, which indicates the coexistence of SLC and SFP states at a given dc bias in the phase space. At the left boundary of the hysteresis, the SFP branch disappears via a subcritical Hopf bifurcation, while at the right boundary, the SLC branch vanishes through a saddle-node bifurcation of cycles. Controllable transitions between these two states through a transient process can be achieved successfully by applying an external voltage pulse as a dynamic perturbation. More importantly, it reveals the phase-dependent feature in the pulse-induced transition from SLC to SFP, which is in agreement with a numerical simulation based on the discrete drift model. In conclusion, the pulse-induced transition scheme as shown in sections 3.2 and 3.3 can be generally applied to any bistable system with bistability between two SFPs, or between SFP and SLC. In order to obtain controllable transitions between bistable states, the minimum requirement of the perturbation is to drive the system to a transient intermediate state located within the attracting basin of the target state. Once this requirement is satisfied, the system will evolve towards the target state after a transient process. But different from the bistable transition from SFP to SFP, or from SFP to SLC, the pulse-induced transition from SLC to SFP exhibits a unique phase-dependent character. This phase dependence is an inherent property in transitions from SLC to SFP through a transient process regardless of the detailed dynamics of the bistable system. As a result, the phase position of the SLC, where a perturbation is applied, is an additional control parameter for successful transitions from SLC to SFP. Besides these, the pulse polarity dependence and the minimum requirement of the pulse revealed in the pulse-induced transitions between SFP and SLC also provide some information about the attracting basins of SFP and SLC in the phase space.
Optimized Leaky-Wave Antenna for Hyperthermia in Biological Tissue Theoretical Model
In this paper, we exploit the enhanced penetration reachable through inhomogeneous waves to induce hyperthermia in biological tissues. We will present a leaky-wave antenna inspired by the Menzel antenna, which has been shortened through appropriate design and optimization and which has been designed to optimize the penetration at the interface with the skin, allowing penetration in the skin layer at a constant temperature and enhanced penetration in the overall structure considered. Past papers both numerically and analytically demonstrated the possibility of reducing the attenuation that electromagnetic waves are subject to when travelling inside a lossy medium by using inhomogeneous waves. In those papers, a structure (the leaky-wave antenna) is shown to allow the effect, but such a radiator suffers from low efficiency. Also, at the frequencies that are most used for hyperthermia applications, a classical leaky-wave antenna would be too long; here is where the idea of the shortened leaky-wave antenna arises. To numerically analyze the penetration in biological tissues, this paper considers a numerical prototype of a sample of flesh, composed of superficial skin layers, followed by fat and an undefined layer of muscles.
Introduction
Microwave hyperthermia is a widely utilized technique in cancer treatment [1]. In simple terms, hyperthermia involves delivering a precise amount of energy to the targeted tissue, resulting in a controlled temperature increase. Microwaves offer a unique advantage over other methods by enabling heating in the volume of tissue, leading to a more even temperature distribution. The temperature profile is a crucial factor in hyperthermia, as non-microwave approaches relying on thermal diffusion tend to overheat the skin surface, whereas microwaves distribute energy within the tissue, resulting in lower skin surface temperatures. The release of microwave energy diminishes exponentially with tissue depth, making it essential to achieve a "low exponential decay" of absorbed power to enable deeper microwave penetration. This paper aims to describe a novel type of microwave applicator that facilitates deeper penetration by generating an appropriate inhomogeneous plane wave within the tissue that optimizes the penetration angle with the skin tissue. The applicator is based on a leaky-wave structure, which has been profoundly modified to meet two additional important requirements of microwave applicators: high efficiency (i.e., effectively delivering a substantial portion of the input power to the tissue) and treatment coverage of a significant tissue area. To fulfill these objectives, the original leaky-wave structure is bent, forming two parallel structures that are fed via a Wilkinson power divider.
Through analysis with commercial full-wave simulators, the final structure is proven to be compact, efficient, and capable of effectively releasing energy within the tissues.
The introduction section of this article is followed by a subsection dedicated to the theoretical background, provided to allow the understanding of the theory behind this antenna design. In Section 2, the design is explained and the chosen numerical models for the biological tissues are given. Finally, the simulation environment and the positioning of the model relative to the antenna are described. In Section 3, the results of the simulation are illustrated, and in Section 4, the importance of the proposed design is highlighted, the compromises made are discussed, and objectives for future research are indicated. Finally, conclusions are provided.
Theoretical Background
It has been demonstrated in the literature that the incidence of an inhomogeneous wave incoming from a non-dissipative medium on a lossy medium may produce a transmitted wave that can penetrate deeper than the one produced by the incidence of a more conventional homogeneous wave, both numerically [2] and analytically [3][4][5][6]. In order to obtain a deep-penetration effect, the incoming wave must fulfill some conditions that bind the minimum attenuation vector to the electromagnetic characteristics of the medium and the incidence angle [5]. In fact, the minimum attenuation vector for which it is possible to achieve deep penetration is expressed in [5] in terms of the wave vectors in the lossless medium and in the lossy medium. In correspondence of those values of the attenuation vector, one of two conditions involving the incidence angle holds [5], depending on the characteristics of the media involved; here, ξ is the angle formed by the phase vector with the normal to the interface between the air and the lossy medium, and the subscript "c" stands for "critical", meaning that ξc is the minimum value of ξ that assures the deep-penetration phenomenon, i.e., ζ2 = 90°, ζ2 being the angle formed by the attenuation vector of the wave inside the lossy medium with the normal to the interface with the air [5]. Moreover, in [6], an alternative and equivalent description, more suitable for ray tracing techniques, has been developed, and, among the results, it is demonstrated that the deep-penetration effect can be achieved at the interface between two lossy media. The deep-penetration phenomenon requires an incidence angle different from 0°, and in any case, the presence of an attenuation vector in the incident wave constitutes a sufficient condition for a possible enhancement of the penetration in a lossy medium, i.e., even in the case of normal incidence, an inhomogeneous wave may allow deeper penetration [2]. It has to be noted, by the way, that a less attenuated field does not necessarily correspond to a stronger field inside the medium, or at its surface. In practical applications, such as microwave hyperthermia, the absolute value of the field within the lossy medium is of main interest. On the other hand, a great attenuation means that the field strongly loses power with respect to a less attenuated field, so, to reach the desired field at a certain depth, a stronger field at the surface (i.e., the skin) may need to be imposed, thus resulting in overheating or burning of the surface. The inhomogeneous wave behavior promises a more homogeneous distribution of the field amplitude while penetrating a lossy medium. Following both the literature findings listed above and some exploratory work that tried to benefit from the leaky-wave antennas' properties in order to create microwave applicators, such as [7], here, to achieve our objectives, we designed a particular leaky-wave antenna (LWA), derived from the Menzel antenna [8][9][10], which was demonstrated as a suitable antenna design for deeper penetration, given the large value of the amplitude of the attenuation vector achievable by this structure. Such an antenna design has been modified here to make it shorter so that it would be suitable for biomedical applications, and more specifically hyperthermia, at a frequency of 2.4 GHz.
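As a quantitative point of reference for the attenuation that the deep-penetration approach tries to mitigate, the short sketch below evaluates the attenuation constant of an ordinary homogeneous plane wave in a muscle-like medium at 2.4 GHz. The permittivity and conductivity are illustrative round numbers, not the tabulated tissue data used later, and the calculation does not implement the inhomogeneous-wave conditions of [5], which are given in the cited works.

```python
import numpy as np

# Homogeneous plane wave in a lossy medium: complex wavenumber, attenuation
# constant (Np/m) and 1/e field penetration depth at 2.4 GHz.
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
f = 2.4e9
omega = 2 * np.pi * f
eps_r, sigma = 52.0, 1.7                         # assumed muscle-like values

eps_c = eps_r - 1j * sigma / (omega * eps0)      # complex relative permittivity
k = omega * np.sqrt(mu0 * eps0 * eps_c)          # complex wavenumber in the tissue
alpha = -k.imag                                  # attenuation constant (Np/m)

print(f"alpha = {alpha:.1f} Np/m, 1/e depth = {1e3 / alpha:.1f} mm")
print(f"field left after 20 mm: {100 * np.exp(-alpha * 0.02):.1f} %")
```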
Materials and Methods
The antenna has been designed and simulated by using the CST Microwave Studio software licensed to the DIET Department of "La Sapienza" University of Rome, and all figures shown in the paper have been obtained either by producing them with such software, or by re-designing them, where needed using Matlab, with the addition of custom information aimed at providing additional details which were not available in the original figures.
The antenna operating frequency is 2.4 GHz. This is because the 2.4 GHz band belongs to the Industrial Scientific and Medical (ISM) frequencies [11], and it makes it particularly easy to penetrate inside biological tissues, since the wavelength is much larger than the average electrical thickness (with respect to the dielectric constant) of skin or muscles.
Leaky-Wave Antenna Design
Leaky-wave antennas (LWAs) are structures in which a propagating travelling wave loses power as long as it propagates. This happens due to asymmetries in general, periodically placed along the LWA, that disturb the normal flow of the energy, producing radiation out of a guided mode [12][13][14]. The LWA design proposed here is based on the Menzel antenna, already considered in [2], and opportunely modified for the hyperthermia application requirements. The antenna in [2] has been designed to operate at 12 GHz. In that case, the chosen medium's electric permittivity and magnetic permeability were chosen to be equal to the ones of a vacuum, but a finite non-zero conductance was added. That was a reasonable choice, since the main interest in those papers was purely theoretical, without addressing any specific application. As a result, we had to re-design the antenna for operating at 2.4 GHz. An LWA is usually several wavelengths long; therefore, keeping the form factor of the original design to operate at 2.4 GHz is not feasible (i.e., simply scaling the dimensions by a factor equal to 12/2.4). That is why we designed a particular antenna configuration, splitting the LWA into three sub-antennas. The single-element dimensions and performances in free space can be seen in Figure 1.
The antenna shown in Figure 1 operates at 2.4 GHz but is only 150 mm long, i.e., only a little longer than a wavelength. A proper transition of about 8 mm for each port is needed to match the antenna to 50 Ω, leading to a total length equal to 166 mm.
Note that a longer antenna would not have been practical for hyperthermia applications. This design, which results in reduced efficiency, since only a fraction of the power supplied to the antenna is effectively radiated, will be modified in the following. It is understood that the remaining part is absorbed by the waveguide port placed at the end of the Menzel, so as not to let it radiate.
With respect to the antenna presented in [2], this structure has been halved, exploiting the symmetry of the radiating mode (i.e., the first odd mode) through several via holes all along the symmetry axis. This simpler structure allows it to be easily fed by a 50 Ω microstrip line. Moreover, halving the Menzel antenna allows us to suppress the dominant mode, as illustrated in [15]. The difference between the radiating mode excited in the classical configuration with respect to the halved Menzel antenna is visible in Figure 2. Figure 2 represents an example of the image theory [16]. The S-parameters shown in Figure 1 represent the relationship between the power at the two ports. The first one is where the power is excited (one side of the antenna), and the second one is where the remaining part is absorbed (the opposite side of the antenna); see Figure 1. It is worth noting that, in this case, due to the losses of the network (i.e., the radiation of the antenna), the S-matrix is no longer unitary; this means that the sum of |S11|² and |S21|² is not equal to 1 [17]. Of course, power that is not reflected nor transmitted is radiated.
Since the S21 in Figure 1 shows that the second port receives −4.9 dB of the power injected, it means that, due to the shortness of the antenna, a considerable part of the energy is still present in the dielectric. In order to obtain an efficient applicator, avoiding wasting such a considerable portion of the provided power, and to extend the area under treatment, we modified the basic design, shown in Figure 1, by putting a power divider (in the figure below, a Wilkinson divider [17]) at the very end of the Menzel. To better understand this modification, let us consider the three identical Menzel antennas shown in Figure 3. Let us assume that the power generated by an RF source is connected to the input port of "Menzel #1", shown in Figure 3. Part of this power would be radiated along "Menzel #1", according to the S-parameters shown in Figure 1. Instead of absorbing the remaining part, we have split it through a 3 dB power divider and used it to feed the input ports of the sections of the antenna labelled as "Menzel #2" and "Menzel #3" in Figure 4. Concerning the choice of the 3 dB divider, it is important to observe that this choice was made to preserve the phase relation between the three Menzel components; in order to radiate properly, it is important that all those are excited coherently. A simpler split of the two branches, as is customary in antenna arrays, could also be performed, but the Wilkinson divider assures the isolation between the two channels [18].
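For reference, the textbook element values of an equal-split Wilkinson divider at 2.4 GHz are reproduced below; the effective permittivity is a placeholder, since the substrate of the actual layout is not restated here.

```python
import numpy as np

# Equal-split Wilkinson divider: quarter-wave branches of impedance sqrt(2)*Z0
# and an isolation resistor of 2*Z0 between the output ports.
z0 = 50.0                        # system impedance (ohm)
f = 2.4e9
c = 299_792_458.0
eps_eff = 2.2                    # assumed effective permittivity of the line

z_branch = np.sqrt(2) * z0
r_isolation = 2 * z0
quarter_wave = c / (f * np.sqrt(eps_eff)) / 4

print(f"branch impedance   : {z_branch:.1f} ohm")
print(f"isolation resistor : {r_isolation:.0f} ohm")
print(f"quarter-wave length: {1e3 * quarter_wave:.1f} mm (for eps_eff = {eps_eff})")
```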
The considered Wilkinson power divider is shown in Figure 5, while its performances are summarized in Figure 6. The S-parameters are normalized to 50 Ω. Of course, in a possible antenna manufactured for this prototype, proper transitions and connectors should be considered. From the schematic point of view, the structure is summarized in Figure 7. Given an input signal of power P0·e^(j∆φ0), the signal returned to ports 5 and 5 of the LWA can be written down once the cable losses are neglected. In order to ensure the good behavior of the structure, it is important that the returned signal is coherent with the input signal; this coherence requirement yields the design law for the cables' length.
Figure 8 shows the reflection coefficient and the realized gain of the structure presented in Figure 7. The realized gain G_r of the designed loss-free antenna has been obtained by simulating the antenna in CST in free space (in a vacuum) and in the absence of the simulated lossy tissues. This simulation has been carried out to evaluate the antenna characteristics and to validate its design against that of the original Menzel antenna.
G_r = (1 − |Γ|²) · 4π · (power radiated per unit solid angle) / (total radiated power), where Γ is the impedance mismatch loss [19]. Eventually, a Monte Carlo analysis [20] was carried out to evaluate the impact of the cables and of the manufacturing tolerances on the antenna radiation mechanism. The analysis was carried out considering 1000 iterations and a uniform distribution for the tolerances of the phase and amplitude mismatch of the returned signal, with respect to Figure 7. The idea is to consider a cumulative error vector that takes into account the non-idealities and asymmetries of the cables and of the Wilkinson branches. For the analysis, a phase mismatch of [−5°; +5°] and a power mismatch of [−2; +2] dB were considered between Menzel antennas #2 and #3. It is worth noting that the considered intervals are wide for modern technological capabilities.
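The spirit of this tolerance analysis can be reproduced with a few lines of code. The sketch below is only an illustration, not the CST workflow used here: it assumes the three Menzel branches can be represented by complex excitation coefficients (the nominal values are hypothetical), draws 1000 uniformly distributed phase (±5°) and amplitude (±2 dB) perturbations for branches #2 and #3, and reports the spread of the coherent sum as a rough proxy for the variation of the radiated field.

```python
import numpy as np

rng = np.random.default_rng(0)

n_iter = 1000                      # Monte Carlo iterations, as in the analysis above
phase_err_deg = 5.0                # uniform phase mismatch, +/- 5 degrees
amp_err_db = 2.0                   # uniform amplitude mismatch, +/- 2 dB

# Hypothetical nominal excitation coefficients of the three Menzel branches:
# branch #1 is fed directly, #2 and #3 through the Wilkinson divider and cables.
a_nominal = np.array([1.0, 0.5, 0.5], dtype=complex)

results = np.empty(n_iter)
for k in range(n_iter):
    a = a_nominal.copy()
    for branch in (1, 2):          # perturb only Menzel #2 and #3
        dphi = np.deg2rad(rng.uniform(-phase_err_deg, phase_err_deg))
        damp = 10 ** (rng.uniform(-amp_err_db, amp_err_db) / 20.0)
        a[branch] *= damp * np.exp(1j * dphi)
    # Coherent sum of the branch contributions (proxy for the broadside field).
    results[k] = np.abs(a.sum())

nominal = np.abs(a_nominal.sum())
print(f"nominal |sum| = {nominal:.3f}")
print(f"min/max over {n_iter} draws: {results.min():.3f} / {results.max():.3f}")
```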
Figure 9 shows the confidence interval within which the antenna's performance is guaranteed for the considered intervals of phase and amplitude mismatch. While we aim to design an antenna for near-field applications, the far-field diagram still provides some insight into the quality of the obtained antenna design and provides the LWA pointing angle. In fact, to exploit the benefits of the deep-penetration effect, precise values of α and β are required. Calculating them directly in the near field may be very difficult, but for an LWA these two values are tied to the far-field performance, i.e., the pointing angle and the beamwidth [12]. In Figure 10, we show the electric field amplitude in the proximity of the antenna (near field), which is relevant for the targeted hyperthermia application.
The Biological Tissues
To investigate the possibility of using this novel structure in hyperthermia applications, we referred to the biological tissue models presented in [21][22][23][24]. Their electric permittivity vs. frequency is shown in the figures below, following the Cole-Cole model [24]. The considered stratification is as follows: Figure 11 represents the geometry of the biological tissues considered, while Tables 1 and 2 illustrate their electromagnetic and thermal characteristics, respectively.
Assessment of the Heated Region
The simulations were carried out through a unidirectional solver, part of the CST-MW suite [25], that combines a finite integration technique (FIT) in the time domain [26,27] with a thermal transient solver that processes the power losses due to the electromagnetic fields. The thermal properties of the skin, fat, and muscle are chosen according to [24] and indicated in Table 2. The biological stack-up has to be placed in the near field of the LWA to experience improved penetration due to the improper wave. The use of waveguide ports as feeding structures in this setup results in challenges, since they would electrically face a non-homogeneous medium that would affect the calculation or, in the worst case, they could interfere with the skin and the fat placed just above the antenna. That is why a different, and more realistic, feeding network had to be designed, i.e., a 50 Ω coaxial feed inserted to excite the structure, as visible in Figure 12. Of course, the radiation mechanism remained unaltered. The distance between the antenna surface and the biological medium has been optimized; different behaviors of the reflection coefficient have been assessed simply by varying the air gap. On the other hand, the reflection coefficient is not sufficient to guarantee that the power is correctly delivered: power that is not reflected back to the coaxial connector could generate a back lobe or could be coupled to other connectors. That is why another parameter has been taken into account to ensure that the distance is really optimized with respect to the biological stack-up: the integral of the power density inside the skin, fat, and muscle. Figure 13 reports the result of the optimization. The value d = 0.5 mm has been chosen, because the antenna proves to be very well matched (left picture) and the power is correctly delivered inside the medium. The right side of Figure 13, in fact, represents the ratio between the power dissipated in the biological stack-up and the stimulated one: more than 0.9. Of course, the remaining part of the power is absorbed by the ports or by the radiative boundary conditions. For the excitation signal, three different trapezoidal feedings were considered; they are shown in Figure 14. The signal length is chosen to allow the temperature reached by the skin tissue to attain 323 K. Once the desired temperature is reached on the surface, the temperature distribution inside the medium is evaluated. The body temperature at rest has been set to 310 K in the simulation environment. Furthermore, to simulate the presence of blood flow, the blood perfusion coefficient has been activated. This simply enables the heat exchange mechanism between the tissue and the blood, set to a temperature of 310 K.
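The coupled electromagnetic-thermal computation itself was performed in CST, but the transient heat equation with blood perfusion that underlies it (the Pennes bioheat model) is easy to sketch in one dimension. The snippet below is only an illustrative toy model, not the solver used in this work: the layer thicknesses, thermal properties, and deposited power profile are placeholder assumptions rather than the values of Table 2, the body core is held at 310 K, and perfusion exchanges heat with blood at 310 K, as described above.

```python
import numpy as np

# 1-D explicit finite-difference sketch of the Pennes bioheat equation:
#   rho*c*dT/dt = k*d2T/dx2 + Q_em - w_b*(T - T_blood)
dx, dt = 0.5e-3, 0.05              # grid step [m], time step [s]
depth = 0.04                       # 40 mm stack: skin + fat + muscle
x = np.arange(0.0, depth, dx)

def layer(prop_skin, prop_fat, prop_muscle):
    p = np.full_like(x, prop_muscle)
    p[x < 0.012] = prop_fat        # fat: 2 mm .. 12 mm (assumed thickness)
    p[x < 0.002] = prop_skin       # skin: first 2 mm (assumed thickness)
    return p

k    = layer(0.37, 0.21, 0.49)     # thermal conductivity [W/m/K] (placeholders)
rho  = layer(1109, 911, 1090)      # density [kg/m^3] (placeholders)
c    = layer(3391, 2348, 3421)     # specific heat [J/kg/K] (placeholders)
perf = layer(9000, 2000, 2700)     # perfusion term w_b [W/m^3/K] (placeholders)

T_blood, T0 = 310.0, 310.0
Q = 1e5 * np.exp(-x / 0.01)        # assumed exponentially decaying EM power [W/m^3]

T = np.full_like(x, T0)
for _ in range(int(600 / dt)):     # 10 minutes of heating
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T += dt / (rho * c) * (k * lap + Q - perf * (T - T_blood))
    T[0] = T[1]                    # insulated surface (no convection modelled)
    T[-1] = T0                     # body core held at 310 K

print(f"peak temperature: {T.max():.1f} K at depth {x[T.argmax()]*1e3:.1f} mm")
```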
Results
To better understand the simulations performed, we give a brief overview of the simulator tasks.
We first designed the electromagnetic model in the CST software, inserting all the electromagnetic and thermal characteristics of the media involved.
Then, we considered a power loss monitor at the frequency of interest, analyzing the problem with a time-domain solver. The power loss monitor considered all the power radiated by the antenna and dissipated inside the tissue. After this step, the simulator used the obtained results as a source in the thermal solver, both steady-state and transient.
The simulation schematic is shown in Figure 15. Figure 16 illustrates the maximum temperature reached by the body tissue vs. the depth inside it. The different colors, stacked vertically at different relative depths, represent the different biological media (skin, fat, and muscle, respectively). In essence, the figure represents the behavior of the temperature along the stratification direction for different excitation signals. The behavior of the curves is similar: the temperature is stable in the skin, remaining quite constant; it then drops quite linearly in the fat, eventually decaying exponentially inside the muscle. The optimization of the angle at the interface with the skin allows us to have an attenuation vector almost tangent to the separation surface between the skin and the air; this attenuation component needs to be preserved at the following interface (the one between skin and fat) because of the well-known conservation of the tangential component of the attenuation vector of electromagnetic fields at the interface between two media. Thus, the wave in the second medium cannot attenuate exponentially in the direction of propagation, which increases the penetration. This "deeper penetration" is found to diminish as other interfaces are encountered, and so the attenuation decays almost exponentially in the direction of propagation in the third medium.
Discussion
While the design of leaky-wave antennas for hyperthermia applications has been presented in the literature, even recently (see for instance [28,29]), the novelty of this article consists of proposing a structure that benefits from previous studies on deep penetration, allowing us to generate an electromagnetic wave that has an attenuation vector parallel to the separation surface between air and skin (ζ2 = 90°), thus allowing a constant temperature in the skin, as shown in Figure 16. This result is significant, as it demonstrates that it is possible to shape the attenuation of the transmitted wave in practical antennas, overcoming some of the challenges evident in the previous literature on deep penetration, in particular the longitudinal dimension of the Menzel LWA necessary to achieve such an angle, which proved to be too large at the frequencies employed here. The antenna presented here aims to optimize the penetration in the skin and to minimize the probability of burns by shaping the attenuation angle between air and skin. In doing so, we modify the attenuation component of the transmitted vector, obtaining a tangential component that must be preserved and increasing the penetration with respect to the case of normal incidence with a homogeneous wave. However, the overall penetration could be further optimized, because the choice made does not optimize the transmitted wave at the interface between skin and fat or at the one between fat and muscle. An alternative objective could be to reach the optimum overall penetration, rather than optimizing the temperature at the skin. In this case, the optimum incident angle at the first interface may not be a deep-penetration angle for any particular interface, but it could result in the best compromise. Clearly, in doing so, a higher temperature on the skin surface would be expected.
Past studies have shown very good agreement between numerical models of tissues and real tissues [22]. Other studies have also demonstrated excellent agreement between the predictions obtained from numerical models and experimental verifications of applicators based on leaky-wave antennas; see [30]. However, while the models employed here represent a good average of typical values in the body, it is known that the electromagnetic parameters can vary with factors such as the water content of the local tissue or the skin thickness [23]. Therefore, a possible future antenna prototype may need specific optimizations for the region of interest. This optimization may also simply consist of slightly modifying the frequency to benefit from the frequency-scan properties of LWAs, so as to optimize the radiation angle for the chosen region. The antenna prototype will also need to take into account constructive factors such as the tolerances of the materials employed. Finally, a prototype may need to take into account aspects typical of the hyperthermia treatment that are not within the scope of this paper, such as the integration of the antenna with other medical equipment, or the shape that optimizes the patient's comfort and mobility.
Conclusions
In this paper, we presented an LWA for hyperthermia, and we tried to benefit from the deep-penetration effect to reduce the temperature at the surface of the skin, improving the heat distribution. The designed antenna shows that the temperature in the skin is able to remain constant and then starts decreasing once the muscle is reached, with overall encouraging results. Some challenges have also been faced and resolved in terms of the typical LWA lengths and efficiency that would affect the ideas presented in previous papers. Also, some of the manufacturing problems that may arise in such a design have been analyzed and commented on. However, future studies will be necessary to find a good compromise between the attenuation at the first interface (air-skin) and the penetration at the second interface (skin-muscle), to allow an overall better result. Furthermore, the tissue composition that has been considered is sufficiently generic for this analysis; in future work, a particular body zone may be analyzed and optimized with respect to the typical thickness of the biological tissues involved. Finally, these results will have to be validated by manufacturing and testing a prototype.
Scattering parameters: input reflection S11 and forward transmission (from port 1 to port 2) S21.
Figure 1 .
Figure 1. LWA design (a) and free-space performance (b) in air (a vacuum).
Figure 2 .
Figure 2. Electric field of the radiating mode at the input port of the classical Menzel antenna (left) and of the halved Menzel antenna (right).
Figure 3 .
Figure 3. Considered leaky-wave antenna composed of 3 "sub-Menzel" antennas, with field representation at the input ports. The picture illustrates qualitatively the field at the port for the three sub-antennas: the field for the central component, which is attached to the input port, is shown in red; for the other sub-antennas, yellow is used.
Figure 4 .
Figure 4. Leaky-wave antenna composed of 3 sub-Menzel antennas and a Wilkinson power divider. The dashed arrows indicate the direction of the electric field from and to the Wilkinson divider, while red indicates the position of the input port of the antenna.
Figure 7 .
Figure 7. Circuital block scheme of the leaky-wave antenna structure.
Figure 8 .
Figure 8. Reflection coefficient (a) and realized gain (b,c) of the considered structure. Different colors in (c) represent different amplitudes of the realized gain in dB, according to the scale on the right-hand side of the picture.
Figure 9 .
Figure 9. Reflection coefficient (left) and realized gain (right) of the 3 Menzel LWAs.The black curve represents the nominal value, and the red dotted ones represent the maximum and minimum of the Monte Carlo analysis.
Figure 10 .
Figure 10. Near (electric) field generated by the considered leaky-wave antenna.
Figure 12 .
Figure 12. The 50 Ω coaxial feeding network used to feed the LWA from below. The dielectric is represented as transparent.
Figure 13 .
Figure 13. Reflection coefficient (left) and power dissipated inside the biological medium divided by the total excited power (right) of the leaky-wave antenna when placed in front of the tissue stratification.
Figure 14 .
Figure 14. Excitation signals used for the simulation.
Figure 15 .
Figure 15. Schematic of the simulation task used in CST.
Figure 16 .
Figure 16. Temperature distribution, expressed in K, sampled along the stratification direction.
Finally, in Figure 17, the temperature distribution, expressed in K, inside the biological sample is also illustrated at different timestamps, to provide insight into the change of temperature over time caused by the designed antenna in the theoretical model of the sample tissue.
Figure 17 .
Figure 17. Temperature distribution, expressed in K, inside the biological sample for a 5 V signal at different timestamps.
Table 1 .
Electromagnetic properties for the selected biological tissues.
Table 2 .
Thermal properties for the selected biological tissues. | 6,062 | 2023-11-01T00:00:00.000 | [
"Engineering",
"Medicine"
] |
Volumetric trajectories of hippocampal subfields and amygdala nuclei influenced by adolescent alcohol use and lifetime trauma
Alcohol use and exposure to psychological trauma frequently co-occur in adolescence and share many risk factors. Both exposures have deleterious effects on the brain during this sensitive developmental period, particularly on the hippocampus and amygdala. However, very little is known about the individual and interactive effects of trauma and alcohol exposure and their specific effects on functionally distinct substructures within the adolescent hippocampus and amygdala. Adolescents from a large longitudinal sample (N = 803, 2684 scans, 51% female, and 75% White/Caucasian) ranging in age from 12 to 21 years were interviewed about exposure to traumatic events at their baseline evaluation. Assessments for alcohol use and structural magnetic resonance imaging scans were completed at baseline and repeated annually to examine neurodevelopmental trajectories. Hippocampal and amygdala subregions were segmented using Freesurfer v6.0 tools, followed by volumetric analysis with generalized additive mixed models. Longitudinal statistical models examined the effects of cumulative lifetime trauma measured at baseline and alcohol use measured annually on trajectories of hippocampal and amygdala subregions, while controlling for covariates known to impact brain development. Greater alcohol use, quantified using the Cahalan scale and measured annually, was associated with smaller whole hippocampus (β = −12.0, pFDR = 0.009) and left hippocampus tail volumes (β = −1.2, pFDR = 0.048), and larger right CA3 head (β = 0.4, pFDR = 0.027) and left subiculum (β = 0.7, pFDR = 0.046) volumes of the hippocampus. In the amygdala, greater alcohol use was associated with larger right basal nucleus volume (β = 1.3, pFDR = 0.040). The effect of traumatic life events measured at baseline was associated with larger right CA3 head volume (β = 1.3, pFDR = 0.041) in the hippocampus. We observed an interaction between baseline trauma and within-person age change where younger adolescents with greater trauma exposure at baseline had smaller left hippocampal subfield volumes in the subiculum (β = 0.3, pFDR = 0.029) and molecular layer HP head (β = 0.3, pFDR = 0.041). The interaction also revealed that older adolescents with greater trauma exposure at baseline had larger right amygdala nucleus volume in the paralaminar nucleus (β = 0.1, pFDR = 0.045), yet smaller whole amygdala volume overall (β = −3.7, pFDR = 0.003). Lastly, we observed an interaction between alcohol use and baseline trauma such that adolescents who reported greater alcohol use with greater baseline trauma showed smaller right hippocampal subfield volumes in the CA1 head (β = −1.1, pFDR = 0.011) and hippocampal head (β = −2.6, pFDR = 0.025), yet larger whole hippocampus volume overall (β = 10.0, pFDR = 0.032). Cumulative lifetime trauma measured at baseline and alcohol use measured annually interact to affect the volume and trajectory of hippocampal and amygdala substructures (measured via structural MRI annually), regions that are essential for emotion regulation and memory. Our findings demonstrate the value of examining these substructures and support the hypothesis that the amygdala and hippocampus are not homogeneous brain regions.
Introduction
Early-life trauma and alcohol use disorders in the adolescent period are often co-morbid 1 . A robust link has been established between exposure to childhood trauma and adolescent binge drinking and alcohol misuse 2 . Thus, experiencing early-life physical trauma is six times more likely, and experiencing sexual trauma is 18 times more likely, among adolescents with alcohol use disorder (AUD), constituting a major risk factor 1,3 . Harmful patterns of alcohol use often present in adolescence among at-risk individuals, which offers a valuable window for exploring pathways between childhood trauma and the onset of AUD. Forms of emotional dysregulation, such as impulsivity and mood lability, that are associated with childhood trauma are potent risk factors for risky behaviors including alcohol and illicit substance use [4][5][6] . Consequently, the impact of heavy alcohol use during adolescence on individuals with early-life trauma may lead to a life course with persistent AUD by impairing neural systems that regulate goal-directed behaviors, inhibition, memory, anxiety, and fear responses 7 . Conversely, alcohol intake impairs executive control and undermines the function of the reward system 8 .
Several cross-sectional studies have shown that experiencing DSM criterion-A trauma in childhood [9][10][11][12][13] and exposure to alcohol during adolescence both independently and negatively influence amygdala and hippocampal structure and function. Evidence from functional neuroimaging shows that early-life trauma exposure impairs top-down prefrontal control of the limbic system 1,14 . The resulting disinhibition of limbic processes poses a risk factor for adolescent alcohol use, which itself promotes further behavioral disinhibition and impulsivity 15 that may manifest as binge drinking. Preclinical studies demonstrate that the hippocampus and amygdala are altered by both early-life stress 16 and adolescent alcohol use 17,18 , which is associated with disruptions of hippocampal neurogenesis. However, most human studies are focused on adults rather than adolescents and on total hippocampal and amygdala volumes 19,20 . Investigations of subregions within the adolescent hippocampus and amygdala have rarely been performed 21 . The effects of heavy alcohol use in adolescence, and its interaction with early-life trauma, on the developmental trajectories of amygdala and hippocampal subregion volumes are not known. Furthermore, the amygdala and hippocampus are not unitary structures. Each structure is composed of several subregions representing manifold functions that mediate distinct emotional and behavioral responses.
Each amygdala subregion communicates with other amygdala subregions, subcortical regions, and cortical regions in the setting and aftermath of trauma to elicit distinct behavioral responses, akin to fight or flight, and cognitive responses such as associative fear learning. Each subcortical structure is composed of several subregions representing multifarious functions that mediate responses to trauma exposure. Likewise, these subcortical structures play equally important, but functionally different, roles in adolescents with AUD. For instance, the critical amygdala function of fear perception and responding to threatening stimuli is significantly reduced by alcohol intake 22 . Evidence in humans shows that the anxiolytic effects of alcohol are mediated by the amygdala. For instance, nuclei-specific hypertrophic changes in the basolateral complex of the amygdala (BLA) accompany anxiety-like behavior after exposure to chronic, but not acute, restraint stress in rodents 23 . Perturbations in the connection between the BLA and the nucleus accumbens produce suboptimal decision-making by diverting choices to more risky options 24 . A substantial decrease in the inhibitory synaptic activity of BLA neurons follows long-term alcohol consumption by way of cellular and molecular circuit-level adaptations. This undermining of inhibitory control in the BLA is thought to explain the high incidence of compulsive drinking and anxiety-induced relapse in patients with alcohol use disorders 25 .
Similarly, the central and medial nuclei of the amygdala are important for mediating responses to fear 26 . Specifically, the central nucleus of the amygdala is essential to limbic activity required in freezing and flight behaviors, and also a critical component in the mechanism of alcohol self-administration 27,28 . In fact, the ventral tegmental area dopamine system has major reciprocal connections with the central nucleus of the amygdala 29 .
Similarly, hippocampal subfield-specific functions play a critical role following trauma exposure. The dentate gyrus (DG) of the hippocampus is important in distinguishing features that are different from other memories in order to store similar memories as discrete events-a phenomenon called pattern separation 30,31 . Pattern separation deficits may underlie fear generalization 32 , a process that occurs in anxiety and stress-based disorders including posttraumatic stress disorder (PTSD) 33 . By contrast, the entorhinal cortex (EC) and cornu ammonis subfield-3 (CA3) of the hippocampus are crucial in distinguishing events with overlapping features-a phenomenon called pattern completion that has important implications in contextual fear conditioning 34 . Longitudinal studies showed accelerated gray matter decline in the hippocampus and parahippocampus of college students 21 and a decrease in hippocampal volume more generally among adolescents 35 . Subregion-specific effects of alcohol were found in the dentate gyrus of the hippocampus. Alcohol modulated molecular mediators are thought to be involved in experiencing interoceptive cues that regulate reward-seeking behavior. Moreover, adults with AUD have been shown to have age-dependent atrophy of CA-2 and CA3 hippocampal subfields 36 . However, the investigations of alcohol use on specific subregions have received minimal attention particularly in adolescents [37][38][39][40] .
In this longitudinal study, we measured the effects of early-life trauma and alcohol use on the developmental trajectories of the amygdala and hippocampal volume subregions in adolescents age 12-21 years in the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA) study 38 . NCANDA has an accelerated longitudinal design, which enrolls multiple single cohorts, each one starting at a different age. Its main advantage is in its ability to span the age range of interest in a shorter period of time than would be possible with a single cohort longitudinal design. The NCANDA study examined the psychological, environmental, neuro-predictors, and neuro-consequences of adolescent alcohol use in a large diverse community sample. Here, we examined the conditioned main and interactive effects of trauma and alcohol use on specific subregions of the amygdala 41,42 and hippocampus 43 as these subregions have different functions and their developmental trajectories may be differentially affected by heavy alcohol use. Although previous studies have examined the association between alcohol use and trauma on total amygdala and hippocampal volumes, to our knowledge, this is the first study to examine the effects of alcohol use and early-life trauma on the developmental trajectories of these subregions. We hypothesize that heavy alcohol use, early trauma at baseline, and their interactions throughout the longitudinal observation period will be associated with an increase or decrease in the rate of subregional volume change during the adolescent period. While the study of subregion-specific impacts is largely exploratory, we expect that the functional specialization of subregions may influence subregion-specific impacts from trauma and alcohol use. Given the lack of literature, it is not possible for us to generate informed subregion-specific hypotheses as each subregion may undergo distinct hypertrophic or atrophic effects under the independent or combined environmental insults of alcohol use and trauma.
Methods and materials
Participants Adolescents (n = 803) ages 12-21 at baseline, were recruited across five NCANDA sites: University of California at San Diego (n = 210); Duke University Medical Center (n = 169); SRI International (n = 160); Oregon Health and Science University (n = 149); and University of Pittsburgh Medical Center (n = 121). The administrative component (UCSD) and the data analysis and informatics component (SRI) facilitated the training, quality assurance, and data integration across sites. The institutional review board at each site approved the study. Adult participants consented to participate, and minors provided written assent along with consent from a parent/ legal guardian. Baseline exclusionary criteria included serious medical, mental health, or learning disorders. NCANDA's primary aim is to determine the neurobiological effects of alcohol use, and exclusion criteria required the majority of participants to meet CDC guidelines for normalized adolescent experimentation with alcohol, meaning limited exposure to alcohol and other drugs such as marijuana or nicotine. The entire NCANDA sample across sites was limited such that only a subset (17%) of enrolled youth could exceed alcohol use thresholds for alcohol only at baseline. Alcohol use thresholds varied by age and sex, and the maximum allowable drinks on any one occasion are detailed in previous NCANDA publications.
Adolescents were also screened for risk status, to sufficiently include at-risk youth who were more likely to initiate heavy alcohol use during the follow-up assessments. Criteria for heavy alcohol use risk were: (1) Initiation of alcohol use before age 15; (2) Positive family history of AUD; (3) One or more externalizing symptoms (e.g., conduct disorder); or (4) Two or more internalizing symptoms (e.g., depression or anxiety). Roughly 50% of the sample met high-risk criteria 38 . The NCANDA sample available through the follow-up-3 data release, which is presented here, includes 831 adolescents at baseline. Twenty-eight participants were removed from the dataset following failure of one or more steps in the FreeSurfer longitudinal stream (n = 2), hippocampus/amygdala segmentation (n = 17), or missing alcohol or drug use data (n = 9). Of the 803 subjects included in the present analyses, 739 returned in 1 year for an annual follow-up, 651 for follow-up 2, and 491 for follow-up 3. A total of 2,684 individual study visits are included in the present study. See Table 1 for sample demographic characteristics stratified by study visit. See Supplementary Table S1 for demographic characteristics of the sample at baseline by the site.
Clinical measures
Drinking class was measured at baseline and subsequent follow-ups to capture alcohol use over time. Categorized by a modified Cahalan inventory 44 , drinking class was used to quantify both the quantity (average and maximum consumption) and frequency of alcohol use within the past year ( Supplementary Fig. S1). "No to low" drinkers (i.e., drinking class value of 0) reported no or low quantity and frequency of consumption (e.g., <1×/month, <2 drinks on average, and <4 drinks maximum). "Moderate" drinkers (i.e., drinking class value of 1) ranged from low alcohol use frequency (e.g., <1×/month) with moderate quantity consumption (e.g., 2-3 drinks on average and 4-5 drinks maximum) to moderate frequency (e.g., 1×/week) with low quantity consumption (e.g., 2 drinks on average and <4 drinks maximum). "Heavy" drinkers (i.e., drinking class value of 2) ranged from moderate frequency (e.g., 2×/month) with high quantity consumption (e.g., 3-4 drinks on average) to a higher frequency (e.g., 1×/week or more) with moderate quantity consumption (e.g., 2-3 drinks on average). Lastly, "Heavy binge" drinkers (i.e., drinking class value of 3) reported heavy use with higher quantity consumption (>4 drinks). Age at initiation of regular alcohol use was assessed using the Customary Drinking and Drug Use Record (CDDR) 45 . On the CDDR, regular use was defined as consuming alcohol (i.e., beer, wine, or liquor) at least once a week. Cumulative trauma at baseline was quantified as the sum of reported DSM-IV or DSM-5 Criterion A traumatic events on the PTSD section of the Semi-Structured Assessment for the Genetics of Alcoholism 46 . Cumulative (i.e., lifetime) trauma at baseline for an individual was labeled as 0 for no reported traumatic events, 1 for a single reported traumatic event, 2 for two traumatic events, 3 for three traumatic events, or 4 for four or more reported traumatic events. A traumatic event was counted once if either the parent and/or youth reported the event. Further information about the baseline traumas experienced in NCANDA was previously published 39 . See Supplementary Materials for more information on baseline trauma variable collection and Supplementary Table S2 for trauma types reported in the NCANDA sample. SES was quantified using the highest parental years of education of either parent 38 . Family history of Alcohol Use Disorder (AUD) density was calculated based on the presence of AUD in first- and second-degree relatives (positive parents + positive grandparents * 0.5; yielding a range of 0-4) 47 .
MRI acquisition
All five NCANDA sites used a comparable anatomical MRI data collection protocol and 3T systems: 3T General Electric (GE) Discovery MR750 and 3T Siemens TIM TRIO. The GE sites (SRI, Duke, and UCSD) used an Array Spatial Sensitivity Encoding Technique (ASSET) for parallel and accelerated imaging with an eight-channel head coil.
Longitudinal segmentation pipeline
To extract reliable volume estimates, all T1-weighted structural scans were processed longitudinally using FreeSurfer v6.0, which includes cross-sectional segmentation with longitudinal initialization. The hippocampus and amygdala for each structural scan with a resolution of 1.2 × 0.9375 × 0.9375 mm were simultaneously auto-segmented ( Supplementary Fig. S2) using the FreeSurfer longitudinal segmentation pipeline by Iglesias and colleagues 48 . The longitudinal segmentation pipeline uses a probabilistic atlas built with ultra-high-resolution ex vivo MRI data (~0.1 mm isotropic) to segment 19 hippocampal subfields and 9 amygdala nuclei 48,49 . For a list of all hippocampal subfields and amygdala nuclei, see Supplementary Table S3. Neither the main longitudinal pipeline nor the longitudinal segmentation pipeline assumes any specific trajectory (i.e., volume increase or decrease over time) for the segmentation or corresponding volumes 48 . The hippocampus and amygdala are jointly segmented to avoid overlap or gaps between structures 49 . See Supplementary Materials for information on the test-retest reliability of the Freesurfer v6.0 longitudinal segmentation approach, and Supplementary Fig. S3 for intra-class correlations (ICC) of subregion volume estimation across NCANDA sites.
Outlier detection and removal
Quality assurance was achieved through a two-step approach (1) statistically-based outlier detection, followed by (2) visual inspection. Outlier detection removed subregions whose volume was more than 2.69 standard deviations from the mean. All scans were included in calculating the standard deviation of substructure volume regardless of timepoint. Therefore, a specific structure could be excluded but the remaining structures for the same subject's associated timepoints were retained in our analysis. Automated outlier detection was followed by visual inspection by three trained raters (RP, NB, and MM) to rule out mis-segmentation due to image artifacts. The number of outliers by subfield is included in Supplementary Table S3.
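The statistical step of this quality-assurance procedure can be sketched in a few lines (the visual inspection step obviously cannot). The snippet below is illustrative only: the column names and toy values are hypothetical, but the rule, flagging any subregion volume more than 2.69 standard deviations from that subregion's mean across all scans while keeping the subject's other structures, follows the description above.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Toy stand-in: one row per scan, one column per subregion volume (names are hypothetical).
volumes = pd.DataFrame({
    "CA1_head_right": rng.normal(500, 40, 300),
    "CA3_head_right": rng.normal(120, 15, 300),
    "basal_nucleus_right": rng.normal(430, 35, 300),
})
volumes.iloc[0, 0] = 900.0              # plant one implausible value

def flag_outliers(vol: pd.DataFrame, threshold: float = 2.69) -> pd.DataFrame:
    """Flag subregion volumes more than `threshold` SDs from that subregion's mean."""
    z = (vol - vol.mean()) / vol.std()
    return z.abs() > threshold

mask = flag_outliers(volumes)
clean = volumes.mask(mask)              # flagged cells become missing; other subregions kept
print(mask.sum())                       # outlier count per subregion
```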
Statistical modeling
NCANDA's accelerated longitudinal design (e.g., cohort-sequential design), allows us to consider both the within-subject and within-cohort structural brain changes over the course of the study. Within-person age change represented the difference between a subject's age at each scan and their mean age across individual timepoints. Cohort age represented the difference between a subject's mean age across visits and the mean age of the entire sample across timepoints, thus centering cohort age at the sample mean. Each participant's cohort age remained constant across timepoints.
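The decomposition into within-person age change and cohort age can be written compactly. The sketch below uses a hypothetical long-format table with one row per scan and columns `subject` and `age`; whether the sample mean is taken over scans or over subjects is an assumption of this illustration.

```python
import pandas as pd

# Toy long-format table: one row per scan (column names are illustrative only).
scans = pd.DataFrame({
    "subject": ["A", "A", "A", "B", "B", "C"],
    "age":     [12.1, 13.0, 14.1, 17.5, 18.6, 20.2],
})

subj_mean_age = scans.groupby("subject")["age"].transform("mean")

# Within-person age change: deviation of each visit from the subject's own mean age.
scans["age_change"] = scans["age"] - subj_mean_age
# Cohort age: the subject's mean age centered at the sample mean (constant across visits).
scans["cohort_age"] = subj_mean_age - scans["age"].mean()
print(scans)
```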
Following previously implemented approaches for structured multi-cohort longitudinal designs, we modeled the developmental trajectories of hippocampal subfields and amygdala nuclei using a mixed-effects approach 50 . In this design, random effects accounted for the within-subject covariance across time. In all models, NCANDA site and participant identity were included as random intercepts. Within-person age change, cohort age, whole hippocampal or amygdala volume, sex, race, socioeconomic status (SES), drinking class, family history of AUD density 47 , and cumulative lifetime trauma at baseline were included as covariates for conditional likelihood. Covariates of race, sex, SES, trauma, and family history that were assessed at baseline were modeled as stable variables, which did not vary across time. This was achieved by repeating baseline values across time points per subject.
Two hierarchical models were tested across hippocampal subfields and amygdala nuclei for statistical significance. Model 1 was designed to evaluate the effects of traumatic life events as participants mature through an interaction term with age cohort, age change, and trauma, while also considering alcohol consumption (drinking class). Model 2 added an interaction term to evaluate whether alcohol consumption (drinking class) increased or decreased the effects of trauma from model 1. A secondary analysis of these models was performed while controlling for lifetime marijuana and tobacco use.
All analyses were conducted in the statistical program, R version 4.0.0 (www.R-project.org), using the gamm4 package for generalized additive mixed models (GAMM). All statistics reported were controlled for multiple comparisons per model and have survived false discovery rate (FDR) correction at p < 0.05 37 . Thus, the correction was based on 56 tests (28 subregions × 2 hemispheres) for model 1 and on 56 tests (28 subregions × 2 hemispheres) for model 2.
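The models themselves were fit in R with the gamm4 package; as an illustration of the multiplicity correction only, the sketch below applies Benjamini-Hochberg FDR to a vector of 56 placeholder p-values in Python (the R equivalent would be `p.adjust(p, method = "BH")`).

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
p_values = rng.uniform(size=56)     # placeholder: 28 subregions x 2 hemispheres per model

reject, p_fdr, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} of {len(p_values)} tests survive FDR correction at p < 0.05")
```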
Age at alcohol use initiation
A portion of the sample (n = 181) initiated regular alcohol use, characterized by consuming alcohol one or more times a week, during the course of the study. The average age at alcohol use initiation was 18.8 years, with a standard deviation of 1.84 years. Age at initiation ranged from 13.5 to 24.3 years old. Male (n = 93) and female (n = 88) participants did not differ in average age at initiation (p > 0.05). Fifty-two adolescents reported regular alcohol use initiation at baseline, thirty-nine at follow-up 1, fifty-four at follow-up 2, and thirty-six at follow-up 3.
Hippocampus and amygdala subregion volumes
Conditional main effects of alcohol use and family history of alcohol use disorder density
Greater alcohol use, indicated by higher drinking class, was associated with smaller hippocampal subfield volume (Fig. 1A-D) in the left hippocampal tail (β = −1.2, p FDR = 0.048, R 2 adj = 0.24), and larger hippocampal subfield volume in the right CA3 head (β = 0.4, p FDR = 0.027, R 2 adj = 0.37), left subiculum head (β = 0.7, p FDR = 0.046, R 2 adj = 0.27), and right basal nucleus of the amygdala (β = 1.3, p FDR = 0.040, R 2 adj = 0.61). These findings in the right CA3 head and right basal nucleus remained significant even when controlling for cannabis and tobacco use. Forty-three percent of the NCANDA sample reported lifetime marijuana use (i.e., used cannabis one or more times).
Greater density scores for family history of AUD were significantly associated with greater left hippocampal subfield volume (Fig. 1F-H) in the subiculum head (β = 6.4, p FDR = 0.007, R 2 adj = 0.27), molecular layer HP head (β = 6.0, p FDR = 0.022, R 2 adj = 0.48), whole hippocampus head (β = 25.0, p FDR = 0.044, R 2 adj = 0.55). These findings in the left subiculum head, left molecular layer HP head, and left whole hippocampus head remained significant even when controlling for cannabis and tobacco drug use.
Conditional main effect of early trauma
Higher number of traumatic life events at baseline was significantly associated with larger right hippocampal subfield volume (Fig. 2) in the CA3 head (β = 1.3, p FDR = 0.041, R 2 adj = 0.37). This finding in the right CA3 head remained significant even when controlling for cannabis and tobacco drug use.
Within-person age change by trauma interaction
The interaction between number of traumatic life events at baseline and within-person change in age was significantly associated with larger left hippocampal subfield volume (Fig. 3A-F) in the subiculum head (β = 0.3, p FDR = 0.029, R 2 adj = 0.27) and molecular layer HP head (β = 0.3, p FDR = 0.041, R 2 adj = 0.48), and larger right amygdala nuclei volume in the paralaminar nucleus (β = 0.1, p FDR = 0.045, R 2 adj = 0.35). That is, regardless of the age cohort, as participants got older, those with more traumatic events showed a steeper decline in these subfield volumes compared to those with fewer traumatic events. These findings in the left subiculum head, left molecular layer HP head, and right paralaminar nucleus remained significant even when controlling for cannabis and tobacco drug use.
Drinking class by trauma interaction
The interaction between number of traumatic life events at baseline and alcohol use, indicated by drinking class, was significantly associated with smaller right hippocampal subfield volume (Fig. 4A-D) in the CA1 head (β = −1.1, p FDR = 0.011, R 2 adj = 0.50) and hippocampus head (β = −2.6, p FDR = 0.025, R 2 adj = 0.57). That is, those with higher drinking classes and more traumatic event exposure exhibited the smallest hippocampal subvolumes. These findings in the right CA1 head and right hippocampus head remained significant even when controlling for cannabis and tobacco drug use.
Whole hippocampus and amygdala volumes
Greater alcohol use, indicated by higher drinking class, was associated with slightly smaller whole hippocampus volume (Fig. 1E) (β = −12.0, p FDR = 0.009, R 2 adj = 0.46), and the interaction between number of traumatic life events at baseline and drinking class was significant. This interaction appears to indicate that among those with limited trauma history, higher drinking class was associated with smaller whole hippocampal volume, but this effect diminished with more trauma exposure (Fig. 4E, F) (β = 10.0, p FDR = 0.032, R 2 adj = 0.46). In the whole amygdala, an interaction between number of traumatic life events at baseline and within-person change in age indicated that older adolescents with greater trauma exposure at baseline had smaller whole amygdala volume (Fig. 3G, H) (β = −3.7, p FDR = 0.003, R 2 adj = 0.46). These findings in the whole hippocampus and amygdala remained significant even when controlling for cannabis and tobacco use. Additionally, higher rates of lifetime marijuana use were significantly associated with smaller whole hippocampus volume (β = −0.1, p FDR = 0.04, R 2 adj = 0.46).
Discussion
We investigated the conditional main and interactive effects of alcohol use, and youth trauma reported at baseline, on the structural trajectories of hippocampal subfields and amygdala nuclei across adolescent development in a longitudinal sample. Greater alcohol use was associated with smaller whole hippocampus and left hippocampal tail, but larger right CA3 head and left subiculum volumes. Greater alcohol use was associated with a larger volume of the right basal nucleus of the amygdala. The effect of traumatic life events measured at the baseline visit was associated with larger right CA3 head volume in the hippocampus. Baseline trauma and within-person age change interacted such that younger adolescents with greater trauma exposure at baseline had smaller left hippocampal subfield volumes in the subiculum and molecular layer head. The interaction also revealed that older adolescents with greater trauma exposure at baseline had larger volume in the right paralaminar nucleus of the amygdala. Alcohol use and baseline trauma interacted such that adolescents with greater baseline trauma and higher alcohol use had smaller volumes in the whole amygdala, right CA1 head, and right hippocampal head. These findings provide evidence that adolescent alcohol use and early-life trauma interact to alter growth trajectories of hippocampal and amygdala subregions in complex ways.
Whole hippocampus and amygdala
Overall, we found that greater alcohol use was associated with reduced whole hippocampus volume. An interaction effect showed that the relationship between whole hippocampal volume and alcohol use depended on trauma exposure at baseline, such that greater alcohol use was associated with smaller hippocampus volume except for adolescents who reported trauma exposure at baseline (Fig. 4E, F). Those with two or more traumatic events at baseline and greater alcohol use showed increased whole hippocampus volume (Fig. 4E, F). In the whole amygdala, an interaction effect showed that as adolescents age, amygdala volume tends to increase for those with no reported exposure to a traumatic event. However, adolescents with greater trauma exposure showed a decrease in whole amygdala volume as they age (Fig. 3G, H).
Hippocampal subfields
There is recent literature showing that reduced right hippocampal CA1 volume is associated with PTSD 51 and that hippocampal CA1 experiences age-dependent decline of volume in PTSD 52 . An animal model of PTSD shows reduced neurotrophin mRNA levels of BDNF and TrkB localized to the CA1 subfield of the hippocampus 53 . Interestingly, chronic alcohol exposure in rats produced a similar reduction of mRNA-associated signaling of BDNF and TrkB in the hippocampus 54 . An animal model that used predator stress to model PTSD found similar results, marked by BDNF and other neurotrophin changes that produced dramatic neuronal proliferation in the basolateral amygdala, but by contrast showed significant neuronal retraction in the hippocampal CA1, CA3, and dentate gyrus. Opposing patterns, with atrophy in the hippocampus and hypertrophy in the basolateral amygdala, are well-established responses in animal models of chronic stress 23 . The behavioral disturbances resulting from predator stress were associated with a galaninergic response in hippocampal CA1, but were absent when the behavior was not disrupted 55 . Galanin is a neuropeptide that is linked to anxiety-like behaviors. Interestingly, alcohol exposure during the early neonatal period of development demonstrated reduced glial proliferation in a similar anatomical pattern, affecting CA1, CA3, and the dentate gyrus 56 . Thus, extensive converging evidence from exposure to alcohol and to traumatic stress in animal model systems indicates overlapping roles of molecular mediators that affect change in the hippocampus, particularly CA1 and CA3. Unfortunately, very little human evidence is available to inform the impact of early-life trauma, alcohol exposure, and particularly concomitant exposure to early-life trauma and alcohol, on hippocampal and amygdala substructures. Our results demonstrate that adolescents with greater baseline trauma and higher alcohol use have smaller volumes in the right CA1 head and right hippocampal head, which appears to be consistent with the animal literature. However, our results for the amygdala under these conditions are in contrast to the animal literature.
Clinical relevance of volumetric changes
Longitudinal patterns of subregional volume change during the adolescent period in a significantly larger sample of unaffected controls from NCANDA were concordant with the findings reported in healthy controls by Tamnes and colleagues 57 . Our results demonstrate that early adolescence is a sensitive time period for the neurotoxic effects of alcohol on right amygdala subregions. Functionally connected with the medial prefrontal cortex (mPFC), these substructures are linked with decreased inhibitory control, a key element of addiction, withdrawal, and craving 58 . The smaller volume of the hippocampal tail in adolescents agrees with a cross-sectional study which showed similar findings in adults with AUD who were in several years of remission 59 . While we cannot infer functional change from the present study, the hippocampal tail has connections to the dorsal lateral prefrontal cortex, a brain region implicated in addiction because of its role in inhibition, attention, decision making, learning, and memory 60 .
Limitations
The present study has several inherent limitations. First, this study did not evaluate the cognitive impact of hippocampus and amygdala volume change due to alcohol use or trauma exposure. An important next step is to explore whether the presence and duration of hippocampal decline are associated with functional cognitive deficits in this sample. Animal studies demonstrate that the upregulation of neuroimmune signaling may link heavy alcohol use to hippocampal decline 18 . Despite significant alterations in the growth trajectories for these substructures, there is reason to believe, based on animal studies, that the impact of heavy alcohol use may not be long-lasting. Also, studies in adults have shown that abstinence can have salutary effects 40 . Future investigations using the existing NCANDA sample may be designed to understand the effects of desistance from binge drinking on the young adult brain. See Supplementary Materials for a discussion of the neurodevelopmental trajectories of non-drinkers in this sample (i.e., those who never reported engaging in regular alcohol consumption), and Supplementary Fig. S5 for these subregion trajectories plotted by hemisphere.
Second, our analyses did not control for the presence of developmental psychopathology. Other NCANDA investigators are examining diagnostic information from the sample, and this is beyond the scope of the current manuscript. Criteria for heavy alcohol use risk were: (1) Initiation of alcohol use before age 15; (2) Positive family history of AUD; (3) One or more externalizing symptoms (e.g., conduct disorder); or (4) Two or more internalizing symptoms (e.g., depression or anxiety). Roughly 50% of the sample met high-risk criteria 38 . Third, the hippocampal subfield and amygdala nucleus segmentation used in Freesurfer v6.0 relies on atlas priors. The ultra-high-resolution images (~100 μm isotropic) used in atlas construction have sufficient contrast to demarcate boundaries of nuclei with high confidence 49 . The segmentation of 1-mm isotropic scans depends on this atlas, particularly when the algorithm has insufficient information for labeling from image contrast. Across the cohort, an unintended consequence is that each subject's volume measurement is more similar to every other subject's than if ultra-high-resolution technology were available for in vivo scanning in the present sample. Artificially low variance means that group differences will manifest as smaller effect sizes than the true effect size. However, a lower bound on this reduced variability is imposed by the whole amygdala and whole hippocampal segmentation, which is capable of being segmented with fairly high fidelity at the scanning resolution we used. In other words, the variability in nuclei segmentation will be proportional to the variability in the whole amygdala or whole hippocampal segmentation even in the worst-case scenario that subregion segmentation is 100% atlas-driven. Thus, smaller substructures like the GC-DG, CA4, and molecular layer should be interpreted with caution. Fourth, while the NCANDA investigation did collect information from adolescents about memory blackouts resulting from alcohol use, only a small proportion of the sample endorsed blackouts at the follow-up 3 visit (13%). The lifetime number of blackouts at the 3-year follow-up ranged from 1 to 20 with a mean of 2.43 (SD 2.99). We did not have enough power to run sub-analyses on the participants who reported blackouts, which were few. Lastly, this study relied on the first 4 years of acquired NCANDA data. Additional timepoints from the ongoing study will expand the number of participants within each developmental cohort and permit the assessment of long-term sequelae in the brain.
Conclusion
We observed unique effects of trauma, as it interacts with alcohol use and age, in the developing bilateral hippocampus and bilateral amygdala. Our results provide initial evidence that heavy alcohol use alters adolescent hippocampal subfields and amygdala nuclei volumes, and these changes during development may be dependent upon youth trauma exposure. | 7,644 | 2021-03-02T00:00:00.000 | [
"Psychology",
"Biology"
] |
Digital Diagnosis of Hand, Foot, and Mouth Disease Using Hybrid Deep Neural Networks
Hand, Foot and Mouth Disease (HFMD) is a highly contagious paediatric disease showing up symptoms like fever, diarrhoea, oral ulcers and rashes on the hands and foot, and even in the mouth. This disease has become an epidemic with several outbreaks in many Asian-Pacific countries with the basic reproduction number $R_{0} > 1$ . HFMD’s diagnosis is very challenging as its lesion pattern may appear quite similar to other skin diseases such as herpangina, aseptic meningitis, and poliomyelitis. Therefore, clinical symptoms are essential besides skin lesion’s pattern and position for precise diagnose of this disease. A deep learning-based HFMD detection system can play a significant role in the digital diagnosis of this disease. Various machine learning and deep learning architectures have been proposed for skin disease diagnosis and classification. However, these models are limited to the image classification problem. The diagnosis of similar appearing skin diseases using the image classification approach may result in misclassification or misdiagnosis of the disease. Parallel integration of clinical symptoms and images can improve disease diagnosis and classification performance. However, no deep learning architecture has been developed to diagnose HFMD disease from images and clinical data. This paper has proposed a novel Hybrid Deep Neural Networks integrating Multi-Layer Perceptron (MLP) network and Convolutional Neural Network into a single framework for the diagnosis of HFMD using the integrated features from clinical and image data. The proposed Hybrid Deep Neural Networks is particularly a multi branched model comprising of Multi-Layer Perceptron (MLP) network in the first branch to extract the clinical features and the modified pre-trained CNN architecture: MobileNet or NasNetMobile in the second branch to extract the features from skin disease lesion images. The features learnt from both the branches are merged to form an integrated feature from clinical data and images, which is fed to the subsequent classification network. We conducted several experiments employing image data only, clinical data only and both sources of data. The analyses compared and evaluated the performance of a typical MLP model and CNN model with our proposed Hybrid Deep Neural Networks. The novel approach promotes the existing image classification model and clinical symptoms based disease classification model, particularly the MLP model. From the cross-validated experiments, the results reveal that the proposed Hybrid Deep Neural Networks can diagnose the disease 99%-100% accurately.
Clinical symptoms, such as fever, diarrhoea, vomiting, and sore throat, together with the position of the rash and the patient's age, can improve diagnostic accuracy and robustness. Symptoms play a significant role in the diagnosis and prediction of this disease [15]. Smartphone-based skin disease prediction or detection from integrated clinical symptoms and images is challenging due to the smartphone's resource limitations and the lack of a lightweight ML or DL architecture that can handle integrated or mixed data. For example, researchers from Google developed a DL architecture for integrated data-based skin disease diagnosis [16]. However, their solution is not intended for resource-constrained mobile devices and also did not consider HFMD. We propose a lightweight and smartphone-friendly novel Hybrid Deep Neural Networks that can digitally diagnose HFMD using integrated clinical data and images. The proposed Hybrid Deep Neural Networks integrates a Multi-Layer Perceptron (MLP) [17] and a modified pre-trained CNN model into a single framework to distinguish HFMD from other skin diseases using clinical data and images simultaneously. The Hybrid Deep Neural Networks is particularly a multi-branched model which is composed of a Multi-Layer Perceptron as a clinical branch and a modified pre-trained CNN model as an image processing branch. The MLP is responsible for extracting the features from clinical data while the CNN extracts the features from the disease images. In particular, we modified the pre-trained models MobileNet [18] and NasNetMobile [19] and used transfer learning to extract the features from images. The learnt features are finally concatenated to form integrated features from clinical data and images, which are then fed to the subsequent classification network [20]. Most previous studies relied on one source of data; an image classification-based approach, for instance, diagnoses skin diseases from images only. We ran a set of experiments on the proposed Hybrid Deep Neural Networks, image classification models and MLP. The image classification architectures comprised MobileNet and NasNetMobile. We used clinical data and images for our proposed architecture, while we supplied only the images to the image classification architectures and only the clinical symptoms dataset to the MLP architecture. The cross-validated evaluation results demonstrate that the proposed Hybrid Deep Neural Networks architecture can diagnose HFMD with accuracy in the range of 99%-100% with very high precision.
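The two-branch idea described above can be sketched with the Keras functional API. The snippet below is a simplified illustration rather than the exact network reported in this work: the dense layer sizes, dropout rate, and binary classification head are placeholder choices, while a frozen MobileNet backbone stands in for the modified pre-trained image branch and the clinical MLP branch consumes the 13 symptom features listed later.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_CLINICAL_FEATURES = 13   # age, fever, sore throat, ... (see the dataset description)
IMG_SIZE = (224, 224, 3)

# Clinical branch: a small MLP over the symptom vector.
clin_in = layers.Input(shape=(NUM_CLINICAL_FEATURES,), name="clinical")
clin = layers.Dense(32, activation="relu")(clin_in)
clin = layers.Dense(16, activation="relu")(clin)

# Image branch: frozen MobileNet backbone used for transfer learning.
img_in = layers.Input(shape=IMG_SIZE, name="image")
backbone = tf.keras.applications.MobileNet(include_top=False, weights="imagenet",
                                            input_shape=IMG_SIZE, pooling="avg")
backbone.trainable = False
img = backbone(img_in)
img = layers.Dense(64, activation="relu")(img)

# Merge the clinical and image features and classify HFMD vs. non-HFMD.
merged = layers.concatenate([clin, img])
x = layers.Dense(32, activation="relu")(merged)
x = layers.Dropout(0.3)(x)
out = layers.Dense(1, activation="sigmoid", name="hfmd")(x)

model = Model(inputs=[clin_in, img_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```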
The rest of the paper is organised as follows. Section II presents the related works on ML or DL based skin diseases diagnosis. Section III presents the proposed Hybrid Deep Neural Networks-based digital diagnosis of HFMD. The section discussed the proposed solution in terms of (i) the data collection and preparation steps, (ii) the proposed model and selection of a pre-trained model for feature extraction from images, (iv) the model tuning process and (v) the evaluation of the proposed Hybrid Deep Neural Networks architecture. Evaluation results of the proposed architecture are presented and discussed in section IV. Finally, section V concludes the work.
II. RELATED WORKS
Skin disease detection or diagnosis is a challenging task in image processing and computer vision. Many research works have been carried out to detect or diagnose different skin diseases using AI-based image processing, including DL-based image processing. Alamdari et al. [10] implemented a k-means clustering and HSV model segmentation technique, Support Vector Machine (SVM), and fuzzy c-means clustering algorithms for acne classification with accuracies of 70%, 66% and 80%, respectively. Abdul-Rahman et al. [11] developed a prototype with a Back-Propagation Neural Network to assist dermatologists. They used Correlation Feature Selection and Fast Correlation-based Filter feature selection methods, reaching a higher accuracy of 91.2%. Another research work was performed by Sae-Lim et al. [21] to classify skin lesions using a Convolutional Neural Network (CNN) and MobileNet. The experiment was performed on the HAM10000 skin cancer dataset with a customisation of MobileNet, achieving an accuracy of 83.93%. Rimi et al. [12] have proposed a CNN architecture to detect six types of skin diseases: dermatitis hand, eczema subacute, eczema hand, ulcers, lichen simplex and stasis dermatitis, with a precision of 70.8%. Aryan et al. [13] have performed several experiments with combinations of image processing and recognition techniques for the detection of HFMD lesions. Their research found that pre-processing using colour-space conversion, followed by segmentation using a K-means-morphological process with an SVM classifier, classified the lesions with the highest accuracy. Some researchers [22], [23] have classified skin lesion images using traditional machine learning and deep learning to diagnose multiple skin diseases. Hameed et al. [22] proposed an intelligent multi-class multi-level (MCML) classification algorithm to classify multiple skin diseases. Their study implemented two approaches, traditional machine learning and deep learning, to classify skin lesions with an accuracy of 96.47%. Hameed et al. [23] have used image processing techniques and a Quadratic Support Vector Machine to classify skin lesions with an accuracy of 94.74%. Vakili et al. [24] explored the classification of HFMD against other skin diseases using several pre-trained models such as Inception v3, ResNet-34 and ResNet-50. The ResNet-50 model performed best with an accuracy of 95.4%. As their experiment was limited to image data only, some similar-appearing skin diseases were misdiagnosed. Researchers from Google [16] have developed an integrated model to detect six skin diseases from skin images and metadata. They have used the Inception-v4 pre-trained model to classify images and a feature transformation technique to extract features from metadata. This model categorises the six skin diseases with an accuracy ranging between 69% and 94%. However, this model is not lightweight and mobile-friendly. Also, this model does not consider the diagnosis of HFMD.
In most previous research, image processing techniques, Convolutional Neural Networks or other classification algorithms have been used to detect and classify skin diseases from images. Still, no DL architecture has been designed and developed to learn mixed/integrated clinical symptoms and associated lesion images simultaneously to diagnose HFMD. Figure 1 presents an overview of the proposed smartphone and Hybrid Deep Neural Networks based digital diagnosis of HFMD. The architecture proposed for the diagnosis of this disease is lightweight and can be used on smartphones. The model has been trained and validated on a high-performance workstation with HFMD/Non-HFMD skin images and clinical symptoms. This pre-trained model is converted into a lightweight TensorFlow Lite [25] model that can be deployed on mobile devices to diagnose this disease. The images of skin lesions and the clinical symptoms captured via smartphones work as input for the deep learning model deployed in an app to diagnose the disease.
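For context, the conversion of a trained Keras model into a deployable TensorFlow Lite file typically looks like the following minimal sketch; the function name and output path are illustrative and not taken from the paper.

```python
import tensorflow as tf

def export_tflite(keras_model, path="hfmd_diagnosis.tflite"):
    """Convert a trained Keras model into a TensorFlow Lite flatbuffer for mobile deployment."""
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional post-training quantisation
    tflite_bytes = converter.convert()
    with open(path, "wb") as f:
        f.write(tflite_bytes)
    return path
```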
III. HYBRID DEEP NEURAL NETWORKS BASED DIGITAL DIAGNOSIS OF HFMD
In the following subsections, we briefly discuss the datasets used, data pre-processing, proposed model, model tuning and evaluation process of the model.
A. DATA COLLECTION AND PRE-PROCESSING 1) DATASET
The most crucial step for deep learning is collecting an appropriate dataset to train and validate the model. Unfortunately, though HFMD is one of the most common diseases in Asian-Pacific countries, datasets of HFMD clinical symptoms and associated images are not readily available. Therefore, we collected 1455 HFMD lesion images and 1800 typical skin images of various diseases other than HFMD from the Internet [26], [27] for this experiment. Furthermore, we collected clinical data from paediatric doctors for 410 HFMD-infected patients and 645 patients infected with other skin diseases. The clinical dataset has 13 features: Age, Fever, Sore throat, Diarrhoea, Vomiting, Mouth ulcer, Blister rash, Distressed, Trembling limbs, Staggering, Eyes rolled, Sweating and Gender.
2) DATA PRE-PROCESSING
Deep learning requires a large dataset to achieve high accuracy and avoid overfitting. One of the significant challenges for our experiment was the limited number of HFMD lesion images and clinical records. We handled this problem by generating data in two steps. First, we oversampled the clinical dataset to match the number of available images. The clinical dataset provided by the doctors was significantly smaller than the number of images. Therefore, we had to generate some synthetic data from the existing dataset. We used the Synthetic Minority Oversampling Technique (SMOTE) [28] to oversample the data for both HFMD and Non-HFMD cases. The clinical data contain numerical, Boolean and categorical data types. The numerical Age and Fever features from the clinical dataset were normalised using the MinMax normalisation technique [29]. The categorical Gender and 'position of rash' features were encoded using the one-hot encoding technique [30].
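The oversampling and encoding pipeline described above could be sketched as follows in Python; the column names, the SMOTE random seed and the exact ordering of the steps are assumptions for illustration only.

```python
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import MinMaxScaler

def prepare_clinical(df: pd.DataFrame):
    """Encode, normalise and oversample the clinical table.

    `df` is assumed to contain the 13 symptom columns plus a binary `label`
    column (HFMD / non-HFMD); the column names used here are illustrative.
    """
    X = df.drop(columns=["label"])
    y = df["label"]

    # One-hot encode the categorical columns (gender, position of rash).
    X = pd.get_dummies(X, columns=["Gender", "rash_position"])

    # Min-max normalise the numerical Age and Fever columns.
    X[["Age", "Fever"]] = MinMaxScaler().fit_transform(X[["Age", "Fever"]])

    # SMOTE synthesises minority-class records by interpolating between
    # nearest neighbours, enlarging the clinical table towards the image count.
    X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
    return X_res, y_res
```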
After generating a sufficient number of clinical data, the next step was to map each clinical symptom with an image so that both the images and features will have the same classification label. HFMD images were mapped with HFMD related clinical symptoms. Similarly, images for normal skin or non-HFMD disease were mapped with clinical symptoms that do not appear in HFMD infected patients. The rash position plays a significant role in diagnosing this disease and distinguishes it from similar appearing diseases. Hence, we manually identified the position of rashes for each image and labelled the position. Figure 2 illustrates the final dataset prepared for our model. Here, each image is associated with a set of clinical symptoms.
After oversampling and pre-processing the clinical data, the next step was to pre-process the images and generate integrated input batches with the corresponding labelled output. The ImageDataGenerator API [31] by Keras provides a feature to augment and pre-process images in batches. However, the limitation of ImageDataGenerator is that this API can generate batches of input from images only. The proposed model was designed to be fed integrated data of clinical symptoms and images. Hence, we built a custom data generator using Keras's Sequence API to combine the features from clinical symptoms and associated images and generate integrated input data batches. We implemented Keras's ImageDataGenerator image augmentation technique within the custom data generator to generate and pre-process images in batches. Image augmentation methods such as rotation by 40°, flipping the images horizontally and vertically, shearing and zooming were implemented to increase the number of training and validation images, as shown in Figure 3. The images were then scaled to values between 0 and 1 to improve the performance of the model. The generator then combines each augmented image with its associated clinical symptoms and class label, providing batches of integrated input for the model.
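A minimal sketch of such a custom generator is given below, assuming aligned arrays of image paths, clinical feature vectors and one-hot labels; the shear and zoom magnitudes are placeholders, since the paper does not report them.

```python
import numpy as np
import tensorflow as tf

class HybridDataGenerator(tf.keras.utils.Sequence):
    """Yields ([clinical_batch, image_batch], label_batch) for the hybrid model."""

    def __init__(self, image_paths, clinical, labels,
                 batch_size=32, target_size=(224, 224), shuffle=True):
        self.image_paths = np.asarray(image_paths)
        self.clinical = np.asarray(clinical, dtype="float32")
        self.labels = np.asarray(labels, dtype="float32")
        self.batch_size = batch_size
        self.target_size = target_size
        self.shuffle = shuffle
        # Augmentation mirroring the settings reported above (40 degree rotation,
        # horizontal/vertical flips, shear, zoom, rescaling to [0, 1]).
        self.augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
            rotation_range=40, horizontal_flip=True, vertical_flip=True,
            shear_range=0.2, zoom_range=0.2, rescale=1.0 / 255)
        self.on_epoch_end()

    def __len__(self):
        return int(np.ceil(len(self.image_paths) / self.batch_size))

    def on_epoch_end(self):
        self.indices = np.arange(len(self.image_paths))
        if self.shuffle:
            np.random.shuffle(self.indices)

    def __getitem__(self, idx):
        batch = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        images = []
        for path in self.image_paths[batch]:
            img = tf.keras.preprocessing.image.load_img(path, target_size=self.target_size)
            img = tf.keras.preprocessing.image.img_to_array(img)
            # Random augmentation followed by rescaling to [0, 1].
            img = self.augmenter.standardize(self.augmenter.random_transform(img))
            images.append(img)
        return [self.clinical[batch], np.stack(images)], self.labels[batch]
```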
B. PROPOSED MODEL
In this paper, we propose a hybrid deep neural networks architecture to diagnose HFMD from clinical and image data. The proposed architecture is a multi-branched model comprising two input branches: (1) a clinical branch (MLP) and (2) an image-processing branch (see Figure 4). The clinical data is input separately into the MLP network (clinical branch: see section III-C), while the images are fed into the image-processing branch (see section III-D), developed using a Convolutional Neural Network (CNN). We employed the customised pre-trained CNN models MobileNet [18] and NasNetMobile [19] in the image-processing branch. The clinical and image-processing branches are responsible for extracting the features from the clinical and image data, respectively. To combine the features learned from these branches, the last layers of both branches are concatenated to form a concatenation layer using the Keras functional API. A classification network having two dense layers with 4 and 2 neurons, respectively, is added on top of the concatenation layer. Thus, the final output layer of the hybrid deep neural networks model has two neurons to classify HFMD and non-HFMD data. The proposed architecture's novelty lies in designing a multi-branched, lightweight and mobile-friendly Hybrid Deep Neural Networks to diagnose HFMD from clinical and image data.
Let C be the clinical input to the MLP network and D the image input to the pre-trained CNN. The mappings from the inputs to the features learned by the MLP and CNN branches are expressed in equations (1) and (2), respectively:

T_m = f_θ(C) (1)

T_c = g_θ(D) (2)
where T_m and T_c are the features learnt by the MLP and CNN networks respectively, f and g represent the MLP network and the CNN network, and θ represents the model weights.
The integrated feature Z, obtained by concatenating all the learnt features, is represented in equation (3):

Z = Conc(T_m, T_c) (3)
where ''Conc'' represents feature-wise concatenation. After concatenation, the integrated features are used as input to the subsequent classification network (layer) h_φ, where φ represents the weights of the classification network. The classification label Y is obtained from equation (4):

Y = h_φ(Z) (4)
To summarise, the Hybrid Deep Neural Networks architecture is the composition of the MLP function f_θ, the CNN function g_θ, the concatenation layer and the classification network h_φ, which can be represented by the function F. Thus, the output of the proposed model with clinical input C and image input D can be represented by equation (5):

Y = F(C, D) = h_φ(Conc(f_θ(C), g_θ(D))) (5)

The model weights θ and φ are optimised while training the model using the Adam optimiser and the categorical cross-entropy loss function. We particularly selected the MobileNet and NasNetMobile pre-trained models to extract features from images in our proposed model, as these pre-trained models are lightweight and more efficient for mobile applications [33], [34].
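A compact Keras functional-API sketch of equation (5) is given below. The MobileNet backbone, the 50-neuron image feature layer, the concatenation layer, the 4/2-neuron classification head, and the 16/8/4 clinical-branch layout reported later in the tuning section follow the description in the paper, while the 224 × 224 image resolution, the 16-dimensional encoded clinical vector and freezing the backbone are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Assumed input sizes: a 16-dimensional encoded clinical vector and 224x224 RGB images.
clinical_in = tf.keras.Input(shape=(16,), name="clinical")
image_in = tf.keras.Input(shape=(224, 224, 3), name="image")

# Clinical branch f_theta (MLP), using the tuned 16/8/4 layout.
c = layers.Dense(16, activation="relu")(clinical_in)
c = layers.Dense(8, activation="relu")(c)
c = layers.Dense(4, activation="relu")(c)

# Image branch g_theta: a lightweight pre-trained backbone (MobileNet here;
# NasNetMobile is a drop-in replacement) followed by a 50-neuron dense layer.
backbone = tf.keras.applications.MobileNet(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = False  # assumption: plain feature extraction
i = layers.Dense(50, activation="relu")(backbone(image_in))

# Feature-wise concatenation Z = Conc(T_m, T_c) and the classification head h_phi.
z = layers.Concatenate()([c, i])
z = layers.Dense(4, activation="relu")(z)
out = layers.Dense(2, activation="softmax")(z)

hybrid = tf.keras.Model(inputs=[clinical_in, image_in], outputs=out)
hybrid.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```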
1) MobileNet
We used a modified MobileNet [35] architecture in the second branch of the proposed architecture to extract features from images. MobileNet is built upon two types of layers: depth-wise convolutions and point-wise convolutions. The depth-wise convolution applies a single filter to each input channel, and the point-wise convolution then applies a 1 × 1 convolution to combine the outputs of the depth-wise convolution. After each convolution, batch normalisation and a Rectified Linear Unit (ReLU) are applied. Figure 5 illustrates the architecture of MobileNet consisting of depth-wise and point-wise convolutions. In order to extract the features from images, we modified the pre-trained CNN model by setting the parameter include_top = False to remove the dense layers, which act as the classifier [36]. Then we added a dense layer with 50 neurons and a ReLU activation function to transform the image features learnt from the pre-trained model into N × 50 dimensional features, where N is the number of samples.
2) NasNetMobile
Secondly, we modified the NasNetMobile [37] architecture for our experiment. NAS (Neural Architecture Search), developed by Google Brain, produces a scalable CNN architecture consisting of basic building blocks (cells) configured by reinforcement learning. Each cell consists of only a few operations (several convolutions and pooling) and is replicated several times according to the required network capacity. The lighter version of this architecture, NasNetMobile, consists of 12 cells with 5.3 million parameters and 564 million multiply-accumulate operations. Figure 6 illustrates the reduced architecture derived with NAS on CIFAR-10. We relied on transfer learning for both models. We used these pre-trained models, which are trained on standard datasets such as CIFAR-10 and ImageNet. Similar to MobileNet, we modified the NasNetMobile architecture by excluding the classification layers and adding a dense layer of 50 neurons.
E. MULTI-LAYER PERCEPTRON MODEL FOR CLINICAL SYMPTOM-BASED HFMD CLASSIFICATION
To classify the disease solely based on clinical symptoms, we also created a separate Multi-Layer Perceptron (MLP) network [17]. The basic architecture of an MLP consists of three layers, as shown in Figure 7: an input layer, a hidden layer, and an output layer. However, modern MLPs can have multiple hidden layers and dropout layers. We developed three layers: an input layer, a hidden layer and an output layer with 14, eight and two neurons, respectively. In addition, a dropout of 0.25 and L2 weight regularisation were implemented to regularise the model and avoid overfitting. We used the ReLU activation function for the input and hidden layers and the softmax activation function [38] for the output layer to perform the classification.
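A minimal Keras sketch of this MLP is shown below; the L2 penalty strength and the placement of the dropout layer are assumed values, since the paper only states that L2 regularisation and a dropout of 0.25 were applied.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

mlp = tf.keras.Sequential([
    layers.Input(shape=(14,)),                           # 14-dimensional clinical input
    layers.Dense(8, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-3)),  # hidden layer, assumed L2 weight
    layers.Dropout(0.25),
    layers.Dense(2, activation="softmax"),               # HFMD vs non-HFMD
])
mlp.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```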
F. MODEL TUNING
We developed the model using TensorFlow and Keras based on modern deep learning architectures. We used the Adam optimiser to optimise the hybrid multi-branch model. The hyperparameters of the integrated model are the learning rate, decay rate and initial weights. At the same time, the clinical branch of the integrated model had two hyperparameters: the number of layers and the number of nodes in each hidden layer. Systematic experimentation is the most reliable way to configure these hyperparameters [39]. We used a hyperparameter tuning technique to tune and optimise the parameters and train the model with the highest accuracy. We applied a grid search approach to estimate the hyperparameters of our model. Alongside the number of dense layers and the number of nodes in each layer of the clinical branch, we used the grid search approach to optimise the operation-related (e.g., training) hyperparameters such as learning rate and decay rate. For each experiment, the optimal hyperparameters were chosen to minimise the error or loss function. Finally, we finalised the clinical branch with three layers of 16, eight, and four neurons by tuning the model. The learning rate and decay rate of the hybrid deep neural networks model were determined to be 1e-3 and 1e-3/200, respectively. After obtaining the optimal hyperparameters, we trained the model with them. We also used an EarlyStopping callback while training the model to avoid overfitting.
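The grid search with early stopping could be organised roughly as below; `build_model`, the generator arguments and the candidate grid values are assumptions (only the finally selected learning rate and decay are reported in the paper), and newer Keras releases would replace the `decay` argument with a learning-rate schedule.

```python
import itertools
import tensorflow as tf

def grid_search(build_model, train_gen, val_gen,
                learning_rates=(1e-2, 1e-3, 1e-4),
                decay_rates=(1e-3 / 100, 1e-3 / 200)):
    """Exhaustively evaluate (learning rate, decay) pairs and keep the best one.

    `build_model` is an assumed factory returning a fresh, uncompiled hybrid model.
    """
    best = {"val_loss": float("inf")}
    for lr, decay in itertools.product(learning_rates, decay_rates):
        model = build_model()
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr, decay=decay),
                      loss="categorical_crossentropy", metrics=["accuracy"])
        history = model.fit(
            train_gen, validation_data=val_gen, epochs=50, verbose=0,
            callbacks=[tf.keras.callbacks.EarlyStopping(
                monitor="val_loss", patience=5, restore_best_weights=True)])
        val_loss = min(history.history["val_loss"])
        if val_loss < best["val_loss"]:
            best = {"val_loss": val_loss, "learning_rate": lr, "decay": decay}
    return best
```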
G. EVALUATION OF THE PROPOSED MODEL
We conducted several experiments to compare the performance of our proposed multi-branch model to that of an image classification model and a clinical symptom-based disease (particularly HFMD) classification model (MLP). We used both images and clinical data for our proposed model, only images for the image classification model and only clinical symptom data for the symptom-based HFMD classification model (MLP). In the first experiment, we retrained the pre-trained MobileNet and NasNetMobile models using images only and evaluated their performances. In the second experiment, we trained the Multi-Layer Perceptron using the clinical dataset only. In the final experiment, we trained and evaluated the proposed hybrid model on the mixed/integrated clinical and image data. This proposed model consists of a clinical branch and an image-processing branch. Thus, we again adopted an experimental approach to select the best pre-trained image classification model for HFMD diagnosis. Firstly, we used MobileNet along with the clinical branch to train the mixed input data; secondly, MobileNet was replaced with NasNetMobile, and the same dataset was trained in the model. For all these experiments, we created a checkpoint to save the model with the highest accuracy so that the saved model could be used for prediction with better accuracy. We evaluated the models using accuracy, sensitivity, specificity and F1-score and visualised the performances using a confusion matrix.
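The reported metrics can be computed directly from the confusion matrix, as in this short sketch (binary labels with HFMD encoded as 1 are an assumed convention):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity and F1 for HFMD (1) vs non-HFMD (0) labels."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # recall on the HFMD class
        "specificity": tn / (tn + fp),
        "f1": f1_score(y_true, y_pred),
    }

# Example with dummy labels.
print(binary_metrics(np.array([1, 0, 1, 1, 0]), np.array([1, 0, 0, 1, 0])))
```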
IV. RESULT AND DISCUSSION
We produced three different results for the three datasets. All the evaluation results were cross-validated using k-fold cross-validation (k = 5 to 7).
A. IMAGE CLASSIFICATION
In the first experiment, we classified HFMD using images only. Here, we retrained the pre-trained MobileNet and NasNetMobile models using the images. The results are produced by 5-fold cross-validation. Table 1 (first two rows) presents the results in terms of accuracy, sensitivity, and specificity of image classification. As shown in the table, the MobileNet model outperforms NasNetMobile in classifying HFMD images with an accuracy of 88%. Figures 8 and 9 demonstrate the accuracy and loss of the MobileNet and NasNetMobile models using image data. As seen in the figures, both pre-trained models' accuracy increases (Figures 8 and 9 (a)) and loss value decreases (Figures 8 and 9 (b)) gradually with more epochs. This pattern demonstrates that both models can predict HFMD from images with high accuracy and can be used for our proposed hybrid deep neural networks model. Figure 10 presents a confusion matrix for each model to visualise the performance of the models on the validation dataset. Figure 10 (a) shows that MobileNet correctly classified 79% of HFMD images and 96% of Non-HFMD images, while NasNetMobile (Figure 10 (b)) classified HFMD images with an accuracy of 85% and Non-HFMD images with an accuracy of 87%. In addition, we trained our image dataset with the ResNet50 pre-trained model to further compare our model's performance with the image classification approach. Following the approach of Vakili et al. [24], the ResNet50 model classified our dataset with an accuracy of 91.2% (see Table 1 (last row)). From the confusion matrices (see Figure 10), we can see that the image-based classification models misclassified some skin lesions. We manually verified some false-positive results from the MobileNet model, and it was found that similar-appearing lesions (e.g., herpangina and HFMD) were both classified as HFMD (see Figure 11). Thus, this example illustrates the limitation of the existing image-based HFMD diagnosis approach, where a non-HFMD image is misclassified as HFMD.
B. CLINICAL DATASET CLASSIFICATION
In the second experiment, only the clinical dataset was used to classify HFMD against other skin diseases using the MLP architecture. The MLP classified HFMD's clinical symptoms with very high accuracy (99%). Figures 13a and 13b illustrate the accuracy and loss of one of the validation sets for 50 epochs. Figure 13c visualises the MLP model's performance on the validation dataset. The figure shows that it accurately classifies 100% of the HFMD clinical samples and 92% of the Non-HFMD clinical samples. This result shows that, based on the clinical symptoms, HFMD can be predicted accurately. However, HFMD's clinical symptoms may conflict with those of other non-HFMD diseases [40], [41]. Clinical symptoms integrated with images can minimise this conflict and correctly diagnose HFMD.
C. HFMD DIAGNOSIS USING IMAGE AND CLINICAL DATA
The proposed hybrid deep neural networks architecture was tested in two settings: (i) MLP with the pre-trained MobileNet model and (ii) MLP with the pre-trained NasNetMobile model on the integrated clinical symptoms and images. Table 1 (fourth and fifth rows) presents the 5-fold cross-validated evaluation results of the proposed models on the integrated data. As seen in the table, the hybrid deep neural networks using integrated data (clinical symptoms and images) outperform MobileNet, NasNetMobile and MLP. According to the results, these models can classify HFMD and non-HFMD with 100% accuracy. The claim is robust as the 6-fold and 7-fold cross-validations of the hybrid deep neural networks demonstrated very similar results (Table 1 (sixth-ninth rows)). Figures 13 and 14 present the training and validation accuracy and loss for the MobileNet- and NasNetMobile-based proposed models, respectively. As seen in the figures, the hybrid deep neural networks with both pre-trained models show accuracy increasing (Figures 13 and 14 (a)) and loss decreasing (Figures 13 and 14 (b)) gradually with more epochs. Figure 15 compares the confusion matrices of the MobileNet- and NasNetMobile-based proposed models, respectively. The figure illustrates that both models correctly (100%) classified the HFMD and non-HFMD samples. For the HFMD diagnosis, the position of the lesion is essential. It is essential to demonstrate whether our model extracts significant features from the expected region or position of interest in the image during model training. To interpret the feature extraction from images, we plotted the heatmap over the validation image using a technique called Grad-CAM (Gradient Class Activation Map) [42]. For each model, a validation image was selected for the prediction, and a heatmap was plotted over the image, as shown in Figure 16. These images illustrate that the image classification branch was extracting the features from the images' expected region.
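A typical Grad-CAM computation in Keras follows the sketch below, written for a single-input image model (or the image branch alone); the layer name passed in must correspond to a convolutional layer reachable from the top-level model, which is an assumption about how the branch is wrapped.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Return a Grad-CAM heatmap for one preprocessed image of shape (H, W, C)."""
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(last_conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)            # d(score) / d(feature map)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))          # channel importance
    heatmap = tf.reduce_sum(conv_out[0] * weights, axis=-1)  # weighted channel sum
    heatmap = tf.maximum(heatmap, 0) / (tf.reduce_max(heatmap) + 1e-8)
    return heatmap.numpy()
```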
D. LIMITATION
Deep learning necessitates large datasets in order to develop more accurate and robust models. Despite the fact that we gathered data from various sources, it was still relatively small in the context of deep learning. Although HFMD is one of the most common diseases in Asian-Pacific countries, clinical data and images for the same patient were not readily available. Further, our dataset has an uneven distribution of clinical data and images. The clinical data collected from doctors was significantly less than the number of images collected over the internet. The presence of some low-resolution images was another limitation of our dataset. This research can further be improved using data from diverse ethnic groups.
The proposed experiment integrates the features from clinical data and images; however, we have not analysed the correlation and association between images and clinical symptoms. This experiment can be further extended to analyse the correlation between image and clinical features and its impact on disease diagnosis.
V. CONCLUSION
In this paper, we proposed a lightweight and efficient Hybrid Deep Neural Networks architecture to detect or diagnose HFMD using clinical symptoms and image data. The proposed Hybrid Deep Neural Networks architecture has two input branches: (1) a Multi-Layer Perceptron and (2) a modified pre-trained CNN model, to integrate the features learnt from clinical symptoms and image data. The performance of our proposed multi-branch Hybrid Deep Neural Networks for diagnosing HFMD was compared with the image classification model and the clinical symptom-based HFMD classification model (MLP). The image classification models MobileNet, NasNetMobile and ResNet50 classified the skin lesions with an accuracy of 88%, 85% and 91.2%, respectively; however, this approach has the limitation of misdiagnosing similar-appearing skin lesions. In another experiment, the MLP model using the clinical dataset predicted HFMD with an accuracy of approximately 100%. As HFMD is a skin disease, clinical symptom-based detection/diagnosis may not always be correct, as many other diseases (e.g., chickenpox) may have similar symptoms. Thus, using both images and clinical symptoms can improve the diagnosis of this disease. It is worth noting that previous studies have used only image classification techniques, based on traditional machine learning or deep learning architectures, to diagnose skin diseases. However, to the best of our knowledge, no studies have been conducted to diagnose HFMD from integrated features of image and clinical symptom data. The proposed multi-branch model overcomes these limitations and predicts the disease with an accuracy between 99% and 100% using clinical symptoms and images. The learned model is lightweight and efficient, and can be deployed in a smartphone app to detect or diagnose HFMD.
Most medical datasets contain images along with clinical data. Thus, this proposed Hybrid Deep Neural Networks architecture can help diagnose other diseases with integrated images and clinical symptom data for the same patient. Furthermore, this model can be enhanced to learn other diseases using complex radiological images like X-ray, CT scan and MRI images together with clinical data. The results could potentially be further improved by replacing the MobileNet layers with other image classification or image segmentation models such as U-Net, DenseNet, VGGNet, ResNet50 or AlexNet.
"Computer Science"
] |
Mfsd2a Reverses Spatial Learning and Memory Impairment Caused by Chronic Cerebral Hypoperfusion via Protection of the Blood–Brain Barrier
Disruption of the blood–brain barrier (BBB) can lead to cognitive impairment. Major facilitator superfamily domain-containing protein 2a (Mfsd2a) is a newly discovered protein that is essential for maintaining BBB integrity. However, the role of Mfsd2a in vascular cognitive impairment has not been explored yet. In this study, a rat model of chronic cerebral hypoperfusion (CCH) was established by producing permanent bilateral common carotid artery occlusion (2VO) in rats. We found that after the 2VO procedure, the rats exhibited cognitive impairment, showed increased BBB leakage within the hippocampus, and had reduced expression of the Mfsd2a protein. The overexpression of Mfsd2a in the rat hippocampus reversed these changes. Further investigations using transmission electron microscopy revealed a significantly increased rate of vesicular transcytosis in the BBB of the hippocampus of the CCH rats; the rate reduced after overexpression of Mfsd2a. Moreover, Mfsd2a overexpression did not cause changes in the expression of tight junction-associated proteins and in the ultrastructures of the tight junctions. In conclusion, Mfsd2a attenuated BBB damage and ameliorated cognitive impairment in CCH rats, and its protective effect on the BBB was achieved via inhibition of vesicular transcytosis.
INTRODUCTION
Vascular cognitive impairment (VCI) and Alzheimer's disease are major medical issues that affect the health of the elderly population. Chronic cerebral hypoperfusion (CCH) is the common pathophysiological state underlying both conditions (Zhao and Gong, 2015;Shen et al., 2016).
The blood-brain barrier (BBB) is essential for maintaining the stability of the brain microenvironment. Damage to the BBB is an early pathophysiological factor in many diseases involving brain injury (Daneman and Prat, 2015;Liebner et al., 2018). Previous studies found that BBB damage occurred in the early stage of CCH in rat models (Chen et al., 2015;Yin et al., 2015). In addition, disruption of the BBB caused further structural and functional damage to the brain (Chen et al., 2015;Yin et al., 2015), whereas protective measures targeting the BBB alleviated cognitive impairment in CCH rats (Edrissi et al., 2016;Lee et al., 2017). Therefore, BBB damage is considered a key factor in CCH-induced cognitive impairment (Ueno et al., 2016).
The BBB is maintained due to two properties of the brain microvascular endothelium: the continuous tight junctions and the extremely low rate of vesicular transcytosis (Haseloff et al., 2015). It was previously believed that BBB damage was primarily due to the destruction of tight junctions (Siegenthaler et al., 2013). The role that vesicular transcytosis played has been overlooked, and therefore, there is a lack of research on its influence on BBB damage.
Major facilitator superfamily domain-containing protein 2a (Mfsd2a) is a member of the major facilitator superfamily, and it plays a vital role in post-starvation liver metabolism and development of placental syncytiotrophoblast cells (Angers et al., 2008;Toufaily et al., 2013). Mfsd2a is critical for proper barrier function of the BBB, as suggested by recent studies (Ben-Zvi et al., 2014;O'Brown et al., 2019). Mfsd2a suppresses the formation of caveolae vesicles in the brain microvascular endothelium, thereby maintaining an extremely low rate of vesicular transcytosis in the BBB. It has been demonstrated that BBB permeability is positively correlated with the number of vesicles (Wang et al., 2016;Andreone et al., 2017). Thus, disruption of Mfsd2a expression leads to significantly increased vesicular transcytosis and consequently severe BBB leakage (Ben-Zvi et al., 2014). At present, there is a lack of research on the effects of Mfsd2a in BBB damage and cognitive impairment after CCH.
In this study, we constructed rat models of CCH by performing permanent bilateral common carotid artery occlusion (2VO) surgery and evaluated the changes in Mfsd2a expression and vesicular transcytosis in the BBB. We also investigated the effects and mechanisms of Mfsd2a modulation on BBB damage and cognitive impairment in the CCH rats.
Animals
Adult male Sprague-Dawley rats (180-200 g) were housed in a climate-controlled room (22 ± 2°C with a 12-h light/dark cycle and a relative humidity of 55 ± 5%) and had access to food and water ad libitum. The experimental protocols were approved by the Animal Ethics Committee of the Medical School of Wuhan University.
The recombinant AAV (AAV2/9-CMV-r-Mfsd2a-3xflag-GFP virus) overexpressing Mfsd2a was delivered via stereotaxic injection to the 2VO + Mfsd2a AAV group and an empty vector (AAV2/9-CMV-GFP control virus, Hanbio Biotechnology Co., Ltd., Shanghai, China) to the 2VO + control AAV group. After 14 days, the rats in the respective groups received either 2VO surgery or sham surgery. The hippocampal blood flow of rats in the 2VO and sham groups (n = 6 per group) was measured preoperatively and immediately after surgery by using a laser Doppler flowmeter. On postoperative days 3, 7, 14, and 28, six rats were sacrificed in the sham and 2VO groups to evaluate the changes in Mfsd2a expression in the hippocampus after CCH using western blot. On days 1, 3, 7, 14, and 28 after surgery, the amount of Evans blue (EB) in the hippocampus of rats from the four groups (n = 4 per group) was measured using colorimetric analysis. On day 7, western blot was performed to measure the expression of BBB-related proteins, including Mfsd2a, zonula occludens-1 (ZO-1), occludin, and claudin-5 (n = 6 per group). Moreover, transmission electron microscopy (TEM) was used to observe the ultrastructures of the hippocampal BBB (n = 3 per group). From the 29th day, the spatial learning and memory abilities of rats (n = 9 per group) were assessed using the Morris water maze (MWM) test for six consecutive days. Then a novel object recognition (NOR) test was performed to assess the recognition memory abilities of rats (n = 9 per group).
CCH Model
Chronic cerebral hypoperfusion was induced via 2VO surgery as described previously (Xu et al., 2010). Food and water were withheld for 1 day prior to surgery. Rats were anesthetized with 1% pentobarbital sodium (40 mg/kg, i.p.). The bilateral common carotid arteries were exposed via a midline ventral incision and permanently ligated with a silk suture. Rats receiving the sham operation were treated in the same manner, except that the common carotid arteries were not ligated. After surgery, the wounds were sutured, and the rats were placed on a homeothermic blanket until they recovered from the anesthesia.
Cerebral Blood Flow
The measurement of blood flow in the hippocampus was performed as described previously (Jian et al., 2013). After anesthetization, rats were fixed in a stereotactic frame with a midsagittal incision on top of the skull. In order to detect blood flow in the hippocampal CA1 region (anteroposterior = 4.8 mm, mediolateral = ± 2.5 mm, and dorsoventral = −3.5 mm), a hole was drilled in the skull above this area on the left side, and a 0.45-mm-diameter laser Doppler probe was advanced into the hippocampus through the hole. When stable cerebral blood flow was observed, hippocampal blood flow was continuously recorded for 5 min using Perisoft software. A similar measurement procedure was performed immediately after completion of the 2VO or sham surgery. After the measurement was completed, the probe was withdrawn, and the wound was sutured. The preoperative measurement value was used as the baseline, and the results were expressed as a percentage of the second measurement value relative to the baseline value.
Stereotaxic Injection
After anesthetization, rats were placed in a stereotaxic head holder. Solutions of the virus were injected bilaterally into the hippocampal CA1 region (anteroposterior = 4.8 mm, mediolateral = ± 2.5 mm, and dorsoventral = −3.5 mm) with an injection rate of 0.5 µl/min (Shen et al., 2019). The effect of viral transfection was evaluated using western blotting at different time points after transfection.
MWM Test
The MWM is a classical test of spatial learning and memory for rodents (Redish and Touretzky, 1998;Yu et al., 2019). The MWM consisted of a circular pool (150 cm in diameter and 60 cm in height) filled with opaque water to a depth of 32 cm at a temperature of 20 ± 1°C. The maze was equally divided into four quadrants by four signs on the pool. A platform (9 cm in diameter and 30 cm in height) was placed in one quadrant and was invisible in the water. The pool was located in a dimly lit room surrounded by several orientation cues. Each rat was given four trials per day for five consecutive days. Rats were randomly placed into the pool from a different quadrant in each trial, facing the wall of the maze. The time for the rats to find the hidden platform was recorded if it was less than 60 s. However, if the time exceeded 60 s, the latency was recorded as 60 s. All rats were placed on the platform to observe their surroundings for 20 s after each trial. On the sixth day, each rat was subjected to a probe trial for 60 s in the maze, in which the platform was removed. The time the rats swam in the target quadrant (where the platform had been placed) was recorded.
NOR Test
The NOR test was performed as described previously (Bevins and Besheer, 2006). The test consisted of a test box (white square box, 65 cm × 45 cm × 40 cm) and two sets (two per set) of different objects. The test object set A contained two identical white printed porcelain cups with a base diameter of 6.5 cm and a height of 10 cm; while the B set was made of two identical cylindrical transparent glass bottles with a bottom diameter of 5 cm and a height of 8 cm. The test environment was quiet and dark, and the light in the test box was even without shadow. In the first stage (adaptation), no objects were placed in the box. A rat was placed in the test box with its back to the box and allowed to move by itself for 10 min. The next day, two identical objects (AA) were placed symmetrically in the box (9 cm from the long axis and 10 cm from the short axis). The rat was placed with its back to the objects from the same distance point between the two objects and allowed to move by itself for 10 min. Then the rat was returned to its home cage. After 1 h, two different objects (AB) were placed in the box in the same position as described above, and the rat was left to explore the box for 5 min. The stopwatch software (Time Left 3) was used to record the exploration time of the old object (A) and the novel object (B) when different objects (AB) were placed. A discrimination ratio (DI) of exploring the novel object was calculated, expressed as DI = N/(N + F), where N was the time for exploring the novel object and F was the time for exploring the old object.
Measurement of BBB Permeability
The permeability of the BBB was evaluated using the EB extravasation technique (Huang et al., 2018;Ni et al., 2018;Luh et al., 2019). Rats were injected with 2% EB (Sigma, 4 ml/kg) through the tail vein. After 2 h, the rats were deeply anesthetized and infused with 50 ml heparinized saline through the left ventricle for 15 min. The hippocampal specimens were removed and immersed in formamide (3 ml/100 mg) at 60°C for 24 h and then centrifuged at 15,000 g for 30 min at 4°C. Spectrophotometric determination of extravasated EB in the supernatant was assayed at 620 nm.
TEM
After anesthetization, rats were perfused with saline for 1 min and subsequently with 5% glutaraldehyde and 4% paraformaldehyde for 4 min. Brain tissues of the hippocampal CA1 region (1 mm × 1 mm × 1 mm) were removed and postfixed at 4°C. Then the tissues were dehydrated in gradient ethanol and embedded in epoxy resin. Sections (80 nm) were cut from the embedded specimens with an ultrathin slicer (Leica EM UC7, Germany), placed on copper grids, stained with lead citrate and uranyl acetate, and observed using a Tecnai-G220-TWIN TEM (FEI, United States). Quantitative analysis of vesicles in six comparable-sized vessels in each rat was performed.
Immunofluorescence Staining
After perfusion, the brains were removed and fixed with 4% PFA at room temperature for 1 h, followed by immersion in 30% sucrose. The brains were then cryopreserved in OCT and sectioned in a cryostat. After blocking with goat serum, the sections were incubated with anti-Mfsd2a primary antibody (species: rabbit; 1:50; Abcam) overnight in a dark chamber. The next day, fluorescent secondary antibody (FITC-labeled goat anti-rabbit antibody) was added, followed by blocking with goat serum. The blocking solution was decanted, and the sections were incubated with anti-CD31 primary antibody (species: mouse; 1:100; Abcam) overnight in the dark. The following day, fluorescent secondary antibody (Cy3-labeled goat anti-mouse antibody) was added. The brain sections were visualized using a fluorescence microscope (OLYMPUS BX53, Japan).
Statistical Analysis
The statistical analyses were done by using SPSS for Windows (version 24). Data were presented as mean ± SEM. Differences in escape latency were analyzed with a two-way repeated-measures ANOVA followed by the post hoc Bonferroni test for multiple comparisons. The significance of differences between two and three or more groups was determined using one-way ANOVA followed by the Bonferroni post hoc test, the Student's t-test, or non-parametric tests. Statistical significance was defined as P < 0.05.
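The original analysis was performed in SPSS; a rough Python equivalent of the between-group comparison (one-way ANOVA) is sketched below, with placeholder data standing in for the per-rat measurements.

```python
import numpy as np
from scipy import stats

def compare_groups(*groups, alpha=0.05):
    """One-way ANOVA across experimental groups (e.g. time in target quadrant)."""
    f_stat, p_value = stats.f_oneway(*groups)
    return {"F": f_stat, "p": p_value, "significant": p_value < alpha}

# Placeholder per-rat values standing in for real measurements (n = 9 per group).
rng = np.random.default_rng(0)
sham, vo, vo_ctrl_aav, vo_mfsd2a_aav = (rng.normal(m, 3.0, 9) for m in (25, 15, 15, 22))
print(compare_groups(sham, vo, vo_ctrl_aav, vo_mfsd2a_aav))
```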
Expression of Mfsd2a Protein Was Downregulated in the Hippocampal CA1 Region of CCH Rats
Preoperative measurements of hippocampal blood flow in each group were used as their baseline values. The hippocampal blood flow decreased significantly after 2VO surgery but did not change obviously after sham surgery. There were significant differences in hippocampal blood flow changes between the two groups (P < 0.01, Figure 1A).
Western blot was performed to measure the expression levels of Mfsd2a in the hippocampal CA1 region of rats at different time points after 2VO surgery. The expression of Mfsd2a protein decreased from postoperative day 3 (P < 0.05) and reached the lowest level on day 7 (P < 0.01, Figures 1D,E). The expression level began to recover on day 14 (P < 0.01) but remained lower than that of the sham group on day 28 (P < 0.05). In addition, the results of immunofluorescence staining also confirmed that the fluorescence intensity of Mfsd2a in the 2VO group was lower than that of the sham group (Figures 1B,C, P < 0.01). These results suggest that the expression of Mfsd2a protein in the hippocampal CA1 region was downregulated after CCH.
Overexpression of Mfsd2a Reversed Learning and Memory Deficits in CCH Rats
The virus was successfully transfected into the hippocampus of rats via stereotaxic injection (Figures 2A,B). To validate the effect of viral transfection, western blotting technology was used to evaluate the expression of Mfsd2a protein in the rat hippocampus at different time points after transfection with the Mfsd2a AAVs. The data showed that the expression of Mfsd2a in the 2VO + Mfsd2a AAV group remained at a high level from day 14 to day 56 post transfection (P < 0.01, Figures 2C,D).
The results of the MWM test showed that the escape latency of rats was significantly shortened in all groups as training progressed [F(4,128) = 159.83, P < 0.01], but there were significant differences across the four groups [F(3,32) = 21.13, P < 0.01, Figure 2F]. The Bonferroni post hoc test showed that from day 2 of training, rats in the 2VO group required a longer time to locate the platform than did the sham rats (at days 2, 4, and 5, P < 0.01; at day 3, P < 0.05). Between days 3 and 5 of training, the rats in the 2VO + Mfsd2a AAV group required substantially less time to locate the platform compared with the 2VO + control AAV rats at the corresponding time points (at days 3 and 5, P < 0.05; at day 4, P < 0.01).
In the probe trial where the platform was removed, memory was evaluated by measuring the time spent in the target quadrant ( Figure 2G). We observed that the 2VO rats spent significantly less time in the target quadrant than the sham rats (P < 0.01). The time spent in the target quadrant by the 2VO + Mfsd2a AAV rats was significantly increased, in comparison to that spent by the 2VO + control AAV rats (P < 0.01).
The results of the NOR test showed that rats in the 2VO group had lower DI scores than the sham rats (P < 0.01, Figure 2E), indicating a recognition memory impairment following 2VO surgery. However, the transfection with the Mfsd2a AAVs significantly improved the DI scores (2VO + Mfsd2a AAV group vs. 2VO + control AAV group, P < 0.01).
Overexpression of Mfsd2a Attenuated BBB Leakage in CCH Rats
To determine the effect of Mfsd2a overexpression on CCH-induced BBB damage, we quantified the amount of EB in the hippocampal CA1 region using colorimetric analysis.
The results showed that compared with the sham rats, EB leakage in the 2VO group started increasing from day 3 after the 2VO procedure (P < 0.01) and reached its peak on day 7 (P < 0.01). The increase remained significant on day 14 (P < 0.01), and by day 28, EB leakage remained higher than that in the sham group (P < 0.05, Figure 3). In contrast, the 2VO + Mfsd2a AAV rats exhibited significantly reduced EB leakage at all time points (vs. the 2VO + control AAV group, at days 1, 7, and 28, P < 0.05; at days 3 and 14, P < 0.01).
FIGURE 3 | Time course of Evans blue content in the hippocampus of rats in each group after 2VO surgery. *P < 0.05, **P < 0.01, vs. the sham group at the corresponding time point; # P < 0.05, ## P < 0.01, vs. the 2VO + control AAV group; n = 4 per group.
Effect of Mfsd2a Overexpression on BBB Tight Junctions and Vesicular Transcytosis in the Rat Hippocampal CA1 Region
We used western blotting to examine the expression of Mfsd2a and tight junction-associated proteins in the hippocampal CA1 region 7 days after the 2VO procedure. The expression levels of Mfsd2a, ZO-1, occludin, and claudin-5 in the hippocampal CA1 region of rats in the 2VO group were downregulated compared to those in the sham group (P < 0.01, Figures 4A-E). Compared with the 2VO + control AAV group, the expression of Mfsd2a significantly increased in the 2VO + Mfsd2a AAV group (P < 0.01), whereas no significant differences were observed in the expression levels of ZO-1, claudin-5, and occludin proteins (P > 0.05).
In addition, we utilized TEM to observe the changes in the BBB ultrastructures in the hippocampal CA1 region following Mfsd2a overexpression in CCH rats (Figures 4F,G). The results showed that the vesicular densities in the brain microvascular ECs were significantly higher in rats of the 2VO and 2VO + control AAV groups than in the sham group (P < 0.01). However, the vesicular density was significantly lower in rats of the 2VO + Mfsd2a AAV group than in the 2VO + control AAV group (P < 0.01). The tight junction structures were not significantly different among the four groups.
DISCUSSION
We have identified that the expression level of Mfsd2a protein is reduced in the hippocampus of CCH rats, leading to enhanced vesicle transcytosis and resulting in high permeability of BBB. The recombinant AAV (overexpressing Mfsd2a) upregulated the expression of Mfsd2a protein in the hippocampus of CCH rats, inhibited the active vesicle transcytosis, and ameliorated cognitive impairment of CCH rats. These findings reemphasize the importance of the BBB in cognitive impairment and for the first time elucidate the role of Mfsd2a in the regulation of BBB permeability in CCH rats, indicating that not only is paracellular transport involved in this process but also that vesicle transcytosis cannot be neglected.
The BBB is critical for maintaining the normal function of the central nervous system. It limits the entry of bloodborne neurotoxins into the brain and helps eliminate harmful substances produced internally, thereby avoiding neuronal injury and sustaining a stable brain microenvironment (Zlokovic, 2011;Iadecola, 2013). BBB damage is a key pathophysiological factor in CCH-induced cognitive impairment (Ueno et al., 2002;Chen et al., 2015;Yin et al., 2015). Therefore, protection of the BBB is believed to be a promising strategy to improve cognitive function after CCH (Edrissi et al., 2016;Lee et al., 2017). The expression of tight junction-associated proteins such as ZO-1, claudin-5, and occludin decreased after CCH, and regulation of these proteins can rectify the CCH-induced BBB hyperpermeability, thereby improving cognitive function (Hawkins et al., 2004;Edrissi et al., 2016;Lee et al., 2017). However, these studies explored the role of tight junctions as one of the two key factors in maintaining BBB permeability but did not describe the role of another key factor, vesicle endocytosis (Haseloff et al., 2015). The extremely low rate of vesicular transcytosis is vital for maintaining the barrier function of the BBB (Haseloff et al., 2015). The expression of Mfsd2a (Ben-Zvi et al., 2014), the key protein to maintain this effect, is decreased under some pathological conditions, leading to an increase in the number of vesicles and an active transcytosis, which in turn leads to a significant increase in BBB permeability and aggravation of nerve function damage (Andreone et al., 2017;Yang Y. R. et al., 2017). Consistent with this, our results also showed that CCH caused a decrease in Mfsd2a expression and an enhancement of vesicle transcytosis. This may be related to pericytes. The expression of Mfsd2a is regulated by pericytes, and knockout of pericytes can cause the disappearance of Mfsd2a (Ben-Zvi et al., 2014). Recent evidence has shown that pericytes in the BBB are significantly reduced in the coverage of endothelial cells after CCH (Liu et al., 2019). However, the researchers did not detect Mfsd2a expression at the time.
Major facilitator superfamily domain-containing protein 2a is a novel mammalian major facilitator superfamily domain protein, first identified in 2008 (Angers et al., 2008). It was found by chance in a study that Mfsd2a was highly expressed in brain tissues (Ben-Zvi et al., 2014). Mfsd2a has recently been identified as an important component of BBB formation and integrity. Ablation of Mfsd2a results in increased BBB leakage from embryo to adult without disruption of tight junctions (Ben-Zvi et al., 2014). Since then, Mfsd2a has been involved in the study of neurological diseases related to BBB integrity. For example, in the early stage of cerebral hemorrhage (Yang Y. R. et al., 2017;Zhao et al., 2020) and cerebral infarction (Andreone et al., 2017), the expression of Mfsd2a decreases, and upregulating its expression can reduce neurological damage. In the cognitive impairment caused by CCH, BBB dysfunction can trigger neuroinflammation and oxidative stress, cause brain cell edema and neuron apoptosis, increase amyloid beta production, and decrease its clearance, and the toxic effect of amyloid beta further aggravates BBB dysfunction, finally leading to cognitive impairment (Cai et al., 2018;Cockerill et al., 2018). Our results showed that upregulation of Mfsd2a could reduce the BBB damage caused by CCH, partially break this vicious circle, and improve the cognitive dysfunction of CCH rats.

FIGURE 4 | The expression of BBB permeability-related proteins, including Mfsd2a, ZO-1, claudin-5, and occludin in each group. *P < 0.05, **P < 0.01, vs. the sham group; #P < 0.05, ##P < 0.01, vs. the 2VO + control AAV group; n = 6 per group. (F) Representative microphotograph of the ultrastructure of BBB in the hippocampal CA1 region of rats. Scale bar = 500 nm (the upper four pictures); scale bar = 150 nm (the lower four pictures). (G) Quantification of vesicular density of six comparable-sized vessels (4-5 µm lumen) in each rat. *P < 0.05, **P < 0.01, vs. the sham group; #P < 0.05, ##P < 0.01, vs. the 2VO + control AAV group; n = 3 per group. The black arrow refers to vesicles in endothelial cells, and the red arrow refers to tight junctions. L, lumen; EC, endothelial cell.
There are some possible limitations in this study. To validate the effect of viral transfection, western blotting was used to evaluate the expression of Mfsd2a protein in the rat hippocampus after transfection with the Mfsd2a AAVs. If double staining of sections for Mfsd2a and CD31 and colocalization analysis had been conducted, the result would be more convincing. In the present experiment, the effect of Mfsd2a gene deletion (such as in a gene knockout model) on CCH-induced cognitive impairment was not evaluated, and the relevant mechanisms were not further explored. In future studies, we will refine the experimental animals and methods to further explore the role of Mfsd2a in cognition. In addition, the role of Mfsd2a in other neurological diseases closely related to the BBB, such as Parkinson's disease, epilepsy, and intracranial infection, is also worthy of investigation.
CONCLUSION
In conclusion, this is the first report exploring the relationship between cognition and the vesicle endocytosis. Mfsd2a alleviates CCH-induced BBB damage by inhibiting vesicular transcytosis, thereby improving spatial learning and memory impairment in CCH rats. Our results provide new evidence on the amelioration of cognitive function via BBB protection and present novel targets for the prevention and treatment of CCH-related diseases, such as VCI and Alzheimer's disease.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
The animal study was reviewed and approved by the Animal Ethics Committee of the Medical School of Wuhan University.
AUTHOR CONTRIBUTIONS
ChQ and HS were involved in the study design, performed the study, and drafted and revised the manuscript. JS was involved in data analysis. LX was involved in data analysis and performed
"Biology"
] |
Physics-aware differentiable design of magnetically actuated kirigami for shape morphing
Shape morphing that transforms morphologies in response to stimuli is crucial for future multifunctional systems. While kirigami holds great promise in enhancing shape-morphing, existing designs primarily focus on kinematics and overlook the underlying physics. This study introduces a differentiable inverse design framework that considers the physical interplay between geometry, materials, and stimuli of active kirigami, made by soft material embedded with magnetic particles, to realize target shape-morphing upon magnetic excitation. We achieve this by combining differentiable kinematics and energy models into a constrained optimization, simultaneously designing the cuts and magnetization orientations to ensure kinematic and physical feasibility. Complex kirigami designs are obtained automatically with unparalleled efficiency, which can be remotely controlled to morph into intricate target shapes and even multiple states. The proposed framework can be extended to accommodate various active systems, bridging geometry and physics to push the frontiers in shape-morphing applications, like flexible electronics and minimally invasive surgery.
Introduction
Shape-morphing systems that can undergo morphological changes in response to external stimuli have great potential for a wide range of applications, such as soft robotics [1][2][3][4][5][6][7], minimally invasive surgery [8][9][10], and flexible electronics 11,12. These shape-morphing applications typically require non-uniform and large deformation in the structures to realize intricate functional shapes, which is challenging to realize with ordinary materials. In contrast, inspired by the ancient art of paper cutting, kirigami introduces cuts to relax the continuum constraints in materials, allowing for significant spatial variation of the deformation within.
Despite its great potential, kirigami has been largely limited to a few regular periodic or heuristic cuttings developed through trial-and-error and ad hoc approaches 13. Most research primarily focuses on forward kinematic analysis, fixing the cutting patterns with hand-picked parameters. There is a lack of effective inverse design methods that can efficiently explore the vast design space while satisfying the complex constraints associated with cuttings. Some studies have explored parameter optimization in a periodic kirigami, often involving parameter sweeping to screen desired parameters, such as widths of hinges and panel aspect ratios 24,25. However, due to the restrictions imposed by periodicity and parametric space, the full potential of kirigami in shape morphing remains largely untapped. A recent inverse design framework has successfully relaxed the periodicity requirement in kirigami, creating optimized aperiodic cutting patterns to realize versatile deployed shapes 26. This approach has been further extended to encompass various types of kirigami, addressing complex kinematic requirements such as compact reconfigurable designs 27, and accommodating different topologies 28. However, to the best of the authors' knowledge, existing inverse design processes for shape-morphing kirigami, including the aforementioned recent works, primarily focus on the design of geometries or kinematics but do not explicitly consider the physics 13,[29][30][31]. By physics, we mean the fundamental laws and principles underlying the forces, energy, and other physical interactions governing the deployment or actuation process. Consequently, most existing kirigami designs fail to address the essential physical feasibility and rely on simple mechanical manipulation (mostly by hand) to drive the shape-morphing process, which is impractical in real applications. While there are a few physics-aware exceptions, they only consider physics in post-analysis after completing the design process 26,27,32, focus on periodic and unit-cell designs via brute-force parameter sweeping and heuristic optimizers 16,19,20,22,23,33, or assemble precomputed unit cells with weak interactions [34][35][36][37][38]. As a result, existing methods either cannot ensure physical equilibrium in the resulting designs or become rather restrictive in terms of design complexity and applicability. The absence of an effective and flexible physics-aware inverse design framework hampers the integration of kirigami with stimuli-responsive materials to realize more complex and practical applications, which is the very issue we aim to address in this work.
The key barrier in designing physics-aware active kirigami is to incorporate the complex interplay between geometry, materials, and external stimuli in an iterative, automated design process. Furthermore, simulating the deployment process is often time-consuming and non-differentiable, without the analytical gradient of the design objective required to effectively navigate in a high-dimensional design space. This study addresses these issues and demonstrates how to explicitly incorporate both geometry and physics into kirigami design for active shape morphing via differentiable modeling and gradient-based optimization. It aims to fill the existing knowledge gap in physics-aware kirigami design, enabling a better understanding of the design principles and efficient optimization processes. We focus on magnetically actuated kirigami made of hard-magnetic soft material as the illustrative case. Magnetic actuation offers the advantage of performing tasks remotely in confined and enclosed spaces, making it particularly valuable for many shape-morphing applications mentioned earlier. It has demonstrated promise in fields such as surgical robotics and flexible electronics 39, where precise control, intricate behaviors, and rapid customization are required.
Given the practical significance of these applications, the need for an improved design approach that considers physics while maintaining high flexibility and efficiency becomes even more critical. Meanwhile, magnetic actuation presents unique challenges due to its position- and deformation-dependent magnetic potential [39][40][41]. It involves complex interactions between geometries, materials, and external stimuli, which are also encountered in other types of actuation such as thermal load 23, humidity 42, and pH 8. Therefore, the design principles and insights obtained in this study have broad applicability across various designs actuated by different physics. Specifically, we propose an energy-based differentiable inverse design framework for magnetically actuated kirigami, explicitly incorporating physics into the design process. This approach allows simultaneous optimization of geometry and active materials to achieve complex shape-morphing behaviors, including multi-state designs that can freely transform into different stable configurations under different magnetic stimuli. It demonstrates superior efficiency, effectiveness and flexibility in quickly responding to new design scenarios for solutions with both kinematic and physical feasibility, unlocking design possibilities that were previously unachievable.
Physics-aware differentiable design
While our method can be applied to various kirigami patterns, we have chosen to focus on the quadrilateral kirigami pattern for ease of illustration. As illustrated in Fig. 1a, our design process begins with a compact quadrilateral kirigami consisting of a repeating unit cell of four square panels. The panels are connected by hinges at the nodes to enforce mutual kinematic constraints, only allowing each pair of connected panels to counter-rotate and uniformly morph the overall configuration into a squared deployed shape, as shown in Fig. 1b. To achieve a kinematically admissible path between the compact and deployed states, it is important to ensure geometrical compatibility between the panels, as depicted in Fig. 1c. For instance, edges overlapped in the compact state must have equal lengths, and the panel angles around the center node of a compacted unit cell must add up to 2π. Beyond ensuring geometrical compatibility, how to actuate the kirigami into its deployed state is also a crucial aspect to consider in real-world applications, which is overlooked in existing designs. In our design, we utilize magnetic torque as the actuation to enable remote and active control of the kirigami. This is achieved by uniformly dispersing hard-magnetic particles with programmed magnetization within the polymer matrix of each panel through a direct ink writing (DIW) printing method 43,44 (see Methods for material and printing details), as illustrated in Fig. 1a-c. A uniform magnetic field B = B·B̂ of magnitude B and direction B̂ will then impart distributed magnetic torques in the kirigami to achieve shape morphing (Fig. 1d). For the ith panel with magnetization vector M_i = M·M̂_i of magnitude M and direction M̂_i, the induced magnetic torque can be computed as τ_i = t·A_i·(M_i × B), where t and A_i are the thickness and area of the panel, respectively. The positive direction of the x- and y-components for both B̂ and M̂_i is defined to be aligned with the axes shown in Fig. 1a. The magnetization direction of each panel should be carefully designed so that the induced magnetic torque can rotate the panel into the desired orientation upon the applied magnetic field. Specifically, the deployed configuration of the kirigami should satisfy the physical equilibrium between magnetic torques and mechanical forces, as shown in Fig. 1d. In a conventional kirigami design without considering the physics, this equilibrium condition is usually not satisfied, and thus the design is often physically infeasible. Our goal is to develop a fully automated inverse design approach, so that for any target deployed shape, the kirigami cutting and magnetization of each panel can be rapidly obtained to achieve the desired reconfigured shapes after actuation while ensuring both geometrical and physical feasibility. Since periodic cuttings impose significant restrictions on the achievable deployed shapes, we turn to general kirigami designs with aperiodic cuttings. We begin by conformally mapping the deployed configuration of a regular kirigami (Fig. 1b) into the desired target shapes (Fig. 1f) using Schwarz-Christoffel mapping 45.
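Based on the torque expression above, the per-panel torque can be evaluated with a few lines of NumPy; the magnetization, field and geometry values below are purely illustrative.

```python
import numpy as np

def panel_torque(m_hat, M, b_hat, B, thickness, area):
    """Magnetic torque on one panel, tau_i = t * A_i * (M_i x B).

    m_hat, b_hat: unit direction vectors (3,); M, B: magnitudes of the
    magnetization (A/m) and flux density (T); thickness, area: panel geometry (m, m^2).
    """
    M_vec = M * np.asarray(m_hat, dtype=float)
    B_vec = B * np.asarray(b_hat, dtype=float)
    return thickness * area * np.cross(M_vec, B_vec)

# Illustrative numbers only: in-plane magnetization at 30 degrees in a 40 mT vertical field.
tau = panel_torque([np.cos(np.pi / 6), np.sin(np.pi / 6), 0.0], 6.0e4,
                   [0.0, 1.0, 0.0], 0.04, thickness=1.0e-3, area=2.5e-5)
print(tau)  # torque vector in N*m; only a z-component for these in-plane vectors
```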
The magnetization orientation of each panel is adjusted accordingly by the average angle change of vectors connecting the center and the four nodes.The resulting kirigami design may no longer meet the geometrical compatibility requirements, and the deployed state is usually not in magneto-elastic equilibrium.As a result, although the deployed shape closely approximates the target, it is impossible to retrieve a compact kirigami design that can morph into the deployed shape in a geometrically and physically feasible manner (Fig. 1e).
To address these issues, we conduct a constrained optimization on the mapped deployed state (Fig. 1f) to optimize both the cutting and magnetization orientation for a geometrically and physically feasible deployed state (Fig. 1i), from which the compact design (Fig. 1h) can be easily identified by direct contraction.
It is important to acknowledge that, although optimization-based inverse design methods have been proposed in the literature to achieve geometrical compatibility [26][27][28], and our method builds on them, they are unable to incorporate physical equilibrium requirements into the design process. This is due to the complex interactions among the panels and the strong elastic-magnetic coupling, as demonstrated in Fig. 1d. The resulting lack of gradient information for the design objective precludes the use of efficient and effective gradient-based solvers, and the computational cost of multi-physics simulation is prohibitive for iterative design (e.g., a typical genetic-algorithm-based search usually requires thousands of evaluations 33,46 and may take days or even weeks in our case). Furthermore, the compact and deployed states are intertwined in determining the physical equilibrium. The equilibrium is thus design-dependent and changes iteratively throughout the design process. This results in a dynamic optimization problem that is notoriously difficult to solve. To overcome these challenges, we first develop differentiable kinematic (Fig. 1g) and energy models of the kirigami (Fig. 1j). Among all kinematically admissible configurations, only the one corresponding to the total energy minimum can achieve physical equilibrium and remain stable. Ideally, we want this minimal-energy configuration to be the designed deployed state, so that the kirigami can transform into and retain the target shape under a given stimulus. With the differentiable models, we can easily integrate this minimal-energy requirement into a constrained optimization framework with an analytical gradient to enable automatic and efficient solutions (Supplementary Note S6).
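The following toy sketch (Python/SciPy, not the authors' implementation) illustrates the core idea of folding the minimal-energy requirement into a constrained optimization: a single design parameter is tuned so that the target deployed angle becomes a stationary point of a stand-in total-energy function. The energy expressions, variable names, and target value are placeholders, not the paper's hinge and magnet models.

```python
import numpy as np
from scipy.optimize import minimize

THETA_T = 0.6  # target deployed angle (rad), illustrative value

def total_energy(theta, d):
    # toy elastic + magnetic potential; NOT the paper's actual model
    elastic = 0.5 * theta ** 2
    magnetic = -d * np.sin(theta)
    return elastic + magnetic

def d_energy_d_theta(theta, d):
    # analytical derivative of the toy energy with respect to theta
    return theta - d * np.cos(theta)

def objective(x):
    # stand-in for a shape-mismatch / regularization objective
    return (x[0] - 1.0) ** 2

# equilibrium constraint: dPi/dtheta = 0 at the target deployed angle
cons = [{"type": "eq", "fun": lambda x: d_energy_d_theta(THETA_T, x[0])}]

res = minimize(objective, x0=[0.5], constraints=cons, method="SLSQP")
print("designed parameter d =", res.x[0])
print("residual torque at target angle:", d_energy_d_theta(THETA_T, res.x[0]))
```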
Specifically, assuming an external magnetic field aligned along the vertical direction, to achieve compatible counter-rotation in a four-panel basic cell (Fig. 2a-b), the horizontal (x-axis) components of the magnetization in each panel (Fig. 2b) should have the signs indicated in Fig. 2a. Hence, we only need the vertical component (y-component) of a unit magnetization vector to determine the orientation of the panel magnetization, which is combined with the coordinates (x, y) of the nodes as the design variables (Fig. 2c).
To ensure the geometrical feasibility of the deployed design, we apply constraints to the edges and angles, as described in Supplementary Note S1 and illustrated in Supplementary Fig. S1a-c. Additionally, we incorporate constraints on the deployed contour to keep it aligned with the target shape (Supplementary Note S1.4 and Supplementary Fig. S1d). An optional constraint is introduced to regularize the shape and aspect ratio of the compact state (Supplementary Note S1.5 and Supplementary Fig. S1e). To prevent overall rigid-body rotation when subjected to external excitation, the kirigami design is constrained to be symmetric about the vertical axis. This symmetry requirement ensures that the net magnetization of the kirigami remains either zero or aligned with the external magnetic field throughout the reconfiguration process. While these geometrical constraints ensure that a geometrically feasible compact kirigami (Fig. 2d) can always be retrieved for the designed deployed kirigami (Fig. 2f), they do not guarantee a kinematically admissible morphing path between the two states (Fig. 2e). Geometrical frustration can still occur in the kirigami, making it difficult to obtain an analytical energy model. To address this, we add an extra constraint on the angles around each of the rotating hinges (marked in the same color in Fig. 2d-f). Each pair of these angles (red or blue) should sum up to π, yielding a straight cutting 27. A kirigami satisfying this constraint is called rigid-deployable. It can be considered a mechanism with a single degree of freedom (DOF), whose configuration can be fully determined by the deployed angle θ, as shown in Fig. 2g.
To incorporate physics into the optimization process, we have developed an analytical model for the total energy Πt of a given rigid-deployable kirigami, which consists of the elastic energy Πe and the magnetic energy (potential) Πm, as shown in Fig. 2h. Observing that the panels undergo only negligible deformation, we assume the panels to be rigid and utilize a modified hyper-elastic beam model to describe the elastic energy in a bending hinge induced by the counter-rotation between a pair of panels, expressed as an analytical function of the rotation angle (see Supplementary Note S4 and Supplementary Fig. S3) 47. It can be considered an equivalent nonlinear spring, as shown in Fig. 1j. Then, given a kirigami configuration, we can obtain the total elastic energy Πe from the rotation angles of all the hinges. Meanwhile, the magnetic potential Πm is calculated by summing up the potentials of all p panels, Πm = −Σi t Ai Mi · B. Consequently, the total energy is obtained via Πt = Πe + Πm, which is a function of the deployed angle θ (Fig. 2h) for a fixed external magnetic field once the kirigami design is given. The stable deployed state thus corresponds to the deployed angle with the lowest total energy (yellow dot in Fig. 2h). It is noted in Fig. 2h that there is a complicated relation between the geometry (deployed angle) and the energies. As a result, when the physics or panel magnetization orientation is not taken into account in the design, the kirigami tends to deviate from physical equilibrium in the intended deployed shape (yellow triangle in Fig. 2h), leading to a deployed shape that differs from the target.
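As a simple illustration of locating the stable deployed angle, the sketch below sweeps a stand-in total energy Πt(θ) = Πe(θ) + Πm(θ) over the deployed angle and picks the minimum. The functional forms and parameter values are illustrative placeholders, not the paper's calibrated hinge and magnetic models.

```python
import numpy as np

def elastic_energy(theta, k=2.0):
    # equivalent nonlinear spring, here simplified to a quadratic well
    return 0.5 * k * theta ** 2

def magnetic_potential(theta, m=1.5, B=1.0):
    # schematic -M.B type potential for a single-DOF mechanism
    return -m * B * np.sin(theta)

theta = np.linspace(0.0, np.pi / 2, 500)
total = elastic_energy(theta) + magnetic_potential(theta)

i_star = np.argmin(total)                 # stable (minimum-energy) state
print("stable deployed angle ~", theta[i_star], "rad")
```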
To realize the co-design of geometry and magnetization orientation, we further develop a differentiable kinematic analysis, from which an analytical expression for the total energy, i.e., Πt(θ), can be obtained to facilitate subsequent simulation and design optimization. Given any change in the deployed angle, we solve for the corresponding kirigami configuration (kinematic analysis) in sequential steps, as shown in Fig. 3e, starting from the center panel columns, propagating to the right side, and then obtaining the left side by mirror symmetry. In each step, we iterate over the nodes in a specific column of panels or voids and update their locations based on the constraints with preceding nodes. The updating process is composed of a series of analytical transformations obtained by solving simple geometrical problems (marked by the shaded red/blue/purple regions in Fig. 3b), which together form a computational graph for the forward analysis (Fig. 3a-b).
Due to the inherent regularity of the cutting, each step involves only a limited set of transformation types (indicated by arrows of different colors in Fig. 3). Using the chain rule, we can obtain the gradients of the location of any node by tracing the computational graph backward and multiplying the gradients of the basic transformations in order (Fig. 3c). By sequentially composing the transformations in the different steps, we can analytically formulate the updated configuration of the whole kirigami (the last graph in Fig. 3e).
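A minimal sketch of this chain-rule bookkeeping is shown below: two hinge rotations are composed, and the derivative of the final node position with respect to the deployed angle is assembled by multiplying the analytical Jacobians of the individual transformations, mirroring the backward pass over the computational graph. The geometry, hinge centers, and angle values are arbitrary illustrations, not the paper's kirigami.

```python
import numpy as np

def rotate(p, center, theta):
    """Rotate point p about a hinge center by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return center + R @ (p - center)

def d_rotate_d_theta(p, center, theta):
    """Derivative of the rotated point with respect to the angle."""
    c, s = np.cos(theta), np.sin(theta)
    dR = np.array([[-s, -c], [c, -s]])
    return dR @ (p - center)

theta = 0.4
p0, c1 = np.array([1.0, 0.0]), np.array([0.0, 0.0])
p1 = rotate(p0, c1, theta)                 # step 1: rotate by +theta
c2 = np.array([0.5, 0.5])
p2 = rotate(p1, c2, -theta)                # step 2: counter-rotate by -theta

# chain rule: dp2/dtheta = (dp2/dp1) @ (dp1/dtheta) + (dp2/dphi) * (dphi/dtheta)
c, s = np.cos(-theta), np.sin(-theta)
dp2_dp1 = np.array([[c, -s], [s, c]])      # Jacobian of step 2 w.r.t. its input point
dp2_dtheta = dp2_dp1 @ d_rotate_d_theta(p0, c1, theta) \
             - d_rotate_d_theta(p1, c2, -theta)
print(p2, dp2_dtheta)
```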
Then, using the chain rule again, we can readily calculate the analytical gradient of any nodal location with respect to the deployed angle θ and the initial node locations x0. This corresponds to a backward subgraph embedded in the forward computational graph (Fig. 3d). By integrating these analytical kinematics into the previous energy model, we can derive analytical expressions for the total energy Πt(θ) and its gradient. A more comprehensive description of the kinematics and energy models is included in Supplementary Notes S3 and S5, respectively.

As a first demonstration, we design a kirigami that morphs into a circular deployed shape under an external magnetic field applied along the vertical direction (y-axis) with magnitude B = 30 mT. The kirigami is assumed to have a thickness of 0.8 mm and is composed of silicone-based resin with 5 μm neodymium-iron-boron (NdFeB) particles embedded, with a shear modulus of 300 kPa, a Poisson's ratio of 0.495, and a magnetic moment density of the composite M = 70 kA/m. For ease of illustration and consideration of manufacturability, we choose to stack 3 × 3 four-panel basic cells in the kirigami, connected by cuboid hinges of size 1.2 mm × 1.6 mm × 0.8 mm. We also include the constraint on the compacted state to ensure a square shape. The proposed differentiable inverse design method allows efficient optimization of the kirigami, taking only a few minutes as opposed to the days or even weeks required when using a heuristic optimizer integrated with non-differentiable simulation. The design results are shown in Fig. 4a-b and Supplementary Video S1.
Compared with the regular grid-like cutting of the periodic design (Fig. 1a-b), the optimized cutting is non-uniform, with the panels on the outer boundary significantly distorted. Panels that tend to move outward during deployment are elongated, e.g., those centered on the four boundary edges, while the panels at the corners are greatly compressed to achieve the circular deployed shape. This is a typical way for kirigami to utilize the discontinuity introduced by the cutting to redistribute the deformation for better shape matching 13.
Despite the highly non-uniform shapes, the panels are still compatible and rigid-deployable, owing to the imposed geometrical constraints. The optimized magnetization orientation is also distinct from that in the periodic (Fig. 1a-b) or initial mapped designs (Fig. 1f). In the compact state, the magnetization orientation is more aligned with the direction opposite to the external magnetic field, while in the deployed state, the orientation is closer to the perpendicular direction. This leads to a significant decrease in magnetic potential energy that compensates for the increase in elastic energy during the deployment process, as demonstrated by the energy analysis in Fig. 4e. It means that a larger magnetic torque is induced in the deployed state to counteract the elastic forces and maintain the deployed shape. As a result, the deployed state corresponds to the state of minimal energy in physical equilibrium (Fig. 4e), in contrast to the unstable target deployed shape in Fig. 2g-h obtained without the magnetization orientation design. To further validate our findings, we conducted experiments on the 3D-printed kirigami, as depicted in Fig. 4c-d and Supplementary Video S2. The details of the manufacturing and experiments can be found in the Methods section. From Fig. 4d, it can be observed that the deployed angle is almost the same for the simulated and experimental designs. Despite a small discrepancy due to manufacturing errors and friction, the experimental results overall demonstrate close agreement with the simulated designs in achieving the desired circular deployed shape.
Shape morphing designs under different magnetic field magnitudes
To investigate the effect of the external magnetic field, we designed another two kirigami structures to achieve the same circular deployed shape but under different magnitudes of B. When subjected to a weaker magnetic field of B = 20 mT, as shown in Fig. 4f-g, the optimized cutting is modified in such a way that the deployed angle is smaller than that in the design for B = 30 mT (Fig. 4b, Supplementary Video S1), resulting in lower elastic energy and thus smaller elastic forces in the hinges. Meanwhile, all panels have their magnetization orientation aligned perfectly perpendicular to the external field in the deployed state, maximizing the induced magnetic torque. The reduction in elastic forces resulting from the altered cutting and the increase in magnetic excitation induced by the change in magnetization orientations combine to maintain equilibrium in the same deployed shape, even in a much weaker field. On the other hand, when exposed to a much stronger field with B = 50 mT, the magnetic potential is more than enough to induce torques compensating for the elastic forces. As a result, the optimized design, shown in Fig. 4h-i and Supplementary Video S1, has more panels with their magnetization orientation aligned with the external magnetic field in the deployed state, leading to a decrease in magnetic potential and torques, and thus maintaining a physically stable deployed state. These findings suggest that the interplay among the geometrical cutting, the magnetization orientations in the panels, and the magnetic field plays a crucial role in achieving target deployed shapes in physical equilibrium. Our approach can take this interplay into account and thus enables the co-design of the different entities for optimal performance.

(Fig. 4 caption, continued) c and d, experimental results corresponding to Fig. 4a and 4b, respectively; the transparent red lines mark the configuration of the simulated design in Fig. 4b. e, energy analysis of the designed kirigami shown in Fig. 4a under the constant field B = 30 mT; the deployed state shown in Fig. 4b corresponds to the lowest-energy state marked by the vertical dashed line; upon the given constant excitation, the compact state has a higher total energy and transforms to the energy minimum, i.e., the designed deployed state, as marked by the dashed arrow. f and g, simulated compact and deployed states of the optimized kirigami designed for B = 20 mT. h and i, simulated compact and deployed states of the optimized kirigami designed for B = 50 mT.
Shape morphing designs to achieve various deployed shapes
While many existing designs rely on trial-and-error or heuristic design, the proposed method offers the flexibility to accommodate direct inverse design for various complex deployed shapes, as shown in Fig. 5 and Supplementary Video S3. In these cases, we use the same upward magnetic field with a magnitude of B = 35 mT. The compact shapes are still constrained to be rectangular, but their aspect ratios are relaxed and can be freely changed by the optimization to further increase flexibility. The results demonstrate the success of the proposed method in achieving deployed shapes that precisely match the given targets, even when the target shapes exhibit significant changes in aspect ratio between states (Fig. 5a and 5d) or drastic variations in curvature (e.g., Fig. 5e, 5f, 5j, and 5l). In particular, we performed experiments on the 3D-printed goblet-like design (Fig. 5m and Supplementary Video S4), whose deployed angle and overall deployed state (Fig. 5n) match well with the design target and simulation (Fig. 5l). Similar to our previous designs for a circular deployed shape, the optimized cuttings in all these cases lead to non-uniform panels. Panels that expand outward in the deployed state are elongated or enlarged, while panels that contract inward are compressed.
This non-uniformity becomes more pronounced when there is a substantial change in aspect ratio (Fig. 5a and 5d) or boundary curvature (e.g., Fig. 5e, 5f, 5j, and 5l) during deployment, enabling spatially varying deformation for target shape morphing.

(Fig. 5 caption, partial) ... and deployed states of optimized kirigami with a heart-like deployed shape, respectively. c and f, compact and deployed states of optimized kirigami with a dog-like deployed shape, respectively. g and h, compact and deployed states of optimized kirigami with a rainbow-like deployed shape, respectively. i and j, compact and deployed states of optimized kirigami with an acorn-like deployed shape, respectively. k and l, compact and deployed states of optimized kirigami with a goblet-like deployed shape, respectively. m and n, experimental results corresponding to Fig. 5k and 5l, respectively; the transparent red lines mark the configuration of the simulated design in Fig. 5l.
It is interesting to note that a larger change in shape/deployed angle between the two states results in panels whose magnetization in the deployed state is more perpendicular to the external field (as seen in Fig. 5f and 5l). Within the same deployed kirigami, panels with smaller sizes or larger rotation angles have magnetization orientations more perpendicular to the external field, while panels with larger sizes or smaller rotation angles have magnetization orientations more aligned with the external field. For instance, in Fig. 5l, the panels in the lower half have smaller sizes than those in the upper half and accordingly have magnetization more perpendicular to the external field. Consequently, both parts contribute similar magnitudes of magnetic torque to balance the competing magnetic and elastic forces. These observations underscore the intimate connection between the magnetization and geometry designs in achieving physically stable deployed states, further validating the effectiveness of our proposed method.
Two-way contractible designs with target deployed shapes
All the results discussed thus far have focused on designs that transform only between a single compact state and a deployed state. However, our proposed method can also design kirigami that morphs from a target deployed shape in the zero-field state into two different compact states under magnetic fields of opposite directions. The results depicted in Fig. 6a and 6d demonstrate the successful attainment of the target shapes in the zero states, which transform into different compact states upon exposure to magnetic fields in opposite directions (Supplementary Videos S5 and S7). These two-way contractible behaviors align well with the physical experiments shown in Fig. 6b and 6e and Supplementary Videos S6 and S8. In the circular design, all the magnetization orientations are almost perpendicular to the external magnetic field in the zero state, resulting in a nearly zero magnetic potential (Fig. 6c). Additionally, the cutting is designed in such a way that the rotation angle of each panel is almost identical in both compact states but with opposite signs. Consequently, the energy curves (Fig. 6c) exhibit approximate symmetry with respect to the deployed angle of the zero state, with the left and right halves corresponding to constant negative and positive fields, respectively. This forms two distinct energy-decreasing paths leading to the energy minima corresponding to the two compact states. In contrast, the energy curves for the goblet-shaped design display evident asymmetry (Fig. 6f), with a larger energy change during the transition from the zero state to the negative state compared to the transition from the zero state to the positive state. As a result, both compact states can achieve physical equilibrium with the lowest total energy, but under different constant stimuli. It should be noted that there is a sudden change in the total energy in both Fig. 6c and 6f at the zero state, which is due to the change of sign of the magnetic potential when switching the direction of the external field.
With this two-way contractible design, we have the ability to control the temporal sequence of external stimuli to freely transition between different states and realize a desired sequence of shapes, as shown in Fig. 6g-h and Supplementary Videos S5-S8. Specifically, in Fig. 6g, we start from a zero state without actuation, then impose a negative field of 35 mT to reach the negative state, and finally switch the actuation direction to obtain a positive field of 35 mT and realize the positive state. It is interesting to note that the kirigami exhibits an asynchronous morphing process from the negative to the positive state, i.e., starting from the left and then propagating to the right. This might be due to asymmetric manufacturing errors and friction forces.
Similarly, in Fig. 6h, we can transform the kirigami to achieve positive, zero, and negative states in sequence by sequentially imposing a positive field of 35 mT, a zero field, and then a negative field of 35 mT. This capability opens up possibilities for various applications, such as wave-guiding control, locomotion in soft robotics, and mechanical computing.
Conclusion
In this study, we have proposed a differentiable design method for magneto-responsive kirigami to achieve shape morphing. Unlike existing methods that focus solely on kinematics in the design followed by post-analysis, the proposed method explicitly integrates physics into the design loop to ensure both the kinematic and physical feasibility of the morphing process. It is built on newly developed sequential kinematic analysis and analytical energy models that capture the coupling between active materials, geometry, and stimuli. Leveraging the differentiability of our models, we formulate the design as a constrained optimization problem, simultaneously optimizing both the cutting and the magnetization orientations using gradient-based methods. By integrating physics into the design loop, we successfully obtain active kirigami that can be remotely actuated to morph into a wide range of complex target shapes and that allows free transitions between multiple stable states via two-way contractible designs, validated through both simulations and physical experiments. The method significantly reduces the computational cost of the design process (from days or weeks to minutes), demonstrating superior effectiveness and flexibility in responding to new design scenarios. Our findings shed light on the crucial role of the interplay between active materials, geometry, and stimuli in achieving physically stable morphologies. Grounded in the general energy principles underlying various physics, the analytical models and energy-based optimization framework are applicable to stimuli other than the magnetic field and to active systems beyond kirigami, such as origami 52, lattices 53,54, tensegrity systems 55, and magnetic soft continuum robots 56. It bridges the gap between geometry and physics in active system design, paving the way for innovative applications in flexible electronics, minimally invasive medical treatments, and optical manipulation.
Methods
Ink fabrication and preparation for direct ink writing (DIW)
The magnetic kirigami patterns are fabricated by the DIW method with an ink composed of silicone-based resin with 5 μm neodymium-iron-boron (NdFeB) particles (Magnequench Co., Ltd) embedded. The ink is fabricated as follows. First, SE1700 base (Dow Corning Corp.) and Ecoflex 00-30 Part B (Smooth-On Inc.) at a volume ratio of 1:2 are mixed at 2000 rpm for 1 min using a centrifugal mixer (AR-100, Thinky Inc.). Then, NdFeB (77.5 vol% relative to the SE1700 base) is added to the mixture, mixed at 2000 rpm for 2 min, and defoamed at 2200 rpm for 3 min. Next, SE1700 curing agent (10 vol% relative to the SE1700 base) is added and mixed at 2000 rpm for 1 min. The ink is transferred to a 10 mL syringe (Nordson EFD), defoamed at 2200 rpm for 3 min, and subsequently mixed at 2000 rpm for 2 min. The ink is then magnetized by a homemade magnetizer under a 1.5 T impulse magnetic field. The syringe is mounted on a customized gantry 3D printer (Aerotech) with a 410 μm printing nozzle. CADFusion (Aerotech) is used to convert the magnetic kirigami pattern drawings to G-code for printing. The printing speed is set to 5 mm·s−1 with an extrusion pressure of 200 kPa. The printed patterns are cured at 80 °C for 36 h.
Experiment setup for magnetic actuation
The magnetic kirigami patterns are actuated under a 1D magnetic field generated by a set of single-axis Helmholtz coils. To prevent out-of-plane deformation, the magnetic kirigami patterns are covered by a supporting acrylic plate.
Simulation and optimization
Both the energy-based simulation and the iterative kirigami optimization are realized with the sequential quadratic programming method, via the built-in function (fmincon) of the commercial software MATLAB R2022b (The MathWorks, Inc.). The commercial software Abaqus 2022 (Dassault Systèmes) is used for finite element analysis of the kirigami to validate the proposed energy-based simulation method.
More details about simulation and optimization are provided in the Supplementary Information.
Fig. 1: Schematic diagram of the physics-aware differentiable design of kirigami.
Fig. 4: Optimized design results to achieve a circular deployed shape.
| 7,131.6 | 2023-08-09T00:00:00.000 | [
"Physics",
"Engineering"
] |
Ammonium Removal from Aqueous Solutions by Clinoptilolite: Determination of Isotherm and Thermodynamic Parameters and Comparison of Kinetics by the Double Exponential Model and Conventional Kinetic Models
The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution using clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Tempkin, and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth, and Khan) isotherm models. The D-R and R-P isotherms were the models that best fitted the experimental data over the other two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, thereby indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model, and the double exponential model, and each model resulted in a coefficient of determination (R2) above 0.989 with an average relative error lower than 5%. The double exponential model (DEM) showed that the adsorption process develops in two stages, a rapid and a slow phase. Changes in the standard free energy (∆G°), enthalpy (∆H°), and entropy (∆S°) of the ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients.
Introduction
Discharging wastewater streams containing high ammonium concentrations into the receiving body causes serious problems in the natural nutrient cycle between the living world and the soil, water, and atmosphere [1]. Hence, mitigation of contamination caused by ammonium compounds in wastewater is of vital importance with regard to fresh water usage [2]. Although a number of processes such as air stripping, breakpoint chlorination, nitrification-denitrification, and ion exchange have been proposed for the removal of ammonia from the environment and industrial water systems, only a few methods, such as biological and physicochemical treatment, are commonly applicable for the direct control of ammonia in wastewaters [2,3].
In the last three decades, an increasing number of investigations have been conducted on ammonium removal from wastewater by ion exchange because of the ammonium selectivity of clinoptilolite [4][5][6][7]. Demir et al. [8] examined the ammonium removal characteristics of natural zeolite by using a packed bed. Sarıoglu [9] considered the removal of ammonium from municipal wastewater using natural Turkish (Dogantepe) zeolite. Ramos et al. [10] studied the effects of temperature and solution pH on ammonium ion exchange capacity of clinoptilolite obtained from mineral deposits located in San Luis Potosi and Sonora, Mexico. Besides these studies carried out to remove ammonium from the existing environment, zeolites have also been examined in a variety of applications such as reducing the potential health risks of carcinogens caused by smoking [11], increasing the compost quality by trapping ammonium and reducing nitrogen losses from the compost [12].
Clinoptilolite is an abundant natural zeolite found in igneous, sedimentary, and metamorphic deposits in the form of alumino-silicate minerals with high cation-exchange capacity. The adsorption capacity of clinoptilolite is significantly affected by physical and chemical pretreatment and by loading or regeneration of the clinoptilolite. The pretreatment of natural zeolites by acids, bases, surfactants, etc., is an important method to improve their ion-exchange capacity [8,[13][14][15][16]. Practically, the result of any pretreatment operation is an increase in the content of a single cation, the so-called homoionic form. Therefore, prior to any ion-exchange application, certain ions are removed from the structure of the material by pretreatment and replaced with more easily exchangeable ones [17,18].
Isotherm and kinetic results provide valuable information for determining the suitability and effectiveness of the adsorption process [19]. Aksu and İşoğlu [20] examined the mechanisms of biosorption and potential rate-controlling steps, including external mass transfer, intraparticle diffusion, and the biosorption process. They found that, for all initial copper(II) concentrations, initial sorption of copper(II) occurred rapidly and the majority of copper(II) uptake occurred within the first 30 min. Betul et al. [21] stated that microbial metal uptake generally involves a rapid uptake stage followed by a slow uptake.
Although many studies have been conducted on adsorption isotherms and kinetics, very few works [22][23][24] in the literature have addressed the evaluation of adsorption stages. In fact, adsorption takes place rapidly on the available interior surfaces of the pores in clinoptilolite that are easily accessible at the initial stages of the adsorption process. Then, the adsorption process continues slowly due to the limitation in available surface sites. This two-step adsorption mechanism, with rapidly and slowly adsorbed fractions, can be described by a double exponential model (DEM) that can also be used in water and wastewater treatment process optimization.
The main objective of the present study is to examine ammonium removal by clinoptilolite, which was initially pretreated with aqueous sodium chloride solution, and to analyze the equilibrium modeling by two- and three-parameter adsorption isotherms, the kinetic modeling by the nth-order reaction, modified second-order, and double exponential models, and the thermodynamic parameters of the ammonium removal.
Physical Properties of the Clinoptilolite
Clinoptilolite used in the experiments was obtained from Balıkesir in the Northwestern part of Turkey. The chemical properties of the clinoptilolite can be found in our previous study [14]. Clinoptilolite was ground down to a grain size range of 0.30-0.60 mm. Prior to the batch adsorption experiments, clinoptilolite was washed with distilled water to remove surface dust and then dried in an oven at 70 °C. Subsequently, it was treated with 2 M NaCl solution at 22 °C by shaking for a period of 24 h to activate its pores and dried again.
Experimental Procedure
Synthetic samples were prepared to give NH3-N concentrations of 30, 60, 100, 160, and 250 mg/L by adding the required amount of NH4Cl salt to distilled water for both isotherm and kinetic studies. For kinetic studies, samples of 5 g were equilibrated with 500 mL of ammonium nitrogen solution at 10, 25, and 40 °C for 100 min. Samples were taken periodically for measurement of the aqueous-phase ammonia concentration. Batch-mode adsorption isotherm studies were carried out in conical flasks containing 200 mL of the solutions and 2 g of clinoptilolite at temperatures of 10, 25, and 40 °C. The flasks were placed on a magnetic stirrer and agitated for 4 h at a fixed agitation speed of 200 rpm. All experiments were carried out at an initial pH of 4.5, since clinoptilolite ion exchange occurs conveniently when the pH is between 4 and 8, the range in which ammonium is in its ionized form [3]. The final pH value was also monitored, and it was found to be below 7.5 for all experiments. Ammonium nitrogen remaining in the solution of the sample was determined using the classic Nessler procedure [25].
The amount of ammonium nitrogen in the solid phase, Q (mg/g), was calculated using the following equation: Q = (C0 − Ct) V / (1000 m), where C0 and Ct are the initial and remaining ammonium concentrations (mg/L) in solution at time t, respectively; V is the solution volume (mL); and m is the weight of adsorbent (g).
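A small helper corresponding to the loading equation above (as reconstructed, with the volume converted from mL to L) might look as follows; the residual concentration in the example is hypothetical.

```python
def solid_phase_loading(c0_mg_per_L, ct_mg_per_L, volume_mL, mass_g):
    """Amount adsorbed per gram of adsorbent, Q (mg/g)."""
    return (c0_mg_per_L - ct_mg_per_L) * (volume_mL / 1000.0) / mass_g

# Example: 500 mL of 100 mg/L NH3-N solution with 5 g of clinoptilolite,
# hypothetical residual concentration of 40 mg/L after equilibration.
print(solid_phase_loading(100.0, 40.0, 500.0, 5.0))  # -> 6.0 mg/g
```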
Statistical Analysis of Data
The adsorption isotherm and adsorption kinetic parameters were determined by using non-linear regression analysis. The non-linear method is a better way to obtain the kinetic parameters than the linear method, and thus it should be adopted preferentially to determine the kinetic parameters [26]. A minimization procedure using the Solver add-in of Microsoft Excel was adopted to solve the isotherm and kinetic equations by minimizing the hybrid fractional error function (HYBRID) between the predicted values and the experimental data [27].
HYBRID = [100 / (n − p)] Σ (Qexp − Qcal)² / Qexp, where the subscripts "exp" and "cal" denote the experimental and calculated values of Q, respectively; n is the number of data points; and p is the number of parameters in the kinetic equation.
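As an illustration of the fitting procedure, the sketch below minimizes the HYBRID function for a two-parameter Langmuir-type model with SciPy instead of the Excel Solver used in the study; the concentration and loading values are hypothetical, not the paper's data.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_error(q_exp, q_cal, n_params):
    """Hybrid fractional error function (HYBRID), in percent."""
    q_exp = np.asarray(q_exp, dtype=float)
    q_cal = np.asarray(q_cal, dtype=float)
    n = q_exp.size
    return 100.0 / (n - n_params) * np.sum((q_exp - q_cal) ** 2 / q_exp)

# Hypothetical equilibrium data (mg/L, mg/g)
ce = np.array([2.0, 8.0, 25.0, 60.0, 120.0])
qe = np.array([3.1, 7.9, 11.2, 13.0, 14.1])

def langmuir(params, ce):
    # Langmuir-type form Qe = KL*Ce / (1 + aL*Ce)
    KL, aL = params
    return KL * ce / (1.0 + aL * ce)

res = minimize(lambda p: hybrid_error(qe, langmuir(p, ce), 2),
               x0=[1.0, 0.1], method="Nelder-Mead")
print(res.x, hybrid_error(qe, langmuir(res.x, ce), 2))
```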
In order to quantitatively compare the applicability of the isotherm and kinetic models for fitting the experimental data, the non-linear coefficient of determination (R²) and the average relative error, ∆Q (%) = (100/n) Σ |(Qexp − Qcal)/Qexp|, were calculated [28].

Table 1. Expressions of the two-parameter and three-parameter adsorption isotherm models.

Three-parameter isotherms include the Redlich-Peterson [34] model, Qe = KR Ce / (1 + aR Ce^β), and the Sips [35], Toth, and Khan models. Here, Qe is the equilibrium solid-phase concentration (mg/g) and Ce is the equilibrium liquid-phase concentration (mg/L) in all isotherm models. In all models, the parameter Qm is related to the adsorption capacity. In the Freundlich isotherm model, Kf and n are isotherm parameters characterizing adsorption capacity and intensity, respectively. In the Langmuir equation, KL and aL are the Langmuir constants related to the adsorption capacity and energy of adsorption, respectively. In the D-R isotherm, E is the energy of adsorption. In the Tempkin isotherm, KTe is the equilibrium binding constant (L/g), b is related to the heat of adsorption (J/mol), R is the gas constant (8.314 × 10⁻³ kJ/K mol), and T is the absolute temperature (K). KR (L/g) and aR (L/mg) are the Redlich-Peterson isotherm constants, and β is the exponent, which lies between 0 and 1. In the Sips isotherm, aS is a constant related to the energy of adsorption and 1/n is the exponent. KTo is the Toth model constant and n the Toth model exponent (0 < n ≤ 1). bK is the Khan model constant and aK is the Khan model exponent.
Adsorption Isotherms
Adsorption isotherms describe the relationship between the amount of adsorbed ion on the adsorbent and the final ion concentration in the solution. Experimental results have been analyzed using four two-parameter isotherm models, including the Freundlich, Langmuir, Tempkin, and D-R models, and four three-parameter adsorption isotherm models, including the R-P, Sips, Toth, and Khan isotherm models (Table 1) [29][30][31][32][33][34][35][36][37]. Figure 1 illustrates the two- and three-parameter isotherm models fitted to the experimental data obtained at 10 °C. Similar trends were also obtained at 25 °C and 40 °C (results not shown). Increase in temperature did not cause any significant change in the adsorption capacity of clinoptilolite, which was found to be 14.50, 14.50, and 13.88 mg/g at 10, 25, and 40 °C, respectively. The isotherm parameters determined for all temperatures are shown in Table 2. The isotherm constants, HYBRID values, average relative errors (ΔQe, %), and coefficients of determination (R²), based on the actual deviation between the experimental points and the predicted values, are also given. As shown in Table 2, even though all the R² values are above 0.96, which indicates a good fit, the R² values for the two-parameter isotherms are slightly smaller than those of the three-parameter isotherms. However, when the average relative errors of the three-parameter isotherms are inspected, they are clearly lower than those of the two-parameter isotherms. This shows that the average relative error criterion is more discriminating than the R² criterion for evaluating the experimental results. Gunay [38] also stated that three-parameter isotherm models performed better than two-parameter models. On the basis of the ΔQe values obtained, out of all the two-parameter and three-parameter isotherm models, the D-R and R-P isotherms, respectively, describe the experimental data effectively for all temperatures, except that at 40 °C the ΔQe is slightly larger than those of the Tempkin and Langmuir models. For all isotherm models, the calculated Qe values were close to the experimental Qe values. When the R² values are considered, the D-R and R-P isotherm models were also well fitted to the experimental data. The magnitude of the energy of adsorption (E) in the D-R isotherm is around 1-16 kJ/mol and is useful for the assessment of the adsorption mechanism. If this value is below 8 kJ/mol, physisorption is considered to occur. In the present case, the values of E slightly increased as the temperature increased from 10 to 40 °C and were found to be between 7.024 and 7.379 kJ/mol, which indicates physical adsorption.
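For reference, a short sketch of the D-R and R-P model evaluations and the mean adsorption energy is given below, assuming the common parameterizations Qe = Qm exp(−K_DR ε²) with ε = RT ln(1 + 1/Ce) and E = 1/√(2 K_DR); the exact symbols in the paper's Table 1 may differ, and the example value of K_DR is chosen only to give an energy near 7 kJ/mol.

```python
import numpy as np

R = 8.314e-3  # gas constant, kJ/(mol K)

def redlich_peterson(ce, KR, aR, beta):
    """Redlich-Peterson isotherm: Qe = KR*Ce / (1 + aR*Ce**beta)."""
    return KR * ce / (1.0 + aR * ce ** beta)

def dubinin_radushkevich(ce, qm, k_dr, T):
    """D-R isotherm: Qe = Qm * exp(-k_dr * eps**2), eps = R*T*ln(1 + 1/Ce)."""
    eps = R * T * np.log(1.0 + 1.0 / ce)
    return qm * np.exp(-k_dr * eps ** 2)

def dr_adsorption_energy(k_dr):
    """Mean adsorption energy E = 1/sqrt(2*k_dr), in kJ/mol for k_dr in mol^2/kJ^2."""
    return 1.0 / np.sqrt(2.0 * k_dr)

# k_dr of about 0.0102 mol^2/kJ^2 gives E ~ 7 kJ/mol, the physisorption range
print(dr_adsorption_energy(0.0102))
```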
Adsorption Kinetics
Adsorption kinetics are required for selecting optimum operational conditions of water and wastewater treatment facilities for full-scale processes. The results obtained from the experiments at 10 °C were examined to describe the reaction kinetics according to the nth-order kinetics, and the modified second-order and double exponential models. Figure 2 illustrates the kinetic models fitted to experimental data obtained at 10 °C. The determined kinetic parameters are shown in Table 3.
nth-Order Kinetics
Instead of assuming the order of the reaction to be 1 or 2, directly calculating the rate constant and the order of the adsorption reaction is a more appropriate approach [39]. Thus, the nth-order kinetic model can be used. The model is based on the rate law dQt/dt = kn (Qe − Qt)^n, whose integrated form can be written as [40] Qt = Qe − [βn Qe^(1−n) + (n − 1) kn t]^(1/(1−n)), where Qe is the amount of ammonium adsorbed on the surface of the adsorbent at equilibrium (mg/g); Qt is the amount of ammonium adsorbed at any contact time (mg/g); kn is the rate constant, whose unit depends on the order of the reaction, (1/min)(mg/g)^(1−n); βn is related to impurities pre-adsorbed on the surface (βn = 1/(1 − θ0)^(n−1)); and θ0 is the dimensionless surface coverage at the pre-adsorbed stage (θ0 = Q0/Qe). The determined model parameters are listed in Table 3. The reaction order, n, increased with the increase in initial ammonium concentration and was found to be between 1.162 and 4.162. This increase can be explained on the basis of the increase in driving force with the concentration gradient at the initially high ammonia concentrations. In another study, carried out by Özer [39], a similar finding showed that the value of n in nth-order kinetics slightly decreased with an increase in initial temperature. It has also been stated that the dominant mechanism depends on the combination of adsorbate and adsorbent and on adsorption conditions such as temperature and concentration range [41]. The kn values were found to be in the range of 0.123-0.231 (1/min)(mg/g)^(1−n). βn values were approximately 1, which implies that there were no impurities or pre-adsorbed ammonium initially.
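A possible fitting sketch for the nth-order model is shown below, using the integrated form given above (derived here from the rate law and the stated definition of βn) together with SciPy's curve_fit; the uptake data and initial guesses are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def nth_order(t, qe, kn, n, beta_n=1.0):
    """Integrated nth-order kinetics:
    Qt = Qe - [beta_n*Qe**(1-n) + (n-1)*kn*t]**(1/(1-n)), valid for n != 1."""
    return qe - (beta_n * qe ** (1.0 - n) + (n - 1.0) * kn * t) ** (1.0 / (1.0 - n))

# Hypothetical uptake curve (mg/g) at a single initial concentration
t = np.array([2, 5, 10, 20, 40, 60, 100], dtype=float)   # contact time, min
qt = np.array([1.1, 2.0, 2.7, 3.2, 3.5, 3.6, 3.65])

popt, _ = curve_fit(nth_order, t, qt, p0=[3.7, 0.15, 1.5],
                    bounds=([0.1, 1e-4, 1.05], [10.0, 5.0, 6.0]))
qe_fit, kn_fit, n_fit = popt
print("Qe =", qe_fit, "kn =", kn_fit, "n =", n_fit)
```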
Modified Second-Order
The modified second-order equation can be obtained from the nth-order kinetics for n = 2 as [40] Qt = Qe [1 − 1/(β2 + k2 Qe t)]. The modified second-order reaction constants k2 and the coefficients of determination R² are presented in Table 3 for all initial ammonium concentrations. The values of k2 were found to be in the range of 0.138-0.236 g/(mg·min). β2 values were about 1, as calculated in the nth-order kinetic equation.
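For completeness, the reconstructed modified second-order expression can be evaluated directly; the numbers below are illustrative only.

```python
def modified_second_order(t, qe, k2, beta2=1.0):
    """Modified second-order kinetics (nth-order with n = 2):
    Qt = Qe * (1 - 1/(beta2 + k2*Qe*t))."""
    return qe * (1.0 - 1.0 / (beta2 + k2 * qe * t))

# Hypothetical values: Qe = 3.7 mg/g, k2 = 0.15 g/(mg*min), t = 30 min
print(modified_second_order(t=30.0, qe=3.7, k2=0.15))
```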
Double Exponential Model
The model describes the adsorption process from both chemical and mathematical points of view [42], correlating the two-step mechanism as rapidly and slowly adsorbed fractions [43]. The model is expressed by the following equation: Qt = Qe − (D1/m_ads) exp(−KD1 t) − (D2/m_ads) exp(−KD2 t), where D1 and D2 are the amounts of the rapidly and slowly adsorbed fractions of ammonium (mg/L), respectively, and KD1 and KD2 are the rapid and slow rate constants (min⁻¹). It should be noted that the sum of D1/m_ads and D2/m_ads has the same physical meaning as the calculated value of Qe, and that KD1 is greater than KD2. The rapidly and slowly adsorbed fractions (%), RF and SF, can be calculated as RF = 100 D1/(D1 + D2) and SF = 100 D2/(D1 + D2), respectively. Model parameters for the slow and rapid steps and statistical comparison parameters obtained from the experimental data are presented in Table 3 for all initial concentrations. While RF values decreased with an increase in initial concentration, SF values increased with an increase in initial concentration. The reason for the decrease in the RF values is that adsorption takes place in restricted zones due to the limitation in available surface sites of clinoptilolite at high initial concentrations. KD1 values increase with the initial concentration, while KD2 values remain almost the same. This finding is also in accordance with the observations of other similar studies [44,45]. Blázquez et al. [44] examined the kinetics of lead(II) biosorption by olive tree pruning waste and found that biosorption of lead(II) ions onto the biomass initially occurred in a fast removal rate stage, followed by a second, slower removal rate stage until equilibrium was reached. Ghaedi et al. [45] determined that the biosorption of Pb²⁺ ions by Saccharomyces cerevisiae took place in two stages: in the first stage, 60-70% of the total process was completed within 3 days, and in the second stage, equilibrium was attained in 4 days. It was also suggested that in the first stage, which was faster than the second stage, the Pb²⁺ ions accumulated on the large available surface of the biosorbent. The biosorption process slowed down with the gradual occupation of the surface binding sites.
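A sketch of fitting the DEM and recovering the rapid and slow fractions is given below; the adsorbent mass, uptake data, and initial guesses are hypothetical, and the condition KD1 > KD2 is not enforced explicitly.

```python
import numpy as np
from scipy.optimize import curve_fit

def dem(t, qe, d1, d2, kd1, kd2, m_ads=5.0):
    """Double exponential model:
    Qt = Qe - (D1/m)*exp(-KD1*t) - (D2/m)*exp(-KD2*t)."""
    return qe - (d1 / m_ads) * np.exp(-kd1 * t) - (d2 / m_ads) * np.exp(-kd2 * t)

def fractions(d1, d2):
    """Rapidly (RF) and slowly (SF) adsorbed fractions in percent."""
    rf = 100.0 * d1 / (d1 + d2)
    return rf, 100.0 - rf

t = np.array([2, 5, 10, 20, 40, 60, 100], dtype=float)   # contact time, min
qt = np.array([1.1, 2.0, 2.7, 3.2, 3.5, 3.6, 3.65])      # hypothetical loadings, mg/g

popt, _ = curve_fit(dem, t, qt, p0=[3.7, 12.0, 6.0, 0.2, 0.02],
                    bounds=(0, [10, 50, 50, 5, 1]))
qe, d1, d2, kd1, kd2 = popt
print("RF, SF =", fractions(d1, d2))
```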
Evaluation of Kinetic Studies
For all kinetic models, the calculated Qe values were almost the same as the experimental Qe values. Initially, the adsorption rate of ammonium by clinoptilolite was high up to 20-50 minutes, and then it gradually decreased with an increase in contact time. According to the ΔQe values, the results of the kinetic studies show that the best-fitting kinetic models are, in order, the nth-order, double exponential, and modified second-order kinetic models. The average relative errors are lower than 5%, and the coefficients of determination are almost the same, thus indicating that these models effectively describe the adsorption process.
Thermodynamic Parameters
Thermodynamic parameters were evaluated by considering the thermodynamic equilibrium constants. The standard free energy change was calculated using the following equation: ∆G°ads = −RT ln K, where ∆G°ads is the free energy change (kJ/mol), T is the absolute temperature (K), R is the universal gas constant (8.314 × 10⁻³ kJ/K mol), and K is the Langmuir constant, the Tempkin constant, or the thermodynamic equilibrium constant obtained using the method of Khan and Singh [46]. For the Tempkin and Langmuir isotherms, the values of the Tempkin constant (KT) and the Langmuir constant (aL) given in Table 2 were used. In the Khan and Singh [46] method, the thermodynamic equilibrium constant (KKS) was calculated as follows: KKS = as/ae = (νs Qe)/(νe Ce), where as is the activity of ammonium in the solid phase, ae is the activity of ammonium in solution at equilibrium, νs is the activity coefficient of the adsorbed ammonium, and νe is the activity coefficient of the ammonium in solution at equilibrium. As the ammonium concentration in the solution decreases and approaches zero, the activity coefficient ν approaches unity, and Equation (10) reduces to KKS = Qe/Ce, obtained by plotting ln(Qe/Ce) versus Qe and extrapolating Qe to zero. The other thermodynamic parameters, i.e., the changes in enthalpy (∆H°) and entropy (∆S°), were estimated from the van't Hoff equation: ln K = ∆S°/R − ∆H°/(RT). The values of the change in enthalpy (∆H°) and entropy (∆S°) were calculated from the slope and intercept of the plot of ln K vs. 1/T (Figure 4). The results for the changes in standard free energy, enthalpy, and entropy are given in Table 4. The standard free energy of ammonium adsorption on clinoptilolite was found to be in the range of −1.10 to −0.03, 0.72 to 1.97, and 6.77 to 8.07 kJ/mol for the Khan and Singh [46], Tempkin, and Langmuir isotherms, respectively.
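The thermodynamic estimates can be reproduced with a short van't Hoff regression as sketched below; the equilibrium constants listed are hypothetical placeholders, chosen only to fall near the reported ∆G° range for the Khan and Singh method.

```python
import numpy as np
from scipy.stats import linregress

R = 8.314e-3  # universal gas constant, kJ/(mol K)

# Hypothetical equilibrium constants at the three experimental temperatures
T = np.array([283.15, 298.15, 313.15])   # 10, 25, 40 degC in kelvin
K = np.array([1.60, 1.25, 1.01])         # dimensionless equilibrium constants

dG = -R * T * np.log(K)                  # standard free energy change, kJ/mol

# van't Hoff: ln K = dS/R - dH/(R*T); regress ln K against 1/T
fit = linregress(1.0 / T, np.log(K))
dH = -R * fit.slope                      # standard enthalpy change, kJ/mol
dS = R * fit.intercept                   # standard entropy change, kJ/(mol K)
print(dG, dH, dS)
```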
The positive values of ∆G° imply that the adsorption of ammonium on clinoptilolite is not spontaneous. The ∆G° value increased with temperature, indicating that the spontaneity of adsorption is inversely proportional to temperature. Since the value of the standard enthalpy change, ∆H°, is negative, the process is exothermic; therefore, an increase in temperature leads to a lower adsorption of ammonium at equilibrium, and the process is physical in nature, involving weak forces of attraction.
The negative values of ∆S° suggest a decrease in randomness at the solid-solution interface during adsorption.
Conclusions
The adsorption of ammonium on clinoptilolite was evaluated in terms of two- and three-parameter isotherms, adsorption kinetics, and thermodynamic aspects. In general, the equilibrium data fitted three-parameter isotherm models better than two-parameter isotherm models. The D-R and R-P isotherms effectively described the experimental data among the two-parameter and three-parameter isotherm models, respectively. The adsorption energy for the ammonium-clinoptilolite system was found to be approximately 7 kJ/mol, which lies within the range of 1-8 kJ/mol for physisorption processes, indicating that ammonium is adsorbed on clinoptilolite predominantly by physisorption.
The kinetics of adsorption were found to conform to all kinetics studied with a good correlation. The best model that described the kinetic data was the nth-order kinetic model. Reaction order in the nth-order model increased with the initial concentration and was between 1 and 4. In the double exponential model, rapidly adsorbed fraction values decreased with an increase in initial concentration, and vice versa for slowly adsorbed fraction values. Thermodynamic parameters show that adsorption process was non-spontaneous and adsorption rate decreased with an increase in the temperature, thus showing the exothermic nature of the adsorption. | 4,733.4 | 2012-03-01T00:00:00.000 | [
"Engineering",
"Chemistry"
] |
Fire Resistance of Sewage Sludge Ash Blended Cement Pastes
The aim of the present study is to investigate the hydration characteristics and the fire resistance of sewage sludge ash blended cement pastes by the determination of compressive strength, bulk density, and total porosity, in addition to XRD and SEM techniques. Sewage sludge ash modifies the hydration of cement because of its pozzolanic reaction with portlandite, favoring the formation of cross-linked fibrous calcium silicate of low Ca/Si ratio. Hence, it was concluded that the thermal damage of cement pastes after exposure to high treatment temperatures (i.e., crack formation and loss of mechanical properties) was effectively reduced with sewage sludge ash content up to 20 wt% because the presence of cross-linked fibrous calcium silicate strengthens the cement matrix.
Introduction
Sewage water is the collection of wastewater effluents from domestic, hospital, commercial, and industrial establishments. The objective of sewage treatment is to produce treated sewage water and sewage sludge suitable for safe discharge into the environment or reuse [1]. International environmental protection agencies recommend incineration as an attractive disposal method for sewage sludge [2]. Sewage sludge ash has been used as an additive in the production of construction materials [3], mortars [4], and concrete [5]. The exposure of concrete to high temperatures, as in an accidental building fire, leads to an undesirable deterioration of structural quality [6]. Previous studies illustrate that hardened cement paste plays a key role in the high-temperature deterioration process. The main damage mechanisms that explain the deterioration of concrete at elevated temperatures are thermal mismatch, decomposition of hydrates, coarsening of the pore structure, and pore pressure effects [7].
Siliceous aggregates expand at around 575 °C as a result of the α-β quartz inversion, whereas cement paste shrinks above 200 °C [8]. This thermal mismatch (i.e., expansion of the siliceous aggregate and shrinkage of the cement paste matrix) causes considerable tension at the aggregate-matrix interface, leading eventually to interface fracture and cracking [9]. The decomposition of hydrates occurs during the thermal damage of cementitious materials, including the decomposition of ettringite, C-S-H, and carboaluminate hydrates at 180-450 °C and of portlandite at 425-580 °C [10]. The decomposition of portlandite damages the C-S-H. The decomposition of hydrates decreases the stiffness and strength of cementitious materials. Volume reduction of the hydrated phases because of the loss of bound water leads to air void formation and coarsening of the pore structure of cementitious materials, which in turn cause cracking and a considerable loss of the mechanical properties of cementitious materials under high-temperature attack [11,12]. Free water in saturated cement paste expands and evaporates at about the boiling point of water. The low permeability of cement paste prevents water vapor from escaping through closed pores, leading to the buildup of internal pore pressure and the accumulation of tension in the block sample. The low tensile strength of cement paste under these conditions leads to the formation of microcracks and significant mechanical damage [13,14].
The cracking of cementitious materials exposed to high-temperature attack develops during the post-cooling period as a result of the rehydration of dissociated CaO, which is associated with a significant volume increase of about 44% [15]. The enhancement of the thermal stability of concrete and the reduction of post-cooling cracking have been achieved by the addition of pozzolana, which consumes the portlandite (Ca(OH)2) liberated from the hydration of ordinary Portland cement (OPC), forming additional calcium silicate hydrates [16]. The replacement of OPC by silica fume [17], fly ash [16,18], metakaolin [17,19], homra [20], and granulated blast furnace slag [9] was found to improve the physico-mechanical properties, microstructure, and thermal stability of cementitious materials as well as reduce the extent of cracking when exposed to high temperatures. The addition of polypropylene fibers was also found to reduce the damage of self-compacting cement paste because melted polypropylene fibers form a connected pore structure through which heat and water vapor escape [21]. It was indicated that sewage sludge ash has a high pozzolanic activity and improves the workability and compressive strength of concrete [22]. There is a lack of knowledge in the literature about the fire resistance of cementitious materials containing sewage sludge ash. Hence, the present study aims to investigate the influence of sewage sludge ash on the fire resistance of hardened cement pastes.
Materials and Experimental Techniques
Raw materials used in this work were OPC CEM I (42.5) and raw sewage sludge from a wastewater treatment facility. Raw sewage sludge was dried, incinerated in an electrical muffle furnace at a heating rate of 10 °C/min up to 800 °C with a soaking time of 2 hrs, discharged from the muffle furnace, cooled to room temperature in a desiccator, and ground to pass a 90 μm sieve. These parameter values were selected because the incineration of sewage sludge must be optimized at 800 °C to preserve the pozzolanic activity of the resultant ash, as described elsewhere [23]. Sewage sludge ash blended cements were prepared by partial replacement of OPC with 5-20 wt% sewage sludge ash, and their mix compositions are shown in Table 1. Cement pastes were mixed using a water/cement ratio of about 0.25. Freshly prepared cement pastes were moulded in 2 cm³ stainless steel cubic moulds at about 100% relative humidity and demolded after 24 hrs. Hydration characteristics were investigated for cement pastes cured for up to 90 days under tap water. The bulk density was determined according to the Archimedes principle [24]. The compressive strength was measured using a manual compressive strength machine according to the ASTM designation [25]. Stopping of the hydration of cement pastes at 3, 7, 28, and 90 days, as well as free water content determination, were performed using a domestic microwave oven as described elsewhere [26].
The combined water content was determined for stopped samples from the weight loss after ignition in porcelain crucibles at 1000 °C for 1 hr in a muffle furnace. The total porosity of the hardened cement paste was calculated from the values of bulk density and free and total water contents as described elsewhere [27]. Fire resistance was investigated for cement pastes hydrated for 28 days, dried at 105 °C for 24 hrs, and then heat treated from 200 up to 800 °C. The firing program was carried out in a muffle furnace at a rate of 10 °C/min, with 2 hrs at each temperature, and the samples were cooled to room temperature in a desiccator. The compressive strength of heat-treated cement pastes was measured as mentioned above. The bulk density and total porosity were calculated according to ISO 5018-1983 [28]. X-ray fluorescence analysis (XRF) was carried out on finely ground selected samples with a Philips PW1606 X-ray fluorescence spectrometer. X-ray diffraction analysis (XRD) was carried out on finely ground selected samples with a Philips PW 1370 X-ray diffractometer using Ni-filtered CuKα radiation (1.5406 Å). Fourier transform infrared analysis (FTIR) was measured for finely ground selected samples with a Perkin Elmer FTIR System Spectrum X spectrometer in the range 400-4000 cm⁻¹. Scanning electron microscope analysis (SEM) was carried out on chips of selected samples with a JEOL JSM-5400 LG apparatus.

Characterization of Sewage Sludge Ash

Table 2 illustrates the chemical composition of OPC and sewage sludge ash determined by XRF analysis. The sum of the SiO2, Al2O3, and Fe2O3 contents of sewage sludge ash is in accordance with the requirements of the ASTM designation for pozzolana [29]. Sewage sludge ash has high Al2O3, Fe2O3, and CaO contents due to the use of alum, ferric salts, and lime in the wastewater treatment. The loss on ignition (LOI) is possibly due to incomplete incineration and adsorbed water. Table 3 illustrates the phase composition of OPC (wt%) according to Bogue's calculations [10]. Figure 1 illustrates the XRD pattern of sewage sludge ash. Sewage sludge ash contains quartz as the main phase in addition to anhydrite, albite, and hematite minerals. Figure 2 illustrates the FTIR spectra of sewage sludge ash. The absorption bands of silica appear at 1101, 796, and 467 cm⁻¹, corresponding to the asymmetric stretching vibration of Si-O-Si, the symmetric stretching vibration of Si-O-Si, and the bending vibration of O-Si-O, respectively [30].
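The Bogue phase estimate cited above can be illustrated with the standard Bogue equations; the sketch below is generic, uses a hypothetical oxide analysis, and may differ from the exact variant applied in the study.

```python
def bogue(cao, sio2, al2o3, fe2o3, so3):
    """Standard Bogue estimate of OPC phase composition (wt%),
    assuming Al2O3/Fe2O3 >= 0.64 so that both C3A and C4AF form."""
    c3s = 4.071 * cao - 7.600 * sio2 - 6.718 * al2o3 - 1.430 * fe2o3 - 2.852 * so3
    c2s = 2.867 * sio2 - 0.7544 * c3s
    c3a = 2.650 * al2o3 - 1.692 * fe2o3
    c4af = 3.043 * fe2o3
    return {"C3S": c3s, "C2S": c2s, "C3A": c3a, "C4AF": c4af}

# Hypothetical oxide analysis (wt%) of an ordinary Portland cement
print(bogue(cao=63.0, sio2=21.0, al2o3=5.5, fe2o3=3.5, so3=2.5))
```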
Hydration Characteristics of Blended Cement Pastes.
Figures 3 and 4 illustrate the compressive strength and combined water content of sewage sludge ash blended cement pastes, respectively. The compressive strength and the combined water content decrease with sewage sludge ash content because of the replacement of OPC with sewage sludge ash, which has no cementitious properties. This is mainly due to the crystalline structure of the sewage sludge ash, which has only slight pozzolanic activity, and to the decrease of the OPC portion. The decrease of OPC content decreases the formation of C-S-H, which provides the main binding properties, in the hydrated pastes. The rate of gain in compressive strength and combined water content of sewage sludge ash blended cement pastes increases markedly at later ages of hydration because sewage sludge ash acts as a filler at early ages. The accumulation of portlandite liberated from the hydration process in the pore solution of the cement paste at later ages activates the slow pozzolanic reaction of sewage sludge ash with portlandite, forming additional hydration products.
Figure 5 illustrates the bulk density and total porosity of sewage sludge ash blended cement pastes. The bulk density decreases, whereas the total porosity increases, with sewage sludge ash content due to the decrease of OPC as a result of its replacement with sewage sludge ash. Figure 6 illustrates the XRD patterns of OPC and sewage sludge ash blended cement pastes hydrated for 28 days. Hydrated OPC paste contains a mixture of the unhydrated cement clinker phases β-C2S and C3S in addition to portlandite liberated from the hydration process. Calcite appears in hydrated cement pastes because of the partial carbonation of portlandite. Wollastonite appears in sewage sludge ash blended cement pastes. The content of portlandite is reduced in sewage sludge ash blended cement pastes because of the pozzolanic activity and the dilution effect of sewage sludge ash that replaces OPC. Blending cement with sewage sludge ash modifies the hydration reaction mechanism by favoring the formation of calcium silicate hydrates with low Ca/Si ratio because the addition of sewage sludge ash lowers the Ca/Si ratio in the pore solution of the hydrating cement paste [31]. Figure 7 illustrates the SEM micrograph of sewage sludge ash blended cement paste (S4) hydrated for 28 days. Sewage sludge ash reacts with portlandite liberated from cement hydration because of its pozzolanic activity, forming cross-linked fibrous calcium silicate.
Fire Resistance of Blended Cement Pastes
Figure 10 illustrates the XRD patterns of sewage sludge ash blended cement paste (S4) fired at different temperatures of 200-800 °C. Portlandite partially decomposes after firing up to 600 °C and decomposes completely at 800 °C, whereas calcite decomposes at 800 °C. The hump in the range 25-35° 2θ due to C-S-H still appears after firing at 600 °C and completely disappears at 800 °C. The content of the anhydrous calcium silicate minerals β-C2S and C3S increases owing to the decomposition of C-S-H. This proves that C-S-H decomposes over a wide temperature range because of its amorphous nature. The decomposition of C-S-H after 600 °C leads to the loss of the mechanical properties of cement pastes. Figure 11 illustrates the SEM micrographs of sewage sludge ash blended cement paste (S4) fired at 200-800 °C as well as OPC paste fired at 600 °C. Amorphous C-S-H still appears as a dense cement matrix in hardened cement pastes fired at 200 °C. Partial decomposition of portlandite starts at 400 °C, accompanied by coarsening of the pore structure of the cement paste and the formation of microcracks around portlandite crystals. Firing OPC paste at 600 °C leads to decomposition of the cementitious materials as well as the formation of a friable cement matrix. The thermal damage of sewage sludge ash blended cement paste at 600 °C was significantly reduced due to the presence of crosslinked fibrous calcium silicate.
Conclusions
(1) Sewage sludge ash contains quartz, anhydrite, gypsum, albite, and hematite minerals, and its content of SiO2, Al2O3, and Fe2O3 is in accordance with the requirements of the ASTM designation for pozzolana.
(2) Sewage sludge ash slightly modifies the hydration reaction mechanism as a result of its low pozzolanic reactivity with portlandite liberated from cement hydration, favoring the formation of crosslinked fibrous calcium silicate of low Ca/Si ratio.
(3) Decomposition of portlandite and C-S-H at 600 °C leads to the buildup of internal pore pressure, coarsening of the pore structure of the cement paste, the formation of microcracks, and the loss of the mechanical properties of the cement paste.
(4) The thermal damage of cement pastes exposed to high temperatures (i.e., the buildup of internal pore pressure, crack formation, and loss of mechanical properties) was effectively reduced with sewage sludge ash.
Figure 1: The XRD pattern of sewage sludge ash.
Figure 3: The compressive strength of sewage sludge ash blended cement pastes.
Figure 4: The combined water content of sewage sludge ash blended cement pastes.
Figure 5: The bulk density and the total porosity of sewage sludge ash blended cement pastes.
Figure 6: The XRD patterns of OPC and sewage sludge ash blended cement pastes hydrated for 28 days.
Figure 7: The SEM micrograph of sewage sludge ash blended cement paste (S4) hydrated for 28 days.
Figure 8: The compressive strength of sewage sludge ash blended cement pastes fired at 200-800 °C.
Figure 9: The bulk density and total porosity of sewage sludge ash blended cement pastes fired at 200-800 °C.
Table 1: Mix composition of sewage sludge ash blended cements.
Table 2: Chemical composition of OPC and sewage sludge ash, wt%. | 3,540.6 | 2013-02-02T00:00:00.000 | [
"Materials Science",
"Environmental Science"
] |
Determination of the Border between the Transmembrane and Cytoplasmic Domains of Human Integrin Subunits*
In this study we have determined the position of the C-terminal end of the transmembrane domains of human integrin subunits (α2, α5, β1, β2) in microsomal membranes using the glycosylation mapping technique. In contrast to the common view, the transmembrane helices were found to extend roughly to Phe1129 in α2, to Phe1026 in α5, to Ile757 in β1, and to His728 in β2. The α-carbon of the conserved lysine present near the C-terminal end of the transmembrane helix (Lys1125 in α2, Lys1022 in α5, Lys752 in β1, and Lys724 in β2) is buried in the plasma membrane, and the charged amino group most likely reaches into the polar head-group region of the lipid bilayer. A possible role for the conserved lysine in integrin function is discussed.
Integrins are cell adhesive receptors composed of non-covalently linked α and β subunits. Each subunit consists of a large extracellular domain, a transmembrane helix (TMH), and a short cytoplasmic tail of usually less than 60 amino acids. Cell attachment to the extracellular matrix via integrins is necessary for normal cell growth and differentiation. Integrins are also involved in cellular processes that require migration of cells, e.g. angiogenesis and extravasation of lymphocytes. Upon ligand binding, clustering of integrins leads to formation of focal contacts containing signaling complexes (1,2).
The short cytoplasmic domains of integrin β subunits have multiple functions: to establish contact with the actin cytoskeleton, to start signaling cascades, and to regulate the conformation of the extracellular domain of the receptor and thereby the ability to bind extracellular ligands (3)(4)(5)(6). All these functions depend on interactions with cytoplasmic proteins, some of which mediate outside-in signaling and some of which regulate extracellular ligand binding affinity (so-called inside-out signaling) (7). The α subunits of integrins appear to have a regulatory role over the β subunits, possibly by hindering β subunits from binding to cytosolic proteins in the absence of bound extracellular ligand (8,9). In addition, specific signals are also generated by the cytoplasmic tails of α subunits (10-12). Thus, the cytoplasmic domains are indispensable for proper functioning of integrins, as demonstrated by many studies.
However, for both integrin subunits, the border between the cytoplasmic domain and the C-terminal end of the transmembrane domain is unclear. The cytoplasmic domain of integrins is usually assumed to start at the first charged residue after the continuous stretch of 23 hydrophobic amino acids. In a few cases the cytoplasmic domain has instead been suggested to be 4 and 5 amino acids shorter for the α and β subunits, respectively (see Fig. 1) (13)(14)(15). Interestingly, all presently known α and β subunits, except for β4, exhibit a similar pattern in this region: a single conserved, positively charged amino acid (Lys/Arg), a short stretch of hydrophobic amino acids, and a highly polar sequence unlikely to be buried in the plasma membrane. The regions between and C-terminally adjacent to the predicted ends of the TMHs of α and β subunits (Fig. 1) are involved in affinity regulation and in dimerization of integrins (4, 16-20).
In this study, we have used a glycosylation mapping technique to determine the position of the C-terminal end of the TMHs of human integrin α subunits (α2, α5) and β subunits (β1, β2) in microsomal membranes. Our results establish that the α-carbon of the conserved lysine and the following short hydrophobic stretch of all tested subunits are buried in the membrane. Possible functional implications of this finding are discussed.
Expression in Vitro-Synthesis of mRNA from pGEM1 by SP6 RNA polymerase and translation in reticulocyte lysate in the presence and absence of dog pancreas microsomes were performed as described (25). Proteins were analyzed by SDS-polyacrylamide gel electrophoresis, and gels were quantitated on a Fuji BAS1000 phosphoimager using the MacBAS 2.31 software. The extent of glycosylation of a given construct was calculated as the quotient of the intensity of the glycosylated band divided by the summed intensities of the glycosylated and nonglycosylated bands. In general, the glycosylation efficiency varies by no more than ±5% between different experiments, and the precision in the minimal glycosylation distance (MGD) determinations is ±0.2 residues.
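The quantitation described above reduces to a simple band-intensity ratio. A minimal sketch of that calculation is given below; the variable names and example intensities are illustrative assumptions, not values from the study.

```python
def glycosylation_efficiency(glycosylated: float, nonglycosylated: float) -> float:
    """Fraction glycosylated = glycosylated / (glycosylated + nonglycosylated)."""
    total = glycosylated + nonglycosylated
    if total == 0:
        raise ValueError("Band intensities sum to zero")
    return glycosylated / total

# Hypothetical phosphoimager band intensities (arbitrary units), three replicates.
replicates = [(5400.0, 4600.0), (5150.0, 4850.0), (5600.0, 4400.0)]
effs = [glycosylation_efficiency(g, n) for g, n in replicates]
mean_eff = sum(effs) / len(effs)
spread = max(effs) - min(effs)
print(f"Mean glycosylation efficiency: {mean_eff:.1%} (spread {spread:.1%})")
```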
RESULTS
Glycosylation Mapping-The glycosylation mapping technique has previously been described in detail (21). Briefly, the lumenally oriented active site of the endoplasmic reticulum enzyme oligosaccharyltransferase was used as a fixed point of reference against which the position of a TMH in the endoplasmic reticulum membrane could be measured (Fig. 2A). Based on the variation in glycosylation efficiency for a set of constructs differing only in the position of a glycosylation acceptor site, it was possible to define a MGD, i.e. the number of residues in the nascent chain needed to bridge the distance between the end of the TMH and the oligosaccharyltransferase active site. By calibration of the MGD scale against TMHs whose position in the lipid bilayer had been determined by direct techniques such as x-ray crystallography, NMR, or fluorescence quenching measurements, the point where a TMH exited from the lipid environment could be determined to within less than ±1 residue.
Previously, we used this technique to study how the position of model TMHs in the endoplasmic reticulum membrane changed in response to single mutations such as the introduction of a proline or a charged residue (21,22). We had also calibrated our measurements against two different TMHs of known position in the membrane, the TMH of the H-subunit from the photosynthetic reaction center and the TMH from the phage M13 major coat protein, as well as against model poly-Leu TMHs of varying lengths (21,22). These studies showed that the MGD measured from the first residue after the hydrophobic region of a typical TMH is 9.5-10.5 residues.
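One way to read an MGD off such a titration of acceptor-site positions, not necessarily the exact procedure used in the original work, is to interpolate the position at which glycosylation efficiency crosses half of its plateau value. The sketch below uses hypothetical efficiencies purely for illustration.

```python
import numpy as np

# Hypothetical glycosylation efficiencies (%) for constructs whose acceptor
# Asn sits an increasing number of residues downstream of the TMH end.
distance = np.array([7, 8, 9, 10, 11, 12])      # residues to acceptor site
efficiency = np.array([4, 6, 30, 70, 85, 88])   # % glycosylated

half_max = efficiency.max() / 2.0
# Linear interpolation of the distance at which efficiency crosses half-maximum.
mgd = np.interp(half_max, efficiency, distance)
print(f"Estimated MGD ≈ {mgd:.1f} residues")
```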
Determination of the Membrane-embedded Parts of Integrin Subunits-For the studies of the TMH of the α subunits, a segment encoding residues 989-1028 was amplified from the α5 cDNA and a segment encoding residues 1096-1131 from the α2 cDNA, and these were cloned into a series of previously constructed vectors based on the well characterized protein leader peptidase (Lep) (Fig. 2A). The vectors differ only in the position of a single Asn-Ser-Thr glycosylation acceptor site downstream of the TMH and thus allow facile determination of the C-terminal MGD for any TMH.
The results of in vitro transcription/translation of three Lep-α2 and Lep-α5 constructs in the absence and presence of dog pancreas microsomes are shown in Fig. 2B, and the MGD determination is shown in Fig. 2C. Essentially identical results were obtained for both tested α subunits. The glycosylation efficiency drops from 54% for the Asn 88 construct to 6% for the Asn 87 construct for the α2 subunit and from 85% for the Asn 88 construct to 6% for the Asn 87 construct for the α5 subunit. Since the expected MGD value for a TMH longer than ~23 residues is ~10 residues (21), this allows the C terminus of the TMHs of α2 and α5 to be positioned relative to the reference TMHs (Fig. 2D). Even with allowance for a rather wide margin of error, this clearly places the α-carbon of Lys 1125 in α2 and Lys 1022 in α5 more than one helical turn below the membrane-water interface, similar to the position of Lys 40 in the M13 coat protein (26,27) and to single lysine mutations in a poly-Leu TMH (22).
Similarly, for the studies of the β subunits, segments encoding residues 723-761 from the β1 and 695-933 from the β2 cDNAs were amplified and cloned into the Lep vectors. Very similar results were obtained (Fig. 2, B and C), placing the α-carbon of Lys 752 in β1 and Lys 724 in β2 well below the membrane-water interface (Fig. 2D).
We conclude that the membrane-embedded parts of the TMHs of the α subunits extend roughly to Phe 1129 in α2 and to Phe 1026 in α5. For the β subunits, the TMH of β1 extends to Ile 757, and the TMH of β2 to His 728. As proposed in the so-called snorkel model (28-30), the long aliphatic part of the side chain of the membrane-embedded Lys in integrin α and β subunits most likely extends toward the membrane surface, placing the positively charged terminal amino group in the polar head-group region of the lipid bilayer.
Mutations Near the C-terminal End of the Transmembrane Segments Cause a Shift in Membrane Location-To confirm our interpretation of the glycosylation mapping data, we constructed two derivatives of the β1 TMH: one in which the entire hydrophobic segment between Lys 752 and His 758 was replaced by Asn residues (β1Δ5N), and one in which Leu 753 was replaced by Lys (β1ΔL-K) (Fig. 3A). Since both Asn 82 constructs were fully glycosylated (data not shown), the four residues HDRR near the lipid-water interface were deleted to facilitate the MGD determination. For both β1Δ5N and β1ΔL-K, the MGD measurement indicated a substantial shift in the position of the TMH in the membrane (Fig. 3B). Although we have only determined the MGD to within ±1 residue in this case, it is nevertheless clear that Lys 752 is positioned much closer to the membrane-water interface in these constructs (Fig. 2D). This demonstrates that the hydrophobic segment between Lys 752 and His 758 in wild-type β1 is embedded in the membrane and that it is pushed out of the membrane when its hydrophobicity is reduced.
DISCUSSION
In this study the glycosylation mapping technique has been used to determine the position of the TMHs of two human integrin α and two β subunits in the microsomal membrane. The TMHs of the α subunits were found to extend roughly to Phe 1129 in α2 and to Phe 1026 in α5, and the TMHs of the β subunits were found to extend to Ile 757 in β1 and to His 728 in β2. Interestingly, the α-carbon of the single positively charged amino acid present near the C-terminal end of the TMHs (Lys 1125 in α2, Lys 1022 in α5, Lys 752 in β1, and Lys 724 in β2) was buried in the membrane in all cases. Thus, the same result was obtained irrespective of whether tryptophan or tyrosine is N-terminally adjacent to the conserved sequence KXGFFKR of the α subunits. Reductions in the hydrophobicity of the short hydrophobic stretch downstream of Lys 752 in the β1 TMH caused a shift in the position of Lys 752 relative to the membrane, confirming that the hydrophobic segment between Lys 752 and His 758 is indeed buried in the membrane in the wild-type protein. These results were obtained using microsomal membranes; however, the integrin TMHs can be accommodated in the same way in the plasma membrane, which is actually thicker than microsomal membranes.
(Legend to Fig. 2D) Previous measurements (21,22) have shown that MGD = 10.1 for RC-H (counting from the indicated Glu residue), MGD = 10.7 for M13 coat (counting from the indicated Phe residue), and MGD = 9.7 for 23L (counting from the indicated Gln residue). For the integrin subunit α2, α5, β1, and β2 TMHs, the location relative to the membrane-water interface has been estimated based on the assumption that MGD = 10 residues (arrows). Since the MGD has only been determined to within ±1 residue for the β1Δ5N and β1ΔL-K constructs (see Fig. 3), a wider margin of error is indicated in these two cases (hatched). The positions of Glu 82, the first Lep-derived residue, and Gln 85 are indicated. Lys 1125 in α2, Lys 1022 in α5, Lys 752 in β1, Lys 724 in β2, and Lys 40 in the M13 coat protein are highlighted in bold. Note that the residues HDRR near the membrane-water interface in β1 have been deleted in the β1Δ5N and β1ΔL-K constructs. Residues in the TMH are shown in uppercase.
The highly conserved motif KXGFFKR at the C-terminal end of the α subunit TMH is known to be critically important for integrin function, although the phenotypic effects of modifications in this region vary between different integrins. Deletion of the motif from αIIb results in a constitutive high affinity state of the αIIbβ3 integrin. Similarly, deletion of LLITIHD in β3, the region opposing the KXGFFKR motif in αIIb, also leads to a high affinity state of the receptor. It has been suggested that the amino acids Arg and Asp in these motifs form a salt bridge and that breaking the bridge locks the integrin in a conformation corresponding to a high affinity state (16,17). This idea is further supported by molecular modeling of the integrin subunits αIIb and β3 showing that the LLITIHD sequence in β3 and the KXGFFKR motif in αIIb are likely to be associated (31). Similarly, deletion of VGFFK in αL increases the affinity of the αLβ2 integrin, but in this case the mutation also interferes with post-ligand binding events dependent on the cytoskeleton (19,32). In addition, the KXGFFKR motif has been shown to promote the assembly and/or stabilization of the αLβ2 heterodimer (18,19) as well as of several β1 integrins (4,20).
These regions have been demonstrated to interact with a variety of different proteins. The intracellular calcium-binding protein calreticulin was reported to bind the sequence KXGFFKR in α subunits, and this interaction may be required for integrin activation (33,34). A recent report by Coppolino and Dedhar shows that the interaction between α subunits and calreticulin is ligand-specific and transient, occurring shortly after ligand binding (35). The synthetic peptide KLLMIIHDRREFA derived from the β1 sequence was found to interact with focal adhesion kinase in vitro (36). Focal adhesion kinase is known to be involved in integrin-mediated signaling (37) but appears not to be required for integrin activation (38). Recently, the proteins Rack1 and skelemin were shown to bind to the membrane-proximal region of β subunits in yeast two-hybrid assays (39,40). Rack1 binding to β1 and β2 integrins requires activation of protein kinase C, which is one of the early events after ligand binding of integrins (41,42), and Rack1 may recruit protein kinase C to adhesion complexes through its ability to bind integrins. However, the functional role of Rack1 in integrin signaling remains to be elucidated. Muscle skelemin was found to bind the β1 and β3 subunits but not β2. In Chinese hamster ovary cells the skelemin-like protein colocalized with integrins at early stages of cell spreading (40).
Our results show that the KLGFF sequence in α2 and α5, KLLMII in β1, and KALIH in β2 are buried in the membrane in the absence of interacting proteins. Why, then, is a transmembrane lysine conserved in all known integrin subunits (17 α and 8 β subunits)? One possibility is presented in Fig. 4A. This model is based on the studies discussed above, demonstrating that the regions between arrows 1 and 2 in Fig. 1 are involved in interactions with intracellular proteins and, thus, are likely to be exposed to the cytoplasm. The charged residue may facilitate a transfer of this region out of the hydrophobic environment as a result of binding to intracellular proteins. Such a movement could trigger the conformational changes associated with integrin activation and/or ligand binding.
(Legend to Fig. 4) A, activation of integrins is suggested to involve a movement of the "mobile region" (black areas in the TMHs) out of the plasma membrane. In the inactive state this segment of both subunits may associate with each other. In this situation the extracellular domain is in a conformation that is incapable of ligand binding, and the cytoplasmic tail of the α subunit masks the cytoplasmic tail of the β subunit. In a fully active integrin the mobile regions are exposed to interacting proteins in the cytoplasm, the transmembrane domains are 4-5 amino acids shorter, and the extracellular domain is able to bind to a ligand. It should be noted that this represents one of several similar possible scenarios; integrin activation may be obtained by moving the TMH of one or both of the integrin subunits in or out of the membrane. B, the positively charged residues of the integrin TMHs are indicated to participate in interactions with other membrane proteins (striped). Gray areas in the plasma membrane indicate polar head-group regions.
The possibility that the position of the conserved lysine and the following stretch of hydrophobic amino acids in the plasma membrane may contribute to the transmembrane signaling of integrins was first discussed by Williams et al. (43). Our study provides supporting evidence for this view. We suggest that this region in one or both of the integrin subunits is positioned differently relative to the plasma membrane depending on the affinity state of the receptor. For example, in the inactive conformation, the α-carbons of the lysines could be buried in the plasma membrane. In this situation the TMHs would probably be tilted in the membrane due to their length, and the α and β subunits could associate close to the cytoplasm, e.g. via the Asp-Arg bridge. Upon integrin activation, the TMHs would be shortened by 4-5 amino acids, possibly induced by binding of cytosolic proteins to the cytoplasmic tails of the integrin subunits. The tension exerted by the actin cytoskeleton on integrin clusters may also have a role in preventing backsliding of the mobile region (Fig. 4A).
Only a few studies have directly addressed the role of the conserved TMH Lys (e.g. Lys 752 in the human β1 subunit) in integrin function. Mutation of the Lys in α1 (Lys to Asp) did not impair α1β1-mediated adhesion to collagen IV but resulted in localization of α1β1 to focal contacts also on a fibronectin substrate (44). The phenomenon of ligand-independent clustering of integrins in focal contacts has been suggested to reflect a disturbed interaction between the α- and β-cytoplasmic domains, resulting in unmasking of binding sites in the β-tail for components of existing focal contacts (8,44,45). A mutated chicken β1 subunit (Lys to Leu) was also found to localize to focal contacts when expressed in mouse NIH 3T3 cells (46), but in this case it is unclear whether the mutated receptor localized to focal contacts in a ligand-dependent or -independent manner. It is also unknown whether this mutation affects the conformation of the extracellular domain.
Another possible role for the positively charged residues in the TMHs of integrins could be to promote interactions with other membrane proteins (Fig. 4B). This would be analogous to the T-cell receptor complex, which depends on charged amino acids in the TMHs for assembly and surface expression on T-cells (47). The T-cell receptor complex is stabilized in the endoplasmic reticulum by salt bridges between negatively charged residues in TMH of CD3 and positively charged residues in TMHs of T-cell receptor chains. In this context it is interesting that several tetraspanin proteins implicated in integrin function, e.g. CD9 and CD81, contain negatively charged glutamic acids in their third TMH (48 -50). Future experiments will be designed to test the validity of the two, not mutually exclusive models for the role of charged residues in integrin TMHs. | 4,383.4 | 1999-12-24T00:00:00.000 | [
"Biology"
] |
Extracellular Vesicles Derived from Allergen Immunotherapy-Treated Mice Suppressed IL-5 Production from Group 2 Innate Lymphoid Cells
Allergen immunotherapy (AIT), such as subcutaneous immunotherapy (SCIT), is a treatment targeting the causes of allergic diseases. The roles of extracellular vesicles (EVs), bilayer lipid membrane blebs released from all types of cells, in AIT have not been clarified. To examine the roles of EVs in SCIT, it was analyzed whether (1) EVs are phenotypically changed by treatment with SCIT, and (2) EVs derived from SCIT treatment suppress the function of group 2 innate lymphoid cells (ILC2s), which are major cells contributing to type 2 allergic inflammation. As a result, (1) expression of CD9, a canonical EV marker, was highly up-regulated by SCIT in a murine model of asthma; and (2) IL-5 production from ILC2s in vitro was significantly decreased by the addition of serum EVs derived from SCIT-treated but not non-SCIT-treated mice. In conclusion, it was indicated that EVs were transformed by SCIT, changing to a suppressive phenotype of type 2 allergic inflammation.
Introduction
The number of patients with allergies such as rhinitis has gradually increased worldwide [1]. Treatment for allergic diseases utilizes two well-accepted approaches: conventional pharmacotherapy and allergen-specific immunotherapy (AIT). In conventional pharmacotherapy, anti-leukotrienes [2], anti-histamines [3], inhaled and systemic corticosteroids [4], and monoclonal antibodies against type 2 cytokines [5,6] can control allergic symptoms. However, it is well-known that allergic symptoms relapse after cessation of pharmacotherapy. Therefore, the development of treatments against the causes of allergic diseases is required.
AIT has been recognized as the only treatment that can modify the natural course of allergy by inducing immune tolerance to allergens [7]. Since the initial publication by Noon et al. [8], the clinical effectiveness of AIT against allergic diseases has been demonstrated in numerous studies [9][10][11][12][13]. AIT is mainly conducted in two forms: subcutaneous immunotherapy (SCIT) and sublingual immunotherapy [14]. However, AIT remains underused because: (1) long-term treatment is required to acquire sustainable remission of allergic symptoms [15,16], (2) some patients are non-responders [17,18], and (3) some patients have rare anaphylactic reactions [19,20]. Therefore, a deeper understanding of the mechanisms is required to develop more-effective and safer AIT.
It has been generally recognized that type 2 allergic inflammation is orchestrated by type 2 helper T (Th2) cells and group 2 innate lymphoid cells (ILC2s), both of which produce type 2 cytokines such as interleukin (IL)-5, IL-13, and IL-4 [21]. Recent research regarding type 2 allergic inflammation has shed light on ILC2s, because ILC2s produce a large amount of IL-5 and IL-13 in response to epithelial-derived cytokines, IL-33, and thymic stromal lymphopoietin (TSLP) [22,23]. Our previous study using a murine model of asthma demonstrated that ILC2s produced more IL-5 in vitro than CD4 + T cells, mainly consisting of Th2 cells [24]. The produced IL-5 and IL-13 from ILC2s induce eosinophilic infiltration into allergic inflamed tissues and the development of airway hyperresponsiveness (AHR) [22,23]. Thus, ILC2 regulation is crucial for alleviating allergic symptoms. In recent years, several groups [25,26] reported that AIT down-regulated IL-5 and IL-13 production from ILC2s and suppressed the proliferation of ILC2s in mice and humans. However, the mechanisms underlying the effects of AIT on the functions of ILC2s remain to be clarified.
Extracellular vesicles (EVs), bilayer lipid membrane blebs with a diameter of approximately 30-100 nm, are released from all types of cells [27]. EVs contain host cell-derived proteins, such as tetraspanin proteins CD9 and CD63, adhesion molecules, and immune regulator molecules [28,29]. Tetraspanin proteins such as CD9 and CD63 have usually been recognized as EV membrane markers, because they have crucial roles in EV-associated events such as adhesion, invasion, and membrane fusion to recipient cells [30,31]. Besides proteins, EVs are abundantly loaded with host cell-derived RNAs, such as messenger RNAs and microRNAs (miRNAs) [32]. The tetraspanin proteins work when EVs are captured by recipient cells, inducing functional changes in recipient cells. It has been reported that EVs play roles in the pathogenesis of allergies: Paredes et al. [33] reported that EVs in bronchoalveolar lavage fluid (BALF) derived from patients with mild allergic asthma to birch pollen promoted leukotriene C4 and IL-8 releases in human bronchial epithelial cells. Canas et al. [34] also demonstrated that eosinophil-derived EVs from asthmatic patients contributed to airway remodeling by inducing proliferation of airway muscle cells. Taken together, AIT induced dynamic immunological changes (reviewed in [35,36]), such as decreases in inflammatory cells and induction of anti-inflammatory cells, whereas the roles of EVs in AIT remain to be clarified.
We previously established an ovalbumin (OVA)-induced asthmatic model of mice [24,37], in which SCIT exerted significant suppression of development of AHR and airway remodeling. Moreover, SCIT treatment significantly suppressed ILC2 proliferation in the lung (unpublished data). Therefore, this murine model can be utilized for analyzing the mechanisms of SCIT. In this study, in order to clarify the roles of EVs in AIT using the murine model, (1) SCIT-induced phenotype changes of EVs and (2) suppressive effects of the EVs on IL-5 production from ILC2s were analyzed.
The graphical abstract of this study is described in Figure 1.
Effects of SCIT on the Development of AHR and IL-5 Production in BALF
To clarify the effects of SCIT on respiratory function, AHR to methacholine was measured by the forced oscillation technique using FlexiVent. AHR to methacholine was developed in OVA-challenged asthmatic mice when respiratory compliance (Crs), which reflects the flexibility of the lung, was assessed as a parameter of airway function ( Figure 2A). As shown in Figure 2A, the Crs of OVA-challenged mice declined even in response to lower concentrations of methacholine than "only sensitization" mice, and the magnitude of the decline at relatively high concentrations of methacholine was considerable compared with that of the "only sensitization" group. On the other hand, the Crs decline was significantly improved by treatment with SCIT.
(Figure 2 legend) Twenty-four hours after the 4th OVA challenge (day 41), the development of airway hyperresponsiveness (AHR) was analyzed. Respiratory compliance (Crs) is shown as the maximum value after each methacholine challenge. Following analysis of AHR, bronchoalveolar lavage was conducted. IL-5 concentration in BALF was quantified by ELISA. Each point and column represent the mean ± SEM of 4 or 5 animals. **: p < 0.01 and ***: p < 0.001 versus only sensitization. †: p < 0.05 and ††: p < 0.01 versus sensitization + challenges.
Figure 2B represents the effect of SCIT on IL-5 production in the lung of OVA-challenged asthmatic mice. The amount of IL-5 in BALF was significantly increased in OVA-challenged asthmatic mice. Augmented IL-5 production was markedly decreased in SCIT-treated asthmatic mice (Figure 2B).
Figure 3 represents particle sizes and CD9 and CD63 expression on EVs derived from sera of non-SCIT-treated and SCIT-treated asthmatic mice. Particle sizes of EVs did not differ between the two groups (Figure 3A). On the other hand, the expression level of CD9 on EVs derived from SCIT-treated asthmatic mice was markedly higher than that from non-SCIT-treated asthmatic mice (Figure 3B). No difference in CD63 expression level was observed (Figure 3C).
(Figure 3 legend) Twenty-four hours after the 4th challenge, sera were collected. EVs were obtained from sera by ExoQuick. The particle sizes of EVs were detected using dynamic light scattering. The expression levels of CD9 and CD63 were analyzed by flow cytometry.
Effects of EVs on IL-5 Production from ILC2s
As shown in Figure 4, lung ILC2s derived from non-SCIT-treated asthmatic mice abundantly produced IL-5 in response to combined stimulation with IL-33 and TSLP in vitro. IL-5 production from ILC2s was not suppressed by EVs derived from non-SCIT-treated asthmatic mice. On the other hand, EVs derived from SCIT-treated asthmatic mice significantly ameliorated IL-5 production from ILC2s.
Discussion
Although there has been accumulating evidence that EVs are involved in the pathogenesis of allergic diseases, the roles of EVs in AIT have not been elucidated. In this study, we hypothesized that the phenotype of EVs was transformed by treatment with SCIT, and that transformed EVs suppress the functions of effector cells, leading to amelioration of type 2 allergic inflammation. We demonstrated that the development of AHR and IL-5 production in the lungs were significantly suppressed by SCIT in vivo ( Figure 2). Moreover, EVs derived from serum of SCIT-treated asthmatic mice significantly suppressed IL-5 production from lung ILC2s (Figure 4). To the best of our knowledge, this is the first report to demonstrate that EVs are involved in the suppression of effector cells in the mechanisms of AIT.
The expression level of CD9, but not CD63, on serum EVs was dramatically upregulated by SCIT. CD9 is a canonical EV marker, which participates in the events when EVs are captured and incorporated into recipient cells [38,39]. CD9 is also involved in the enhancement and maintenance of IL-10 secretion in murine and human antigen-presenting cells [40,41]. Indeed, Suzuki et al. [42] demonstrated that CD9 suppressed lipopolysaccharide-induced lung inflammation by inducing IL-10-producing macrophages in mice. IL-10 suppresses type 2 cytokine production and the proliferation of ILC2s, which constantly express IL-10 receptor α subunit [43][44][45]. Therefore, the increase in CD9 on the EVs may suppress the activation of ILC2s through the induction of IL-10-producing macrophages.
Although the source of CD9-highly-expressing EVs is unclear, one possible source is regulatory B (Breg) cells. Several reports [46,47] demonstrated that the number of Breg cells was significantly increased in the peripheral blood of patients with allergic diseases by SCIT. CD9 is highly expressed in Breg cells in mice and humans [48][49][50]. It has been reported that highly expressed proteins in host cells tended to be preferentially loaded into EVs [51][52][53]. Therefore, SCIT-induced Breg cells may produce CD9-highly-expressing EVs. Moreover, Kang et al. [54] demonstrated that Breg cell-derived EVs suppressed neuroinflammation and autoimmune uveitis by inducing IL-10-and IL-35-secreting regulatory T (Treg) cells in mice. Therefore, the induction of Breg cells by SCIT may be associated with increases in the expression of CD9 on EVs.
SCIT induces not only Breg cells but also Treg cells [55,56]. Our group previously demonstrated that Treg cells were significantly increased in the lungs of an SCIT-treated asthmatic murine model [37], and peripheral blood of patients with Japanese cedar pollinosis [46]. Treg cells can also release EVs, which are captured by effector T cells [57,58] and dendritic cells [59], followed by suppression of activation. We also demonstrated that IL-5 production from murine ILC2s stimulated with IL-33 and TSLP was significantly down-regulated in the presence of Treg-derived EVs in a concentration-dependent manner (unpublished data). Treg cell-derived EVs expressed an ectoenzyme CD73 in their extracellular membrane [51], and CD73 converted adenosine triphosphate into adenosine in inflammatory conditions [60]. Two groups [61,62] reported that adenosine contributed to the suppression of IL-5 and IL-13 production from ILC2s via binding to adenosine A 2A or A 2B receptors. Therefore, EVs derived from SCIT-induced Treg cells may also exert suppression of IL-5 and IL-13 production from lung ILC2s.
Not only ILC2 but also Th2 cells produce IL-5, leading to the development of AHR [21]. Our group recently clarified that the numbers of both ILC2 and Th2 cells in the lungs of OVA-induced asthmatic mice were significantly decreased by SCIT (unpublished data). Therefore, SCIT could alleviate the development of AHR via decreasing the numbers of ILC2 and Th2 cells in lungs.
IL-5 production from ILC2s tended to be increased in the presence of low doses of EVs regardless of SCIT, and inversely decreased in the presence of the high dose of EVs derived from SCIT-treated asthmatic mice. CD9 on EV surfaces is a pleiotropic molecule that is associated with not only immune regulation [40][41][42] but also EV adhesion and invasion to recipient cells [30,31]. Although the EV concentration in serum was unclear, SCIT may have raised the EV level in the circulation to a certain concentration, at which the EVs down-regulated ILC2 functions.
EVs contain not only proteins but also miRNAs [32]. miRNAs interfere with the transcription of various genes in recipient cells, followed by transformation of the functions. In recent years, it has been reported that the miRNA expression patterns in blood and sputum were dramatically changed in allergic patients after AIT, including SCIT and insect venom immunotherapy (VIT) [63][64][65][66]. Specjalski et al. [65] demonstrated that the expression levels of 11 miRNAs, including let-7d and miR-143, were significantly up-regulated during wasp VIT. Let-7d affected human T cells to inhibit the expression of IL-13 [67], which has been known to contribute to the development of AHR [68]. miR-143 was involved in down-regulation of IL-13 receptor alpha 1 subunit expression on human mast cells, followed by inhibition of the cell proliferation [69,70]. Taken together, the expression levels of miRNAs, which interfere with the IL-13 signaling, may be up-regulated on serum EVs by SCIT.
In conclusion, SCIT may induce not only immunological changes but also phenotype changes in EVs. The transformed EVs suppressed the development of AHR by inhibiting ILC2 activation. In recent years, Boonpiyathad et al. [71] reported that the number of ILC2s was not decreased in the peripheral blood of non-responders to AIT. EVs transformed by SCIT are expected to be a useful therapeutic tool. In addition, unlike allergen extracts, EVs theoretically do not induce anaphylactic reactions. Further understanding of the suppressive mechanisms of transformed EVs may lead to the development of more effective and safer AIT.
Sensitization, SCIT, and Challenges
Sensitization, SCIT, and challenges were conducted in accordance with our previous report [37], as follows (Figure 5).
Measurement of AHR
Measurement of AHR was conducted as reported previously [72]. Briefly, AHR to methacholine was measured using the forced oscillation technique with a Flexivent FV-FX1 (SCIREQ, Montreal, QC, Canada). At 24 h after the 4th challenge, the recipient mice were anesthetized with pentobarbital (70 mg/kg) and xylazine (12 mg/kg), followed by methacholine (acetyl-β-methylcholine chloride, Sigma-Aldrich, St. Louis, MO, USA) challenges at concentrations ranging from 0 to 50 mg/mL (0, 6.25, 12.5, 25, and 50 mg/mL) for 12 s each. After each methacholine challenge, Crs, which represents respiratory flexibility, was measured using eight repeats. Crs is shown as a maximal value after each methacholine challenge.
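Because Crs is reported as the maximum of eight repeated measurements per methacholine dose, the dose-response summary reduces to a per-dose maximum. A minimal sketch with made-up readings (the units and numbers are illustrative assumptions, not the study's data) is shown below.

```python
# Hypothetical Crs readings (mL/cmH2O): eight repeats per methacholine dose (mg/mL).
crs_readings = {
    0.0:  [0.052, 0.051, 0.053, 0.052, 0.050, 0.051, 0.052, 0.053],
    6.25: [0.048, 0.047, 0.049, 0.048, 0.047, 0.048, 0.046, 0.047],
    12.5: [0.041, 0.040, 0.042, 0.041, 0.039, 0.040, 0.041, 0.040],
    25.0: [0.033, 0.032, 0.034, 0.033, 0.031, 0.032, 0.033, 0.032],
    50.0: [0.026, 0.025, 0.027, 0.026, 0.024, 0.025, 0.026, 0.025],
}

# Summarise each dose by its maximal Crs, as described in the methods.
dose_response = {dose: max(vals) for dose, vals in sorted(crs_readings.items())}
for dose, crs_max in dose_response.items():
    print(f"{dose:>5} mg/mL methacholine: max Crs = {crs_max:.3f}")
```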
Quantitative Analysis of IL-5 in BALF
After the measurement of AHR, bronchoalveolar lavage was conducted in the right lung lobes, as reported previously [73,74]. The obtained BALF was centrifuged, followed by collection of the supernatant. IL-5 concentration in BALF was determined using an IL-5 mouse ELISA kit (Thermo Fisher Scientific, Waltham, MA, USA).
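ELISA readouts are typically converted to concentrations by fitting a standard curve and interpolating sample absorbances. The four-parameter logistic fit below is a common approach, sketched with hypothetical standards rather than the kit's actual calibrator values.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a = lower asymptote, d = upper, c = mid-point, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical IL-5 standards (pg/mL) and their absorbances (OD450).
std_conc = np.array([7.8, 15.6, 31.3, 62.5, 125, 250, 500])
std_od   = np.array([0.08, 0.14, 0.26, 0.47, 0.80, 1.25, 1.70])

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 100.0, 2.0], maxfev=10000)
a, b, c, d = params

def od_to_conc(od):
    """Invert the 4PL fit to recover concentration from absorbance."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(f"Sample OD 0.60 -> {od_to_conc(0.60):.1f} pg/mL IL-5")
```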
Isolation of EVs
Twenty-four hours after the 4th challenge, non-SCIT-treated and SCIT-treated asthmatic mice were anesthetized with pentobarbital and xylazine as described above, followed by collection of whole blood from the abdominal vena cava. The whole blood was incubated in a water bath at 25 °C for 30 min. Then, the blood was incubated in a refrigerator at 4 °C overnight. After the incubation, the blood samples were centrifuged at 1200× g for 30 min at 4 °C, followed by serum collection.
In accordance with the manufacturer's protocol, EVs were isolated from serum using total exosome isolation reagent (from serum) (Thermo Fisher Scientific, Waltham, MA, USA). Briefly, the serum samples were centrifuged at 2000× g for 30 min to remove cells and debris. The supernatants were transferred into new tubes, followed by the addition of 0.2 volumes of the total exosome isolation (from serum) reagent. The samples were incubated at 4 °C for 30 min, then centrifuged at 10,000× g for 30 min at room temperature. The obtained pellets were suspended in phosphate-buffered saline (PBS).
The concentration of EVs was determined using a BCA protein assay kit (Thermo Fisher Scientific, Waltham, MA, USA).
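Protein (and hence EV) concentration from a BCA assay is usually obtained by linear regression of a BSA standard curve. The sketch below uses hypothetical absorbances, not the kit's certified values.

```python
import numpy as np

# Hypothetical BSA standards (µg/mL) and background-corrected absorbances (OD562).
bsa_conc = np.array([0, 125, 250, 500, 1000, 2000], dtype=float)
od562    = np.array([0.00, 0.09, 0.17, 0.33, 0.64, 1.21])

# Least-squares line through the standards: OD = slope * conc + intercept.
slope, intercept = np.polyfit(bsa_conc, od562, 1)

def od_to_protein(od, dilution_factor=1.0):
    """Convert a sample OD562 back to protein concentration (µg/mL)."""
    return dilution_factor * (od - intercept) / slope

print(f"EV sample OD 0.42 -> {od_to_protein(0.42):.0f} µg/mL protein")
```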
Analyses of CD9 and CD63 Expression Levels and Particle Sizes of EVs
In accordance with the manufacturer's instructions, the expression levels of the EV markers CD9 and CD63 were analyzed using a PS capture exosome flow cytometry kit (Fujifilm, Tokyo, Japan). Briefly, exosome capture beads (Fujifilm, Tokyo, Japan) and exosome binding enhancer (Fujifilm, Tokyo, Japan) were added to the EV samples, followed by incubation at room temperature for 1 h. After washing with exosome binding enhancer-containing washing buffer (Fujifilm, Tokyo, Japan), the bead-bound EVs were stained with allophycocyanin (APC)-conjugated anti-mouse CD9 antibody (clone MZ3), APC-conjugated anti-mouse CD63 antibody (clone NVG-2), or APC-conjugated rat IgG2a, κ isotype control antibody (clone RTK2758) (all from BioLegend, San Diego, CA, USA) for 1 h at room temperature. After washing with exosome binding enhancer-containing washing buffer, the samples were analyzed using a FACS Aria Fusion (Becton Dickinson, San Jose, CA, USA).
Particle sizes of EVs were measured using a Zetasizer Nano ZSP (Malvern Panalytical, Malvern, UK).
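Bead-based EV flow cytometry of the kind described above is commonly summarised as the marker's median fluorescence intensity minus the matched isotype control (ΔMFI). The sketch below uses hypothetical event intensities as an illustration of that calculation, not the study's readouts.

```python
import statistics

def delta_mfi(marker_events, isotype_events):
    """Median fluorescence of the marker stain minus the isotype-control background."""
    return statistics.median(marker_events) - statistics.median(isotype_events)

# Hypothetical APC fluorescence intensities of bead-bound EV events.
cd9_scit     = [5200, 5450, 5100, 5600, 5350]
cd9_non_scit = [2100, 2050, 2200, 1980, 2150]
isotype      = [300, 280, 310, 295, 305]

print("CD9 ΔMFI, SCIT-treated:    ", delta_mfi(cd9_scit, isotype))
print("CD9 ΔMFI, non-SCIT-treated:", delta_mfi(cd9_non_scit, isotype))
```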
Effect of EVs on IL-5 Production from ILC2s
As previously reported [24], ILC2s were sorted from the lung cells by flow cytometry. Briefly, all lung lobes were isolated 24 h after the 4th challenge under general anesthesia and minced in PBS. Cell suspensions were obtained by digesting all lung lobes using 870 units/mL of collagenase type I (Thermo Fisher Scientific, Waltham, MA, USA) at 37 °C for 1 h. Cells were treated with ACK lysis buffer to remove erythrocytes. The total number of leukocytes was counted using a hemocytometer after staining with trypan blue (Thermo Fisher Scientific, Waltham, MA, USA).
Statistical Analyses
One-way ANOVA was performed to compare multiple groups, followed by Dunnett's multiple comparison test. The difference was considered significant when the p-value was less than 0.05. These calculations were conducted using JMP Pro (Version 15.1.0, SAS Institute Japan, Tokyo, Japan). Institutional Review Board Statement: This animal study was approved by the Experimental Animal Research Committee at Setsunan University (approval numbers: K20-1, K21-1, and K22-1). | 5,303.2 | 2022-11-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Areca nut components stimulate ADAM17, IL-1α, PGE2 and 8-isoprostane production in oral keratinocyte: role of reactive oxygen species, EGF and JAK signaling
Betel quid (BQ) chewing is an etiologic factor of oral submucous fibrosis (OSF) and oral cancer. There are 600 million BQ chewers worldwide. The mechanisms for the toxic and inflammatory responses to BQ are unclear. In this study, both areca nut (AN) extract (ANE) and arecoline stimulated epidermal growth factor (EGF) and interleukin-1α (IL-1α) production of gingival keratinocytes (GKs), whereas only ANE stimulated a disintegrin and metalloproteinase 17 (ADAM17), prostaglandin E2 (PGE2) and 8-isoprostane production. ANE-induced EGF production was inhibited by catalase. Addition of anti-EGF neutralizing antibody attenuated ANE-induced cyclooxygenase-2 (COX-2) and mature ADAM9 expression and PGE2 and 8-isoprostane production. ANE-induced IL-1α production was inhibited by catalase, anti-EGF antibody, PD153035 (EGF receptor antagonist) and U0126 (MEK inhibitor) but not by α-naphthoflavone (cytochrome p450-1A1 inhibitor). ANE-induced ADAM17 production was inhibited by pp2 (Src inhibitor), U0126, α-naphthoflavone and aspirin. AG490 (JAK inhibitor) prevented ANE-stimulated ADAM17, IL-1α and PGE2 production, COX-2 expression, ADAM9 maturation, and the ANE-induced decline in keratin 5 and 14, but showed little effect on cdc2 expression and EGF production. Moreover, ANE-induced 8-isoprostane production by GKs was inhibited by catalase, anti-EGF antibody, AG490, pp2, U0126, α-naphthoflavone, zinc protoporphyrin (ZnPP) and aspirin. These results indicate that AN components may be involved in BQ-induced oral cancer through induction of reactive oxygen species, EGF/EGFR, IL-1α, ADAMs, JAK, Src, MEK/ERK, CYP1A1, and COX signaling pathways, and through aberration of the cell cycle and differentiation. Various blockers against ROS, EGF, IL-1α, ADAM, JAK, Src, MEK, CYP1A1, and COX could be used for prevention or treatment of BQ chewing-related diseases.
INTRODUCTION
Chewing betel quid (BQ) is popular in Taiwan, India and many Southeast Asian countries [1][2][3]. This habit increases the risk of oral leukoplakia, oral submucous fibrosis (OSF) and oral cancer. There are approximately 2-2.8 million BQ chewers in Taiwan [4] and 600 million BQ chewers worldwide [1]. BQ contains areca nut (AN), lime and inflorescence Piper betle with or without Piper betle leaf. However, the mechanisms and signaling transduction pathways of BQ chemical carcinogenesis are not clear. The induction of reactive oxygen species (ROS), damage to cellular targets (DNA, protein, lipid) after metabolic activation of BQ components by phase 1 enzymes (e.g., cytochrome P450s) [5], the cytotoxic effects of BQ constituents, keratinocyte inflammation and oncogene activation are suggested to be the contributing factors. ROS may be involved in the initiation, promotion and progression of cancer. During BQ chewing, ROS generation is confirmed by both in vitro [6,7] and in vivo (in saliva) studies [8] and may induce oral squamous cell carcinoma (OSCC) in Papua New Guinea and other countries [2,9], via auto-oxidation or metabolic activation by cytochrome p450 (CYP) enzymes [10]. The roles of ROS production by BQ components and the related upstream/downstream signaling in mediating cytotoxicity, aberrant differentiation and prostanoid production/tissue inflammation are crucial in BQ carcinogenesis.
Clinical studies have found increased expression of a disintegrin and metalloproteinases (ADAMs) in OSCC in Taiwan and other countries [11,12]. Overexpression of epidermal growth factor (EGF) and EGF receptor (EGFR) is also noted in head and neck squamous cell carcinoma (HNSCC) [13]. EGFR can be activated by EGF, heparin-binding (HB)-EGF, transforming growth factor-α (TGF-α) and amphiregulin, as well as by ROS [14]. EGFR (HER1, erbB1) is a receptor tyrosine kinase (RTK) that modulates cell proliferation and differentiation via Janus kinase (JAK), Src and Ras/mitogen-activated protein kinase (MAPK) signaling. Elevated expression of EGFR and MAPKs has recently been shown to be crucial in the pathogenesis of oral cancer [15,16]. Src is a non-receptor tyrosine kinase that may be activated by metals, ROS and ultraviolet (UV) irradiation [17]. Src kinase activity is necessary for EGF and other HER ligand signaling to signal transducer and activator of transcription (STAT) and MAPK pathways in various cancers [16][17][18].
EGF/EGFR, tumor necrosis factor-α (TNF-α) and IL-1α may be involved in the sequential stages of carcinogenesis and tissue fibrosis. These effects occur via activation of receptors, ADAMs and TAK1 to cleave and release EGF [28]. An increased expression of cyclooxygenase-2 (COX-2) in different stages of oral cancer and marked inflammatory cell infiltration in OSF tissues may play a crucial role in the multi-step chemical carcinogenesis [29,30]. Previous reports have found the induction of COX-2 and PGE2 production in GK by ANE via the activation of ROS, CYP1A1, EGFR, Ras, Src, HO-1 and MEK/ERK [25,31,32]. It is intriguing to determine whether EGF, IL-1α, and ADAMs are activated by BQ components to induce the release of oxidative stress markers and inflammatory mediators, e.g., 8-isoprostane and PGE2, in oral mucosal cells. Moreover, signal transduction pathways such as ROS, JAK (a downstream molecule of EGFR), and MEK that mediate these cellular responses should be clarified. We hypothesized that BQ chewing may induce tissue inflammation, leading to OSF and oral cancer, via stimulation of ROS, EGF/EGFR, JAK, IL-1α and ADAM17 (also called TNF-α converting enzyme, TACE) to impair differentiation and cell cycle progression and to stimulate the production of 8-isoprostane and PGE2 in oral keratinocytes. These complex cross-talk events among EGF, EGFR, IL-1α, ADAM, JAK, Src and other signaling molecules may play an important role in BQ chewing-related diseases (e.g., cancer, OSF, and atherosclerosis). The results of this study may guide the development of methods (small molecule inhibitors, antibodies, etc.) for prevention and targeted therapy of BQ chewing-related diseases.
Effect of ANE and arecoline on EGF and IL-1α production by GKs
At concentrations of 400 and 800 μg/ml, ANE stimulated EGF secretion by GKs to 1.8- and 3.3-fold of control, respectively (Figure 1A). Interestingly, arecoline at concentrations of 0.2-0.8 mM also induced EGF secretion by GKs to 1.4- to 2.8-fold of control (Figure 1B). Similarly, ANE (400 and 800 μg/ml) induced IL-1α production by GKs to 1.7- to 5.4-fold of control, whereas ANE inhibited IL-1α production by GKs at concentrations of 50-200 μg/ml (Figure 1C). On the other hand, arecoline stimulated IL-1α production by GKs at a concentration of 0.8 mM, whereas it slightly inhibited IL-1α secretion by GKs at a concentration of 0.05 mM (Figure 1D).
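The fold-of-control values quoted above are simple ratios of treated to untreated means; a minimal sketch of that normalization, using hypothetical ELISA readings rather than the study's raw data, is shown below.

```python
# Hypothetical EGF ELISA readings (pg/mL) from gingival keratinocyte cultures.
control_readings = [105.0, 98.0, 110.0, 102.0]
ane_800_readings = [340.0, 355.0, 330.0, 348.0]  # ANE 800 µg/ml, illustrative only

def mean(values):
    return sum(values) / len(values)

fold_of_control = mean(ane_800_readings) / mean(control_readings)
print(f"EGF secretion: {fold_of_control:.1f}-fold of control")
```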
Signaling for ANE-induced EGF production by GKs
To determine the upstream signaling molecules responsible for ANE-induced EGF production, we found that anti-EGF antibody (aby) effectively decreased the useful EGF content in the culture medium of GKs (Figure 3A). Pretreatment and co-incubation with catalase effectively prevented the ANE-induced EGF production by GKs (Figure 3B). On the other hand, GM6001 (an inhibitor of metalloproteinases), anti-TNFα neutralizing aby, pp2 (a Src inhibitor) …
(Figure 1 legend, continued) C. Stimulation of IL-1α production of GK by ANE (50-800 μg/ml) (n=14). D. Effect of arecoline on IL-1α production of GK (n=20). *denotes significant difference when compared with control (P < 0.05).
Upstream signaling for ANE-induced ADAM17 production by GKs
To reveal the upstream signaling molecules responsible for ANE-induced ADAM17 production, we found that pretreatment and co-incubation with anti-EGF neutralizing aby slightly decreased ANE-induced ADAM17 production by GKs (P > 0.05) (Figure 5A). Pretreatment and co-incubation with pp2 and U0126 inhibited ANE-induced ADAM17 production by GKs (Figure 5B, 5C). Moreover, pretreatment and co-incubation with α-naphthoflavone and aspirin also attenuated ANE-induced ADAM17 production by GKs (Figure 5D, 5E).
Role of JAK signaling in ANE-induced effects on GKs
Because JAKs are important signaling molecules responsible for EGFR-mediated events, we further tested and found that AG490 (a JAK inhibitor) could not prevent ANE-induced EGF production by GKs (Figure 6A). By contrast, AG490 attenuated ANE-induced ADAM17 and IL-1α production by GKs (Figure 6B, 6C). Accordingly, ANE inhibited keratin 5, keratin 14, and cdc2 protein expression, whereas ANE stimulated the protein expression of mature ADAM9 (84 kDa) but had no marked effect on precursor ADAM9 (105 kDa) (Figure 6D). AG490 prevented the inhibitory effect of ANE on keratin 5 and keratin 14. Additionally, AG490 suppressed the stimulatory effect of ANE on the protein expression of mature ADAM9, with an increase in precursor ADAM9 expression (Figure 6D).
Role of EGF and JAK on ANE-induced COX-2 expression and PGE2 production by GKs
To understand the role of EGF and JAK in mediating ANE-induced COX-2 expression and PGE2 production, anti-EGF aby and AG490 were used to suppress EGF/EGFR and JAK signaling. Interestingly, anti-EGF aby effectively inhibited ANE-induced PGE2 production and COX-2 expression in GKs (Figure 7A, 7C). Similarly, AG490 also markedly suppressed ANE-induced PGE2 production and COX-2 expression in GKs (Figure 7B, 7D).
Effect of catalase, anti-EGF antibody, IL-1 receptor-associated kinase (IRAK) inhibitor, AG490, pp2, U0126, α-naphthoflavone, ZnPP and aspirin on ANE-induced 8-isoprostane production by GKs
Generally, 8-isoprostane is considered an oxidative stress marker and product. In this study, catalase effectively prevented ANE-induced 8-isoprostane production by GKs (Figure 8A). We further tested whether the induction of EGF by ANE is important for this event. Anti-EGF neutralizing aby evidently attenuated the ANE-induced 8-isoprostane production (Figure 8B). However, the IRAK inhibitor (an inhibitor of IL-1 signaling) showed little preventive effect on ANE-induced 8-isoprostane production (Figure 8C). To elucidate the role of JAK (a downstream molecule of EGF/EGFR) signaling, we found that AG490 pretreatment and co-incubation almost completely inhibited ANE-induced 8-isoprostane production by GKs (Figure 8D). Similar inhibitory effects of pp2 (Figure 8E) and U0126 (Figure 8F) on ANE-induced 8-isoprostane production were also noted. Moreover, to clarify the role of various metabolic enzymes in 8-isoprostane production, we found that α-naphthoflavone attenuated ANE-induced 8-isoprostane production (Figure 8G), as did ZnPP (Figure 8H) and aspirin (Figure 8I).
Figure 3 (partial caption): pretreatment and co-incubation by catalase (B, n=12), GM6001 (C, n=5), anti-TNFα neutralizing aby (D, n=3), pp2 (E, n=6), α-naphthoflavone (F, n=27), ZnPP (G, n=18) and aspirin (H, n=10) on ANE-induced EGF production in GK; (I) effect of anti-EGF neutralizing aby on the ANE-induced alterations of ADAM9, keratin 5, keratin 14, cdc2 and GAPDH (control) protein expression as analyzed by western blotting (one representative blot shown). *denotes significant difference when compared with solvent control; #denotes statistically significant difference when compared with the ANE-treated group (P < 0.05).
Figure 4 caption: pretreatment and co-incubation by catalase (A, n=11), anti-EGF neutralizing aby (B, n=4), PD153035 (C, n=11), U0126 (D, n=8) and α-naphthoflavone (E, n=21) on ANE-induced IL-1α production in GK. *denotes significant difference when compared with solvent control; #denotes statistically significant difference when compared with the ANE-treated group (P < 0.05).
Figure 5 (partial caption): pretreatment and co-incubation by pp2 (n=4), U0126 (C, n=5), α-naphthoflavone (D, n=8) and aspirin (E, n=10) on ANE-induced ADAM17 production in GK. *denotes significant difference when compared with solvent control; #denotes statistically significant difference when compared with the ANE-treated group (P < 0.05).
Figure 6 (partial caption): (B) pretreatment and co-incubation by AG490 on ANE-induced ADAM17 production in GK (n=5); (C) pretreatment and co-incubation by AG490 on ANE-induced IL-1α production in GK (n=47); *denotes significant difference when compared with solvent control; #denotes statistically significant difference when compared with the ANE-treated group (P < 0.05); (D) effect of AG490 on ANE-induced alterations of ADAM9, keratin 5, keratin 14, cdc2 and GAPDH (control) protein expression as analyzed by western blotting (one representative blot shown).
Recently, we have found the stimulation of EGFR phosphorylation and activation by ANE [25], possibly due to the induction of EGF production by ANE as found in this study. This event is inhibited by catalase, but not by GM6001, anti-TNFα aby, pp2, α-naphthoflavone, ZnPP and aspirin, suggesting that ANE-induced EGF production is correlated to ROS, but not to TNFα production, proteinase cleavage, Src, CYP1A1, HO-1 or COX. EGF can therefore be an early response signaling molecule for ANE-induced cellular events in GKs. Moreover, anti-EGF aby attenuates ANE-induced ADAM9 maturation, but not the ANE-induced decline of cytokeratin 5, 14 and cdc2, indicating the presence of differential signaling pathways responsible for different downstream effector molecules. Anti-EGF aby and AG490 suppress the ANE-induced COX-2 expression and PGE2 and 8-isoprostane production, but not the cdc2 expression of GK. During BQ chewing, ROS may be generated by auto-oxidation of BQ components in saliva or via their intracellular metabolic activation [1,2]. BQ-induced ROS overproduction is correlated to DNA/cell damage, tissue inflammation, cell cycle regulation, apoptosis and gene expression, with associated lipid peroxidation, protein modification and DNA damage. Recently, we have found the activation of ROS, CYP1A1, EGFR, Ras, Src and HO-1 signaling by ANE to induce COX-2 expression/PGE2 production in GK [25]. Moreover, EGF can activate EGFR to stimulate cell proliferation, differentiation, invasion and metastasis via stimulation of downstream JAK, Src, Ras/MAPKs and PI3K/Akt signaling [14,16-18]. GW2974, a dual inhibitor of EGFR and ErbB2 tyrosine kinase, may attenuate the 7,12-dimethylbenz[a]anthracene (DMBA)-induced hamster cheek pouch tumor with a concomitant reduction of tissue PGE2, indicating the presence of cross-talk between EGFR and arachidonic acid metabolism [45]. Studies also reveal the upregulation of COX-2 and EGFR in oral leukoplakia and oral carcinogenesis [46]. In this study, ROS-EGF/EGFR-JAK-COX-2 signaling pathways are shown to contribute to oral mucosal inflammation and carcinogenesis in BQ chewers. ANE-induced ADAM9 maturation and the decrease of cytokeratin expression are correlated to JAK. ANE has been shown to activate PI3K/Akt, EGFR and COX signaling and to contribute to BQ carcinogenesis [25,47,48]. However, additional signaling molecules appear to be responsible for the downregulation of cdc2 by ANE.
Figure 7 (partial caption): (B) pretreatment and co-incubation by AG490 (15 and 30 μM) on ANE-induced PGE2 production in GK; results expressed as mean ± SE; *denotes significant difference when compared with solvent control; #denotes statistically significant difference when compared with the ANE-treated group (P < 0.05); (C) pretreatment and co-incubation by anti-EGF neutralizing aby on ANE-induced COX-2 protein expression of GK; (D) pretreatment and co-incubation by AG490 on ANE-induced COX-2 protein expression of GK (one representative western blot shown).
Figure 8 (partial caption): pretreatment and co-incubation by anti-EGF neutralizing aby (B), IRAK inhibitor (C), AG490 (D), pp2 (E), U0126 (F), α-naphthoflavone (G), ZnPP (H) and aspirin (I) on ANE-induced 8-isoprostane production in GK. *denotes significant difference when compared with solvent control; #denotes statistically significant difference when compared with the ANE-treated group (P < 0.05).
ROS are critical molecules for the stimulation of ANE-induced PGE2 production in GK [25,31]. To learn more about the role of ROS in BQ carcinogenesis, we found, interestingly, that ROS are necessary for the ANE-induced EGF, IL-1α, and 8-isoprostane production. However, ANE at lower concentrations partly inhibited the IL-1α and 8-isoprostane production, possibly because ANE also contains some anti-oxidative components. IL-1α is involved in tissue inflammation, immune modulation and carcinogenesis via binding to the IL-1 receptor to trigger signal transduction pathways such as IL-1 receptor (IL-1R)-associated kinase (IRAK) and TGFβ-activated kinase-1 (TAK1) [49,50]. 8-Isoprostane has been used as a disease marker for obesity, ischemia-reperfusion injury, and cancer [51]. It may activate thromboxane receptors in response to oxidative injury [52]. Exposure to ANE may stimulate ROS and thereby downstream signaling pathways such as EGF/EGFR, IL-1α/IL-1R and 8-isoprostane/receptor signaling to stimulate oral carcinogenesis. This may explain why ROS may activate receptors, receptor-activated protein kinases and nuclear transcription factors, including growth factor receptors, JAK, Src kinase, Ras signaling, MAPKs, the PI3K/Akt pathway, and NF-kB [16-18]. In addition to catalase, the ANE-induced IL-1α production is prevented by anti-EGF aby, PD153035 and U0126, but enhanced by α-naphthoflavone. These results suggest that ANE-induced IL-1α production of GK is mediated by ROS, EGF/EGFR and MEK/ERK activation. Similarly, IL-1α production and nuclear localization are correlated to ROS levels, EGFR activation and MEK/ERK in fibrosarcoma, skin keratinocytes and cerebral ischemia injury [53,54]. Furthermore, IL-1α and TNF-α are important mediators involved in carcinogenesis and fibrosis of many organs via activation of receptor/TAK1 signaling [55,56]. GKs express various types of CYP enzymes, mainly CYP1A1, 2C8/19, 2E1, and 3A3/3A4, which may be involved in ANE-induced COX-2 expression and PGE2 production in GK [25]. Interestingly, α-naphthoflavone by itself stimulates IL-1α production. This may partly explain why the inhibition of CYP1A1/CYP1A2 by α-naphthoflavone enhanced the ANE-induced IL-1α production. The involvement of CYP1A1/CYP1A2 and its inhibition by α-naphthoflavone in ANE-induced events suggest that metabolic activation of ANE components is possibly necessary for some of the ANE-induced carcinogenic events [25], increasing the risk of OSF and oral cancer [57,58].
EGFR ligands can be shed from the plasma membrane by metalloproteinases and sheddases, namely a disintegrin and metalloproteinases (ADAMs). ADAM10, 12 and 17 are the major sheddases of EGFR ligands in response to stimuli such as G-protein coupled receptors, growth factors, cytokines, wounding and phorbol ester [59]. Over-expression of ADAMs (ADAM9, 10, 12, and 17, etc.) is frequently noted in epithelial inflammation and carcinogenesis [60], and increased expression of certain ADAMs may enhance tumor cell invasion and proliferation in vitro and promote tumor formation in vivo. ADAM17 may enhance the invasion of oral cancer [43]. An increased expression of ADAM10 has been found in OSCC in Taiwan [11], and of ADAM17 in head and neck SCC in Germany [12]. MMP-2 and MMP-9 also contribute to BQ-related oral carcinogenesis by promoting cancer invasion and metastasis [26,27]. In this study, we further found the stimulation of ADAM9 maturation and ADAM17 secretion by ANE, suggesting the involvement of ADAM9 and ADAM17 in BQ carcinogenesis. ANE-induced ADAM17 secretion can be suppressed by pp2, U0126, α-naphthoflavone and aspirin, indicating that this event is associated with Src, MEK/ERK, CYP1A1 and COX signaling. Src is a non-receptor tyrosine kinase that is activated by metals, ROS and UV irradiation [17]. Src overexpression has been found in head and neck cancers. Activated Src may induce downstream signaling of MAPKs, NF-kB and PI3K. Moreover, BQ components can stimulate Src activation and ERK to promote cancer cell migration and motility [61].
Previous studies show the association between tissue inflammation and cancer/fibrosis, with an elevation of COX-2 expression and prostanoid production in oral cancer and precancer [30]. AN components may induce tissue injury and inflammation, COX-2 expression and PGE2 production in GK via ROS, EGFR, Src, and MEK/ERK signaling [25,31,32]. In this study, ANE was further found to induce 8-isoprostane production. PGE2 is involved in oral carcinogenesis by sustaining epithelial hyperplasia, angiogenesis, immunosuppression and tumor metastasis. 8-Isoprostane has been suggested as an oxidative stress marker during chemical carcinogenesis and may induce vasoconstriction but inhibit angiogenesis [62-64]. 8-Isoprostane levels in serum, urine and exhaled breath condensate are used as disease markers in tissue fibrosis and in prostate and lung cancer [65-67], suggesting the potential use of 8-isoprostane as a marker of oral cancer and OSF. ANE-induced PGE2 production is related to ROS, EGFR, Src, MEK/ERK, CYP1A1 and HO-1 [25] and, in addition, to EGF and JAK signaling as shown in this study. Interestingly, ANE-induced 8-isoprostane production of GK is prevented by catalase, anti-EGF aby, IRAK inhibitor, AG490, pp2, U0126, α-naphthoflavone, ZnPP and aspirin. These results demonstrate that ANE-induced 8-isoprostane production in GK is related to ROS, EGF and IL-1 production and to downstream signaling via IRAK, JAK, Src and MEK/ERK; CYP1A1, HO-1 and COX are also associated with these processes.
Based on this study and other prior reports [1,2,6,25,32], we conclude that AN components play crucial roles in the pathogenesis of BQ-induced oral cancer and OSF, possibly via induction of ROS, EGF/EGFR, JAK, Src, MEK/ERK, IL-1α, ADAMs, CYP1A1, HO-1 and COX signaling pathways, as well as via aberrations in the cell cycle- and differentiation-related proteins of oral keratinocytes (Figure 9). Auto-oxidation or metabolic activation of ANE components by CYP1A1 may generate ROS and reactive intermediates. ROS may then induce multiple signaling pathways, such as EGF/EGFR and the downstream events summarized in Figure 9.
Culture of gingival keratinocytes (GKs)
GKs were cultured as described previously [25,32]. With the approval of the Ethics Committee of National Taiwan University Hospital, human gingiva (with a gingivitis index < 1) was obtained during clinical crown-lengthening procedures with proper written informed consent from the patients. Most of the subepithelial connective tissue of the gingiva was first removed using a surgical knife, and the tissues were then cut into small pieces, placed onto culture dishes and cultured in KGM-SFM with supplements. GKs at passages 1 to 3 were used throughout this study.
Effect of ANE and arecoline on 8-isoprostane, EGF, IL-1α, and ADAM17 production by GKs
Near-confluent GKs in 6-well culture plates were exposed to 2 ml of fresh medium containing various concentrations of ANE and arecoline. Cells were further incubated for 24 h. Culture medium was collected for the analysis of 8-isoprostane, EGF, IL-1α, and ADAM17 levels by ELISA.
Statistical analysis
Four or more separate experiments were performed. The results were expressed as the mean ± SE and analyzed by paired Student's t-test. A P value < 0.05 was considered to indicate a statistically significant difference between 2 study groups. | 4,965.8 | 2016-02-23T00:00:00.000 | [
"Biology",
"Medicine"
] |
A new assessment of the elastic thickness (Te) structure of the Indian shield, and its implications
The elastic thickness (Te) of continents is a matter of much debate. Recent studies have shown that a number of factors control the continental Te, including age, heat flow, and lithospheric thickness. Here, we estimate the Te structure of the whole Indian shield using an improved isotropic fan wavelet methodology with land-ocean deconvolution, and we compare these results with the published global Te estimates in the Archean, Proterozoic and younger geological provinces. Our study reveals low (0-45 km/0-35 km), intermediate (45-70 km) and high (70-100 km) Te values in the Archean/Quaternary, the Proterozoic, and the Tertiary provinces, respectively, of the Indian shield. This is in contrast with global estimates of Te in similar geological provinces. In the absence of any correlation of Te with any of the controlling parameters, we propose that mantle properties, rather than tectonic history, influence the Te values within the Indian shield. The global positioning system horizontal velocity vectors yield a locking depth of ca. 20 ±4 km, and the aseismic creep beyond this depth correlates well with the high strength of ca. 70 km to 100 km in the central Himalayan foreland.
Introduction
Although different methods yield different absolute values, it has been convincingly shown that the elastic thickness (Te) in cratons is always controlled by age and heat flow [Pérez-Gussinyé and Watts 2005, Simons et al. 2000]. Several studies have suggested that the Te of continents cannot be described by any relationship with only a single parameter [Watts and Burov 2003, McNutt et al. 1988, Watts 1992].
Earlier investigations around the globe using spectral analysis have suggested that the Te of cratonic regions is greater than ca. 60 km; e.g., for Africa [Djomani et al. 1995], South America [Pérez-Gussinyé et al. 2009], Canada [Wang and Mareschal 1999], and Australia [Simons et al. 2000]. Recently, it was argued that the recovery of high Te values is dependent on the window sizes [Pérez-Gussinyé and Watts 2005]. Using several window sizes, such as 400 km × 400 km, 600 km × 600 km, 800 km × 800 km and 1,000 km × 1,000 km, Pérez-Gussinyé and Watts [2005] showed that smaller window sizes improve the resolution, but may not be large enough to recover the maximum value of Te.
The first attempts to estimate the Te in the Indian craton were made by Karner and Watts [1983] and Lyon-Caen and Molnar [1985]. Using forward modeling between the Bouguer anomaly and the topography, they obtained Te values of 80 km to 110 km in the Ganges basin. The free-air admittance analysis of McKenzie and Fairhead [1997] yielded a lower Te value of 24 km, which was found to be correlated with the seismogenic thickness (Ts). However, subsequent studies by Handy and Brun [2003] suggested that the Te can easily exceed the Ts as well. Using multitaper spectral analysis, Rajesh et al. [2003] characterized the relative variations in the Te in the India-Eurasia collision zones. In another analysis, Rajesh and Mishra [2004] used the transitional coherence wavelength to characterize the tectonic provinces. Jordan and Watts [2005] used both forward and inverse flexural and gravity modeling techniques, and they obtained spatially varying Te structures that varied from 0 km to 125 km in the India-Eurasia collision zones. However, none of these studies could demonstrate any convincing correlation between the Te values and any of the controlling parameters, such as age and temperature. One possible reason for this might be that most of the forward modeling techniques were applied only along one-dimensional profiles, and hence failed to capture the spatial variation of the Te. Hence, a reappraisal of the spatial variation of the Te with an improved methodology would add new dimensions to the study of the lithospheric strength within the Indian shield.
Recent studies have shown that the fan wavelet method has been relatively successful in deriving the Te structure in continental areas; e.g., in the Australian shield [Kirby and Swain 2011], the Canadian shield [Audet and Mareschal 2004], and the South American shield [Tassara et al. 2007]. High-resolution, spatially varying Te maps can resolve regional-scale features that can be correlated to surface geological structures. In the present study, we address the discrepancies in the estimation of the Te within the Indian shield using the isotropic fan wavelet, which uses large window sizes and estimates the Te by merging the databases of the land [Rajesh et al. 2003, Rajesh and Mishra 2004] and the ocean [Anderson and Knudsen 1998, Anderson et al. 2008].
The tectonic settings
The Indian subcontinent has a complex geological and tectonic setting that consists of Precambrian cratons of Archean age and rift zones filled with Proterozoic and Phanerozoic sediments [e.g. Biswas 1999]. A geological map of the whole Indian shield, along with the major tectonic features, is shown in Figure 1. The geological structure of the Indian shield can be subdivided into three main units [Krishnan 2006]: (a) The Himalayan front in the north, which results from the Mesozoic subduction and the collision between the Indian and Eurasian plates.
(b) The Indo-Gangetic plains, which are located between the abruptly rising Himalayas in the north and the Indian peninsula in the south, and which extend from east to west.
(c) The Indian peninsula in the south, which comprises the Indian shield with the Deccan traps and the Dharwar cratons.
The Precambrian rock in India [e.g. Naqvi 2005, Sharma 2009] is subdivided into two categories: the Archean system and the Dharwar system. The Archean rock is mainly found in the Dharwar, Singbhum, Bundelkhand, and Bastar cratons. The Aravalli and Eastern Ghat cratons are of late Proterozoic age. The Indo-Gangetic plains are mostly filled with Quaternary sediments that come from the erosion of the Himalayas [e.g. Bir Singh 1996]. An important feature in the southern Indian peninsula, the Deccan traps, evolved as flood basalts in the late Mesozoic and early Tertiary period. There is still no consensus on the origin of these traps; hot-spot volcanism and mantle plumes are the most discussed theories at present [e.g. Sen 2001, Sheth 2007].
The data
The topography, bathymetry and Bouguer anomaly data are shown in Figure 2. The bathymetry data were extracted from the Digital Atlas of the General Bathymetric Chart of the Oceans (GEBCO; http://www.gebco.net/), which was published by the British Oceanographic Data Centre on behalf of the International Oceanographic Commission of UNESCO (http://ioc-unesco.org/) and the International Hydrographic Organization (http://www.iho.int/srv1/). The GEBCO data were made available by the National Oceanic and Atmospheric Administration [2003]. We merged the gravity data for the land and the ocean using the land-ocean deconvolution scheme. The gravity and topography data source for the land is the same as that used by Rajesh et al. [2003], Rajesh and Mishra [2004], and Rajesh [2004]. The cumulative error in the Bouguer gravity field, including the errors in the elevation, is estimated to be 1 mGal to 2 mGal [Rajesh and Mishra 2004]. The free-air anomaly data for the oceanic regions come from the global marine gravity field derived from the ERS-1 and GEOSAT geodetic mission altimetry of Anderson and Knudsen [1998] and Anderson et al. [2008]. The marine free-air gravity anomaly, ΔG_f, is converted to the Bouguer gravity anomaly, ΔG_b, using the formula:

ΔG_b = ΔG_f + 2πGΔρH (1)

where Δρ = 1,820 kg m^-3 is the density contrast between the surface rock and the water, H is the bathymetry (in m), and G is the gravitational constant. The crustal thickness data used were taken from the CRUST 2 model [Bassin et al. 2000].
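As a quick illustration (not the authors' code), Equation (1) simply adds a Bouguer slab correction to the marine free-air anomaly. A minimal NumPy sketch, assuming the bathymetry is supplied as a positive water depth in metres and the anomalies are in mGal:

```python
import numpy as np

G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
DRHO = 1820.0      # rock-water density contrast (kg m^-3), value quoted in the text

def freeair_to_bouguer(dg_free_air_mgal, water_depth_m):
    """Apply the Bouguer slab correction of Eq. (1) to a marine free-air anomaly.

    Both anomalies are in mGal; water_depth_m is the bathymetry as a positive depth.
    """
    slab_si = 2.0 * np.pi * G * DRHO * np.asarray(water_depth_m)  # correction in m/s^2
    return np.asarray(dg_free_air_mgal) + slab_si * 1e5           # 1 m/s^2 = 1e5 mGal

# example: a 20 mGal free-air anomaly over 3,000 m of water gains a ~229 mGal correction
print(freeair_to_bouguer(20.0, 3000.0))
```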
Methodology
The fan wavelet technique [Kirby and Swain 2004, Audet and Mareschal 2007, Kirby and Swain 2011] uses the superposition of the rotated two-dimensional (2D) Morlet wavelets arranged in a 'fan' geometry to obtain the wavelet coherence or admittance, by computing the co-spectra and cross-spectra of the gravity and topography data.
The continuous wavelet transform of a 2D spatially distributed signal g(x) is estimated by taking the convolution of the signal with the complex conjugate of a wavelet, given by:

g̃(s, x, θ) = F⁻¹[ ĝ(k) ψ̂*_{s,θ}(k) ] (2)

where g̃(s, x, θ) is the complex wavelet coefficient, s is the width (scale) of the wavelet, θ is the rotation parameter, k is the 2D wavenumber, F⁻¹ is the inverse 2D Fourier transform, ĝ(k) is the 2D Fourier transform of the signal g(x), and ψ̂*_{s,θ}(k) is the complex conjugate of ψ̂_{s,θ}(k), where:

ψ̂_{s,θ}(k) = ψ̂(s X(θ) k) (3)

is the 2D Fourier transform of the 'daughter' wavelets derived by dilating, translating and rotating the mother wavelet ψ̂, and X(θ) is the rotation matrix. For the calculation of an isotropic coherence function, the wavelet coefficients must be complex quantities. The problem with real wavelets is that they are not complex, although they are isotropic, whereas complex wavelets are not in general isotropic. To overcome this problem, a superposition of Morlet wavelets is used, which produces isotropic and complex wavelet coefficients. The 2D wavelet coherence can then be estimated by summing the wavelet co-spectra and cross-spectra over the different azimuths:

γ²(s, x) = |⟨ B̃ H̃* ⟩|² / ( ⟨ B̃ B̃* ⟩ ⟨ H̃ H̃* ⟩ ) (4)

where B̃ and H̃ are the complex wavelet coefficients of the Bouguer anomaly and the topography, respectively, and the angle brackets denote the summation of the co-spectra and cross-spectra over the azimuths.
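To make the procedure concrete, the sketch below is a highly simplified, single-scale illustration (not the authors' implementation): 2D Morlet daughter wavelets are built in the Fourier domain for several azimuths, the wavelet coefficients of the Bouguer anomaly and the topography are obtained via the FFT, and the azimuth-summed co-spectra and cross-spectra give the coherence of Equation (4). The function names, the synthetic input grids, and the central wavenumber k0 ≈ 5.336 are assumptions made only for illustration.

```python
import numpy as np

def morlet2d_fft(shape, scale, azimuth, k0=5.336):
    """2D Morlet daughter wavelet in the Fourier domain for one scale and azimuth."""
    ny, nx = shape
    kx = np.fft.fftfreq(nx) * 2 * np.pi
    ky = np.fft.fftfreq(ny) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    # rotate the wavenumber plane by the azimuth (the rotation matrix X(theta))
    kxr = KX * np.cos(azimuth) + KY * np.sin(azimuth)
    kyr = -KX * np.sin(azimuth) + KY * np.cos(azimuth)
    # Morlet: Gaussian centred on k0/scale along the rotated axis
    return np.exp(-0.5 * ((scale * kxr - k0) ** 2 + (scale * kyr) ** 2))

def fan_wavelet_coherence(bouguer, topo, scale, n_azimuths=8):
    """Isotropic wavelet coherence at one scale, summing Morlet co-/cross-spectra
    over azimuths in the spirit of Eq. (4). Returns a 2D coherence map."""
    B = np.fft.fft2(bouguer)
    H = np.fft.fft2(topo)
    cross = np.zeros(bouguer.shape, dtype=complex)
    auto_b = np.zeros(bouguer.shape)
    auto_h = np.zeros(bouguer.shape)
    for az in np.linspace(0, np.pi, n_azimuths, endpoint=False):
        psi = morlet2d_fft(bouguer.shape, scale, az)
        wb = np.fft.ifft2(B * np.conj(psi))   # wavelet coefficients of the gravity field
        wh = np.fft.ifft2(H * np.conj(psi))   # wavelet coefficients of the topography
        cross += wb * np.conj(wh)
        auto_b += np.abs(wb) ** 2
        auto_h += np.abs(wh) ** 2
    return np.abs(cross) ** 2 / (auto_b * auto_h + 1e-30)

# usage on synthetic grids (placeholders for the real Bouguer anomaly and topography)
rng = np.random.default_rng(0)
topo = rng.normal(size=(256, 256))
boug = -0.6 * topo + 0.4 * rng.normal(size=(256, 256))
coh = fan_wavelet_coherence(boug, topo, scale=32.0)
print(coh.mean())
```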
The wavelet method has advantages over the windowed Fourier transform method, such as the multitaper method, because unlike the windowed Fourier transform, where the window size is the same for all scales, the wavelet method uses an optimal window size for each scale.The only drawback is that the spatial localization is not good for large-scale wavelets.This results in a decrease in the mapping quality of the Te for thicker plates, compared to thinner plates, as higher Te values correspond to higher transitional wavelengths.
Results
We computed the Te for the whole Indian shield and the adjoining regions using the isotropic fan wavelet method. To avoid edge effects, we used the land-ocean deconvolution scheme.
As the main aim of this study is to discuss the spatial variations of the Te within the Indian shield and not in the adjoining oceanic regions, we blanked out the portions outside the Indian shield. However, to explain the global positioning system velocity vectors in the Nepal Himalaya, we only considered the Te values for that particular region outside India. We ensured that the window size was large enough to capture the flexural signatures. The Te values obtained are quite low for peninsular India (<50 km), except for the Central Himalayan foreland, where they are 70 km to 100 km (Figure 3a). Moreover, the geological provinces of the different ages have different Te values, i.e., the Archean (0 to 45 km), the Proterozoic (45 to 70 km), the Tertiary (70 to 100 km), and the Quaternary (0 to 35 km) provinces. Such low Te values (0-45 km) are unusual for continental regimes of generally low heat flow and other indicators of low temperature, such as old geologic age. The transitional wavelength is shown in Figure 3b. We also computed the flexural loading ratio (Figure 4a), which shows values within 0-9. The error in the estimation of the Te is shown in Figure 4b. The estimated average errors for the Archean, Proterozoic, Tertiary and Quaternary regions are 6 km, 8 km, 9 km and 5 km, respectively.
Discussion
The Te values of continental cratons are normally high (>60 km). In Table 1, we have listed the results of recent studies of the Te in continental shields. In most cases, the Te is greater than the crustal thickness, which suggests that part of the strength resides in the upper mantle as well. Moreover, there is no clear correlation of Te with geological age (Table 1). The correlations between the Te and its controlling factors have always been complex and non-uniform. Although several studies have computed the Te values for the Indian shield, none of these, except Jordan and Watts [2005], captured the spatial variation of the Te. Jordan and Watts [2005] used gravity and flexural modeling, and they reported high values of the Te (80-125 km) in the central Himalayan foreland. Our estimated Te values are in good agreement with those of Jordan and Watts [2005]. We found that the Te values within the Indian shield vary for the different geological ages: 0 km to 45 km for the Archean, 45 km to 70 km for the Proterozoic, 70 km to 100 km for the Tertiary, and 0 km to 35 km for the Quaternary. Our rigidity structure is shown in Figure 5, and it indicates that the older sections of the Indian shield are indeed weaker with respect to the younger ones. The Te patterns demonstrate a nucleus of variable high strength (50-100 km) along the north-central part of the Indian shield. In the north-eastern part, the rigidity structure is uniform. The southern shield shows variable low strength (0-50 km).
The reconstructions of the lithospheric geotherm carried out by Artemieva and Mooney [2001] and Artemieva [2006] showed that the southern Indian shield (chiefly the Archean) is colder than the north Indian region. These results are in contrast to our present estimates of the long-term lithospheric strength based on the rheological profiles for the Indian region. Manglik and Singh [1999] showed that the stronger zones are mostly located in India and are characterized by the lowest surface heat flow. Taking the above considerations into account, our present results suggest the possibility of coupling/decoupling of the crust-mantle interface that can considerably increase or decrease the elastic strength [Burov and Diament 1995]. This possible causal relationship needs to be investigated further to improve our understanding of the results obtained for the Indian shield. Burov and Diament [1995] provided several pieces of evidence around the globe suggesting that the lower crust of most continental plates has a low-temperature activation rheology that can result in crust-mantle decoupling. Our present data show that the decoupling of the crust and the mantle might be one of the important properties that determine the present apparent values of the Te. The decoupling can be attributed to variations in the composition and thickness of the crust, due to a low-temperature-activated lower crust or a thick crust. In contrast, coupling can be due to a basic, high-temperature-activated crust with the mantle lithosphere. Gupta et al. [2003] used receiver function analysis at several sites of Archean and Proterozoic terrains in south India and obtained anomalous crustal thicknesses of 42 km to 51 km beneath the mid-Archean terrain. These reported high crustal thicknesses favor the possibility of the crust-mantle decoupling mechanism.
To check the plausibility of the Te results thus obtained, we conducted several tests with other approaches to estimate the Te and the loading ratio simultaneously at each grid point. These tests ensure that the present Te estimates are real features and not artifacts of the methodologies used. In the fan wavelet method used here, the computation is performed at each grid point for an initial surface and subsurface load. These tests confirmed that the method is robust and that the pattern of the Te variation is realistic and interpretable. The spatial variations in the Te values thus obtained were retrieved using the continuous wavelet transform instead of the Fourier approach. The problem of using smaller window sizes in the Fourier approach is thus bypassed by using larger window sizes in our present estimation. Tassara et al. [2007] used a similar fan-wavelet-based approach and obtained a causal relationship between the rigidity structure and the seismogenic zone along the subduction fault in South America.
The flexural loading ratio (Figure 4a) varies from 0 to 9. The loading ratio is <1 for the region with the high Te values, whereas it has a high value for regions with low Te values, contrary to the observations of Tassara et al. [2007], who reported a loading ratio >1 for the Te <15 km.This correlation between the Te and the loading ratio explains the subsurface mass distribution.
The transitional wavelength (Figure 3b) is the wavelength where the coherence between the Bouguer anomaly and the topography changes from high to low, or where the coherence approaches the value of 0.5. Thus, it is a quality check for the estimation of the Te, and it is seen to correlate well with the Te values obtained. The transitional wavelength is high (ca. 200-220 km) for the regions with high Te, whereas it is low (ca. 75-90 km) for the small pockets with low Te values. The north-south strip that shows a moderate Te value through central India (ca. 45 km) also finds its resemblance in the map of the transitional wavelength. The error in calculating the Te values is within acceptable limits (Figure 4b), and hence supports the Te values obtained.
Te versus Ts:
A seismotectonic map is shown in Figure 5, with the earthquake epicenters and some of the major faults and lineaments shown. In Figure 5, the spatial Te variation is shown, and for three windows (Figure 5, boxes W1, W2 and W3) the hypocenters are projected onto vertical planes. For windows 1, 2 and 3, the majority of the earthquakes lie within 50 km, 40 km and 100 km depth, respectively. From the analysis of these windows and their corresponding Te values, we can see that windows with greater Te values also show greater Ts. Hence, from this analysis, we can conclude that Te is proportional to Ts. For windows 1 and 2, most of the events occurred above the Moho. However, in window 3, the crust and the upper mantle are both seismically active, as subduction is still active. Thus, there is an obvious correlation between Te and Ts (Te ≈ Ts), except for a few isolated events in these regions. These data agree well with those of McKenzie and Fairhead [1997], who, on the basis of their free-air admittance method, obtained Te ≈ Ts and suggested that the strength of the lithosphere resides in the seismogenic layer only. Maggi et al. [2000] obtained Te < Ts, and therefore suggested that the strength resides in the seismogenic layer, while the lower part of the seismogenic layer does not contribute to the strength. However, Watts and Burov [2003] suggested that Te >> Ts for the continental lithosphere, especially in the case of cratons, convergent zones, and rifts. They attributed this result to the multilayer rheology of the continental crust below cratons. From the global positioning system velocity vectors in the Himalayan region, Bilham et al. [1997] suggested a locking depth of 20 ±4 km and aseismic creep beyond that depth. In Figure 5, the geodetic velocity vectors for the India-Eurasia collision zone are shown [Calais et al. 2006, Meade 2007, Thatcher 2007]. One possible interpretation of our data might be that the aseismic part of the Te plays the major role in defining the lithospheric strength, whereas the upper seismogenic layer is weak over a geological time scale. This result appears to be quite reasonable if we consider the Ts to be indicative of frictional instability, which leads to the release of strain energy rather than to strength. This also dismisses the concept of a weak mantle in active subduction zones in continental regions.
Conclusions
Our results indicate that the cratonic Te of the Indian shield is significantly lower than the global averages. A north-south zone of high Te is hypothesized to result from the Indian shield having ploughed into the mantle during its movement; this enhanced travel might be due to the high root strength of the Indian shield. We propose that the perfect coupling and high root strength in the Central Himalayan zone are due to a competent rheology, with the absence of a wet crustal rheology. A large accumulation of stress connected with a network of faults might result in stress localization and incompetent layering; a wet crustal rheology is interpreted for the Indian peninsular shield, which is therefore relatively weak, although it is a craton. In addition, the presence of a crustal root in the peninsular shield without expression in the topography might have resulted in reduced strength. Although the Indian plate had an eventful tectonic history, including its movement over various hot spots, the present Te structure is suggestive of the dominant signatures of the last major tectonic episode, the formation of the Himalayas, and, as a result, of crust-mantle coupling and decoupling. These results also support the inverse correlation between geological age and Te values.
Figure 3. (a) Te map of the Indian shield and the adjoining oceanic regions. (b) Transitional wavelength (km).
Figure 4. (a) Flexural loading ratio. (b) Error in the estimation of the Te.
Figure 5. Spatial variation of the Te within the Indian shield shown with the tectonic features in India, with some of the major active faults and lineaments [Dasgupta et al. 2000]. Abbreviations: MDF, Mahendergarh-Dehradun fault; NSL, Narmada Son lineament; TL, Tapti lineament; SSF, Shan-Shagaing fault; MBT, Main Boundary thrust; KMF, Kutch Mainland fault. Epicenters of the different earthquakes with magnitude Mw ≥ 4 are shown. Their hypocentral depths are projected onto the vertical plane as longitude versus depth for three windows (W1, W2, W3), as shown on the right-hand side. The seismicity of the region was taken from NEIC (http://earthquake.usgs.gov/regional/neic) and GCMT (http://www.globalcmt.org/). The geodetic velocity vectors for the Indian subcontinent are shown with respect to Eurasia (arrows) [Calais et al. 2006, Meade 2007, Thatcher 2007].
Table 1. Estimation of the elastic thickness (Te) at the continental cratons and the average crustal compensation (Tc). | 4,853.8 | 2012-06-05T00:00:00.000 | [
"Geology",
"Physics"
] |
An adaptive approach for simultaneous classification of remote sensing scenes including rural and urban targets
ABSTRACT In this paper, an automatic adaptive image classification framework designed to operate in multiresolution scenes including rural and urban targets is proposed and tested. Traditional image analysis commonly aims to classify images using a single strategy and source of data over the entire scene. Ideally, urban targets should be handled by specialized classification systems using high spatial resolution images, such as object-based image analysis and non-parametric classifiers. Conversely, rural targets should be handled with high spectral resolution data, pixel-based classification approaches, and parametric techniques. The formulation proposed in this study starts by performing a prior separation of rural and urban areas based on the Central Limit Theorem (CLT). Then, both kinds of targets are labelled in an automatic adaptive fashion, each one with the data and method previously selected as most appropriate. One experiment was performed using a dataset composed of a high spatial resolution true-colour image and a multispectral image, as well as preselected classification techniques particularly adjusted for each case. Visual and quantitative assessment by two accuracy metrics, comparing the proposed approach with traditional classification, confirms the soundness of the proposed framework.
Introduction
The increasing development of sophisticated remote sensing instruments has brought considerable improvements in the quality of images acquired from space (Gholoobi & Kumar, 2015). Public administration has been facing a growing dependence on rapid and reliable monitoring of an increasingly dynamic and complex scenario. High spatial resolution imaging sensors onboard satellites are one example of instruments that have allowed detailed land use and land cover mapping with high efficiency and relatively low cost (Fisher, Eileen, James Dennedy-Frank, Kroeger, & Boucher, 2017), mainly over urban areas and other complex environments. Missions that brought advances in this direction are, in chronological order, IKONOS, QuickBird, RapidEye, Geoeye, and WorldView (Chuvieco, 2016), as well as aircraft and the recent unmanned aerial vehicle (UAV) images. Complex ground targets are common in this type of high spatial resolution image but can be adequately classified by modern computational techniques such as Support Vector Machines (SVM), Random Forests (RF), and, more recently, Convolutional Neural Networks (CNN) (Jensen, 2009). Indeed, these are the most common image classification techniques recently found in the literature.
Automatic image classification can be performed at the pixel level (pixel-based), where each pixel is analyzed individually for labelling, or at the object level (object-based), where a set of pixels is previously merged and receives a single label (Moosavi, Talebi, & Shirmohammadi, 2014). Pixel-based classification is restricted to using only the spectral information of pixels as the unique attribute, without considering any other aspect in the process (Weih & Riggan, 2010). For many applications, this approach is able to retrieve a thematic map showing the elements of interest throughout the image with reasonable precision. However, in more complex applications, e.g. those involving small structures or well-elaborated shapes, like urban areas, or data including detailed targets like some agricultural areas or lithological mapping, object-based classification is more suitable (Zhou, Troy, & Grove, 2008). The reason is that the object-based approach is performed in two basic steps: image segmentation, which aims to group similar pixels into objects, and classification, which aims to label the resulting objects (Whiteside, Boggs, & Maier, 2011). Working with objects allows the analyst to explore not only radiometric information, as in the pixel-based approach, but also attributes like texture, shape, size, and context, improving the classification process (Duro, Franklin, & Dublé, 2012). Indeed, the object-based approach is able to take advantage of surrounding and circumstantial characteristics like roughness, neighborhood, size, and morphology of the resulting objects.
Due to the above-mentioned reasons, the latest consensus of the specialized literature indicates that urban targets (buildings, roads, trees, small waterbodies) present in high spatial resolution images should be classified by objects (Bhaskaran, Paramananda, & Ramnarayan, 2010; Ma et al., 2017), since detailed information about shape, texture, and context provides very important attributes to differentiate among targets beyond the standard spectral attributes (Mather & Tso, 2016). An object-based approach, along with hierarchical and non-parametric classifiers (which make no assumption about the probability distribution of classes), is expected to be more effective in urban scenes. Conversely, rural targets (fields, forests, medium and large waterbodies, rocks, and varied soils) ought to be better classified at the pixel level, with images including as many spectral bands as possible (Aguirre-Gutiérrez, Seijmonsbergen, & Duivenvoorden, 2012; Ferreira, Zortea, Zanotta, Shimabukuro, & Souza Filho, 2016). Rural or natural targets occupy large areas and show subtle radiometric variations across distances, which allows their correct description even by very low spatial resolution images. For this reason, sensors designed to monitor these areas usually have many spectral bands (Fisher et al., 2017), enabling a detailed description of the chemical composition of targets, which is crucial for lithological or vegetation mapping (Herold, Roberts, Gardner, & Dennison, 2004; Lillesand, Kiefer, & Chipman, 2014). Furthermore, for rural targets, parametric classifiers (which assume a well-known probability distribution of classes) might estimate the classes with greater efficiency, since they are designed to work with only radiometric information, which is very abundant in multispectral images of medium to low spatial resolution (Whiteside et al., 2011).
There are several strategies for improving and refining the classification of high spatial resolution and hyperspectral images (Zhao, Du, & Emery, 2017; Zhong, Ma, Ong, Zhu, & Zhang, 2017). Despite the recent improvements, the classification of scenes simultaneously including urban and rural targets remains a challenge for classifiers traditionally used for remote sensing image recognition. As can be noted, this condition is very common and brings many challenges when dealing with the mapping of heterogeneous areas. Analysts usually rely on the time-consuming and labor-intensive prior separation of the different targets by visual interpretation, followed by independent classification of each area. An alternative is the selection of a single method offering the best trade-off using the high spatial resolution image available. However, the use of only one type of image data and classification strategy in these mixed scenarios hinders the optimization of the accuracy of the results.
The present study suggests an automatic adaptive framework to address the classification of scenes simultaneously including rural and urban classes. It assumes the use of two different images covering the same area: one true-colour (RGB) low-spectral/high-spatial resolution image and another high-spectral/low-spatial resolution image. The proposed technique, which will be thoroughly described in what follows, is based on the automatic prior separation of urban and rural targets through the well-known Central Limit Theorem (CLT). The above-mentioned prior identification relies on the fact that most rural or natural elements (i.e., fields, forests, soils, rocks) present classes with a normal (Gaussian) probability density function, whereas urban classes (i.e., buildings, roads, and other city structures) generally do not present a well-defined parametric probability density function (Billingsley, 1995). Once the two primary targets are parameterized, a maximum likelihood classifier can be used to identify urban and rural pixels in the low spatial resolution image available. Then, in an adaptive fashion, our strategy proceeds by assigning pre-identified urban areas to classification using high spatial resolution image data associated with object-based analysis, whereas rural areas are assigned to low spatial resolution images associated with pixel-level analysis.
The combination of two simultaneous approaches and images of different spatial and spectral resolutions aims at producing a robust tool for optimizing the classification of scenes covering complex and heterogeneous environments, for mainly two important reasons: (1) the consideration of multi-source data, attributes, and classification strategies for the separate areas, and (2) the limitation of the number of classes in each classification problem, reducing overlap among classes.
Materials and methods
First stage: prior identification of urban and rural areas
The hybrid classification proposed here assumes the existence of two images covering the same region: a low spatial resolution multispectral image, which will be used to classify the rural part of the scene, and a high spatial resolution image (not necessarily multispectral) to classify the urban portion of the scene. For rural areas, it is very convenient to have a multispectral image, with as many channels as needed for a correct characterization of the natural targets involved (Fisher et al., 2017). This assumption relies on the fact that the chemical composition is very important to characterize the subtle differences among spectral signatures. At the same time, for the urban area, the mandatory image parameter is the pixel size, not the number of channels (Myint, Gober, Brazel, Grossman-Clarke, & Weng, 2011). For example, for buildings and roads, the shape and texture of objects are crucial information for recognizing them.
We assume the images are finely co-registered (geometrically aligned) and radiometrically corrected. The first stage is the core of the proposed technique. It uses the low spatial resolution image to separate the hereafter-called primary classes: urban ω_U and rural ω_Rj. It is important to stress that the aim at this stage is not to find the definitive classes of targets, but only to identify the nature of the targets on the scene, whether urban or rural. We understand as rural those classes representing natural elements like waterbodies, fields, rocks, bare soils, forests, crops, etc. Therefore, even in the early stages, the rural class ω_Rj can assume more than one subclass j.
We employ the Central Limit Theorem (CLT) to perform the task of separating ω_U and ω_Rj by considering these two primary classes as two different populations. The CLT states that, given a set of sufficiently large samples collected from a population with finite variance, the mean of all samples from the same population will be approximately equal to the mean of the population. Furthermore, the sample means will approximate a Normal/Gaussian distribution, with variance approximately equal to the variance of the whole population divided by each sample's size n. Following this theorem, images with sufficiently large pixel sizes (low spatial resolution data), in which each pixel includes contributions from many targets together, are expected to have classes presenting Normal/Gaussian probability density distributions, which allows us to determine, in a prior fashion, the main nature of the pixels: whether urban or rural.
To better understand the proposed method and the adequacy of the CLT to the related problem, let x₁, x₂, . . ., x_n be a randomly selected sample of size n, i.e., a sequence of independent and identically distributed random variables drawn from a distribution with expected value µ and finite variance σ². Consider that we are interested in the sample average X̄_n of these random variables.
The theorem implies that the sample averages converge to the expected value µ as n → ∞ and that, for large enough n, the distribution of X̄_n approaches the Normal distribution with mean µ and variance σ²/n; as n approaches infinity, the random variables √n(X̄_n − µ) converge in distribution to a Normal distribution with zero mean and variance σ². Thus, the spectral generalization caused by pixels with large sizes is important and contributes to the proper operation of the proposed framework. To adapt the CLT to the real problem approached here, we transfer the concept of the one-dimensional sample average X̄_n to the multispectral response r_i of each pixel of the low spatial resolution image. We assume each pixel's response r_i to be a linear combination of the targets included inside it (Blaschke, Lang, & Hay, 2008), where {p₁, p₂, . . ., p_m} are the proportions occupied by the m targets. Then, the spectral response r_k for each channel k can be depicted as:

r_k = p₁ r_k1 + p₂ r_k2 + . . . + p_m r_km

where r_k is the spectral response of the pixel in the kth spectral band and r_km is the pure spectral response of each target present in the pixel in the kth spectral band. The above demonstration shows the multi-source nature of large pixels in images including varied targets. Exploratory experiments have indeed supported our initial assumption regarding the expected Normal distributions of the primary classes included in low spatial resolution images. Since the aim of the proposed method is not to estimate the proportions of each pixel's components (unmixing), the number and nature of targets in each pixel along the image can vary without causing any loss of validity.
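A small simulation (purely illustrative; the pure responses, proportions and sample sizes below are hypothetical) shows why the response of a coarse pixel that averages many sub-targets tends toward a Normal distribution around the population mean:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pure spectral responses (one band) of three urban sub-targets
pure = np.array([40.0, 120.0, 200.0])        # e.g. asphalt, roof, small vegetation patch

def coarse_pixel_response(n_subpixels):
    """Response of one large pixel as the proportion-weighted mix of many sub-targets;
    the proportions are drawn at random for illustration only."""
    counts = rng.multinomial(n_subpixels, [1 / 3, 1 / 3, 1 / 3])
    return counts @ pure / n_subpixels

# Averaging over many sub-targets drives the per-pixel responses toward a Normal
# distribution around the population mean (Central Limit Theorem behaviour).
samples = np.array([coarse_pixel_response(400) for _ in range(5000)])
print(samples.mean(), samples.std())          # mean close to 120, small spread
```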
The proposed method proceeds by collecting samples of pixels corresponding to the primary classes directly over the low spatial resolution image (i.e., urban ω_U and as many rural subclasses ω_Rj as exist). The statistical information of the Normal distribution assumed for these classes (e.g., the mean vectors μ_c and covariance matrices Σ_c) can then be derived and used to feed parametric classification rules. As stated before, data presenting this behaviour show a high level of differentiation by probabilistic classifiers, such as maximum likelihood, which can be expressed by the following membership function (Eq. 4), derived from the Bayes theorem:

Φ_c(x_i) = (2π)^(-d/2) |Σ_c|^(-1/2) exp[ -(1/2)(x_i - μ_c)ᵀ Σ_c⁻¹ (x_i - μ_c) ] (4)

where Φ_c(x_i) is the probability density function of a pixel x_i belonging to class ω_c, which can initially assume urban (ω_U) or rural (ω_Rj), d is the dimensionality of the data, x_i is the spectral response of pixel i, μ_c is the mean vector and Σ_c the covariance matrix of class ω_c. These samples are then used to train the supervised classifier, which is later used to determine the primary classes over the entire scene studied. The expected result is a mask separating the rural and urban zones, able to direct what kind of classifier is applied in each area. The suggested mask can be built from the following rule:

M(i) = ω_c such that Φ_c(x_i) = max{Φ(x_i)}

where M(i) corresponds to the position of the pixel x_i in the mask M and Φ(x_i) is the membership vector containing the Φ_c(x_i) values for each class. The urban (ω_U) and rural (ω_Rj) classes then proceed to the second stage of the methodology. At this point, the user can consider performing a morphologic dilation of a few pixels along the urban area perimeter to guarantee the enclosure of urban targets. The rationale behind this procedure is that it is preferable to include rural classes inside the urban area by mistake than to let urban elements outside it be wrongly classified as rural.
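As a hedged sketch of this first stage (not the authors' code; the band values, class names and synthetic data below are invented for illustration), the Gaussian membership function of Eq. 4 and the arg-max mask rule can be written with NumPy as follows:

```python
import numpy as np

def train_gaussian(samples):
    """Mean vector and covariance matrix of a primary class from its training pixels."""
    return samples.mean(axis=0), np.cov(samples, rowvar=False)

def log_likelihood(x, mu, cov):
    """Log of the multivariate Normal density used by the maximum likelihood rule (Eq. 4)."""
    d = len(mu)
    diff = x - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    maha = np.einsum('...i,ij,...j->...', diff, inv, diff)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + maha)

def primary_mask(pixels, class_stats):
    """Assign each pixel to the most likely primary class (urban or a rural subclass);
    returns an index map playing the role of M(i)."""
    scores = np.stack([log_likelihood(pixels, mu, cov) for mu, cov in class_stats], axis=-1)
    return scores.argmax(axis=-1)

# usage with synthetic 2-band samples (stand-ins for the real training polygons)
rng = np.random.default_rng(0)
urban = rng.normal([0.30, 0.25], 0.05, size=(200, 2))
forest = rng.normal([0.05, 0.40], 0.03, size=(200, 2))
stats = [train_gaussian(urban), train_gaussian(forest)]
image = rng.normal([0.18, 0.32], 0.10, size=(64, 64, 2))   # fake low-resolution scene
M = primary_mask(image, stats)                             # 0 = urban, 1 = forest
print(np.bincount(M.ravel()))
```

The optional morphologic dilation of the urban mask mentioned above could be applied afterwards, for instance with scipy.ndimage.binary_dilation, before passing the urban region to the second stage.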
Second stage: adaptive classification system
As the literature suggests, due to the high-frequency spectral behavior verified in urban areas, these sites have shown better classification results when classified by specialized techniques (Zanotta, Haertel, Shimabukuro, & Renno, 2014), which are able to take into account many parameters and specificities of the targets (Lu, Hetrick, & Moran, 2010). At the same time, to exploit the ability to handle many kinds of information simultaneously and to avoid multiple labels on single objects formed by groups of pixels, the most recent studies have suggested using object-based approaches for urban classification (Blaschke et al., 2008). Conversely, rural environments are more appropriately classified using detailed multispectral information, instead of data about the shape, texture, or spatial context of targets. Therefore, the larger the number of available spectral channels, the better the recognition of the target. Many kinds of land cover classes, like vegetation, rocks, and soils, present very similar characteristics, which are often differentiated only by detailed inspection of spectral signatures (Dinis et al., 2010).
The portion of the high spatial resolution image recognized in M as urban is then segmented and directed to the complementary classification step. The segmentation process aims at the aggregation of similar neighboring pixels to produce objects with improved attributes (Jensen & Lulla, 1987). For the sake of simplicity, we chose the widely used region-growing segmentation technique available in many image processing packages. Region growing starts by merging individual pixels using spectral similarity criteria; the resulting objects can then be more adequately classified by using not only their spectral attributes, but also texture, shape, and spatial context features (Blaschke, Kelly, & Merschdorf, 2015). The resulting objects are then classified using one of the techniques suitable for urban environments. The most popular approaches for this type of application are those which can handle many classes at the same time, while avoiding overfitting and making optimized use of the available attributes. A modern example is the hierarchical Random Forest (RF) (Jiang, Wang, Yang, Xie, & Cheng, 2010), since it can manage different attributes according to the importance of each one to the specific problem addressed. Traditional decision tree classifiers tend to learn highly irregular patterns, which frequently causes overfitting of the training samples. RF is an alternative that avoids overfitting by averaging multiple deep decision trees trained on different parts of the same training dataset (Hastie, Tibshirani, & Friedman, 2009).
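For illustration only, an object-based RF classification of the segmented urban objects could be set up along these lines with scikit-learn; the attribute table, class codes and parameters below are hypothetical and would, in practice, come from the segmentation step:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Hypothetical object attribute table: [mean R, mean G, mean B, area, compactness]
X_train = rng.random((300, 5))
y_train = rng.integers(0, 4, size=300)      # e.g. 0=roof, 1=road, 2=tree, 3=bare soil

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)

X_objects = rng.random((50, 5))             # attributes of the segmented urban objects
urban_labels = rf.predict(X_objects)
print(urban_labels[:10])
```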
The area recognized in M as rural can keep the original classes received at the primary stage, or can be classified again using the low spatial resolution image through a pixel-based approach and a generalist classification technique, such as Linear or Quadratic Discriminant Analysis (LDA, QDA) or Maximum Likelihood. The generalist classification technique operating on the low spatial resolution image is defined as G, whereas the classification technique chosen to operate on the segmented high spatial resolution image is defined as H. The adaptive classification of the entire scene proceeds according to the following rule:

class(x_i) = H(x_i) if M(i) ∈ ω_U; class(x_i) = G(x_i) if M(i) ∈ ω_Rj

where one pixel x_i is expected to be classified using the high spatial resolution image, as well as technique H, only if it is recognized as urban area in the first stage (M(i) ∈ ω_U). Conversely, if the pixel x_i is recognized as rural in the first stage (M(i) ∈ ω_Rj), then this element is expected to be classified using the low spatial resolution data, operated on by technique G. A flowchart of the proposed technique is presented in Figure 1.
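A trivial sketch of this final merging rule (the label codes below are arbitrary placeholders, assuming both label maps have already been brought to a common grid):

```python
import numpy as np

def adaptive_classification(mask_urban, labels_rural, labels_urban):
    """Combine the two classifications following the adaptive rule: pixels flagged
    as urban in the first stage take the object-based result H, the remaining
    pixels keep the pixel-based result G."""
    return np.where(mask_urban, labels_urban, labels_rural)

# usage with toy label maps
mask_urban = np.zeros((4, 4), dtype=bool)
mask_urban[:2, :2] = True                       # small urban patch
labels_rural = np.full((4, 4), 10)              # e.g. 10 = forest from classifier G
labels_urban = np.full((4, 4), 21)              # e.g. 21 = roof from classifier H
print(adaptive_classification(mask_urban, labels_rural, labels_urban))
```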
The resulting classification map is expected to present an improvement over traditional methods that apply a single rule throughout the scene, ignoring the fact that it contains targets of distinct natures. As said before, the core of the proposed technique is to automatically exploit the advantages of each source of data and the potential of proper classification tools for every specific environment. Furthermore, the reduction in the number of classes available for each classification system is expected to avoid overlaps and confusion among classes, improving overall classification results.
Data description
In order to test and exemplify the performance of the methodology suggested in this study, we performed one experiment with an area located in Cape Town, Western Cape Province, South Africa. The images are geometrically and radiometrically/atmospherically corrected. The low spatial resolution data is a Landsat 8-OLI image, acquired on 3 September 2013 (Figure 2(a)). The image has 30 m spatial resolution for the spectral bands used in the experiment (1-7). The high spatial resolution image came from GeoEye-1, acquired on 31 July 2013, with 1.65 m spatial resolution (Figure 2(b)). Two images with different resolutions covering the same area were thus used: one low spatial resolution multispectral image covering the whole study area, and one high spatial resolution image covering at least the urban spots.
Validation dataset
Reliable ground truth was prepared by expert visual interpretation to allow accuracy assessment, which was made in two steps: first, based only on the low spatial resolution image, to test the prior identification of primary targets by the CLT (rural targets and generic urban), and second, based on the low and high spatial resolution images simultaneously, to assess the final classification result (rural and urban targets in fine detail). It is important to stress that the second ground truth was built by drawing only urban sub-classes (roofs, roads, trees, soils, etc.) directly over the high-resolution image, while avoiding areas considered as rural according to the first ground truth data. Then, this second ground truth, related only to the detailed urban area, was merged with the first (only rural areas) in order to produce the absolute ground truth, which was finally used to assess the performance of the entire classification. Ambiguous areas (black areas) were not labelled and were consequently disregarded during the accuracy assessment.
Experiment with Landsat 8-OLI combined with GeoEye-1
The selected area includes rocks surrounded by vegetation and some portions of urban sprawl. Primary samples of forest, field, rocks, and urban areas were collected directly on the image. Based on the CLT, the image received the primary classification by maximum likelihood, considering a Normal statistical distribution of classes, to separate the primary targets selected on the scene using the Landsat 8-OLI data. The initial supposition of Normal distribution of the primary classes was confirmed by analyzing the histograms of Figure 3, which also overplots the estimated probability density functions (dotted lines). For this experiment, bands 4 and 7 were sufficient to separate all four primary classes.
[Figure 2 caption fragment: the same subset imaged by GeoEye-1 in a 3-2-1 true colour composition; (c) ground truth made with information from both images simultaneously (second-stage ground truth). The colour palette refers to the ground truth image. Black areas refer to very ambiguous or heavily mixed areas and were not considered in the accuracy assessment.]
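A compact sketch of a maximum likelihood classifier under the per-class Normal assumption used for the primary targets; the training samples, band choice, and class names below are placeholders, not the paper's actual data.

```python
# Sketch: Gaussian maximum likelihood classification of two spectral bands.
import numpy as np
from scipy.stats import multivariate_normal

def train_ml(samples):
    """samples: dict class_name -> (n_pixels, n_bands) array of training pixel values."""
    return {c: (x.mean(axis=0), np.cov(x, rowvar=False)) for c, x in samples.items()}

def classify_ml(pixels, params):
    """pixels: (n, n_bands) array, e.g. values of Landsat 8-OLI bands 4 and 7."""
    classes = list(params)
    log_lik = np.column_stack([multivariate_normal.logpdf(pixels, mean=m, cov=c)
                               for m, c in (params[k] for k in classes)])
    return np.array(classes)[np.argmax(log_lik, axis=1)]

rng = np.random.default_rng(0)
samples = {"forest": rng.normal([0.05, 0.10], 0.01, (50, 2)),
           "urban":  rng.normal([0.20, 0.25], 0.02, (50, 2))}
print(classify_ml(rng.normal([0.05, 0.10], 0.01, (5, 2)), train_ml(samples)))
```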
The resulting mask M(i) separating the primary classes is shown in Figure 4(a). As can be seen, considering the trade-off mentioned at the end of section 2, we have performed a morphologic dilation of a few pixels along the perimeter of the urban area to expand it (rounded features at the edges of the urban area).
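The dilation step can be sketched as follows; this is a generic binary dilation, and the number of iterations is illustrative, since the text only states that a few pixels were added along the perimeter.

```python
# Sketch: expanding the first-stage urban mask by a few pixels.
from scipy import ndimage

def expand_urban_mask(urban_mask, n_pixels=3):
    """urban_mask: boolean array from the first-stage classification."""
    return ndimage.binary_dilation(urban_mask, iterations=n_pixels)
```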
Proceeding to the second stage of the method (adaptive classification), the high-resolution GeoEye-1 image was segmented by a basic region growing technique only over the identified urban regions (grey colour in Figure 4(a)), and then classified using RF, the technique selected in this experiment to operate on the high-resolution image. The areas identified as rural (forest, grass, and rocks) were kept with the class resulting from the first stage.
Then, representative samples of roofs (clay, concrete, and fibrocement), forests and paved roads were collected over the high-resolution GeoEye-1 image and used for training purposes. The RF classifier was trained using the C4.5 algorithm (Jensen & Lulla, 1987) with the available spectral image features. Finally, the classification maps were merged to produce one final image for the RF classifier (Figure 4(c)).
Analysis
The classification map resulting from the first stage (Figure 4(a)) was validated using the reference data built by vectorization from the Landsat 8-OLI image, shown in Figure 4(b). It is important to stress that this initial map aimed to test only the ability to separate rural (forest, grass, and rocks) from urban areas in terms of overall and average accuracies. This result retrieved an overall accuracy of 82.9% and an average accuracy of 84.0%. Qualitatively, it is also possible to notice the spatial correspondence between Figure 4(a,b). Most importantly, the pre-identification of the urban area resulted in a high classification confidence, which was greatly aided by the post-classification morphological dilation process. It is worth noting that, to avoid further classification errors from the first to the second stage, it is preferable to obtain an excess rather than a deficit of urban area from the first stage.
The final classification map was then obtained by refining the classification of the urban area through the high-resolution image (GeoEye-1). The end result was validated using the full ground truth presented in Figure 2(c). The performance of the proposed methodology was compared with the traditional classification, i.e., using only a single image and one classification technique. To allow comparison, we selected the same RF classification technique used to test the proposed technique, as well as an identical set of class samples. The results obtained in terms of overall and average accuracies are presented in Table 1. We see that, for overall accuracy, the scores achieved by the proposed method were significantly higher than those found by the traditional approach. Tables 2 and 3 present the confusion matrices computed for both tested techniques. Figure 5 shows details in the urban area of the classified images compared to the same parts in the GeoEye image.
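For reference, the two accuracy figures quoted here and in Table 1 can be computed from a confusion matrix as sketched below; the assumption that reference classes are arranged in rows is ours.

```python
# Sketch: overall accuracy and average (per-class mean) accuracy from a confusion matrix.
import numpy as np

def overall_and_average_accuracy(cm):
    cm = np.asarray(cm, dtype=float)
    overall = np.trace(cm) / cm.sum()                 # correct pixels over all pixels
    per_class = np.diag(cm) / cm.sum(axis=1)          # producer's accuracy per class
    return overall, per_class.mean()
```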
We can also visually verify that the changes provided by the proposed approach produced localized increases in the classification performance over the entire image and for all classes. However, due to the large difference between the number of pixels for each class, the most important measure in this scenario is the average accuracy, which was also higher for the proposed method. We can also see through the maps of Figure 4 that much of this result was due to the generalization caused by the classification at the first stage, which provided a fine separation between forest and grass that was not as efficient when using image data with a limited spectral range (traditional approach).
Discussion
We can certainly expect that, using only the low spatial resolution image to classify the entire area, the results over the urban area would be very poor and inaccurate due to the presence of small parcels and many kinds of materials in the cities. Conversely, using only the high spatial resolution image over the entire region would certainly prevent the rural area from being optimally classified. This is because rural/natural areas need as many spectral bands as possible to obtain the best representation of the spectral signature of the targets, which is the key to achieving the best classification result. The comparison of results produced by the proposed method with results generated using only the object-based classification by RF has shown that the proposed approach presented encouraging qualitative and quantitative results, especially in regions where pixel-based classifiers tend to fail: urban areas or areas with high radiometric variability, where the object-based approach is more suitable. Conversely, rural areas including fields, soils, minerals, rocks, and trees could be correctly classified due to the sufficient number of spectral bands available in the low spatial resolution image.
Using the predictions of the CLT, the elements corresponding to primary targets could be effectively separated using the probabilistic Gaussian maximum likelihood classifier. The union of the partially produced maps for each environment achieved an optimized result, matching the advantages of both methods in a single scene containing heterogeneous targets. As can be seen in Figure 5, the urban area presents a classification similar to that provided by the traditional approach. However, the absence of rural classes in the problem allowed a classification with some improvements, not showing rural targets in these areas. On the other hand, the rural area, previously classified using the multispectral image and a parametric technique (the maximum likelihood classifier), was more stable, showing fewer noise-like pixels when compared to the traditional object-based approach.
Confusion-matrix counts per class (fragment of Tables 2 and 3; last column: row totals; first row label and final row total missing):
(unlabelled) | 4037 | 1058 | 381 | 441 | 8 | 22 | 0 | 5947
Forest | 38,938 | 59,350 | 2346 | 1548 | 60 | 282 | 33 | 102,557
Soil | 20,180 | 2295 | 27,829 | 480 | 134 | 231 | 74 | 51,223
Asphalt | 786 | 362 | 11 | 2096 | 27 | 206 | 79 | 3567
Clay roof | 0 | 76 | 0 | 21 | 480 | 2 | 0 | 579
Concrete roof | 236 | 1363 | 37 | 682 | 180 | 2439 | 85 | 5022
Fibrocement roof | 44 | 85 | 0 | 7 | 2 | 432 | 183 | -
Maybe one of the major drawbacks of the proposed method is the first stage, when the prior separation between rural and urban targets is performed. This crucial point can greatly affect the final results, since some urban areas can be wrongly confused with rural ones. This issue has encouraged us to investigate alternative procedures to find the urban region with even more precision. Nighttime images as auxiliary data are one of the possibilities: nighttime data register artificial light coming from the surface of the Earth, potentially indicating areas covered by urban spots. Other possibilities are fixed maps and Synthetic Aperture Radar (SAR) images.
The proposed method is a practical way of mapping heterogeneous areas while achieving sound classification results, overcoming sensor limitations regarding spatial and spectral resolution. The main advantages achieved by the proposed method include: (1) The optimization of the classification process by automatically selecting the most appropriate base image and classification technique for different areas in the same problem (high spatial resolution for urban and high spectral resolution for rural).
(2) The independence of the available classes for different primary targets when the classification process is performed separately. As the rural areas do not present the diversity of classes found in urban sites, the classification tends to produce more consistent results, since each problem is posed with a separate set of samples. | 6,542.2 | 2019-12-26T00:00:00.000 | [
"Mathematics",
"Environmental Science"
] |
Asynchronous group learning in a learn-from-the-learner approach
Learning with the learner in an asynchronous group learning approach is a promising method of education that provides a rich, interactive, and socially mediated education. As online learning becomes more prevalent and more users adopt this approach, innovative and theory-based educational activities become necessary. In this article, we introduce and describe a novel form of asynchronous, interactive, and socializing educational activity using educational technology. The educational session is based on a small group learning activity that is made available for all learners anywhere and anytime. The approach avoids the trap of using educational technology for mere simulation of in-person learning. Based on learning theories, learning with the learner enhances interactive, self-directed, experiential, and social learning. Future development and enhancement with ongoing discussions through online chat platforms open the door for the continuous evolution of the concept.
Introduction
Medical education faces continuous challenges in the process of facilitating learning and training. Learning is a complex, multifactorial, and evolving process. There are still challenges that require improvement and innovation to identify better opportunities. Therefore, efforts to accommodate learners' needs continue, seeking more efficient learning approaches and tools to meet the growing demands. One of these approaches is facilitating interactive learning. Interactive and group learning approaches have been widely used in medical education to enhance learning. Sharing facilitates learning in various ways. Sharing ideas, perceptions, understandings, ways of thinking, and other higher mental skills, and making this sharing available for all learners, is a valuable venue for better education. Group studying enhances interactive learning and sharing. However, enhancing interaction and transforming educational activities into interactive sessions is often challenging for educators.
The traditional group study takes place in a face-to-face fashion. Nonetheless, with advanced technology and communication, distance asynchronous group study has become more feasible and practical, especially during the COVID-19 pandemic. Educational technology instruction can be integrated with group learning activities and experiential learning to enhance learning significantly [1]. The aim of this article is to introduce and describe a new approach to interactive, asynchronous group learning using simple educational technology. Learners can share their experiences to enrich the educational process and exchange learning skills asynchronously and practically.
Technology provides the tools and concept
Educational technology provides more than just storage of digitalized learning materials or information in books and journals. Technology provides the communication, mobility, and instructional designs to facilitate learning. Educational technology in medical education has evolved from basic use of resources to advanced web-based multimedia instruction, to provide self-directed learning opportunities for learners [2]. Enhancing self-directed learning through distance education or e-learning opens a broad concept of learning with minimal synchronous instructor involvement.
E-learning provides a practical and accessible technology learning tool to overcome various challenges and limitations in learning activities among various health care courses [3]. Furthermore, the blended learning approach, combining e-learning with face-to-face interactions to improve interprofessional competencies, has been increasingly used [2]. Blended learning refers to combining computer-mediated learning with in-person interactions [2]. Blending virtual and in-person simulations can optimize learning efficiency [4]. Various educational activities such as didactic teaching, discussions, and self-directed learning can be achieved with e-learning, while other activities such as simulation of psychomotor skills may require in-person learning. Hence, blended learning provides a promising alternative approach for medical education because it combines the advantages of both traditional learning and e-learning [5].
Sharing learning
Sharing learning experiences in the asynchronous type of group learning where learners can learn from other learners' experiences can be structured to provide important learning objects. Educational technology facilitates video recording of a small group learning activity focusing on learners' interactions, various inputs, and conclusions. These interactions that involve learning skills, thinking talents, analysis abilities, and conclusions competencies make the core elements of the recorded activity.
The educational activity is structured by educators for small group independent learning. Two or three students use the educational activity to learn in an interactive group setting involving enhanced and diverse interactions. This learning activity, including all interactions, is recorded and made available to other learners in the form of a vodcast, podcast, or other digital media version. Other learners can use this recorded experience as an adjunct or as a frame to learn the same topic. Using this recorded activity to study the same topic will enrich the learning experience with all the interactions of the original participants. It simulates studying with a group in a distance or blended learning setting that is available anytime and anywhere. The next step in developing this approach is building ongoing discussion and comments from all viewing learners. A chat platform that contains the originally recorded activity can be made available for all the comments and textual input of other learners or educators. This ongoing input and asynchronous discussion will augment the sharing process, provide review of and feedback on the ideas, and stimulate the consideration of alternatives.
Outcomes
The advantages of this learning approach include: 1) creating a unique educational activity that records the valuable learners' interactions and contributions and making it available to all learners at anytime and anywhere; 2) sharing the diverse and valuable learning skills, analysis, and critical thinking; 3) facilitating self-learning in a simulated socially-rich environment, and 4) transforming unidirectional teaching to shared learning and semi-interactive education.
Education has become increasingly influenced by social interaction [6]. Younger-generation learners commonly use social interactions in a 'digital natives' style [6]. This learning approach exemplifies a simulation of the social learning environment. It allows learners to compare themselves to others, facilitates understanding, widens the scope of critical thinking, and enhances attentiveness in a social environment. It also provides valuable company for learners to stimulate and enhance learning. Group learning enhances active and self-directed learning, reflection upon learning activities, self-regulatory skills, testing and comparing one's own thinking and hypotheses, deep learning and higher-order activities, an adult style of learning, and acceptance of responsibility for one's own progress [7]. It improves transferable skills such as organizational and time management skills, leadership, prioritization, and problem-solving [8].
Conclusions
Learning from learners can provide rich, stimulating, facilitating, social, interactive, and entertaining educational activities with easy structure and low cost. It provides a better alternative to the classical video lectures or other unidirectional delivery of information activities. It is based on how students like to learn conveniently. Evaluating and validating the use of this adjunctive method of learning is important. Various settings can be structured to assess the method and to develop it further depending on the feedback.
Provenance and peer review
Not commissioned, Editor reviewed.
Ethical approval
This is a perspective article in medical education. No ethical approval was needed.
Sources of funding
There is no source of funding for this study.
Author contribution
Faiz Tuma conceived the main ideas of the article and wrote the first draft of the manuscript. Jafar Aljazeeri critically revised the manuscript and provided additional points. The final version was approved by both authors. | 1,610 | 2021-07-01T00:00:00.000 | [
"Education",
"Computer Science"
] |
Ensuring rational natural use through effective taxation
This article substantiates the peculiarities of environmental tax administration in Ukraine and defines its importance in ensuring an ecologically oriented economy. It has been shown that from 2016 to 2020 there was an increase in emissions of pollutants into the atmosphere of more than 10.5%. A thorough analysis of the payment of the environmental tax to the relevant budget was carried out, and a close relationship between the volume of emissions of polluting substances and the rates of this tax was determined. It has been shown that in Ukraine the environmental tax does not ensure the performance of one of its main functions, namely the preservation of the natural environment. This situation is due to the relatively low rates of this tax, as well as the lack of an effective mechanism for controlling emissions of pollutants. One of the main directions for solving this problem is to use the experience of European countries in diversifying the tax rates under study depending on the type and volume of pollutant emissions.
Introduction
In recent years, the issue of environmental pollution and its negative impact on people's health has become more and more relevant in the world. This caused a constant search for levers able to reduce the load on the ecosystem. The world is actively developing effective mechanisms for stimulating the reduction of pollutant emissions and increasing the level of environmental protection. It is possible to prevent the negative consequences caused by the deterioration of the environmental situation through coordinated systemic actions of the state, society and business entities. During the last decades, programs that stimulate an ecologically oriented economy began to be actively developed and implemented in Europe. The growth of environmental risks and the significant economic consequences of natural disasters in the world have forced a significant number of countries to develop an effective mechanism for the payment of environmental taxes in order to reduce the anthropogenic burden on the planet's ecosystem. The increase in greenhouse gas emissions leads to a contraction and thinning of the stratosphere. At the same time, the troposphere, that is, the lower layers of the atmosphere, is warming rapidly. As a result, the stratospheric thickness has decreased by 400 m in the last 40 years [1].
Decarbonization, greening, and eco-modernization of the economy is an urgent problem today, which requires the development of new ways of solving it. One of the options is a refusal to produce energy from coal, but on the condition that other energy sources are developed. Another direction is to increase the rates for greenhouse gas emissions, as well as to change the methodology for calculating the amount of greenhouse gas emissions. In Ukraine, the ecological situation is quite complex, which is due to the lack of regulatory legal acts regarding pollutant emission standards. Russia's war against Ukraine significantly worsened the situation regarding environmental pollution; in order to improve it, it is necessary to ensure the implementation of a significant number of environmental protection measures. One of the main directions of their financing is the environmental tax, but currently its rates are quite low and do not ensure the achievement of the environmental security of the state.
Material and methods.
The basis of this research is the fundamental provisions, approaches and principles of economic theory, theory of taxation, works of leading scientists on the problems of environmental tax administration, legislative and regulatory acts of Ukraine that regulate environmental taxation, data of the State Treasury Service of Ukraine, the State Tax Service of Ukraine.
To study the effectiveness of the environmental tax, we analyze the revenue of the environmental tax in terms of constituent pollutants, the volumes of the main components of the tax base, the environmental protection costs of enterprises and the index of the comfort of life by the regions of Ukraine. The choice of indicators is due to the fact that, theoretically, the Pigouvian tax should stimulate the financing of environmental protection costs in order to achieve a socially effective level of environmental pollution, as noted above. We took the index of comfort of life, which is calculated by the State Statistics Service, as a conventional measure of the socially effective level of pollution. This index characterizes not only the state of the environment, but also the development of social infrastructure [2]. The purpose of the study is to analyze the current state of environmental taxation, to study its regulatory role in ensuring the preservation of the natural environment, and to develop effective measures to increase the effectiveness of environmental taxation.
Presenting main material
Ecology is probably one of the areas to which Ukraine does not pay enough attention, although the consequences of the Chernobyl disaster should have taught the country that environmental issues should become one of its priorities. One consequence of this was a rapid increase in the number of cancer patients in Ukraine, whose number has already exceeded one million inhabitants [3,4]. The system of environmental taxation is an important component of the economic mechanism of nature management, as it operates with a fiscal instrument, which is the environmental tax. In the conditions of the Ukrainian reality, the environmental tax is charged for the emissions and discharges of harmful substances, as well as for the placement of waste. Based on the fundamental principles of tax regulation of reproductive proportions, the ecological tax should perform regulatory, stimulating and fiscal functions. A critical study of modern approaches to the formation of a system of ecological regulation of nature use shows that the basis of the system of ecological taxation is the promotion of rational nature use. The main components of this system are, first of all, the environmental tax, while the rent and some types of excise tax also play an indirect role. The environmental tax performs two main functions: regulatory (related to the preservation of the environment) and fiscal (filling the budget). However, the dynamics of the payment of this tax and the financing of environmental protection measures show the unsatisfactory dynamics of the development of the environmental tax [5, p. 110]. The problems of the development of environmental taxation are studied by many modern scientists, in particular Y. Dziadykevich [6], N. Novytska [7], D. Serebryansky [7], A. Burlya [8], K. Kanonishenoi-Kovalenko [9] and others.
The peculiarity of the tax-based ecological and economic toolkit is that the funds are directed to the financing of environmental problems. This tax should ensure the preservation of the natural environment by establishing a dependency between the payment of the tax and the negative impact on the ecological situation in the country. Based on the main functions of taxation, environmental payments are usually divided into two main groups: compensatory payments and regulatory payments. Payments made for any negative impact on the environment are called regulatory. Such payments are one of the main economic incentives that force nature users, whose activities are associated with a negative impact on the environment, to independently take measures to reduce such impact. Regulatory payments should prevent actions that harm the surrounding natural environment (for example, payments for environmental pollution, waste disposal, etc.). When determining the size of the rates of these payments, indicators of the technical capabilities and economic profitability of business entities whose activities have a negative impact on the environment are taken into account. The rates of regulatory payments are highly differentiated, and therefore reducing the negative impact on the environment is more profitable for business entities than paying these taxes. In addition to the stimulating effect, environmental taxation should have a compensatory nature. Environmental payments form environmental protection funds, as funds are accumulated in these funds to finance environmental protection measures and other purposes related to these activities. That is why in the scientific literature a second group of ecological payments is distinguished, compensatory payments, which are aimed at collecting money and accumulating it in special ecological funds (for example, a fee for the special use of natural resources). Such payments are often called financing payments. When determining the rate of these payments, indicators of the profitability of enterprises and the continuity of financial income are taken into account. Unlike regulatory payments, these payments are aimed at financing environmental protection measures and are not directly related to the amount of negative impact on the natural environment and resources [10,11]. In Ukraine, for a considerable period of time, environmental tax rates were quite low, which made it impossible to fully implement environmental protection measures and stimulate enterprises to reduce emissions of pollutants into the air and the surrounding natural environment. Understated environmental tax rates for emissions, discharges of harmful and dangerous substances, as well as for the placement of waste did not make it possible to form sufficient financial funds for the reproduction of the natural environment. The analysis of environmental tax receipts clearly shows that during the period under study its gradual growth has been observed, from EUR 117.24 million in 2016 to EUR 135.89 million in 2021. However, despite the growth of this indicator over time, a decrease in its specific weight in the revenues of the consolidated budget is observed. Thus, in 2016, its share was 0.6% and by 2021 it had decreased to 0.33% (Fig. 1).
It is worth noting that during the entire period of the study, the revenues from the environmental tax amounted to less than 1% of the revenues of the Consolidated Budget. With such dynamics of environmental tax revenues to the budgets of Ukraine, it is quite problematic to form a reliable source of investment support for the restoration of the natural environment. The sums of expenses of the consolidated budget of Ukraine for nature protection measures must be adjusted in accordance with the tax revenues from the environmental tax of individual regions of Ukraine. In Ukraine, the unreasonableness of environmental tax rates is also observed; in particular, in regions where the level of pollution is high, rates should be increased in order to perform environmental protection functions. The amount of the paid tax is evidence of the damage caused to the environment, and the main task of this tax is to improve and restore the ecological situation; therefore, the costs of environmental protection measures cannot be less than the amount of the environmental tax.
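The shares quoted above can be cross-checked with a few lines of arithmetic; the consolidated-budget totals below are back-calculated from the article's figures and are therefore only estimates.

```python
# Quick check of the environmental tax revenue figures and budget shares quoted in the text.
revenue = {2016: 117.24, 2021: 135.89}   # environmental tax revenue, million EUR
share   = {2016: 0.006,  2021: 0.0033}   # share of consolidated budget revenues

for year in (2016, 2021):
    budget = revenue[year] / share[year]  # implied consolidated budget, million EUR
    print(f"{year}: tax {revenue[year]:.2f} M EUR, "
          f"implied consolidated budget ≈ {budget:,.0f} M EUR, share {share[year]*100:.2f}%")

growth = (revenue[2021] / revenue[2016] - 1) * 100
print(f"Environmental tax growth 2016-2021: {growth:.1f}%")
```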
Having analyzed the structure of revenues from the various components of the environmental tax (Fig. 2), it is worth noting that in 2021 the largest share belonged to the environmental tax on emissions of polluting materials into the atmosphere, accounting for 38% of the total amount of environmental tax paid in the specified period. Almost the same values are observed for the revenues of the consolidated budget of Ukraine from such types of environmental tax as: revenue from waste disposal (21%); the environmental tax on carbon dioxide emissions into the atmosphere (19.3%); and the environmental tax on the temporary storage of radioactive waste (19.1%). The smallest specific weight is occupied by receipts from the discharge of pollutants into water bodies, only 3% in 2021. The economic essence of any tax is manifested through the implementation of two main functions: regulatory and fiscal. Regarding the environmental tax, the essence of the regulatory function is manifested through the mechanism of stimulating the reduction of emissions of polluting substances; the fiscal role of this tax is insignificant, since the environmental tax makes up a rather small share of budget revenues. Therefore, in order to study the effectiveness of environmental taxation in Ukraine, it is advisable to analyze the dynamics of pollutant emissions, that is, to analyze the regulatory function, since it is the main one for this tax. The conducted analyses of the dynamics of pollutant discharges clearly demonstrate the low efficiency of the environmental tax and, as a result, of the system of environmental taxation in Ukraine in general. The slight decrease in emissions of polluting substances is due, first of all, to the reduction of the aggregate volumes of industrial production in Ukraine, and not to the impact of environmental taxation in general. In particular, a slight decrease in emissions from stationary sources of pollution is observed. Meanwhile, there is an increase in emissions of nitrogen oxide by 3.32% and sulfur dioxide by 7.7% from mobile sources, which indicates the insufficient impact of the said tax on the ecological situation in the country as a whole. The worst situation is observed in terms of carbon dioxide emissions, both from stationary and mobile sources: during the research period, the decrease was only 3.31 and 7.44%, respectively. The conducted analysis shows that the current mechanism of the environmental tax does not stimulate business entities to reduce the volume of emissions of pollutants into atmospheric air and water bodies, or to comply with their standards and limits. The essence of environmental taxation is to determine the methods of charging individuals and legal entities fees for negative impact on the environment, as well as a set of taxes and fees, the collection of which is aimed at stimulating the rational use of nature. The main idea of environmental taxation is to establish a relationship between the deductions of business entities to the budgets and the degree of damage caused by them to the environment and natural resources. However, in Ukraine, despite the growth of environmental tax rates, in relation to emissions of certain polluting substances they remain quite low, and this causes the tax to fail to fulfill one of its functions, namely environmental protection.
Dynamics of pollutant emissions into atmospheric air during 2016-2021, thousand tons
In order to stimulate the reduction of environmental pollution and to fully cover the damages caused, it is proposed to fully return to the Ukrainian legislation the system of limits, with the application of a progressive scale of rates for over-limit pollution and the implementation of quarterly indexation of environmental tax rates taking into account the inflation index as of the end of the reporting quarter. It should be noted that in Ukraine, in addition to the environmental tax, the nature protection function is also indirectly performed by some other taxes, in particular the rent and the excise tax [12]. In foreign countries, the system of environmental taxation is more developed.
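An illustrative sketch of the proposed mechanism (not current Ukrainian law): a base rate up to the emission limit, a higher rate for over-limit volumes, and quarterly indexation of the rate by the inflation index. All numbers, including the over-limit multiplier and the inflation figures, are assumptions for illustration only.

```python
# Sketch of a progressive over-limit environmental tax with quarterly rate indexation.
def environmental_tax(emissions_t, limit_t, base_rate_eur_per_t,
                      overlimit_multiplier=5.0,
                      quarterly_inflation=(1.02, 1.01, 1.03, 1.02)):
    rate = base_rate_eur_per_t
    for q in quarterly_inflation:          # index the rate quarter by quarter
        rate *= q
    within = min(emissions_t, limit_t) * rate
    over = max(emissions_t - limit_t, 0.0) * rate * overlimit_multiplier
    return within + over

print(environmental_tax(emissions_t=1200, limit_t=1000, base_rate_eur_per_t=3.0))
```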
In the developed countries of the world, the totality of environmental taxes consists of the following groups:
- environmental taxes and fees of a fiscal nature (aimed mainly at maximizing budget revenues);
- environmental taxes and fees of a compensatory nature (aimed directly at covering losses due to environmental damage caused by business entities);
- environmental taxes and fees of a stimulating nature (aimed primarily at stimulating the greening of production) [13,14].
Improving the system of environmental taxation in Ukraine is possible by: introducing a progressive scale of environmental taxation (tax rates should increase for larger volumes of emissions and waste), since today the differentiation of environmental tax rates is limited only to classes of pollutants; providing tax benefits to business entities that use resource-saving technologies; introducing a tax on products containing environmentally harmful substances (such a tax exists in most European countries); and introducing an environmental protection fee for business entities that are environmental tax payers, with payments from this fee going to a special fund earmarked for environmental protection measures. Taking into account the positive European practice of the functioning of systems for financing environmental protection measures, the proposals of the Ministry of Environment and Natural Resources of Ukraine regarding the creation of a special fund are appropriate; 100% of the environmental tax on CO2 emissions should be directed to this fund, and the main purpose of its operation should be the introduction of tools to stimulate the restoration of the natural environment, in particular with regard to climate change [15]. Most European countries have developed programs aimed at decarbonizing the economy and developing a strategy for adapting the financial sector to prevent climate change. Considering that Ukraine is an agrarian country, the question of implementing the experience of the EU countries regarding the tax on fertilizers and pesticides is gaining relevance [16, p. 630, 17-18]. The use of fertilizers has negative consequences, as fertilizers can enter groundwater and make it unfit for consumption. In addition, in case of violation of fertilizer application technology, nitrogen and other pollutants enter the atmosphere, and soils may become unsuitable for agricultural use. In the case of the use of pesticides, the atmospheric air is polluted by the entry of toxic substances into it. In addition, pesticides kill microorganisms in the soil, which also affects soil fertility.
Conclusions
Environmental taxation is an important tool for an effective environmental policy of the state. Today, despite its significant role, it is an inefficient and ineffective regulatory tool. The analysis shows a low level of ecological and economic indicators in Ukraine. The analysis of pollutant emissions per person shows that only sulfur dioxide emissions significantly decreased, from 25.7 kg in 2016 to 14.4 kg in 2021. The situation has practically not changed for the rest of the investigated pollutants. In Ukraine, there is no effective model of environmental tax management, which is due to the inconsistency of the mechanism of distribution of revenues from the tax under study to the budgets of different levels. A significant deficit of the state budget makes it impossible to use financial resources from environmental taxes for the implementation of environmental projects, due to their accumulation mainly in the revenue part of the general fund of the budget, which is used to solve other socioeconomic problems. The formation of an effective mechanism for the tax administration of the environmental tax, with a simultaneous increase in the costs of decarbonization, will contribute to the reduction of greenhouse gas emissions and have a positive impact on climate change. Increasing the tax burden is one of the tools to achieve carbon neutrality, and increasing rates is an important, albeit small, step toward decarbonizing the economy. Only the consistency and effectiveness of the implementation of the principles of decarbonization, including the taxation of emissions, will provide an opportunity to improve the ecological situation in the country.
Fig. 1 .
Fig. 1.Receipt of environmental tax to the Consolidated Budget of Ukraine Source: calculated according to the data of the State Treasury Service of Ukraine.
Fig. 2 .
Fig. 2. The structure of revenues of the consolidated budget of Ukraine by types of environmental tax in 2021 Source: calculated according to the data of the State Treasury Service of Ukraine.
Fig. 3 .
Fig. 3. Emissions of major pollutants per person in 2016-2021, kg Source: calculated according to the data of the State Statistics Service of Ukraine | 4,205.6 | 2023-11-01T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
ASE Noise Characterization of an All-Fiber Sagnac Interferometer via LAN for Remote Sensing
The spectral noise characteristic and relative intensity noise of an all-fibre Sagnac interferometer consisting of a pump source, a WDM, a piece of Er-doped fibre, a fibre Bragg grating (FBG), an optical circulator and a 50/50 coupler were studied over a 75 °C range. At the probing end, a high-birefringence piece of fibre and a Peltier were employed for temperature variation. The spectral and temperature response of the noise reduction due to temperature variation was characterized remotely using an Arduino micro-controller and a DS18B20 digital sensor, and the data were fed into a local area network. Optical and thermal characterization of the system has also been undertaken.
Introduction
Light propagation through optical fibers does not only have applications in optical communications for data transmission. It also has other applications, such as in medicine, industry and other areas, in the form of pressure, temperature, stress and torsion sensors [1] [2]. Therefore, remote control of optical characterization is very important both at the system and component levels.
Groups of fibres that connect different optical components are called optical arrays. In this research work, two optical arrays were employed: an Erbium-Doped Fibre Amplifier (EDFA) and a Sagnac interferometer (SI). The two optical arrays have different optical components, and these arrays were characterized optically as they were inserted into the system. Characterization results on the amplified spontaneous emission (ASE) noise of the separate components were obtained. Due to harsh weather conditions in our labs, remote temperature characterization through a local area network (LAN) is very important, as the user could be located far from the experiment and, in this way, the birefringent fibre temperature can still be known. The proposed LAN works under a client-server architecture in order to reduce the time spent by users during the component and system temperature characterization.
Characterization of Optical Arrays
The first optical array (seen in Figure 1) was the main setup for temperature characterization. It consists of an Erbium-doped fiber amplifier (EDFA), an optical circulator for measuring the reflected and transmitted power, and a Sagnac interferometer (SI). A QFBGLD980-250 laser at 980 nm with a 51 mA threshold, a WDM with ≤0.3 dB of insertion loss and 0.22 numerical aperture (NA), an Erbium-doped fibre with 980 and 1480 nm pump wavelengths and 1530-1610 nm emission in the C and L bands, and an FBG at 1548.4 nm make up the EDFA. The optical circulator was used to propagate the light in the clockwise direction. Finally, the SI was built with a 50/50 coupler with 21.6 dB/0.4 dB of insertion loss and ±40 nm of bandwidth, a Hi-Bi SHB1500 optical fibre with a 0.13-0.16 NA used as the thermal sensor, and a Peltier board used to increase the temperature of the Hi-Bi fibre.
The main purpose of this optical array is to study the operation of each component with respect to its data sheet via spectral characterization with an optical spectrum analyser (OSA). The main optical array is shown in Figure 1. The setup that includes a Sagnac interferometer is shown in Figure 4. It consists of a 50/50 coupler that has two input ports and two output ports. At port number one the input power is produced by the EDFA, and the 50/50 coupler sends 50% to port three and 50% to port four. The signal from port three is propagated in the clockwise direction and the signal from port four is propagated in the anti-clockwise direction. The signal power is propagated into the 0.22 m high-birefringence fibre (Hi-Bi). Finally, the signal power arrives at the 50/50 coupler and the reflected power can be measured at port number one. The EDFA generates ASE noise that is measured at port number two, since it is an effect that is always generated when a 980 nm pump is applied to an Erbium-doped fibre [7] [8]. Pump photons entering the fibre are absorbed and create a transition from the ground level to the excited level. Moreover, since the lifetime of the excited level is around one μs and the lifetime of the metastable level is about 10 ms, there is a significant difference between the two lifetimes; the electrons return from the excited level to the metastable level after one μs, but light is not emitted.
On the contrary, as the metastable level has a longer lifetime; if the pump signal is constant, population inversion is produced, and energy is stored between the metastable energy level and the ground level.When this energy relaxes, it produces both signal amplification at 1550 nm via stimulated emission and spontaneous emission [9] [10].This ASE level is amplified, producing the ASE noise, as sketched in Figure 2. Furthermore, in order to remove the ASE noise, the SI optical array was setup as explained above.
Results in Figure 3 show that all wavelengths from 1500 to 1600 nm have been propagated from port 1 to port 2. By comparing Figure 9 with Figure 10, one can note that the power levels at 1530 nm are very similar in both, but the power is slightly higher at 1548.4 nm for the same pump current. One explanation is that power at this wavelength is reflected a few times by the Bragg grating. From the previous figure, the spectral characterization of port 2 of the 50/50 coupler and port 3 of the optical circulator has been performed. The power transmitted through the SI is obtained at port 2 of the coupler, while at port 3 of the circulator the power reflected from the interferometer is obtained. Such an interferometer consists of a 50/50 coupler and 0.22 m of Hi-Bi fibre, both operating at room temperature. As can be seen in Figure 12, the transmitted power of the SI shows 4 valleys due to the Hi-Bi fibre length. It can also be observed that the ASE noise level is relatively high at 1530 nm and that at 1548.4 nm the power is considerably high too. The aforementioned figures were obtained at pump current values ranging from 10 to 200 mA, in all cases. ASE noise can be reduced through an SI, which at first indicates the need for temperature characterization of the aforementioned SI. As shown in Figure 4, a piece of Hi-Bi fiber was spliced between ports 3 and 4 of the SI for temperature modulation purposes using a Peltier.
Characterization with Respect to Temperature
The optical array presented in Figure 1 shows characteristics that affect the transmission power via ASE noise. This generates relatively high losses. In order to reduce most of the noise generated by the EDFA and to increase the transmission power, the optical array was first optimized; this process was also called "signal amplification at 1550 nm and ASE noise reduction", and then a temperature study of the Sagnac interferometer was included. At the output port of the coupler, the array was reduced to two splices and the Hi-Bi fibre was reduced to 16 cm to eliminate most of the ASE noise. This length of Hi-Bi fibre is calculated using the following equation [11] [12]:
L = λ² / (Δn · Δλ)    (1)
where:
L = Hi-Bi fibre length, in m.
Δλ = Period of the valleys in the transmitted power curve, in nm.
λ = Wavelength of the transmitted power, in nm.
Δn = Difference between the slow and fast axes of the Hi-Bi fibre.
λ₁ = Wavelength of a transmitted power valley, in nm.
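A numerical sketch of Equation (1) as reconstructed above, using a valley period consistent with Equations (2)-(3); the birefringence value is an assumption for illustration and should be replaced by the SHB1500 data-sheet value.

```python
# Sketch: required Hi-Bi fibre length for a given transmission-valley period.
wavelength_m = 1548.4e-9      # operating wavelength, m
delta_lambda_m = 9.05e-9      # period of the transmission valleys, m
delta_n = 1.4e-3              # assumed birefringence of the Hi-Bi fibre (illustrative)

L = wavelength_m**2 / (delta_n * delta_lambda_m)
print(f"Required Hi-Bi fibre length: {L:.3f} m")   # on the order of 0.1-0.2 m
```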
The second step relies on controlling the temperature of the SI in order to reduce most of the ASE noise; for this reason, it is necessary to characterize the temperature in the SI. When temperature control is applied to the Hi-Bi fibre, its characteristics change as it contracts or dilates [13] [14]. A Peltier plate was employed to change the fibre temperature. A variable voltage source is connected to the plate to apply hot and cold periods to the Hi-Bi fibre. The ASE noise reduction occurs when the signal to be filtered is introduced into the SI, which delivers a filtered signal precisely at a certain wavelength and rejects signals at other wavelengths. In Figure 5 the improved optical array is shown, with the Peltier plate and the voltage source.
Temperature Measurement
The SI, with a DS18B20 temperature sensor and an Arduino MEGA2560 board inside a LAN, is shown in Figure 6. In order to induce temperature cycles in the Hi-Bi fibre, a Peltier plate and a voltage source were employed. These temperature changes are transmitted to the DS18B20 temperature sensor, which was connected to the Arduino MEGA2560 and a server. The temperature data are then sent to the computer, where they are displayed on the screen and where the user performs the measurement inside the LAN via remote access. Therefore, it is not necessary for the user to be physically at the optical array in order to characterize the temperature. In order to perform the remote measurement of temperature in an SI via LAN, the schematic shown in Figure 7 was employed.
First, the connectivity between the client and server inside the LAN of UNACAR is verified.If there is not connectivity, then the characterization of temperature from a remote form will not be done.In order to check the connection inside of LAN, a set of data packets were sent via internet protocol (IP) inside the LAN.
In order to secure the client-server connection and to take the temperature measurement, the following devices were used: two computers (the first working as a client and the second as a server), the optical array called "signal amplification at 1550 nm and ASE noise reduction", and a DS18B20 digital temperature sensor. The next step is to use different applications to configure the communication between client and server and the temperature characterization. The free software used was the following: TeamViewer for the remote connection, Xampp to bring up the database servers, a File Transfer Protocol (FTP) server, and an Apache server to visualize web pages. NetBeans IDE 8.0.2 was used to programme an app in Java; with this app the user can see the temperature characterization on a friendly screen. Sublime Text 2 was also used to create or edit a web page, which served as an advanced search engine for the temperature characterization data. Arduino 1.6.5 was also employed as an interface to programme the controller board with the temperature sensor. In order to connect the DS18B20 temperature sensor to the Arduino board, a PCB circuit had to be built for the sensor to work. The Arduino sensor connection shown in Figure 8 and the printed circuit sensor design are explained below.
In order to allow the user to characterize the temperature data, a Java-based app was created. The temperature data are saved in a database created for this purpose.
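The acquisition side can be sketched in a few lines of Python (the original system used an Arduino sketch, a Java application and a MySQL/FTP/Apache stack; the serial port name, line format and table layout below are assumptions for illustration).

```python
# Sketch: log temperature readings arriving from the Arduino over serial into a local database.
import sqlite3
import time
import serial   # provided by the pyserial package

conn = sqlite3.connect("temperature_log.db")
conn.execute("CREATE TABLE IF NOT EXISTS readings (ts TEXT, sensor TEXT, temp_c REAL)")

port = serial.Serial("/dev/ttyACM0", 9600, timeout=2)   # Arduino MEGA2560 (assumed port)
while True:
    line = port.readline().decode(errors="ignore").strip()   # assumed format: "sensor1,27.31"
    if not line or "," not in line:
        continue
    sensor, value = line.split(",", 1)
    conn.execute("INSERT INTO readings VALUES (?, ?, ?)",
                 (time.strftime("%Y-%m-%d %H:%M:%S"), sensor, float(value)))
    conn.commit()
```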
An advanced search engine based on HTML5, PHP programming language, and MySQL database server are employed for visualization of results.Finally, LAN-based remote measurements within UNACAR campus were performed along with final tests and verification procedures.
Print Circuit Sensor Design
In order to perform the temperature measurement in the Hi-Bi fibre within the SI, a printed circuit was made (the Arduino sensor connection is shown in Figure 8). The circuit is connected to an Arduino MEGA2560 and a DS18B20 sensor. It can be connected to up to six temperature sensors; the circuit sensor connection is presented in Figure 9. Afterwards, the circuit was laid out as a Printed Circuit Board (PCB), shown in Figure 10 and Figure 11. This circuit board is called the "Temperature Shield".
As can be seen in Figure 9, up to 6 temperature sensors could be connected, although only 2 were employed in this experiment. The driver works as an intermediate connection between the Arduino controller and the temperature sensors. The PCB is shown in Figure 10 below, on which the sensors are connected under the labels "sensor 1" and "sensor 2".
Figure 9. Electronic circuit to connect the DS18B20 temperature sensor to the Arduino MEGA2560.
Figure 10. Template for the electronic circuit on the PCB.
An example of the temperature measurement in the Arduino display using the temperature shield and sensors are shown in the result section.
Results
In brief, our results include: 1) EDFA ASE noise at the reflected power measurement, 2) Optical circulator spectral characterization via ASE noise measurement at the reflected power, 3) Sagnac interferometer + optical circulator spectral characterization (with 0.22 m of Hi-Bi fibre at 27˚C) at both reflected and transmitted power, 4) Sagnac interferometer + optical circulator spectral characterization (with Hi-Bi fibre at 27˚C, 30˚C, 47˚C, 87˚C, 103˚C and 104˚C) at both reflected and transmitted power, 5) General programming and electronics design for the sensors and Arduino microcontroller and finally, 6) Local area network characterization.
Figure 13 and Figure 14 show the ASE noise and power spectrum characterization with an optical circulator added to the EDFA. It should be noted that it is possible to obtain higher power transmission from 10 mA to 200 mA at port number 2. Also, the power reflection was lower, thus reducing the loss of power due to reflection in the optical array.
The general optical array is shown in Figure 15 and Figure 16. Figure 15 shows the characterization of the optical array with the transmitted signal at a minimum power of −35.62 dBm (27.4 μW) for a pump power of 30 mA, and at a maximum power of 1.93 dBm (1.56 mW) for a pump power of 200 mA.
As it can be seen in Figure 14, very low power is reflected from port 2 of the circulator.Furthermore, an attenuation of approximately 27 dB is found at both 1530 and 1548.4 nm when comparing Figure 13 and Figure 14.
Figure 16 shows the characterized reflection power of the general optical array, with a minimum power of −30.61 dBm (86.89 μW) at a pump power of 30 mA and a maximum power of 6.14 dBm. We will now show the results of the optical array after it was characterized and after the temperature was applied. Such characterization was made at output port two for the transmitted power of the SI at room temperature, with pump currents from 70 mA to 100 mA and with a voltage from 1 V to 10 V. The characterization was also made at output port three of the optical circulator, for the reflected power, at room temperature and with the same current and voltage ranges.
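The dBm and linear power values quoted in this section are related by the usual conversion, sketched below for reference.

```python
# Helpers for relating dBm and linear (mW) power values.
import math

def dbm_to_mw(p_dbm):
    return 10 ** (p_dbm / 10.0)

def mw_to_dbm(p_mw):
    return 10.0 * math.log10(p_mw)

print(dbm_to_mw(1.93))   # ≈ 1.56 mW, matching the quoted maximum transmitted power
print(dbm_to_mw(6.14))   # ≈ 4.11 mW
print(mw_to_dbm(1.56))   # ≈ 1.93 dBm
```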
By comparing Figure 15 and Figure 16, one can note that the SI reflected power in Figure 16 does not show the noticeable valleys of Figure 15.
In Figures 17-19 the characterization shows the transmitted power in the output port number two of the SI.
Figure 17 shows that for a shorter Hi-Bi fiber a wider separation in between valleys is obtained from the spectral characterization (see Equation ( 1)).Therefore, there are fewer valleys in the range between 1500 and 1600 nm, in comparison with Figure 15.It can also be observed that the maximum transmitted power is reached around 1548.4 nm with an observed minimum power value at 1530 nm, where the ASE noise level is maximum.
The highest power in the system was obtained with a pump power of 100 mA and 8 V at 87.4˚C, as it is shown in Figure 20.The highest power when the Hi-Bi fibre is at its highest temperature was measured after 100 mA of pump power at 47.4˚C, as shown in Figure 24.
Conclusions
The optical array used in this investigation is operated with a pump laser diode of 980 nm, optical array EDFA with a Bragg grating of 1548.4 nm, circulator and SI with a Hi-Bi fibre.
The ASE noise characterization was carried out to determine the power behaviour of the general optical array; with the resulting graphs, the user can make comparisons as needed.
When the splices of the optical array are optimized and the length of the Hi-Bi fibre is reduced to 0.16 m, most of the ASE noise is removed; however, compared with the 0.22 m Hi-Bi fibre, the power transmitted with 0.16 m is lower, and the ASE noise peak lies at 1531.8 nm. Furthermore, with the Hi-Bi fibre reduced to 0.16 m, only the signal at 1548.4 nm was let through by the SI.
In this investigation, the hardware and software of a system for the detection, measurement, storage and remote acquisition of the temperature of the Hi-Bi fibre in the SI of the optical array were also implemented. In order to detect and measure the temperature in the SI, different approaches and programming platforms were needed.
This investigation can be continued by removing more of the ASE noise in the EDFA, which is possible by changing the length of the erbium-doped fibre and the parameters used in the SI. Heating or cooling the SI to find the tallest valley, in order to obtain more power and less ASE noise in the transmission, is also desirable. The measurement capability of the system can be extended by adding more sensors and editing the code. This system can work in different applications because it can be used on different surfaces.
Figure 1 .
Figure 1.Signal amplification at 980 nm in blue, optical circulator in black and ASE noise reduction with temperature in red.
Figure 2 .
Figure 2. Spectral ASE noise measurement of reflected power.
λ₂ = Wavelength of an adjacent transmitted power valley, in nm.
n_max = Number of peaks between λ₁ and λ₂.
Equation (2), Δλ = (λ₂ − λ₁)/n_max, was used to calculate the transmitted power period, as shown in Equation (3): Δλ = (1548.4 nm − 1530.3 nm)/2 = 9.05 nm. This period was then used in Equation (1) to calculate the Hi-Bi fibre length.
Figure 6. Schematic of remote LAN connection for temperature measurement.
Figure 7. Remote temperature measurement of a Sagnac interferometer via local area network.
Figure 11. PCB of the temperature shield.
Figure 13. Characterization of ASE noise with an optical circulator at port number 2.
Figure 14. Characterization of ASE noise with an optical circulator at port number 3.
Figures 18-20 show a fine-tuning process of the main optical array via heating of the Hi-Bi fibre. After heating the fibre, the valleys shift towards shorter wavelengths, by which one can tune the maximum and minimum transmitted power levels in order to lower the ASE noise level in the whole setup. In Figures 21-23 the characterization was made at output port number three, for the reflected power of the optical circulator, at a temperature of 27˚C with currents from 70 mA to 100 mA and voltages from 1 V to 10 V.
Figure 17. Spectrum of transmitted power at 27˚C.
Figure 18. Spectrum of transmitted power with 0.16 m of Hi-Bi fiber.
Figure 19. Spectrum of transmitted power with 0.16 m of Hi-Bi fiber.
Figure 20. Spectrum of the maximum transmitted power with 0.16 m of Hi-Bi fiber.
Figure 21. Spectrum of reflected power of the circulator at output port three at a room temperature of 27˚C.
Figure 22. Spectrum of reflected power of the circulator at output port three for 1 V and 30˚C.
Figure 23. Spectrum of reflected power of the circulator at output port three.
Figure 24. Spectrum of the maximum reflected power at output port three for −47.4˚C.
Figure 27. Advanced search engine implemented as a web page.
Figure 28. File transfers from the FTP server to a client computer inside the local area network.
"Physics"
] |
Anatomy of maximal stop mixing in the MSSM
A Standard Model-like Higgs near 125 GeV in the MSSM requires multi-TeV stop masses, or a near-maximal contribution to its mass from stop mixing. We investigate the maximal mixing scenario, and in particular its prospects for being realized in potentially realistic GUT models. We work out constraints on the possible GUT-scale soft terms, which we compare with what can be obtained from some well-known mechanisms of SUSY breaking mediation. Finally, we analyze two promising scenarios in detail, namely gaugino mediation and gravity mediation with non-universal Higgs masses.
Introduction
Recent results from ATLAS [1] and CMS [2] show intriguing hints for a Higgs boson with a mass around 124-126 GeV. While the reported excess of events is only at the level of about 2-3 σ per experiment, the consistency between the already excluded mass range, the excess in the remaining small window, and theoretical expectations provides a strong motivation to take these hints seriously and investigate their implications. 1 If the cause of the observed excess were the lightest Higgs boson of the MSSM, it would be rather heavy, requiring large radiative corrections to its mass from the top-stop sector. In this case either the average stop mass must be large, M_S ≡ √(m_t̃1 m_t̃2) ≳ 3 TeV, or the stop mixing parameter |X_t| must be around twice M_S [4]. The latter, known as the "maximal mixing scenario", is the subject of this paper.
Several recent papers, including [5][6][7][8][9][10][11][12][13], have explored the implications of a heavy MSSM Higgs from a bottom-up perspective, prescribing the MSSM soft parameters at the TeV scale. 2 In this approach, the soft terms can simply be chosen by hand to yield maximal mixing. However, one should keep in mind one of the key motivations for low-energy supersymmetry: The supersymmetric Standard Model can naturally be extrapolated up to a very high fundamental scale. Indeed, gauge coupling unification in the MSSM points to the GUT scale M_GUT ∼ 10^16 GeV as the scale where it should be embedded into a more fundamental theory. It is therefore worthwhile to investigate if maximal mixing can result from some reasonable choice of GUT-scale parameters, what further relations between the GUT-scale soft terms this would imply, and which classes of models of high-scale SUSY breaking mediation can (or cannot) accommodate maximal mixing. Furthermore, it is clearly of interest how the GUT-scale conditions for maximal mixing affect the physical spectrum and the low-scale observables, such as Higgs cross-sections and decay rates. These subjects are addressed in the present paper.
The implications of a 125 GeV Higgs for GUT-scale MSSM scenarios have been investigated previously in [19][20][21][22][23][24][25]. 3 It was observed [19] (see also [31]) that, in the CMSSM and in NUHM models with sizeable m 0 , large |A 0 | ≈ 2 m 0 is preferred to obtain a heavy Higgs. Our work goes beyond these studies by providing a thorough discussion of the prerequisites for maximal mixing in more general models, accompanied by a detailed numerical analysis.
This work is organized as follows: In section 2 we give a brief review of the maximal mixing scenario, and explain why it is non-trivial to obtain maximal mixing when running from the GUT-scale. Using semi-numerical solutions of the renormalization group equations, we derive some necessary conditions for maximal mixing. In section 3 we comment on the possibility of realizing maximal mixing in several well-established classes of models of SUSY breaking mediation, namely gaugino mediation, models with strongly-coupled near-conformal hidden sectors, radion mediation in 5D models, and gauge mediation. Section 4 contains a detailed numerical analysis of a gaugino-mediated and a simple gravity-mediated model. Moreover, we comment on the case of very heavy 1st/2nd generation sfermions. Conclusions are contained in section 5. In three appendices, we give more details about the method we use to solve the MSSM renormalization group equations, and comment on fine-tuning and on the danger of introducing charge- and color-breaking minima in the scalar potential.
Factorizing the trilinear couplings into Yukawa matrices y and soft coefficient matrices A, as in eq. (2.2), is convenient for our purposes. In fact, for much of what we have to say, the only relevant trilinear soft terms are those for the third generation, in particular A t ≡ A u33 . For simplicity we will mostly assume flavor-universal soft terms in the following, in which case A u,d,e = A 0 1 at the GUT scale.
In the decoupling limit m_A ≫ m_Z, taking into account the one-loop corrections from the top-stop sector, the mass of the lightest MSSM Higgs is given by eq. (2.3). Here, as usual, v ≈ 174 GeV, m_t is the running top mass at the scale m_t, and M_S² = m_t̃1 m_t̃2 with m_t̃1,2 the stop masses. X_t is the stop mixing parameter, defined at the scale M_S as X_t = A_t − µ/tan β (eq. (2.4)). The tree-level bound m_h < m_Z quickly saturates for tan β ≳ 5. To further lift m_h² from m_Z² = (91 GeV)² to around (125 GeV)², radiative corrections nearly as large as the tree-level value are required. In eq. (2.3) the contribution from the logarithmic term can be increased by simply raising M_S, but naturalness demands that the soft mass scale should not be too far above the electroweak scale. The X_t contribution is easily seen to be maximized at |X_t/M_S| = √6 ≈ 2.45, although some studies taking two-loop effects into account suggest a maximal mixing contribution closer to |X_t/M_S| ≈ 2 (see e.g. [33] for a detailed discussion). Values larger than about √6 will however induce dangerous charge- and color-breaking minima in the scalar potential, as detailed in appendix 8. For these reasons, in the present paper we will focus on the range between these values, which we take to define maximal stop mixing.
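For reference, the one-loop expression that the text refers to as eq. (2.3) is, in its commonly used form (the paper's version may include further subleading terms),

\[
m_h^2 \;\simeq\; m_Z^2\cos^2 2\beta \;+\; \frac{3\,m_t^4}{4\pi^2 v^2}\left[\ln\frac{M_S^2}{m_t^2} + \frac{X_t^2}{M_S^2}\left(1-\frac{X_t^2}{12\,M_S^2}\right)\right],
\qquad X_t = A_t - \mu\cot\beta .
\]

Writing x = X_t²/M_S², the stop-mixing contribution is proportional to x(1 − x/12), whose derivative 1 − x/6 vanishes at x = 6; hence the maximum at |X_t/M_S| = √6 ≈ 2.45 quoted above.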
If maximal mixing is to be obtained from a GUT-scale model, this places non-trivial restrictions on the GUT-scale soft terms. The A_t parameter generically changes drastically during renormalization group (RG) running, because it receives large radiative corrections from gluino loops. At one loop, the dominant contributions to its RG equation involve the top Yukawa coupling and the gluino mass M_3. Sizeable gluino masses, which are now favored in the light of direct LHC search bounds, will drive A_t towards large negative values at the electroweak scale (in a phase convention where M_3 is positive). The soft-breaking masses m²_Q3 and m²_U3 entering M_S will also receive large radiative corrections, because they carry color and because of the large top Yukawa coupling. The corresponding one-loop RGEs make it evident that a sizeable M_3 will also have a large effect on the scalar soft masses.
To better quantify the effects of RG running, let us assume universal GUT-scale gaugino masses M_1/2, as predicted by many GUT models, and universal trilinears A_0. The most relevant parameters are actually M_3, M_2 and A_t, so we are effectively imposing M_2(M_GUT) = M_3(M_GUT). 4 We can solve the two-loop RGEs [34] semi-numerically and express the electroweak-scale mixing parameter as a function of the GUT-scale soft terms (see appendix 6). For M_GUT = 2 × 10^16 GeV, M_S = 1 TeV and tan β = 20 we obtain the expansions of eq. (2.9) for X_t and eq. (2.10) for M_S²; terms with coefficients < 0.2 have been suppressed. The coefficients in these equations vary at most by about 10% in the range 5 < tan β < 40. The only exceptions are the coefficients involving µ in eq. (2.9), which become important for lower values of tan β. This is because, as is evident from eq. (2.4), µ enters X_t at tree level with a 1/tan β suppression. Explicitly, for the same parameters but with tan β = 5, we obtain eqs. (2.11) and (2.12). From these expressions one can read off some conditions on the GUT-scale soft terms under which maximal mixing results. We will make the extra assumptions that no soft term is hierarchically larger than the gaugino mass (such that all terms which we neglected in these expansions remain negligible).
• If the largest GUT-scale soft parameter is M_1/2, then maximal mixing is excluded, as is depicted in figure 1.
• If the sfermion masses are universal, specifically m²_Q3 = m²_U3 ≡ m²_0, then maximal mixing does not allow for m_0 to be the largest soft parameter. However, maximal mixing is possible if either m²_Q3 or m²_U3 is large and the other is small. In that case a necessary condition on the spectrum must be satisfied, and sizeable negative A_0 is strongly preferred (even though there remains a tiny slice of parameter space where A_0 can be zero, if all other parameters are chosen optimally). When deviating from the optimal case of sizeable m_Hu and negligible m_U3 (respectively m_Q3), the required m_Q3 (respectively m_U3) : M_1/2 ratio can grow very large. This is illustrated in the top row of figure 2.
5 Very large scalar soft masses for the first two generations may be an interesting and natural alternative scenario for maximal mixing; see e.g. [35][36][37] and our section 4.3.
6 The relatively large coefficients of the M²_1/2 m²_U3 and M²_1/2 m²_Q3 terms in eqs. (2.10) and (2.12) have led the authors of [38] to propose tachyonic GUT-scale soft masses for the scalars of the third generation. This would render M_S small and thus allow for a sizeable |X_t/M_S| ratio. In this paper, however, we prefer to restrict ourselves to models where sfermion masses are positive at all scales, to avoid possible complications from introducing additional vacua into the scalar potential.
• |A_0| can easily be the largest soft parameter if A_0 is negative. In the limit that all other soft terms are negligible, we find that large negative values of A_0/M_1/2 are generic in situations with maximal mixing, particularly if the scalar masses are unified or negligible. This is easily understood in light of the large coefficients of the A_0 terms in eqs. (2.9) and (2.10). It is illustrated in figure 3 for the case of negligible m²_U3 and m²_Q3, and in the bottom row of figure 2 for the case of dominant sfermion masses m_Q3,U3 > M_1/2.
• If m²_Hu is positive, and m_Hu is the largest GUT-scale soft parameter, then maximal mixing seems possible, at first sight, even without significant A_0 contributions, as can be seen from figure 3. This is because of the negative-sign M²_1/2 m²_Hu contribution in eq. (2.10). However, maximal mixing in this case requires a moderate hierarchy, which becomes more pronounced if m_Q3 and m_U3 are non-negligible or if A_0 is positive, and weaker if A_0 is negative. Such a hierarchy is in conflict with electroweak symmetry breaking, as can be understood from the equivalent formula, eq. (2.16), for the electroweak symmetry breaking (EWSB) order parameter m_Z (here quoted for tan β = 20). In RGE language, the gaugino masses (specifically the gluino mass) are typically responsible for driving m²_Hu negative at the electroweak scale, thus triggering electroweak symmetry breaking. If the GUT-scale value of m²_Hu is too large, and M_1/2 is too small, electroweak symmetry is not broken, because the r.h.s. of eq. (2.16) remains negative. Sizeable m_Q3 or m_U3, or sizeable negative A_0, can remedy this, but only the latter is favorable for maximal mixing. The situation remains qualitatively the same also for smaller tan β. For tan β ≈ 5, relatively large µ can however slightly widen the allowed regions for maximal mixing, since X_t = A_t − µ/tan β has a stronger µ-dependence if tan β is small. The effect of µ on maximal mixing at smaller tan β is also illustrated in figure 4. We do not consider values of tan β < 5, since they no longer maximize the tree-level Higgs mass, eq. (2.3). To summarize, generically, maximal mixing in a GUT-scale model with unified gaugino masses at tan β ≳ 5 requires a sizeable negative A_0.
This is a non-trivial requirement on any GUT-scale model. In addition, it is beneficial but not strictly necessary for maximal mixing if there is a positive up-type Higgs soft mass m²_Hu, and small third-generation soft masses m²_Q3,U3 compared to the gaugino mass.
The resulting X t /M S at the electroweak scale will necessarily be negative. All this we have deduced from the semi-numerical solution of the RGEs underlying eqs. (2.9), (2.10), (2.11), and (2.12); see appendix 6 for more details. The picture is confirmed by parameter space scans performed with SoftSusy3.2.4, whose results are shown in section 4.
Models
We assume F-term SUSY breaking in some hidden sector, mediated to the visible sector by messenger states which can be supersymmetrically integrated out at some high scale M. If X = F θ² is the Goldstino background field which parametrizes SUSY breaking, the lowest-order operator inducing a gaugino mass takes the form of eq. (3.1). As before, we assume gaugino mass unification, i.e. a GUT-preserving F-term VEV.
If the hidden sector couples to the Higgs, we can have the Giudice-Masiero operator, which induces a µ term, and operators which contribute to the Higgs soft masses. Some of these operators are often neglected because they can be absorbed by a holomorphic field redefinition. In order to cleanly separate hidden and visible sectors, we instead keep these operators explicit, noticing that they also induce soft Higgs masses, (together with the Yukawa couplings) flavor-universal trilinear A-terms, and (together with L_µ) a B_µ term. If c_Au and c_Ad were set to zero by the above field redefinition, these soft terms would instead arise from the induced superpotential terms and the change in the c_{H_u,d} coefficients in eq. (3.3). Finally, there are further operators which also induce a B_µ term. If the hidden sector couples to matter fields, the equivalents of eqs. (3.3) and (3.4) with Higgs fields replaced by matter fields can be present. They will induce soft masses and flavor non-universal A-terms; the latter can also arise from superpotential operators. We will assume that the µ/B_µ problem is solved; in particular there are no unacceptably large contributions to µ and B_µ as would arise from renormalizable couplings of X to the Higgs fields. From the discussion in section 2 it is clear that, in order to realize maximal mixing, models with vanishing or suppressed direct couplings to matter fields are preferred. The operators of eqs. (3.1)-(3.5) are then all that is needed to fix the high-scale soft masses. Furthermore, the model should allow for c_1/2 and c_Au to be chosen such that the resulting A_t : M_1/2 ratio is around −(1-3). SUSY breaking scenarios with suppressed couplings of the hidden sector to matter fields are not generic, because the scalar masses cannot easily be forbidden by symmetry (by contrast, suppressed gaugino masses are easily obtained but phenomenologically undesirable). We will list a number of well-known examples, and comment on the implications of our study.
Gaugino(-Higgs) mediation
Gaugino-mediated supersymmetry breaking in its minimal form is defined by the gaugino masses being the only non-vanishing terms at the mediation scale. In other words, only the operators of eq. (3.1) are present at the scale M ; the only MSSM fields to couple directly to the hidden sector are the gauginos. This was originally motivated by 5D models [39][40][41] in which the hidden sector and the chiral (Higgs and matter) superfields of the MSSM were separated in an extra dimension, with only the MSSM gauge fields coupling to both. Deconstructed models [42,43], models with Seiberg duality [44], and models with strongly coupled near-conformal hidden sectors (see below) may also give rise to gaugino mediation.
Independently of maximal mixing, the minimal scenario is not viable phenomenologically because it is missing a µ term. Realistic extensions therefore require that also the Higgs fields should be directly coupled to the hidden sector. In the 5D picture, the Higgs and gauge fields would be bulk fields, while the MSSM matter fields would be localized on one brane and the hidden sector on the other. Generically, all operators in eqs. (3.1)-(3.5) are then allowed. It should be mentioned that the mediation scale M can be parametrically lower than M_GUT in gaugino-mediated models. For significantly lower M, running effects become less important, and the condition for maximal mixing eventually approaches eq. (2.5) with the boundary values of A_t and µ at the scale M substituted, and with m_Q3 and m_U3 well approximated at leading-log order.
Scalar sequestering
Scalar sequestering [45,46] is a more restrictive version of gaugino-Higgs mediation. The hidden sector is assumed to be close to a strongly-coupled conformal fixed point over a large range of energies, with X a composite operator satisfying ∆_{X†X} − 2∆_X > 0. This implies that [46,47], 7 at the scale M where the theory exits the strong-coupling regime, certain combinations of soft terms are driven to zero. The operators responsible for matter soft masses also end up being suppressed. Eventually the dominant soft terms at the scale M are M_1/2, A_0, m²_{H_u,d} and µ, which are all of the same order, while a particular combination of these, as well as the matter soft terms, are suppressed. Our analysis is applicable for M close to M_GUT. The phenomenological consequences of the condition eq. (3.9) were discussed in detail in [49]. It was shown that for µ > 0, B_µ almost always has the same sign as A_t (and the opposite one if µ < 0). Moreover, small or vanishing B_µ requires also small A_t, in apparent conflict with maximal mixing; cf. figure 3 in [49].
Radion mediation/Scherk-Schwarz SUSY breaking
In 5D models, supersymmetry can be broken by giving an F -term to the radion multiplet. This multiplet hosts the 5D gravitational degree of freedom which corresponds to the compactification radius. Radion mediation is equivalent to breaking SUSY by the Scherk-Schwarz mechanism, and generalizes to modulus-mediated SUSY breaking in superstring models. Of particular interest are models whose compactification scale is close to M GUT , since in that case boundary conditions can be used to break the grand-unified symmetry down to the MSSM. Minimal models of this kind give rise to specific GUT-scale soft term patterns, so it is natural to ask if they can also accommodate maximal mixing.
The role of the operator X in eqs. (3.1) to (3.5) is then played by X = T M/(2R), where M is identified with the 5D Planck mass, and T is the radion multiplet with T = R + F_T θ². For the simplest model with a flat S¹/Z₂ extra dimension, the gaugino masses are fixed by the radion F-term (see e.g. [50]), and the A-terms depend on the localizations of the matter fields and on the origin of the Higgs field. They are given by the sum of the contributions from the Higgs and the matter fields involved in the respective trilinear coupling. Roughly speaking, in gauge-Higgs unified models where the Higgs originates from the 5D gauge multiplet, the Higgs contributes −M_1/2, while bulk matter fields Q_3 and U_3 originating from 5D hypermultiplets each contribute +M_1/2, leading to A_t = +M_1/2 in the most naive model with unlocalized Q_3 and U_3. Localizing Q_3 and U_3 towards one of the branes allows one to reduce their contribution, but in the potentially interesting limit in which they are completely brane-localized (which would leave us with A_t = −M_1/2 from the Higgs contribution), the Yukawa couplings vanish. In models without gauge-Higgs unification, with the Higgs coming from a bulk hypermultiplet, the Higgs gives a wrong-sign contribution to A_t. In short, maximal mixing does not occur in minimal radion-mediated models. While in realistic models the exact sfermion soft terms depend on the modelling of the matter sector, and more elaborate examples may allow for maximal mixing in principle, we were unable to find an example. For instance, for the model of [51] we find that A_t/M_1/2 is bounded such that it can never become large and negative, A_t/M_1/2 ≳ −0.72. Furthermore, the third-generation soft masses are typically comparable and of the order of the gaugino mass, so large ratios which might still allow for maximal mixing (see the upper row of figure 2) do not appear. In conclusion, maximal mixing seems not to be a generic feature of radion-mediated SUSY breaking in 5D.
Gauge mediation
Gauge-mediated supersymmetry breaking, by definition, encompasses models whose hidden sector decouples from the MSSM as the MSSM gauge couplings are switched off [52]. Similarly to gaugino-mediated models (with which there is indeed some overlap), µ is missing in pure gauge mediation and needs to be generated by additional Higgs-hidden sector couplings. This generically induces a too-large B_µ, a problem which needs to be solved in realistic models, or overcome with very special soft mass patterns [53,54].
Pure gauge-mediated models also predict vanishing A-terms at leading order at the mediation scale. A priori it is not clear that this prediction is maintained when the model is extended such as to solve the µ/B µ problem, but it was shown in [55] that generically this is indeed the case. Therefore gauge-mediated supersymmetry breaking does not lead to maximal mixing.
The mediation scale M is often taken to be far below M GUT in gauge-mediated models (for exceptions, see e.g. [56,57]). However, this generically does not improve the prospects for achieving maximal mixing (see also [9]). The implications of a 124-126 GeV CP-even Higgs boson for minimal gauge mediation have been investigated in detail in [58], one of the conclusions being that "the majority of the sparticle masses are in the several to multi-TeV range".
Numerical analysis
Let us now illustrate the impact of maximal mixing by means of parameter space scans of two models. The first is gaugino mediation, and the second is a more generic gravity-mediated model with non-universal Higgs masses (NUHM) and universal sfermion masses m_0. In the NUHM case, we will distinguish between m_0 < M_1/2 and m_0 > M_1/2; gaugino mediation can be regarded as a special NUHM scenario with m_0 = 0. In addition, we will also briefly survey a NUHM scenario where maximal mixing is generated from large sfermion masses for the first two generations.
We furthermore compute the dark matter relic density Ωh 2 and the SUSY contribution ∆a µ to the muon anomalous magnetic moment, but do not impose any restrictions on them a priori.
We will also discuss the implications of maximal mixing for a possible Higgs signal near a mass of 125 GeV. Taking into account a ≈ 2 GeV theoretical uncertainty in the Higgs mass calculation [67], the mass range of interest is actually between 123 and 127 GeV. We approximate the signal strength for a given final state X, relative to the Standard Model expectation for the same Higgs mass, by replacing the gluon-fusion production cross section with the partial width into gluons in the ratio. This is justified because differences in σ(gg → h) versus Γ(h → gg) should largely cancel out when taking the MSSM/SM ratio. It is important to note that SUSY contributions can lead to modifications of both Higgs production and Higgs decays as compared to the SM. The effective ggh coupling is dominated by the top-quark loop, while the effective hγγ coupling is dominated by the contribution from W bosons, with a subdominant contribution of the opposite sign from top quarks. Both couplings can receive a large contribution from third-generation sfermions, in particular from stops [69]. In the case of no stop mixing, the light stop loop interferes constructively with the top loop, while the interference is destructive in the case of large mixing. In the first case the hgg rate will increase, while in the latter case it will decrease; the opposite is true for the hγγ coupling. Moreover, weakly interacting particles such as charginos and sleptons contribute only to the hγγ coupling. In particular, light staus that are strongly mixed can enhance the h → γγ rate [10]. Another important effect is the enhancement or suppression of h → bb, which can significantly change the overall branching ratios. In fact, an enhancement of the γγ signal, i.e. R(γγ) > 1, is often due to a suppression of BR(h → bb).
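The displayed approximation itself is not reproduced above; given the stated justification (trading σ(gg → h) for Γ(h → gg) in the ratio), it presumably takes the form

\[
R(X) \;\approx\; \frac{\big[\Gamma(h\to gg)\,\mathrm{BR}(h\to X)\big]_{\rm MSSM}}{\big[\Gamma(h\to gg)\,\mathrm{BR}(h\to X)\big]_{\rm SM}} ,
\]

i.e. the gluon-fusion production is approximated by the partial width into gluons, evaluated at the same Higgs mass in both models.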
Gaugino mediation
In the gaugino-mediated model the non-vanishing soft masses at the GUT scale are M_1/2, A_0, µ, B_µ, m²_Hu and m²_Hd. Given m_Z, the last four parameters can be traded against tan β, µ, and the pseudoscalar mass m_A. Our input parameters are therefore M_1/2 and A_0 at M_GUT, together with tan β, µ and m_A at the electroweak scale. We choose µ > 0 and perform flat random scans letting M_1/2, µ and m_A vary up to 2 TeV; tan β is allowed to vary between 1 and 60. Regarding A_0, we scan over two different intervals, |A_0/M_1/2| ≤ 3 and A_0 ∈ [−4, 0] TeV. Of 256k valid points from these scans, 158k remain after the basic mass limits and the flavor constraints listed above. Of these, 10k points have m_h = 123-127 GeV. The highest h mass is achieved for A_0/M_1/2 ≈ −1.5, but stops are heavy in this case, above 2 TeV. Although a large m_h still allows for t̃_1 masses below 1 TeV in the case of maximal mixing, overall a Higgs in the 123-127 GeV mass range points towards a heavy spectrum. The correlations between gluino, stop and 1st/2nd generation squark masses are shown in figure 8. Remarkably, the latest LHC bounds of m_g̃,q̃ ≳ 1.4 TeV [70,71] for CMSSM-like scenarios with m_g̃ ≈ m_q̃ are automatically evaded. In fact, requiring m_h > 123 GeV, we find m_t̃1 ≳ 715 GeV, m_q̃ ≳ 1.5 TeV and m_g̃ ≳ 1.7 TeV in gaugino mediation. Moreover, for tan β ≤ 50 we find a maximal h mass of around 125 GeV (which is however subject to an ≈ 2 GeV theoretical uncertainty). t̃_1 masses below 1 TeV can still be reconciled with large m_h, but only at maximal mixing; if the gluino is light enough to be seen at the LHC, maximal mixing is also strongly favored. Owing to the vanishing sfermion soft masses, m_0 = 0, over most of the gaugino-mediation parameter space the LSP is the lighter stau τ̃_1. 8 (92% of the points satisfying mass limits, flavor constraints and m_h = 123-127 GeV have a τ̃_1 LSP, while 7% have a neutralino LSP.) Hence one might expect the gg → h → γγ rate to be enhanced by the light τ̃_1 contribution [10]. While the h → γγ partial width is indeed enhanced for light staus 9 (and light stops with large mixing), this is mostly compensated by the suppression of the h → gg partial width due to the stop loop contribution, and by the larger total h width (mostly due to larger Γ(h → bb)). As a result, the gg → h → γγ signal strength is typically 80-90% of that in the SM. A similar suppression arises for the ZZ final state. This is illustrated in figures 9 and 10. The few points with a signal strength R > 1 at m_h > 123 GeV feature very large tan β = 52-60 and a reduced h → bb rate.
Finally, in figure 11, we illustrate the implications of maximal mixing and a heavy CP-even Higgs for the GUT-scale Higgs soft masses m_Hu, m_Hd, and for the weak-scale Higgs mass parameters µ and m_A. (We use the convention m_{H_u,d} ≡ sign(m²_{H_u,d}) √|m²_{H_u,d}|.) As can be seen, positive up-type Higgs soft masses m²_Hu are preferred, in particular in the case of small µ (as preferred by fine-tuning). This confirms our expectations from section 2. Moreover, the pseudoscalar Higgs mass m_A prefers to be large in the maximal mixing case, well above current limits. 8 More precisely, to obtain a consistent cosmological picture the stau should in that case be the next-to-LSP, and the true LSP a gravitino or axino [68]. 9 Note that m_h > 123 GeV leads to m_τ̃1 > 99 GeV in our dataset.
NUHM model
Let us now turn to the gravity-mediated model with non-universal Higgs soft masses (NUHM). The parameters are the same as in the gaugino-mediation case, but with non-vanishing soft masses for squarks and sleptons at M_GUT. For simplicity, we take the latter two to be universal at the GUT scale. (As discussed in section 2, the scalar masses which actually affect maximal mixing are mainly m_Q3 and m_U3.) We scan over M_1/2, A_0, tan β, µ and m_A as in the previous subsection, allowing however |A_0| up to 6 TeV. In addition, we let m_0 vary from 0 to 5 TeV. Over most of the parameter space, the non-vanishing m_0 makes the sleptons heavier than the χ̃⁰₁. We thus keep only points with a neutralino LSP, without however restricting Ωh². We will discuss the χ̃⁰₁ relic density at the end of this subsection. Figure 12 shows a projection of the scanned NUHM parameter space, analogous to figure 5 in the gaugino mediation model. As discussed in section 2, it makes a difference whether M_1/2 or m_0 is the largest soft mass. In figure 12 and the following figures, we hence distinguish between the two cases M_1/2 > m_0 and m_0 > M_1/2. As expected, maximal mixing requires a ratio between A_0 and max(M_1/2, m_0) of about −1 to −3. Moreover, a heavy (m_h > 123 GeV) MSSM Higgs requires large mixing with a large negative A_0 - the tip of the scatter plot being again around A_0 ≈ −2 TeV - or an overall heavy spectrum. A difference with respect to the case of vanishing m_0 is that now much larger negative values of A_0 still give a valid spectrum. Another interesting difference is that with increasing m_0, larger values of |X_t/M_S| become consistent with a heavy h. Indeed, for large m_0, the lowest M_S giving m_h = 123-127 GeV is found for X_t/M_S ≈ −3 to −3.5. On the other hand, as previously mentioned, such large values of |X_t/M_S| give rise to dangerous charge- and color-breaking minima in the scalar potential, so although SOFTSUSY gives a valid spectrum the corresponding points should not be trusted to be phenomenologically viable. The correlations between m_h, M_S and the amount of stop mixing are shown in figure 13. Furthermore, figure 14 shows the dependence of m_h on tan β and m_t̃1, with X_t/M_S again indicated by a color code. We see that at large m_0, maximal mixing is possible also for large tan β. As a side remark we note that the maximal h mass in these scans is about 128 GeV, consistent with the current 95% CL limit.
Consequences for LHC SUSY searches are illustrated in figure 15. For M_1/2 > m_0, we still find m_g̃, m_q̃ ≳ 1.5 TeV. For m_0 > M_1/2, however, gluinos can be as light as 500-600 GeV (with stops being light and maximally mixed, and m_h in the desired range). First and second generation squarks need to be heavy in this case, around 2-3 TeV, as can be seen in the bottom-right panel of figure 15. Expectations for the Higgs signal in the γγ and ZZ channels are shown in figure 16. We observe that for a Higgs mass in the desired range, R(γγ) and R(ZZ) are around 0.9 or below, rather independently of the stop mass (a decoupling effect of heavy stops can however be seen in the lower boundary of R at a given X_t/M_S). Higgs signal strengths close to or above 1 occur for m_0 ≫ M_1/2; they require heavy stops with small mixing, combined with large tan β and large m_A, such that the h → bb rate is suppressed.
Finally, although it is not directly related to maximal mixing, let us consider the question of neutralino dark matter. The relic density of the neutralino LSP is plotted versus the neutralino mass in figure 17. Interestingly, for M_1/2 > m_0, a large fraction (45%) of the points have Ωh² < 0.135, and overall the relic density does not exceed 20. The points with very low Ωh² typically feature a higgsino-like LSP, which makes the scenario difficult to detect at the LHC [72,73]. In the remaining cases, when µ is large, Ωh² is low because of co-annihilations. For m_0 > M_1/2, the situation is quite different, and we find the "usual" MSSM picture with Ωh² ranging from 10⁻⁵ to 10³. Roughly 2% of the points have 0.09 < Ωh² < 0.135. An example of a "perfect" point, in the sense of light stops, maximal mixing, m_h = 125 GeV and Ω_χ̃⁰₁ h² = 0.1, is given in table 1.
Split generations, inverted sfermion-mass hierarchy
So far we have considered only the case that there is no large hierarchy between the three generations of sfermions. An interesting alternative, motivated also by the absence of any signal of new physics in the flavor sector, is the case of an inverted sfermion-mass hierarchy, with squarks and sleptons of the first two generations being very heavy (in the multi-TeV range) while the third generation and the gauginos are light, of the order of 1 TeV. Together with the requirement of small µ, this is often referred to as "effective SUSY" or "natural SUSY" in the literature.
To illustrate this case, we perform a scan over the NUHM parameter space as before, but setting the soft masses of the first two generations to m_01 = 10 TeV. For the third generation, we assume a universal soft mass m_03, which we let vary between 0 and M_1/2. As also shown in [35], in this setup a Higgs near 125 GeV with light stops is possible even for small |A_0|. This is because, during RG running, M_S is driven down by the first two generations of squarks being very heavy [74]. Therefore maximal mixing now occurs also at smaller A_0, see figure 18. Scenarios with maximal mixing, m_t̃1 below 1 TeV, and a Higgs near 125 GeV can now be found for A_0/M_1/2 ratios of around −0.2 to −2. 10 The large splitting between the first two generations of squarks and the third generation can induce a sizable off-diagonal (2,3) entry in the left squark mass matrix from RGE running when taking the full flavor structure into account. In this work, we have neglected the Yukawa couplings of the first two generations, but we have checked that the effect from minimal flavor violation (MFV) on the B-physics observables which we consider is small enough to be neglected in this study. A detailed investigation of MFV effects for split generations is left for future work.
Discussion and conclusions
If the MSSM is to accommodate a Higgs boson around 125 GeV, maximal stop mixing is the only way to avoid multi-TeV stop masses. Since the Higgs sector and the top/stop sector are coupled to each other by the large top Yukawa coupling, multi-TeV stop masses generically imply multi-TeV mass parameters in the Higgs potential. A relatively small electroweak scale can then only arise through delicate cancellations, which requires considerable fine-tuning. In the context of phenomenological models which prescribe the MSSM parameters at the TeV scale, it has therefore been argued that, in the most natural remaining regions of the MSSM parameter space, the stop masses should be as small as possible (see, for instance, [75,76]). To be compatible with a 125 GeV Higgs, the stops will then have to be maximally mixed.
However, as we have argued, one of the most appealing aspects of the MSSM is that it can be valid up to very high energies. It is not clear a priori if a given set of TeV-scale MSSM parameters can result from some healthy UV completion at a high scale, or if instead it is a point in the "swampland". For example, and of immediate relevance to maximal mixing, it is not possible to obtain X_t/M_S ≈ +2 from a GUT-scale model (barring hierarchically large GUT-scale trilinear terms), because of the negative gluino contribution to A_t during its RG evolution. Indeed, we have shown in some detail that in models where all GUT-scale soft parameters are of the order of the gaugino mass or smaller, the trilinear coupling must be large and negative at M_GUT for maximal mixing to result. In that case one has X_t/M_S ≈ −2 at the electroweak scale.
In models of high-scale SUSY breaking mediation, the fine-tuning required to obtain a small electroweak scale from a large soft mass scale also involves the gluino mass and the top trilinear. The electroweak scale is very sensitive to the UV-scale M 3 , since gluino loops strongly affect the stop masses and thus indirectly the Higgs mass parameters and the electroweak scale. This is evident e.g. from the large coefficient of M 2 3 in eq. (6.2).
The sensitivity of the electroweak scale with respect to A t is less pronounced. Overall, the least fine-tuned remaining parameter regions of the MSSM are characterized by low M 3 (or low M 1/2 if the gaugino masses are universal), correspondingly low M S , and maximal mixing; see appendix 7. Even in these regions the fine-tuning is at the permille level or worse [5,77]. Maximal mixing does not single out any particular scenario for SUSY breaking mediation; it follows from a parameter choice in models where the UV-scale trilinear soft terms are free parameters. It would be highly interesting to identify models which actually predict large and negative A-terms. Among the classes of models we studied, only gauge mediation and to some extent radion mediation predict the trilinear terms, and in these cases the prediction disfavors maximal mixing.
In models where maximal mixing is allowed, it becomes an interesting question what its phenomenological consequences are. We have studied three examples, with particular attention to the parameter regions which give a Higgs boson around 125 GeV. In each case all experimental constraints can be satisfied for suitable parameter choices. Squarks and gluinos typically turn out to be heavy, beyond the reach of the 2011 LHC run at √s = 7 TeV. They may however be within reach of the √s = 8 TeV run in favorable cases. The gg → h → γγ signal strength consistently turns out to be some 10-40% below that of the Standard Model. Table 1 lists the properties of four representative points, two for gaugino mediation and two for the NUHM case, with stops below 1 TeV and maximal mixing. GM-1 is the point with the lowest M_S, while GM-2 is the point with the lowest M_S and µ < 500 GeV from the gaugino mediation scan. Both these points have a stau as the lightest sparticle, with a relic abundance of the order of 10⁻². 11 NUHM-1 is a low-stop-mass point from the NUHM scan with M_1/2 > m_0. It features a neutralino LSP with a large higgsino component and a relic density which is too low, so that the χ̃⁰₁ would provide only about 20-30% of the dark matter. NUHM-2 has a large m_0 of order 2 TeV and A_0 ≈ −2 m_0. It exactly matches the desired Higgs mass (125 GeV) and has a bino-like neutralino LSP with a relic density of Ωh² = 0.1. The SLHA files of these maximal-mixing benchmark points are included as ancillary files in this preprint. Table 1. Sample points with light stops, maximal mixing, and a Higgs near 125 GeV in the gaugino mediation (GM-1, GM-2) and NUHM (NUHM-1, NUHM-2) models. Ωh² ('LSP') is the relic abundance of the τ̃_1 for the GM points, and of the χ̃⁰₁ for the NUHM points. σ_SI and σ_SD are the spin-independent and spin-dependent scattering cross sections off protons; for NUHM-1 these should be rescaled by a factor ξ = Ωh²/0.1123 for comparison with the experimental limits (ξσ_SI = 6.7 × 10⁻⁸ and ξσ_SD = 3.5 × 10⁻⁵ for NUHM-1).
6 Semi-numerical solutions of the MSSM renormalization group equations
The one-loop RGEs of the MSSM can be integrated analytically when keeping only the top Yukawa coupling nonzero [78]. 12 The procedure can be summarized as follows: as a first step, one writes down the closed-form solutions for the gauge coupling and gaugino mass RGEs, which are of course easily found and well known. The top Yukawa RGE is a Bernoulli equation whose solution, given the solutions for the gauge couplings, can be expressed as a simple integral. Then the A_t RGE becomes a linear ODE with known inhomogeneous term and known coefficient functions, and is easily solved in terms of an integrating factor. Finally, having solved all of these RGEs, the coupled RGEs for the third-generation scalar masses and for m²_Hu can be integrated in a suitable basis. In this paper we use a somewhat refined approach to improve precision. We keep all third-generation Yukawa couplings, and we use two-loop RGEs for the gauge couplings, Yukawa couplings, and gaugino masses. Boundary values for the gauge and Yukawa couplings are matched to SOFTSUSY GUT-scale values, in order to properly take threshold corrections into account. With the more complicated coupled two-loop system to solve, the solutions can no longer be expressed by simple integrals, but this is unnecessary to extract the information we need. We will now briefly describe our method.
As a first step, we fix some value of tan β and some soft mass scale M_S. Using appropriate boundary values for the gauge and Yukawa couplings, we solve their RGEs numerically between M_GUT and M_S. What we are eventually interested in, however, are the SUSY breaking mass parameters. Quite generally, from the structure of the RGEs and from dimensional analysis, it follows that their values at M_S are linear combinations (for dimension-one parameters) or bilinear combinations (for dimension-two parameters) of the GUT-scale parameters. Here hatted quantities denote boundary values at M_GUT as in the main text, and the α, β, γ coefficients are functions of tan β and of M_S. They are obtained by numerically solving the RGEs with special boundary conditions. For instance, setting all GUT-scale masses except M_1 to zero allows one to read off the α_a1, β_x1, and γ_φ11 coefficients from the numerical solution, and similarly for the others. The same method is also applied to the µ and B_µ RGEs.
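To illustrate the extraction procedure described above (solving the running with unit boundary conditions and reading off coefficients), here is a toy sketch. The kernel matrix below is invented purely for illustration and is not the MSSM RGE system; it only mimics a linear system of dimension-one parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy linear "RGE" system dp/dt = K(t) p for three dimension-one parameters
# p = (gaugino mass, A_t, mu).  K is an arbitrary illustrative kernel, NOT the MSSM one.
def kernel(t):
    g2 = 0.5 / (1.0 + 0.1 * t)  # a made-up running coupling
    return np.array([[-g2,      0.0,  0.0],
                     [2.0 * g2, -0.3, 0.0],
                     [0.0,      0.0, -0.1 * g2]])

def run_down(p_gut, t_span=(0.0, 30.0)):
    sol = solve_ivp(lambda t, p: kernel(t) @ p, t_span, p_gut, rtol=1e-8)
    return sol.y[:, -1]

# Extract the linear coefficients by running with unit boundary conditions:
# column j of C contains the low-scale parameters obtained by setting only the
# j-th GUT-scale parameter to 1 and all others to 0.
C = np.column_stack([run_down(e) for e in np.eye(3)])
print(C)

# Any boundary condition is then mapped to the low scale by a matrix product:
p_gut = np.array([1.0, -2.0, 0.5])   # (M_1/2, A_0, mu) in arbitrary units
print(C @ p_gut, run_down(p_gut))    # agree because the system is linear
```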
Fine-tuning
To find the least fine-tuned regions of parameter space, we quote again eq. (2.16), valid for M_S = 1 TeV and tan β = 20. A common measure of fine-tuning [83] is derived from the logarithmic sensitivity of the electroweak scale with respect to parameter variations, C_a, where a ∈ {M_1/2, A_0, m_Hu, m_Hd, m_Q3, m_U3, m_D3, µ, ...} runs over all independent dimensionful GUT-scale parameters. The fine-tuning is then estimated as 1/max_a C_a (eq. (7.3)). The worst offender in high-scale mediation scenarios is generically the gaugino mass. This remains true even if |A_0| is large enough to allow for maximal mixing: from eq. (2.16) we find C_{M_1/2} = 2.9, and for |A_0| < 3 M_1/2, C_{M_1/2} dominates. Furthermore, from the fine-tuning point of view, it is favorable to go to maximal mixing in order to raise the Higgs mass, rather than to raise the overall soft mass scale. For the least fine-tuned regions compatible with a Higgs mass m_h > 123 GeV, with M_1/2/m_Z ≈ 10 and maximal mixing, we find a fine-tuning measure of around a few permille. It is clear that the LHC Higgs mass results, when firmly established, will substantially raise the fine-tuning price of the MSSM.
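For reference, the sensitivity coefficients of ref. [83] (the Barbieri-Giudice measure, which is the conventional choice here; the paper's exact normalization may differ) are usually written as

\[
C_a \;=\; \left|\frac{\partial \ln m_Z^2}{\partial \ln a}\right| ,
\qquad
\text{fine-tuning} \;\approx\; \frac{1}{\max_a C_a},
\]

so that a maximal sensitivity of a few hundred corresponds to fine-tuning at the level of a few permille, as quoted above.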
8 Charge- and color-breaking minima
We briefly review a criterion for charge- and color-breaking (CCB) minima [84][85][86] in the context which is relevant for us. Consider the direction |Ũ_3| = |Q̃_3| = |h_u| ≡ X in field space, with all other VEVs vanishing. The potential energy along this direction reads V(X) = 3 y_t² X⁴ + 2 A_t y_t X³ + (m²_U3 + m²_Q3 + m²_Hu + |µ|²) X² + D-terms (8.1). It is minimized at a value X_0 (eq. (8.2)) and is negative at X_0 if A_t satisfies the inequality of eq. (8.3). For m_Z ≪ M_S, the r.h.s. of this inequality is 3(m²_U3 + m²_Q3 + m²_Hu + |µ|²) ≈ 6 M_S². The potential energy of the realistic electroweak vacuum is denoted V_0. If V(X_0) < V_0, there is a charge- and color-breaking vacuum. It is easily checked that, for soft terms in the TeV range, the domain V(X_0) < V_0 is reached quickly once A_t starts exceeding the bound of eq. (8.3): the electroweak vacuum is quite shallow compared to the CCB vacuum, whose scale is set by the soft terms. We therefore conclude that points with |X_t/M_S| ≳ √6 (8.5) lead to charge and color breaking. Note that this is only a sufficient criterion, and that generally stronger constraints can be found by exploring other directions in field space [87]. Furthermore, we are neglecting D-terms and loop corrections. Finally, we have evaluated all running quantities at the scale M_S here. Demanding the absence of CCB vacua at other scales may, again, lead to more restrictive bounds [87]. On the other hand, a CCB vacuum may be viable phenomenologically if the lifetime of our false vacuum is long on cosmological timescales.
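The inequality referred to as eq. (8.3) can be recovered from eq. (8.1) (neglecting D-terms, as the text does): writing V(X) = X²(3 y_t² X² + 2 A_t y_t X + Σ) with Σ ≡ m²_{U3} + m²_{Q3} + m²_{H_u} + |µ|², the quadratic factor, and hence V, becomes negative for some X precisely when its discriminant is positive,

\[
(2 A_t y_t)^2 > 4\cdot 3 y_t^2\,\Sigma
\quad\Longleftrightarrow\quad
A_t^2 > 3\left(m_{U_3}^2 + m_{Q_3}^2 + m_{H_u}^2 + |\mu|^2\right),
\]

which reproduces the right-hand side 3(...) ≈ 6 M_S² quoted in the text. The location X_0 of the nontrivial minimum follows from dV/dX = 0.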
Open Access. This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
"Physics"
] |
A Bayesian approach to estimating COVID-19 incidence and infection fatality rates
Summary
Naive estimates of incidence and infection fatality rates (IFR) of coronavirus disease 2019 suffer from a variety of biases, many of which relate to preferential testing. This has motivated epidemiologists from around the globe to conduct serosurveys that measure the immunity of individuals by testing for the presence of SARS-CoV-2 antibodies in the blood. These quantitative measures (titer values) are then used as a proxy for previous or current infection. However, statistical methods that use this data to its full potential have yet to be developed. Previous researchers have discretized these continuous values, discarding potentially useful information. In this article, we demonstrate how multivariate mixture models can be used in combination with post-stratification to estimate cumulative incidence and IFR in an approximate Bayesian framework without discretization. In doing so, we account for uncertainty from both the estimated number of infections and incomplete deaths data to provide estimates of IFR. This method is demonstrated using data from the Action to Beat Coronavirus serosurvey in Canada.
Introduction
As of April 1, 2022, there have been close to 500 million confirmed cases of coronavirus disease 2019 worldwide (World Health Organization, 2022). However, the general consensus is that this number is an underestimate of the true cumulative incidence of the disease, as this estimate is largely dependent on the number of tests being administered, the accuracy of testing (Burstyn and others, 2020a,b), and to whom these tests are being issued. If testing is extensive enough, and a correction is made for underreporting of asymptomatic cases, then a test-based case fatality rate may be a reasonable proxy for the infection fatality rate (IFR) (Luo and others, 2021). However, given that the testing early in the pandemic was sparse, and estimating IFR accurately is of the utmost importance, epidemiologists across the globe are conducting serosurveys that measure immunity of individuals by testing for the presence of SARS-CoV-2 antibodies in the blood (Chen and others, 2021). This quantitative measure (which we will call a titer value) is then used as a proxy for previous or current infection. However, how exactly these data should be used to accurately estimate important epidemiological quantities (like incidence and IFR) is an active area of research.
The standard approach is to label everyone who has a titer value above some threshold as "infected" and consider everyone else not infected. This leads to the problem of selecting the cutoff, which can be made based on known cases/controls and analysis of the receiver operating characteristic (ROC) curve. The ROC plots the true positive rate (sensitivity) versus the false positive rate (1 − specificity), and it is typical to select the cutoff that results in the highest Youden index (sensitivity + specificity − 1) (Krzanowski and Hand, 2009). Gelman and Carpenter (2020) suggest that the uncertainty in sensitivity and specificity can be considered via parameters to be estimated in a Bayesian hierarchical model, assuming that informative priors are used for the sensitivity and specificity. Although this method accounts for uncertainty in the sensitivity and specificity, it still suffers from the loss of information in the discretization process. Particularly in COVID-19 applications, a subject with an extremely high level of antibodies should have a lower probability of being a false positive than someone who is just barely above the threshold. This could be partially remedied by allowing sensitivity and specificity to be a function of covariates, but ideally methods that avoid these issues altogether are preferable.
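As a concrete illustration of the threshold-based approach criticized here, the Youden-optimal cutoff can be computed from labelled calibration data as follows (a sketch of ours; variable names are not from the paper):

```python
import numpy as np

def youden_optimal_cutoff(titers, infected):
    """Return the titer cutoff maximizing sensitivity + specificity - 1,
    given calibration samples with known status (1 = infected, 0 = not infected)."""
    titers = np.asarray(titers, dtype=float)
    infected = np.asarray(infected, dtype=int)
    best_cut, best_j = None, -np.inf
    for c in np.unique(titers):
        pred = titers >= c
        sens = np.mean(pred[infected == 1])
        spec = np.mean(~pred[infected == 0])
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = c, j
    return best_cut, best_j

cut, j = youden_optimal_cutoff([0.1, 0.2, 0.15, 1.2, 0.9, 1.5], [0, 0, 0, 1, 1, 1])
print(cut, j)  # a single threshold ignores how far each titer lies from the cutoff
```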
Mixture models are a natural choice to overcome the limitations of using a fixed cutoff, as they allow infection status and associated uncertainty to depend on the magnitude of individuals' titer values. Mixture models have been widely applied when studying the prevalence of infectious diseases in animals (Ødegård and others, 2003, 2005; Nielsen and others, 2007) and in humans (Vink and others, 2015, 2016; Kyomuhangi and Giorgi, 2022). There are several other papers that have modeled COVID-19 antibody levels directly to infer cumulative incidence through the use of mixture models. Bouman and others (2021) showed that mixture models can outperform the methods of Gelman and Carpenter (2020) for estimation of cumulative incidence of COVID-19. Furthermore, Bottomley and others (2021) apply mixture models to Kenyan serosurvey data and show that a mixture of skew normal distributions more accurately estimates cumulative incidence than methods based on thresholds. However, the applications of these models thus far have been rather limited. For instance, some unexplored questions include: how do we use these mixture models to account for survey bias and get cumulative incidence rates for the general population? How do we incorporate multiple titer values per person? How do we estimate cumulative incidence in the presence of vaccinated individuals? How do we use these mixture models to estimate IFR while accounting for uncertainty in both the number of infections and deaths?
In this article, we demonstrate how mixture models can be used to estimate cumulative incidence in an approximate Bayesian framework without discretization. Specifically, we apply a mixture of multivariate t-distributions to the log of the titer values, using a logistic regression model for the mixing parameter to account for covariates. We then use post-stratification to obtain estimates of cumulative incidence and its associated uncertainty. Furthermore, we estimate the number of COVID-19-related deaths using partially complete data and use this in combination with incidence estimates to estimate the IFR across Canada.
Data
Dried blood spot (DBS) samples were collected from participants of the Action to Beat Coronavirus (Ab-C) study (https://www.abcstudy.ca/). This article is concerned with the first two phases of the study. In Phase 1, DBS samples from 9123 participants were collected from June to November 2020, roughly corresponding to the first viral wave (April 1-July 31, 2020). In Phase 2, DBS samples from 7299 participants were collected from December 2020 to May 2021, roughly corresponding to the second viral wave (October 1, 2020-March 1, 2021). These blood spots were tested for prevalence of immunoglobulin G (IgG) antibodies, measured using three antigens: Spike (SmT1), RBD, and nucleocapsid (NP). Two different versions of the SmT1 antigen test were used on the Phase 1 blood spots, while all three were applied to Phase 2 blood spots. All three titers will show larger values for participants who have been exposed to COVID-19, but only SmT1 and RBD will show larger values for mRNA-vaccinated individuals. This is because the mRNA vaccines do not contain the nucleocapsid (NP) protein. Therefore, people who received an mRNA vaccine and did not have a history of prior infection will not develop anti-NP antibodies. Those that were previously infected, regardless of vaccination status, will have anti-NP antibodies (Houlihan and Beale, 2020). This will be helpful for distinguishing between vaccinated and infected individuals in Section 3.3. In Phase 1, 8919 people had one SmT1 measurement, and 8704 had two SmT1 titer measurements, along with complete covariate information. In Phase 2, 7065 had all three measurements, along with complete covariate information. Of those 7065, 624 joined the study in Phase 2 (6441 participants had complete Phase 1 and Phase 2 data). These data have been previously analyzed by Tang and others (2022) using a simpler model. Additional medical details regarding these antigen tests can be found in their paper. Tang and others (2022) also investigated the representativeness of study participants when compared to the Canadian population. They found that the study population tended to be older, more university educated, more likely to be indigenous, etc. See eTable 3 in their paper for further reading.
Although serosurveys are a proven way to accurately measure seroprevalence, the notion of seroprevalence itself has several drawbacks. Firstly, there is a chance that participants got infected and returned their blood spots soon after. Antibodies generally take between 7 and 14 days to be measurable from the onset of infection (Centre for Disease Control and Prevention, 2022). This may cause a slight under-estimation of incidence. Secondly, antibodies wane slowly over time. However, they have been shown to remain elevated for many months after infection. In a study (Alfego and others, 2021) evaluating 39 086 individuals with confirmed positive COVID-19 infection by RT-PCR between March 2020 and January 2021, the anti-NP antibody remained elevated in 68.2% [95% CI: 63.1-70.8] of participants after 293 days, while the anti-SmT1 antibody remained elevated in 87.8% of participants after 300 days. Note that the majority of people in our study were likely infected far less than 300 days prior to submitting their blood spots, so the maintenance percentage in our study was likely higher than those in Alfego and others (2021). At this point, we simply note these limitations of seroprevalence, and examine the potential impact of waning immunity on our results in Appendix F.
Population demographics (age, sex, province, ethnicity, education, and long-term care residency) were obtained from 2016 Census data from Statistics Canada (Statistics Canada, 2016). We are using the 2016 Census data because, at the time of writing, the 2021 Census data pertaining to education and ethnicity were not available. The age/sex/geographic data for 2021 were available, and while the total population increased roughly 5% between 2016 and 2021, the age-sex and geographic distributions were nearly identical. This information will be used for post-stratification as described in Section 2.3. The long-term care (LTC) COVID-19 deaths were obtained from https://ltc-covid19-tracker.ca (Samir and others, 2022) between September 2020 and March 2021 for each province. The total deaths for each province by age and sex were obtained from the different provincial governments (Ontario, Alberta, and Quebec). For additional provinces, where deaths by age and sex could not be obtained, we used the distribution of nearby provinces to approximate those deaths. The age/sex distribution of deaths in Alberta was used to infer the distribution of deaths in British Columbia and Saskatchewan. The age/sex distribution of deaths in Quebec was used to infer the distribution for the Atlantic region (New Brunswick, Nova Scotia, Newfoundland, and Prince Edward Island). Manitoba reported different age groups than Ontario but seemed to have a similar distribution. Thus, we used Ontario data to infer Manitoba's age/sex deaths for the different age groups. This means that although the aggregate IFR estimates for the Atlantic region, Manitoba, British Columbia, and Saskatchewan are likely valid, the estimates by age/sex should be treated with caution due to the imputations noted above.
Methods
Our first goal is to estimate the cumulative incidence of SARS-CoV-2 in Canada.We define cumulative incidence in Phase 1 to be the number of SARS-CoV-2 infections up until September 30th 2020, divided by the population size.The cumulative incidence in Phase 1 and 2 has the cumulative number of infections up until March 31st 2021 as the numerator.We define the incidence proportion in Phase 2 to be the number of infections from October 1st 2020 to March 31st 2021, divided by the population size.We recognize that the terms cumulative incidence and incidence proportion are used interchangeably in the epidemiology literature, and we are avoiding the term "cumulative" when presenting estimates of incidence in Phase 2 alone.We estimate incidence in two steps.First, we will fit a Bayesian mixture model to the titer values, relating an individual's infection status, a latent variable, to their measured covariates via a logistic regression model.Second, we will use post-stratification to account for the disparity between the population of survey responders versus the general Canadian population.This will yield an estimate of the number of infections in Canada for each covariate combination, and hence, an estimate of the cumulative incidence.
Our second goal is to estimate the Infection Fatality Rate, which is defined as the number of COVID-19 related deaths divided by the number of infections.This will be estimated in Phase 1, Phases 1 and 2, and Phase 2 alone with the same time periods as mentioned previously.We do this by building a Bayesian model for the number of deaths in Canada by age/sex/province group, and dividing this by the estimated number of infections.This will allow for estimates of IFR in any age/sex/province category that we want, accounting for uncertainty in both the deaths and the infections.
Notation
Lower case Latin letters are used to represent (potentially vector-valued) observed data; x are observed covariates, w is observed titer values, and d is observed deaths. The exception is p, which is an unknown probability of infection. Upper-case Latin letters represent latent variables ("missing data"), such as the unknown number of infections Y, an unknown number of deaths D, and the latent infection status Z of an individual. Greek letters will be used for model parameters.
Mixture models
In this subsection we will introduce three mixture models that will be used to infer cumulative incidence. First, we will introduce a univariate (one titer value), two-component ("not infected" and "infected") mixture model, relating each study participant's covariates to their probability of infection. We will then extend this model to the bivariate case with two titer values in Section 2.2.2. These two models will be fit to the Phase 1 data. We will then present a trivariate, three-component ("unvaccinated, not infected," "unvaccinated, infected," and "vaccinated, not infected") mixture model that will be fit to the Phase 2 data. Note that the "infected" group here contains both vaccinated and unvaccinated people, as our titer values are not precise enough to determine vaccination status if a person is infected. This is likely inconsequential, as we will explain shortly.
2.2.1. Univariate mixture of t-distributions-Phase 1. The infection status, Z_i, of an individual i is latent and is measured through an antibody lab test (titer), which is a quantitative measure. The density of the logged Phase 1 SmT1 titer values is shown in Figure 1. Notice that there is an approximately symmetric mound around 0.15, which is likely comprised of individuals who never had COVID-19. Previously, Gaussian distributions were used to model the logged titer values in noninfected individuals (Bottomley and others, 2021). However, we expected a heavier-tailed distribution would be needed, and employ a t-distribution for both the negative and positive individuals.
The univariate, two-component version of our mixture model can be written as follows:

w_i | (Z_i = k) ∼ f_1(μ_k, σ_k, ν_k),  Z_i | p_i ∼ Bernoulli(p_i),  p_i = logit^{-1}(β^T x_i),

where β is the m-dimensional vector of regression coefficients which will be used for post-stratification as described in Section 2.3, f_1 is the univariate (shifted and scaled) t-density, and p_i = logit^{-1}(β^T x_i) is the probability that individual i has been infected with COVID-19. That is, the probability that someone had COVID-19 is a function of their covariates, but the parameters of the t-distributions are not. The covariates used in our mixture models were age (<20, 20-29, 30-39, 40-49, 50-59, 60-69, 70-79, and 80+), sex (male, female), province (Alberta, Atlantic Region, British Columbia, Manitoba, Ontario, Quebec, and Saskatchewan), ethnicity (white, indigenous, not white or indigenous), and education (university degree, college degree, and less than college degree), meaning that m = 18. Since Z_i is a latent discrete variable, certain MCMC software programs cannot sample it directly. However, we can marginalize Z_i out to obtain the following likelihood:

π(w | β, ξ, x) = ∏_{i=1}^{n} [ (1 − p_i) f_1(w_i | μ_0, σ_0, ν_0) + p_i f_1(w_i | μ_1, σ_1, ν_1) ],

where ξ = {μ_0, μ_1, σ_0, σ_1, ν_0, ν_1} is a vector of parameters which need to be estimated but are not used to infer incidence directly.
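As an illustration, the marginalized likelihood above can be evaluated numerically. The sketch below is not taken from the article's code; it simply restates the two-component mixture in Python, assuming SciPy's location-scale t-distribution and a design matrix x containing the m = 18 dummy-coded covariates.

```python
import numpy as np
from scipy.special import expit, logsumexp
from scipy.stats import t as student_t

def marginal_loglik(w, x, beta, mu, sigma, nu):
    """Marginalized log-likelihood of the univariate two-component t mixture.

    w     : (n,) logged SmT1 titer values
    x     : (n, m) covariate design matrix (m = 18 dummy-coded covariates)
    beta  : (m,) logistic-regression coefficients
    mu, sigma, nu : length-2 arrays; index 0 = "not infected", index 1 = "infected"
    """
    p = expit(x @ beta)  # p_i = logit^{-1}(beta^T x_i)
    log_f0 = student_t.logpdf(w, df=nu[0], loc=mu[0], scale=sigma[0])
    log_f1 = student_t.logpdf(w, df=nu[1], loc=mu[1], scale=sigma[1])
    # log[(1 - p_i) f1(w_i | mu0, sigma0, nu0) + p_i f1(w_i | mu1, sigma1, nu1)], computed stably
    per_obs = logsumexp(np.stack([np.log1p(-p) + log_f0, np.log(p) + log_f1]), axis=0)
    return per_obs.sum()
```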
For both Phase 1 and Phase 2, we have continuous values for multiple titers and thus will now extend this univariate mixture model to a mixture of multivariate t-distributions.
A bivariate mixture model for Phase 1.
For Phase 1, we have two measurements of SmT1 for each sample. Using both titers should improve our ability to identify individuals who were infected. Our model naturally extends to the bivariate case by replacing the univariate t-distribution by a bivariate t-distribution (f_2):

w_i | (Z_i = k) ∼ f_2(μ_k, Σ_k, ν_k),

where μ_k is a vector of length 2, Σ_k is a 2 × 2 covariance matrix, and the rest of the parameters are the same as Section 2.2.1. Note that the logistic regression model for Z_i in the second level is still univariate. This allows the model to accommodate multiple titer values per person without the number of parameters getting out of control. We fit this bivariate model on the two Phase 1 titer values using MCMC to obtain posterior samples of β which will be used later for post-stratification.
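For intuition, the bivariate component densities f_2 can be evaluated with SciPy's multivariate t-distribution (available in SciPy 1.6 and later). The parameter values below are purely hypothetical placeholders, not the article's estimates.

```python
import numpy as np
from scipy.stats import multivariate_t

# Hypothetical component parameters (NOT the article's estimates)
mu0, mu1 = np.array([0.1, 0.1]), np.array([1.8, 1.6])
Sigma0 = np.array([[0.05, 0.02], [0.02, 0.05]])
Sigma1 = np.array([[0.40, 0.25], [0.25, 0.40]])
nu0, nu1 = 6.0, 10.0

w_i = np.array([2.1, 1.9])  # one participant's two logged SmT1 titers
log_f0 = multivariate_t(loc=mu0, shape=Sigma0, df=nu0).logpdf(w_i)  # "not infected" component
log_f1 = multivariate_t(loc=mu1, shape=Sigma1, df=nu1).logpdf(w_i)  # "infected" component
```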
A trivariate, three-component mixture model for Phase 2.
In Phase 1, vaccinations had not yet been made available and Z i could only take on two values: "infected" or "not infected".However, during Phase 2, a non-negligible proportion (≈ 2.5%) had claimed to have been vaccinated.Given that vaccinated people are distinguishable from infected people based on the three titer values that we have available, we now have three mutually exclusive values for Z i : "unvaccinated, not infected," "unvaccinated, infected," and "vaccinated, not infected."We did not include a fourth group "vaccinated, infected," as there were likely to be very few participants in this category.Note that we can differentiate between "vaccinated, not infected" and "unvaccinated, infected" individuals because infected individuals will tend to have high titer values for all three titers, while vaccinated individuals should not have an elevated titer value for NP.That is, if a participant shows a high value of SmT1 and RBD, and a low value for NP, it should predict a small probability of infection.If a participant has a large value for all three, then the model should predict a large probability of infection.Furthermore, we decided not to use self-reported vaccination status as data, as only about half of the participants who claimed to be vaccinated were showing large values of SmT1 and RBD.This may be because they had only received one dose, or perhaps they had provided their blood spot less than 2 weeks since their second dose.Either way, we want the data (titer values) to determine SARS-CoV-2 incidence, rather than rely on self-reported claims of vaccination.
In addition to having three infection statuses, we also now have three titer values which we can use to define a mixture of three trivariate t-distributions (f_3). The likelihood for this trivariate model is

π(w | β, ξ, x) = ∏_{i=1}^{n} [ (1 − p_i)(1 − ρ) f_3(w_i | μ_0, Σ_0, ν_0) + p_i f_3(w_i | μ_1, Σ_1, ν_1) + (1 − p_i) ρ f_3(w_i | μ_2, Σ_2, ν_2) ],

where ρ = Prob(Z_i = 2 | Z_i ≠ 1). Here, Prob(Z_i = 1) = p_i = logit^{-1}(β^T x_i), as before. We fit this trivariate model to Phase 2 data using Bayesian MCMC to obtain posterior samples of β which will be used for post-stratification.
Estimating incidence using post-stratification
Incidence is defined as the number of people with an infection in a given time frame, divided by the population. We estimate incidence of COVID-19 in a subgroup of Canadians G by taking posterior samples of

I_G = ( Σ_{hℓj∈G} Y_{hℓj} ) / ( Σ_{hℓj∈G} n_{hℓj} ),

where h is ethnicity/education, ℓ is age/sex, j is province, p_{hℓj} is the probability of COVID-19 infection (as in Equation 2.2) for a person with covariate combination hℓj, Y_{hℓj} is the number of people in Canada with covariate combination hℓj who were infected with COVID-19, and n_{hℓj} is the number of people in Canada with covariate combination hℓj. To obtain samples of I_G we first fit the mixture models presented in Section 2.2 to obtain T posterior samples of p_{hℓj}. We then use post-stratification (Little, 1993) to generalize these results to the Canadian population. That is, we draw one sample

Y_{hℓj}^{(t)} ∼ Binomial(n_{hℓj}, p_{hℓj}^{(t)})

for each t = 1...T. We then compute I_G^{(t)} = Σ_{hℓj∈G} Y_{hℓj}^{(t)} / Σ_{hℓj∈G} n_{hℓj} for t = 1...T, which are then used to obtain point estimates and credible intervals for cumulative incidence in Phase 1 and Phases 1 and 2 combined. The incidence proportion in Phase 2 is estimated by computing these two cumulative incidence estimates for each t, then taking the difference.
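The post-stratification step can be sketched as follows, under the assumption that the census counts and the posterior draws of the cell-level infection probabilities are already arranged as arrays with one column per covariate cell; the per-cell Binomial draw mirrors the sampling step described above. Array and function names are illustrative, not from the article's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def incidence_samples(p_draws, n_cells, in_g):
    """Posterior samples of incidence in subgroup G via post-stratification.

    p_draws : (T, C) posterior draws of the infection probability for each census cell
    n_cells : (C,) census count of Canadians in each covariate cell
    in_g    : (C,) boolean mask selecting the cells that make up subgroup G
    """
    T = p_draws.shape[0]
    out = np.empty(T)
    pop_g = n_cells[in_g].sum()
    for t in range(T):
        y = rng.binomial(n_cells[in_g], p_draws[t, in_g])  # Y ~ Binomial(n, p^(t)) per cell
        out[t] = y.sum() / pop_g
    return out

# Phase 2 incidence proportion: per-draw difference of the two cumulative incidences
# inc_phase2 = incidence_samples(p_phase12, n, g) - incidence_samples(p_phase1, n, g)
```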
Estimating infection fatality rates outside of long-term care homes
The infection fatality rate (IFR) is a measure of the deadliness of a disease. It is defined as

IFR = (number of COVID-19 related deaths) / (number of SARS-CoV-2 infections).

The methods described in Sections 2.2 and 2.3 provide estimates of the denominator with associated uncertainty, but we still need to estimate the number of deaths in the numerator. The number of COVID-19 related deaths in Canada is publicly available, but includes long-term care (LTC) residents. Our target of inference is the IFR for the "community-dwelling" Canadian population and does not apply to people living in LTC homes. The spread of COVID-19 is substantially different in LTC homes than in the general population, and residents of LTC homes are particularly vulnerable to severe illness and death from infection; see Danis and others (2020). Indeed, nearly 80% of the reported deaths from COVID-19 prior to September 2020 in Canada were in LTC homes (Samir and others, 2022). Modeling the spread and mortality of COVID-19 within LTC homes will require unique approaches and should be considered in a separate analysis; see the recommendations of Pillemer and others (2020). The Ab-C study excludes residents of LTC, and thus we need to exclude this population from our numerator as well. To do this, we will extend our post-stratified mixture models to estimate the deaths outside of long-term care homes, using publicly available COVID-19 deaths data and long-term care deaths data described in Section 1.1.
In the rest of this section, we describe the extended mixture model and algorithm used to estimate IFR in this article.We start by displaying the full model with a description of each component.We then provide a directed acyclic graph (DAG) that displays the relationship between all quantities in the model.We then provide a full factorization of the posterior distribution and explain how our algorithm approximates this posterior.
2.4.1. The complete model. The full model is shown in Equations 2.3a-2.3h, followed by a description of each component. Equations 2.3a-2.3c represent the mixture model and post-stratification described previously, and will be referred to as "Module 1" of our IFR model. Equations 2.3d-2.3h represent the model extension to estimate the number of deaths outside of long-term care and will be referred to as "Module 2." Left aligned are the model components; right aligned is the nomenclature used in the posterior factorization in Section 2.4.2.
• Indices: h, ℓ, and j represent education/ethnicity, age/sex, and province groups, respectively. Subscripts 1 and 2 are used to distinguish between quantities outside and within long-term care, respectively.
• 2.3a: The log of the titer values, w_i, of individual i follow a (shifted and scaled) multivariate t-distribution, with parameters that depend on the infection status Z_i = k of that individual; k = 0: "unvaccinated, not infected," k = 1: "unvaccinated, infected," k = 2: "vaccinated, not infected" (for Phase 2 only).
• 2.3b: An individual's infection status, Z_i, depends on the infection probability corresponding to that individual's covariate combination, p_{hℓj[i]}.
• 2.3c: The number of infections in Canada with covariate combination hℓj is determined by the number of people in Canada with that covariate combination, n_{hℓj}, and the probability, p_{hℓj}, that a person with that covariate combination was infected.
• 2.3d: The number of deaths outside long-term care in age/sex/province group ℓj, D_{1ℓj}, depends on the number of infections in that group, Y_{•ℓj}, and the infection fatality rate in that group, η_{ℓj}. Note that we do not attempt to estimate the deaths by education and ethnicity, which is why we sum over h in Y_{•ℓj}.
• 2.3e: The total number of COVID-related deaths in age/sex/province group ℓj, d_{ℓj}, has death rate equal to the sum of the death rate outside long-term care, λ_{1ℓj}, and the death rate inside long-term care, λ_{2ℓj}.
• 2.3f: Within long-term care, we only know the deaths aggregated by province (the age/sex distribution is unknown). If we assume that the number of deaths inside long-term care in age/sex group ℓ and province j follows an independent Poisson process with mean λ_{2ℓj}, then the deaths aggregated by province, d_{2•j}, will be Poisson distributed with mean λ_{2•j} = Σ_ℓ λ_{2ℓj}. Note that if we knew d_{2ℓj}, there would be no need for Module 2.
• 2.3g: In each age/sex/province group, the mean number of deaths (death rate) outside long-term care, λ_{1ℓj}, is the product of the number of infections outside of long-term care, Y_{•ℓj}, and the infection fatality rate outside long-term care, η_{ℓj}.
• 2.3h: In each age/sex/province group, the mean number of deaths (death rate) within long-term care, λ_{2ℓj}, is the product of the number of people in Canada in long-term care, n_{2ℓj}, and the COVID-19 death rate in long-term care, θ_{ℓj}. A generative sketch of these death-rate components follows Figure 2's caption below.

Fig. 2. Directed acyclic graph corresponding to the model presented in Equations 2.3a-2.3h, with subscripts omitted. Lower case Latin letters are known; all other terms are unknown. Module 1 is the portion of the model concerned with estimating infections. Module 2 is the portion of the model concerned with estimating deaths. The red arrows indicate a one-directional flow of information, and are the reason we are sampling from the cut distribution as opposed to the Bayesian posterior. β is the effect of covariates, x, on the log(odds) of infection; Z is infection status; w represents titer values from the serosurvey; ξ are the parameters of the multivariate t-distributions; Y is the number of infections outside of long-term care; D is the number of deaths outside long-term care; d is the total number of deaths by age/sex/province; d_2 is the number of deaths inside long-term care by province; η is the population average probability of death given infection; θ is the COVID-19 death rate in long-term care.
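The death-rate components in Equations 2.3d-2.3h can be illustrated with a small generative sketch for a single age/sex/province group. The Poisson forms used below are an assumption made for illustration (the original display equations are not reproduced in this excerpt), but they are consistent with the rate descriptions above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group_deaths(Y, eta, n_ltc, theta):
    """Generative sketch for one age/sex/province group.

    Y      : infections outside long-term care in the group (Module 1 output)
    eta    : infection fatality rate outside long-term care
    n_ltc  : number of long-term-care residents in the group
    theta  : COVID-19 death rate inside long-term care
    """
    lam1 = Y * eta        # death rate outside long-term care (2.3g)
    lam2 = n_ltc * theta  # death rate inside long-term care (2.3h)
    D1 = rng.poisson(lam1)        # deaths outside long-term care (2.3d), Poisson form assumed
    d = rng.poisson(lam1 + lam2)  # total reported deaths in the group (2.3e)
    return D1, d
```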
2.4.2. Approximating the Bayesian posterior. Figure 2 displays the model represented in Equations 2.3a-2.3h as a DAG. Based on this DAG, the full posterior can be factored as shown in Equation (2.4). However, sampling from this posterior poses a computational challenge, as Y and D are both discrete latent variables, and all three terms in π(D|Y, η) are unknown. Instead, we sample from the "cut distribution" (Plummer, 2015), which is the same as Equation 2.4 but with the dependence on d in π(Y|β, x, d) dropped. The removal of this dependence is sometimes referred to as "cutting feedback." Since we are not allowing our deaths data to influence our infection estimates, we are only approximating Bayesian inference when computing IFR. The cut distribution has been shown to give more sensible results than the full posterior in some scenarios where certain portions (modules) of the model are misspecified, or data quality is poor (Lunn and others, 2009). It is important to note that our serosurvey data are very high quality individual-level data, but our deaths data are partially imputed and are from an unofficial source. The cut model allows us to base our estimates of incidence solely on the serosurvey data (and census data), while still utilizing all data sources to estimate IFR. We sample from the cut distribution using the following two-step algorithm: (1) We first sample from the joint posterior of the parameters in the first module, π(Y | β, x) π(β, ξ, Z | x, w), which is the same as the Module 1 portion of Equation 2.4 but with the dependence on d dropped in the first term. We sample from this distribution by obtaining T (post burn-in) posterior samples of each parameter using π(β, ξ, Z | x, w) = π(w | ξ, Z) π(Z | β, x) π(β) π(ξ) as a target distribution in MCMC. We then draw a sample, Y^(t), from π(Y | β^(t), x) for t = 1...T.
We used this algorithm for both Phase 1 and Phase 2 data, obtaining T samples of (Y_{•ℓj}, D_{1ℓj}) from π_cut(Y, D). We then estimate IFR by computing samples from π_cut(IFR_G) for any subgroup of Canadians G outside of long-term care:

IFR_G^{(t)} = Σ_{ℓj∈G} D_{1ℓj}^{(t)} / Σ_{ℓj∈G} Y_{•ℓj}^{(t)}   (2.5)

for each t = 1...T. We can then compute point estimates with uncertainty for all of Canada, and any age/sex/province combination that we so please. We compute the IFR_G for various age/sex/province combinations using univariate and bivariate models to estimate the denominators for the Phase 1 data, and the multivariate model for Phase 1 and 2 combined. We do not attempt to estimate IFR by education/ethnicity, so we sum over h in Y_{•ℓj}. Since individuals who were likely to be positive in Phase 1 were also likely to be positive in Phase 2, estimating incidence and deaths just based on Phase 2 data will also include people who were likely infected in Phase 1. In order to estimate the new infections and deaths (and as a result, IFR) in just Phase 2, we found posterior samples of Y from the multivariate model and subtracted the posterior samples from the bivariate model to get the denominator. The same was done for the deaths D for each posterior sample, allowing us to calculate IFR for any subgroup we desire.
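Given matched posterior draws of infections and deaths outside long-term care for each age/sex/province cell, Equation (2.5) amounts to a ratio of sums per draw. A minimal sketch (array names are illustrative, not from the article's code):

```python
import numpy as np

def ifr_samples(d1_draws, y_draws, in_g):
    """Samples from pi_cut(IFR_G), as in Equation (2.5).

    d1_draws : (T, C) draws of deaths outside long-term care per age/sex/province cell
    y_draws  : (T, C) draws of infections outside long-term care per cell
    in_g     : (C,) boolean mask over age/sex/province cells in subgroup G
    """
    return d1_draws[:, in_g].sum(axis=1) / y_draws[:, in_g].sum(axis=1)

# Phase 2 alone: subtract Phase 1 (bivariate-model) draws from Phase 1 + 2 (trivariate) draws
# ifr_phase2 = ifr_samples(d1_both - d1_phase1, y_both - y_phase1, in_g)
```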
Priors
In all three mixture models, a weakly informative prior of N(0, 1) was used for each β. This will stabilize estimates in groups with a small amount of data, and have little effect on those that have a lot of data. A weakly informative penalized complexity prior was put on the degrees of freedom in all three models (see Appendix A). In the multivariate cases, informative priors were used to overcome well-known computational challenges of fitting Bayesian mixture models, as noted in the Stan documentation (Betancourt, 2017). We describe our informative priors and their justifications in detail in Appendix D.1. In the reproducible example that we provide in the Supplementary material available at Biostatistics online, we show that our results are not too sensitive to "mis-specified" informative priors on the mixture components. We also note that it is primarily the estimation of the β's that influences the results of this article. A weakly informative prior was used on the covariance matrices Σ_k, as recommended by Section 1.13 of the Stan User's Guide (Stan Development Team, 2021). A complete list of priors for all models is presented in Appendix D.
Inference
Each model was run using No-U-Turn sampling, a form of Hamiltonian Monte Carlo that is readily available in the Stan software (Carpenter and others, 2017;Stan Development Team, 2021).Four chains with 1000 iterations, with the first half being warmup, were used for each model component.Traceplots were used to visually assess convergence of Markov chains, alongside values of Rhat < 1.01 confirming an appropriate amount of mixing (Vehtari and others, 2021).Point estimates are taken to be the 50th percentile of the (approximate) posterior distributions, and credible intervals (CrI's) are computed using the 2.5th and 97.5th quantiles.
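Once the post warm-up draws are pooled across the four chains, the reported summaries are simple quantiles of the draws; a minimal sketch:

```python
import numpy as np

def summarize(draws):
    """Posterior median and 95% credible interval from pooled MCMC draws."""
    lo, med, hi = np.percentile(draws, [2.5, 50, 97.5])
    return {"estimate": med, "95% CrI": (lo, hi)}
```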
Univariate model-Phase 1
Estimated cumulative incidence and IFR by age group is presented in Figure 5.Using the univariate model, the overall estimated cumulative incidence in Phase 1 (February-Sept 2020) is 1.79% (95% CrI: 1.21-2.66),which is similar to the estimate presented in Tang and others (2022) of 1.9% (95% CI: 0.7-4.7).Using this model for the denominators in the IFR calculation leads to an estimated infection fatality rate of 0.35% (95% CrI: 0.24-0.52)for all Canadians outside of long-term care homes.This is, again, consistent with the estimates presented in Tang and others (2022) of 0.373 (95% CI: 0.153-1.024).
When we look at the age distribution of cumulative incidence, we see a general downward trend with increasing age, with estimates for the age group 70+ being the smallest at 0.71% (95% CrI: 0.24-1.74).However, the credible intervals all overlap which suggests that incidence is similar between age groups.We see an upward trend in IFR with increasing age, with non-overlapping credible intervals.This is to be expected, as COVID-19 is now known to be much deadlier in older populations (Williamson and others, 2020).
A plot of the two univariate t-distributions is shown in Figure 1.Notice that the density plot for the positive group has mass to the left of the cutoff used by Tang and others (2022), and the negative group has mass to the right of the cutoff.Large values of titers (>2) will show high probability of SARS-CoV-2 incidence from our model, but this is not true for titer values around 0.5.If these values had been discretized using a fixed cutoff, participants with very large titer values would be indistinguishable from those with values of ≈ 0.5, thus would have the same probability of being false positives.Although this univariate case works well to demonstrate our method, we will use the results from the bivariate model when computing estimates for Phase 1.
Bivariate model-Phase 1
Figure 5 presents estimated cumulative incidence and infection fatality rates for the bivariate model in Phase 1 using both SmT1 titers.The overall cumulative incidence for Canada was 1.60% (95% CrI: 1.15-2.23).This point estimate is somewhat consistent (slightly lower) with the univariate results, with a smaller credible interval.This is reassuring, since our uncertainty should decrease as more data is used in the model.Our Phase 1 estimates are comparable with the estimate for seroprevalence in Canada from O'Driscoll and others (2021) of 1.4% (CI: 1.16-1.68,as of September 1st 2020).The estimated overall infection fatality rates for residents outside of long-term care homes was 0.39% (95% CrI: 0.27-0.56),which is also consistent with our univariate results.We will use the bivariate results for Phase 1 going forward.
When broken down by age, we see very similar trends in both cumulative incidence and IFR as with the univariate model. We also see slightly reduced uncertainty in all age groups, which is to be expected since we are adding more information (an extra titer value) into the model. The decrease in uncertainty is small, suggesting that the additional assay did not provide much additional information when predicting infection. We can investigate which titer value had more influence on the probability of infection by computing

Prob(Z_i = 1 | w_i) = p_i f_2(w_i | μ_1, Σ_1, ν_1) / [ (1 − p_i) f_2(w_i | μ_0, Σ_0, ν_0) + p_i f_2(w_i | μ_1, Σ_1, ν_1) ].

That is, we compute the probability of infection given the titer values, which is easily computed based on results from (2.2).
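Concretely, this membership probability is a standard application of Bayes' rule within the fitted mixture. A sketch, with p_i the covariate-based probability from the logistic regression and the component parameters taken, for example, as posterior medians (values not reproduced here):

```python
import numpy as np
from scipy.stats import multivariate_t

def prob_infected(w_i, p_i, mu0, Sigma0, nu0, mu1, Sigma1, nu1):
    """Prob(Z_i = 1 | w_i): posterior probability of infection given both titer values."""
    f0 = multivariate_t(loc=mu0, shape=Sigma0, df=nu0).pdf(w_i)  # "not infected" component
    f1 = multivariate_t(loc=mu1, shape=Sigma1, df=nu1).pdf(w_i)  # "infected" component
    return p_i * f1 / ((1.0 - p_i) * f0 + p_i * f1)
```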
Figure 3 shows the probability of infection given each individual's titer values using the Bivariate mixture of t-distributions.Our model seems to "trust" the Sinai titer value more, given that it predicts a high probability when the Sinai value is high, even if the Euroimmune titer value is low.Our model seems to be indeterminate around the cutoff (Sinai titer value ≈ 0.5) that was chosen by Tang and others (2022), which implies some agreement between the two methods.
Trivariate model-Phase 2
Estimates of cumulative incidences and infection fatality rates in Phase 2 are presented in Figure 5(c) and (d).Using a trivariate mixture of t-distributions with three latent groups and post-stratification, the estimated incidence proportion in Phase 2 was 6.81% (95% CrI: 5. 35-8.42).This is obviously much higher than our estimates in Phase 1, which is to be expected.The estimated infection fatality rate in Phase 2 was 0.31% (95% CrI 0.25-0.39),which is slightly lower than Phase 1.This is comparable, but slightly lower than other estimates for Canadian IFR (∼ 0.65% from O'Driscoll and others (2021)), which is unsurprising since our study excluded those in nursing homes.
The incidence proportion in Phase 2 was comparable across age groups, with the IFR again trending upwards with age. In Phase 2, we see that each age category had a lower IFR than Phase 1. Our estimates of IFR by age were highly comparable to international estimates (see Table S3 of O'Driscoll and others (2021)).
The cumulative incidence and IFR's for Phase 1 and Phase 2 combined are shown in Figures 5(e) and (f).The cumulative incidence estimate is 8.41% (95% CrI: 7.04-9.92),with an IFR of approximately 0.31% (95% CrI: 0.27-0.37).The patterns in incidence and IFR by age are highly similar to those in Phase 2 alone.The probabilities of infection given the titer values of each participant are shown in Figure 4. Since our outcome is three-dimensional, three separate plots are required.Blue dots in the bottom right corner of Figures 4(a) and (b), and the top right corner of Figure 4(c), identify participants that are likely showing immunity due to being vaccinated, as vaccinated individuals should be low on NP and high on the other two.We see that our model tends to "trust" the NP and SmT1 titers more when predicting infection.People who are high on NP or SmT1 tend to have higher probabilities, while people with only high RBD values tend to have a low probability of infection.
Cumulative incidence and IFR by province
Fig. 3. Probability of infection given each individual's titer values using the bivariate mixture of t-distributions in Phase 1. Each dot represents a participant in the Ab-C study. On the x-axis is the titer value that was used in the univariate model; on the y-axis is a second SmT1 protein assay. A red dot indicates that this model predicts a high probability of infection, with blue being a low probability of infection, and purple being indeterminate.

One advantage of the methods presented in this article is that once we have posterior samples for infections and deaths outside of long-term care, we can break the results down by any covariate combination that we so please. Figure B2 shows the cumulative incidence and infection fatality rates by province in both phases. In Phase 1, Ontario had the highest point estimate for cumulative incidence, and Quebec had the highest IFR. Our estimated IFR in Ontario was 0.27% (95% CrI: 0.19-0.41) in Phase 1, which is much lower than the estimate given by Public Health Ontario at the time (2.8% as of May 17, 2020 (Public Health Ontario, 2020)). Although these numbers are not directly comparable, as our estimates do not include people in nursing homes, this likely doesn't account for all of the disparity. Public Health Ontario's number was estimated based on IFR numbers obtained using individual-level data from China (Verity and others, 2020) and was adjusted to match the age distribution of Ontario. We therefore remain somewhat skeptical of the numbers presented in Public Health Ontario (2020). When comparing our overall estimate to the estimate in Verity and others (2020) (0.657%, CI 0.389-1.33), our number is much more comparable.
In Phase 1, Quebec had a very high reported number of deaths, which was not proportional to the number of long-term-care home deaths, resulting in a high IFR.In Phase 2 Quebec's incidence went up substantially, while the IFR dropped significantly.In Phase 2, the credible intervals for both cumulative incidence and IFR overlap between provinces.
Estimates by age group in each province are shown in Figure B1.In all provinces, incidence in Phase 1 was highest in 18-to 39-year-old, and lowest in 70+ year old.With the exception of Alberta, this pattern did not hold in Phase 2, as incidence seems to be less predictable as a function of age.In each province and phase, IFR reliably trends upwards with age.
Estimates of incidence by ethnicity in each province are shown in Appendix C. In both phases, the white and indigenous groups have comparable incidences in each province.The "not white or indigenous" group (NWoI) has relatively high incidence in Ontario and British Columbia in both phases, and low incidence in the Atlantic region and Saskatchewan in Phase 2. Note that estimates of IFR are not reported by ethnicity, as we do not have (even aggregate) COVID-19 deaths data by ethnicity.
Discussion
In this article, we developed an approximate Bayesian approach to estimate cumulative incidence and IFR using a multivariate mixture of t-distributions.We used data from the Ab-C serosurvey to estimate the probability that individuals were infected with COVID-19 based on their titer values and covariate combinations, and used post-stratification to generalize our results to the Canadian population that resides outside of long-term care.Our Phase 1 cumulative incidence estimates were slightly lower than previous estimates based on fixed cutoffs.Our Phase 2 estimate was higher than the one in the literature.Furthermore, our method accounts for uncertainty in both the number of infections and the number of deaths, and is essentially a cut model where we do not allow the deaths data to affect the estimation of the number of infections.
Estimates of incidence by age do not show any noteworthy patterns other than a slight upward trend in Phase 1.In both Phase 1 and Phase 2, IFR increased with age.Furthermore, IFR was higher in Phase 2 than Phase 1 in each age group, although the overall IFR was the same.
The main strength of our approach is that it uses the exact titer values as outcomes in our model, as opposed to a discretized version which discards information.Furthermore, we can leverage multiple titer values in a multivariate model to improve estimated probabilities of infection, while being able to differentiate between previously infected and vaccinated individuals.An additional strength of our study is that error is correctly accounted for in both the calculation of the number of infections and deaths outside long-term care, and consequently, IFR.We have not considered under-reporting of COVID-19 deaths, and we acknowledge this could be a potential issue.One way to accommodate this would be to make an assumption that a known proportion of COVID-19 deaths go unreported and include draws of unreported deaths in each posterior sample of the IFR.In the absence of information of what this proportion should be, we have treated the reported death counts as correct with the caveat that the estimated IFRs only refer to deaths directly attributed to COVID-19.
A methodological limitation of this study is that we are assuming that both the infected and uninfected groups follow a multivariate t-distribution.This may not be the most appropriate distribution for these data, and perhaps a distribution that allows for skewness may be more appropriate.Although our model makes no direct assumption about sensitivity and specificity, these two quantities are directly related to the length of the tails of the t-distributions for any given cutoff.However, the parameters of the multivariate t-distribution are estimated from the data, so our method is analogous to a non-discretized version of the methodology presented in Gelman and Carpenter (2020), where sensitivity and specificity are parameters to be estimated in the model.
A second limitation is that some participants responded before the end of each phase's observation window, such that they could have returned a "negative" dry blood spot sample and subsequently become infected. This would lead to slightly underestimating incidence (overestimating IFR). On the other hand, there is a time lag between infection and death, so if we counted infections up until the end of September 2020, then those infected people could experience death several weeks later and not be recorded. However, given that the vast majority of participants returned their blood samples more than two weeks prior to each phase's end date (see Figure G1), we figured that accounting for this time lag was not necessary.
A third limitation of our methodology is that we were unable to incorporate information regarding Phase 1 infection probabilities (from SmT1 protein) into our Phase 2 estimates of incidence.Although Phase 1 and Phase 2 SmT1 protein titer values are not directly comparable (due to the assays being calibrated slightly differently), we recognize that there is some potential to treat the SmT1 titer longitudinally from Phase 1 to Phase 2. However, we figured that this would require a drastic reworking of our current model and inference framework, and thus we deemed it out of the scope of this article.The potential consequence of this is a slight underestimate of cumulative incidence at the end of Phase 2, as some "infected" individuals in Phase 1 may be overlooked by solely looking at Phase 2 titer values (see Appendix E for a sensitivity analysis and discussion), with waning being one potential cause.However, Tang and others (2022) show that roughly 80% of people retain their "seropositivity" status from Phase 1 to Phase 2. The exploratory analysis presented in Appendix F suggests that waning may not be a large issue.It is also possible that people who were infected in Phase 1 were reinfected in Phase 2. Reinfected individuals will likely have titer values that are exceptionally high, which would affect our estimates of the parameters for the mixture distributions.This also would make the interpretation of incidence murky, as reinfected people only count as one infection.We suspect this to be more of an issue when estimating incidence/IFR at later dates, as the number of reinfected individuals in our study is expected to be very small.
A direction for future work will be to apply these methods to upcoming Phase 3 and Phase 4 data that includes a much larger vaccinated population, as well as breakthrough infections in people who have been vaccinated.Furthermore, we will have to account for reinfections as the populations' immunity wanes and new variants emerge.This could involve a longitudinal mixture model or Hidden Markov Model.Furthermore, an improved serosurvey design and associated statistical methodology that allowed for estimation of incidence (and consequently, IFR) in real-time would be an ambitious and highly interesting area of future research.
This study only looks at humoral immune response, but cellular immunity also plays an important role in the immune response to SARS-CoV-2.Other studies have evaluated the effects of T-cell response in infected people (Guo and others, 2022;Moss, 2022).An interesting line of future work would be to develop similar methods to incorporate T-cell response data into estimates of incidence and IFR.
Although we focused on SARS-CoV-2 infections and deaths in this paper, the methods presented can be applied to a variety of outcomes for any infectious disease of interest in which serosurvey data are available.There are plenty of potential extensions to this model that can be implemented to suit a variety of problems in epidemiology and biostatistics.
This means that the d-dimensional integral can be reduced to a one-dimensional integral. Since we are interested in the KLD between a multivariate t and a multivariate normal, we substitute ν = 200 for the normal component and compute this integral numerically as a function of ν. We then approximate the distance δ(ν) = √(2 · D_KL(ν)) with a polynomial. For example, δ(ν) for the bivariate model was δ(ν) ∝ ν^{−1.3}. We then place an exponential prior on the distance, δ(ν) ∼ Exponential(λ) with λ = −log(α)/δ(U), where α and U are chosen such that our prior belief is that there is a 50% chance that ν is greater than 30.
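The resulting exponential rate can be computed directly once the polynomial approximation to δ(ν) is in hand. In the toy sketch below the proportionality constant is a placeholder, since only the exponent −1.3 is reported for the bivariate model.

```python
import numpy as np

# delta(nu) is proportional to nu**(-1.3) for the bivariate model; the constant c is a
# placeholder here, since in practice it comes from the polynomial fit to the computed KLDs.
c = 1.0
delta = lambda nu: c * nu ** (-1.3)

alpha, U = 0.5, 30.0             # prior belief: 50% chance that nu exceeds 30
lam = -np.log(alpha) / delta(U)  # rate of the exponential prior on delta(nu)
```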
D. Prior distributions

Table D1. Priors used in Phase 1 univariate model

As mentioned in the main text, we require informative priors for computational reasons. In this section, we justify our choices of informative priors for the Phase 2 trivariate model. We note that our results are not very sensitive to these priors.
• μ_0 corresponds to the means of the "not infected" group. The first element of μ_0 corresponds to the mean NP titer value in "not infected" individuals. Alongside the NP titer values collected from the survey, the lab also provided us with "control" samples of known negatives. We found that the vast majority of the control samples fell between −2.5 and −1 on the log scale. Therefore, we are very confident that the mean of NP titer values from "not infected" people should be in this range, and we applied the conservative but informative prior N(−1.75, 0.25). Similar reasoning was used for the prior on the second element of μ_0, corresponding to the mean of RBD titer values in "not infected" people.
• When setting priors for the "not vaccinated, not infected" and "infected" groups based on SmT1 titer values, we used the corresponding posterior distributions from Phase 1. Although the tests are calibrated slightly differently, and there will be a small amount of waning between phases, we do expect these values to be somewhat similar.
• To determine the prior for the mean of the infected group for NP titer values (the first element of μ_1), we consider the fact that any titer value above mean + 3 SDs is likely a previous infection (this is how the cutoff was chosen in Tang and others (2022)). We then ensure that the bulk of the prior distribution for the positive NP group is above this value, with some overlap. We used similar reasoning for the RBD positive group.
• To determine the prior for the mean RBD/SmT1 titer values in the vaccinated groups, we used similar reasoning as above, trying to ensure that the prior has most of its mass above that of the infected group, with some overlap.
• We used a weakly informative prior for Σ_k using the LKJ distribution with shape = 0.5. This provides a roughly uniform distribution across positive-semidefinite 3 × 3 matrices. We then add additional information for each off-diagonal element by multiplying by normal densities. For instance, if we suspect that the correlation between two parameters should be positive (i.e., off-diagonal element c of Σ_k is positive), we multiply the prior for c by N(c | 0.5, 0.2), which gently encourages the correlation to be positive but still has mass below 0.
E. Longitudinal sensitivity analysis
As mentioned in Section 4, there is potential for these data to be used in a longitudinal way, as roughly 6300 survey participants had titer values in both Phase 1 and Phase 2. SmT1 titer values are measured in both phases, while RBD and NP are only available in Phase 2. Thus in this section, we wanted
F. Potential waning immunity
It is well known that antibodies decay over time, but how much this affects our results is unclear. Unfortunately, we cannot simply compare antibody results from Phase 1 to Phase 2, as these numbers are not directly comparable. Instead, we compared the Phase 1 and Phase 2 probabilities of participants who had a high probability of infection in Phase 1. A comparison of these predicted probabilities is shown in Figure F1. It appears that those with large predicted probabilities in Phase 1 still had large predicted probabilities in Phase 2. This is largely because in Phase 2, we see relatively lower parameter estimates for the means of the infected group. This likely will also make estimates of infection noisier, as the variance will also increase. So although our model does not appear to be underestimating cumulative incidence due to waning, waning likely does cause more uncertainty when predicting infection. More work needs to be done to confirm this assertion.
Fig. 1. Mixture of t-distributions for the Phase 1 univariate model fit to the SmT1 titer values. The posterior median for each parameter is used. The vertical dashed line represents the cutoff used in Tang and others (2022). Keep in mind that this plot does not display uncertainty in the model parameters of the t-distributions.
Fig. 4. Probability of infection given each individual's titer values using the trivariate mixture of t-distributions in Phase 2. A red dot indicates that this model predicts a high probability of infection, with blue being a low probability of infection, and purple being indeterminate. In theory, participants who have never been infected or vaccinated should have low values for all three titers. Vaccinated, but never infected, individuals should have high SmT1 and RBD, but low NP, and infected individuals have high values for all three.
Fig. 5. Incidence/IFR by age (years) for each time period. Posterior medians are used as point estimates, and the 2.5th and 97.5th posterior quantiles define the error bars.
Fig. B1. Incidence/IFR by age (years) in each province. Posterior medians are used as point estimates, and the 2.5th and 97.5th posterior quantiles define the error bars.
Fig. B2. Incidence/IFR by province. Posterior medians are used as point estimates, and the 2.5th and 97.5th posterior quantiles define the error bars.
Fig. B3. Incidence by ethnicity in each province. Posterior medians are used as point estimates, and the 2.5th and 97.5th posterior quantiles define the error bars.
Fig. E1. Comparing infection probabilities between the bivariate (Phase 2) and longitudinal (Phases 1 and 2) models. In (a) and (b), blue points indicate a low probability of infection, while red indicates a high probability of infection. In (c), blue indicates agreement between the two models, while a more red color indicates a high estimated infection probability from the longitudinal model.
Fig. F1. Phase 1 versus Phase 2 predicted probabilities for participants who had large predicted probabilities in Phase 1. Points above the red line indicate that the Phase 1 predicted probability was higher.
Fig. G1. Distribution of dates of samples received for Phase 1 and Phase 2.
Table D3. Priors used in Phase 2 mixture model
Table D4. Priors used in deaths module (Section 2.4.2)
Accessibility within open educational resources and practices for disabled learners: a systematic literature review
The number of disabled students is rapidly increasing worldwide, but many schools and universities have failed to keep up with their learning needs. Consequently, large numbers of disabled students are dropping out of school or university. Open Educational Resources (OER) and Open Educational Practices (OEP) contain several relevant features, including the possibility of reusing and remixing, which have led researchers to consider using OER and OEP to facilitate meeting the needs of disabled and functional-diverse students in order to increase their accessibility and e-inclusion capabilities in educational settings. The very limited research to date, however, has provided a limited holistic understanding of accessibility within OER and OEP in order to aid researchers in pursuing future directions in this field. Therefore, this paper systematically reviewed 31 papers to provide insights about functional diversity within OER and OEP. The results obtained highlighted that accessibility is still in its infancy within OER and that researchers should focus more on considering the four accessibility principles — perceivable, operable, understandable and robust — when providing OER. Additionally, while several researchers have focused on several issues related to accessibility within OER, limited focus has been given to assistive technologies using OER. Finally, this paper provides several recommendations to increase accessibility within OER and help design more accessible OER for students with functional diversity.
Open Educational Resources (OER), defined as 'teaching, learning and research materials in any medium that may be composed of copyrightable materials released under an open license, materials not protected by copyright, materials for which copyright protection has expired, or a combination of the foregoing' (UNESCO, forthcoming), have the potential to contribute to reaching this objective by increasing access to learning as well as improving the quality of the learning experience (Ehlers, 2011). The OER movement is based on the idea that educational resources (e.g., content or course designs) should be released under licenses that allow anyone to freely access, retain (e.g., download, duplicate, store), reuse, revise (e.g., translate, adapt, modify), combine and-or re-share them (Tlili, Huang, Chang, Nascimbeni & Burgos, 2019). The use of OER for teaching in an innovative and collaborative environment is referred to as Open Educational Practices (OEP). Ehlers (2011), p. 4 defined OEP as 'practices which support the (re)use and production of Open Educational Resources through institutional policies, promote innovative pedagogical models, and respect and empower learners as co-producers on their lifelong learning paths'. Research is coalescing around the fact that these practices can help enhance learning quality, access and effectiveness in universities (Weller, 2014).
Despite the growing number of OER (Hoosen & Butcher, 2019) and the policy attention devoted to OER accessibility, as demonstrated by the presence of guidelines to increase the accessibility of OER within the Ljubljana OER Action Plan (UNESCO, 2017), the extent to which OER are actually accessible is currently being questioned. Accessibility refers to the use of a product, service, framework or resource in an efficient, effective and satisfying way by people with different abilities (ISO 9241-171, 2008). Functional diversity is a key issue in the development of any online resource, including OER, since it is potentially focused on almost every single user. The approach has moved from handicapped users (essentially, those with motor, cognitive or sensorial impairments) through accessibility (improving specific issues to facilitate a better user experience) to functional diversity and e-inclusion (of any feature of any user who requires additional support, like the ones associated with elderly or those on sick leave) (Iniesto, Covadonga, & Moreira Teixeira, 2014;Sanchez-Gordon & Luján-Mora, 2013;Tekleab, Karaca, Quigley, & Tsang, 2016).
The present paper aims to provide a holistic and systematic review of the literature in the field of the accessibility and functional diversity of OER and OEP, as a valuable guide for better designing open educational ecosystems that support inclusive learning, improving the potential effect of OER on twenty-first century teaching and learning for learners with different needs. This is particularly urgent since recent data estimate that 15% of the world population, more than a billion people, live with some form of disability (World Health Organization and World Bank, 2011). The structure of the paper is as follows. Section 2 presents the background of the research, section 3 details the research method, section 4 presents and discusses the obtained results, and section 5 concludes the paper with a summary of the findings, limitations and potential future directions.
Background
According to the World Health Organization, disability cover[s] impairments, activity limitations, and participation restrictions. An impairment is a problem in body function or structure; an activity limitation is a difficulty encountered by an individual in executing a task or action; while a participation restriction is a problem experienced by an individual in involvement in life situations. (World Health Organization, 2015).
The Office for Civil Rights (OCR) of the U.S. Department of Education defines 'accessible' as meaning that a person with a disability is afforded the opportunity to acquire the same information, engage in the same interactions, and enjoy the same services as a person without a disability in an equally effective and equally integrated manner, with substantially equivalent ease of use.
In educational contexts, accessibility for disabled students means that, in order for all to have equitable learning experiences, the learning experience, including its learning content and teaching process, should be adjusted according to students' needs, including their disabilities. While people with disabilities have the same educational needs as others, they are less likely to attend schools and graduate, and consequently may face difficulties in finding jobs in future (Ingram, 1971;Iwarsson & Ståhl, 2003;World Health Organization and World Bank, 2011). Various international policies, including the United Nations 2030 Agenda for Sustainable Development (United Nations, 2015) and the UNESCO Education for All initiative (UNESCO, 1990), have highlighted the importance of providing fair learning experiences for all students regardless of their differences. Still, a great proportion of schools and universities fail to properly address equitable access, especially with regard to disabled students (Catlin & Blamires, 2019), partly due to the lack of effective teaching methods and content targeted to these student categories (Virnes, 2008).
In the area of web accessibility, several standards released by the Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3) can be applied to OER. Among these standards, WCAG 2.0 has been widely accepted and adopted (W3C., 2012) and is based on four attributes that lay the necessary foundations for anyone to access and use websites, as shown in Table 1. Based on these four attributes, 12 guidelines and 61 success criteria are provided, categorised into three levels of conformance: AAA (highest), AA or A (lowest) (Crespo, Espada, & Burgos, 2016;W3, 2008). Table 1 shows that OER can increase the accessibility of web-based education in many ways. This potential is mainly connected to the inner OER features of re-using, remixing and redistributing learning content that can help adapt existing materials to disabled students without having to develop resources from scratch. OER can serve the needs of those with diverse abilities for a number of complementary reasons: Permissions granted by an open license remove legal barriers to adapting and customising OER, making it possible to create learning environments that are more flexible and robust for all students. OER offer the opportunity for instructors to curate materials authored by a diverse set of individuals, including those who identify as disabled, normalising and reducing stigma while sharing viewpoints that have historically been marginalised. Unlike commercially published materials, OER that are adapted to meet accessibility requirements can be retained and freely shared with communities, reducing duplicative work at and across institutions. OER adoption can reduce costs, which benefits all students but can be especially beneficial for students with disabilities who may face additional financial pressures.
It is more common for OER to be shared in formats that can be adapted for accessibility, unlike proprietary publisher content, from whom editable files are notably difficult to obtain (Thomas, 2018). Hejer, Khribi, and Jemni (2017) mentioned that despite the fact that the OER paradigm can facilitate inclusive learning by reusing the open resources in a way which caters to the needs of disabled students, limited work has been done to achieve this purpose. Similarly, Iniesto, McAndrew, Minocha, and Coughlan (2017) stated that few Massive Open Online Courses (MOOCs) are fully accessible for disabled students. Undeniably, not enough research is being conducted to support inclusive and equitable learning using OER (Navarrete, Peñafiel, Tenemaza, & Luján-Mora, 2019). Specifically, to our knowledge, only one conference paper has conducted a systematic literature review to investigate the actual accessibility of OER for disabled learners (Moreno, Caro, & Cabedo, 2018), providing only information about the trends of OER and accessibility without summarising and discussing findings related to accessible learning within OER and OEP. In addition, while several literature reviews have been conducted to better understand the use of OER for the general student population, no literature review has focused on investigating the work done on the accessibility of OER and OEP. To fill this gap, this paper presents a systematic literature review to understand how the application of OER and OEP can increase learning accessibility.
Table 1. WCAG 2.0 principles (attributes) and their guidelines applied to OER

Perceivable
• Text Alternatives: Provide a variety of forms that people need for non-textual content, such as large print, Braille, and so on.
• Time-based Media: Provide access to time-based media.
• Adaptable: Ensure that all OER are available in some way to all users.
• Distinguishable: Make the default presentation easy to perceive by people with disabilities.

Operable: OER, including the content and interface, must be operable for users.
• Keyboard Accessible: Make all functionalities achievable by using the keyboard.
• Enough Time: Provide enough time for users to use OER.
• Seizures: Do not design OER in a way that might trigger seizures.
• Navigable: Support navigation and retrieval functions.

Understandable: OER, including the content and interface, must be understandable by users.
• Readable: Make OER text readable and understandable.
• Predictable: Make OER contents display and operate predictably.
• Input Assistance: Provide more assistance to avoid and correct mistakes.

Robust: OER must be robust enough that it can be accessed by a variety of types of user agents, including assistive technologies.
• Compatible: Increase compatibility with current and future user agents, especially assistive technologies: i.e., screen reader or Braille display devices.
Methodology
A rigorous literature review is an important step that builds the foundation for knowledge accumulation, which in turn facilitates the expansions and improvements of theories, closes existing gaps in research and uncovers areas previous research has missed (Marangunić & Granić, 2015). This study presents a systematic review based on published papers related to OER and OEP for learning accessibility, with particular reference to disabled students. It follows the steps reported by Okoli and Schabram (2010) as described in the next subsequent sections.
Investigated research questions
To gain insight into the use of OER and OEP for accessible learning, a systematic review is needed. Specifically, this study attempts to answer the following research questions: RQ1. What are the trends in publications on learning accessibility using OER and OEP in terms of time series, country and keyword distribution? RQ2. What kinds of disabilities and issues were investigated in the identified papers? RQ3. Which assessment methodologies were used in the identified papers?
Search strategy and inclusion/exclusion criteria
To answer the above research questions, several keywords were adopted as follows: accessib* AND Open AND Educational Resource*, accessib* AND OER, accessib* AND Open Educational Resource, accessib* AND OEP, accessib* AND Open Pedagogy, accessib* AND Open teaching, accessib* AND Open assessment, accessib* AND Open educational Practices, Inclusive learning AND Open educational resource, Inclusive learning AND OER. The search was conducted in several databases, including ScienceDirect, Wiley Online Library, IEEE Xplore Digital Library, Core Collections of Web of Science and Taylor & Francis Online. ResearchGate, a network for researchers to share, discover and discuss research, was also used to retrieve the related papers. The obtained papers were then filtered based on specific inclusion/exclusion criteria. Specifically, we excluded papers that: (1) were not in English; (2) did not discuss openness using OER and OEP for learning accessibility; (3) did not focus on disabled students; or (4) did not have available full-text online. A total of thirty-one papers were finally included during the review process. Figure 1 presents the selection procedure of papers during this review process.
Data extraction and analysis
Each study was then reviewed and examined based on seven items, as presented in Table 2. These items provide information to answer the above research questions and conduct the synthesis. Finally, a qualitative synthesis was conducted to answer the research questions.
Results and discussion
Trends in publications on learning accessibility using OER and OEP
Distribution by year
As shown in Fig. 2, the most recent years accounted for more than 60% of all the production of the last decade. Additionally, the year 2016 saw a peak in interest in this area, probably connected with the fact that the UN 2030 Agenda for Sustainable Development was launched in 2015, providing an impetus for research in the areas of accessibility and inclusion.
Distribution by country
The distribution of the first author's countries is presented in Fig. 3, showing that authors from only nine countries have led research about OER and OEP for accessible learning. This shows that the use of OER and OEP for inclusive learning is still in its infancy and that more awareness should be raised to encourage further investigation in this field. In particular, authors from Ecuador had 11 papers related to this topic, accounting for more than one third of all papers, followed by Spain, with six papers. Ecuador is indeed considered as a leading country in the field of disability support, since the government proposed in 2007 several policies to address the needs, including educational needs, of disabled persons. Spain has long attached great importance to inclusive education; as early as 1982, Spain passed legislation to integrate disabled youth in schools. In 1985 the decree on special education moved many disabled children from special schools to mainstream schools. In 1994, the United Nations World Conference on Special Needs Education was held in Spain, where the fundamental principle of inclusion at school was declared and widely endorsed. Interestingly, four out of the nine countries present at that conference (Ireland, Italy, Spain and the UK) have adopted the Web Content Accessibility Guidelines 2.0 (WCAG 2.0) noted earlier (W3, 2017).
Distribution by keyword
Finally, the keyword distribution of the 31 research papers in the systematic review was analysed in order to understand the use of OER and OEP for accessible learning more deeply. Keywords with similar meanings, such as 'OER' and 'Open Educational Resources' or 'Learning object' and 'LO', were normalised. The final distribution of the keywords is presented in Fig. 4. It can be seen that accessibility, OER and disability are the most commonly used keywords in the 31 papers reviewed. In particular, disability and accessibility describe the category of students on which these research papers focus, while OER indicates the category of education that can contribute to improving the accessibility of learning opportunities. Importantly, we discovered that the term Open Educational Practices (OEP), as well as sub-terms such as open pedagogy, open teaching and open assessment, have not yet been discussed in the literature when it comes to accessible learning. Therefore, in the subsequent analysis we will focus only on accessibility and OER.
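As a small illustration of the normalisation step described above (the synonym map and sample keywords are invented for demonstration, not taken from the reviewed papers):

```python
# Sketch: normalising synonymous keywords before counting their distribution.
from collections import Counter

synonyms = {
    "open educational resources": "OER",
    "oer": "OER",
    "learning object": "LO",
    "lo": "LO",
}

def normalise(keyword):
    key = keyword.strip().lower()
    return synonyms.get(key, keyword.strip())

paper_keywords = [["OER", "accessibility", "disability"],
                  ["Open Educational Resources", "Learning object"]]
counts = Counter(normalise(k) for kws in paper_keywords for k in kws)
print(counts.most_common())
```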
Disabilities and issues investigated
As shown in Table 3, when investigating the use of OER, researchers focused on several disabilities, including visual disabilities, hearing disabilities, motor disabilities, speech disabilities, cognitive disabilities and aging-connected disabilities. Researchers paid almost equal attention to different types of disability, including seven studies on visual disabilities and hearing disabilities, respectively, and six papers on motor disabilities and cognitive disabilities. It is obvious that aging also imposes certain limitations on human abilities, so researchers have also considered it. It should be noted that some papers discussed more than one disability. For instance, Zervas et al. (2014) developed an online teaching and learning portal for students with visual and/or hearing disabilities. The use of OER to address the above disabilities was discussed from five different angles: system design, personalisation, metadata, authoring tools and OER accessibility framework/architecture. As shown in Table 4, most authors focused on system design to increase accessibility and usability for students with disabilities. For instance, Ngubane-Mokiwa (2016) conducted a literature review and identified several guidelines to facilitate MOOC access for visually impaired students. These guidelines are from three different perspectives: (1) multiple means of representation, which focuses on strategies to make MOOCs accessible; (2) multiple means of action and expression, which focuses on strategies that facilitate user actions on MOOCs; and (3) multiple means of engagement, which focuses on strategies to provide accessible interaction within MOOCs.
Several researchers also analysed personalised learning experiences based on the 'type of disability' or 'user profile' as a personalisation parameter. For instance, Zervas et al. (2014) designed an OER-based educational portal to facilitate learning and teaching for students with different disabilities, including those with visual impairments. Similarly, Navarrete and Luján-Mora (2018) developed an OER website that takes the disability of students, including visual and hearing disabilities, into consideration as a personalisation parameter. This 'disability-personalisation' path is extremely relevant, as recognised by the National Academy of Engineering, which mentioned that personalised learning is one of the fourteen most important challenges of the twenty-first century. Other researchers focused on metadata, defined as machine-processable data that describe resources, either digital or non-digital (Haslhofer & Klas, 2010), in inclusive learning using OER and OEP. An accurate metadata set can enhance the retrieval of educational resources and provide a friendly navigation experience. For instance, in order to better describe and identify resources, Navarrete and Luján-Mora (2018) applied a subset of descriptors from the Learning Object Metadata (LOM) standards. Similarly, Navarrete and Luján-Mora (2014) applied other metadata standards, including DCMI (Dublin Core Metadata Initiative) and AfA (Access for All). Some researchers have put forward innovative frameworks to improve the accessibility of OER, arguing that the development of a framework for improving web accessibility should be based on existing standards, such as WCAG 2.0, and proposing a framework for enhancing the accessibility and usability of open courseware sites. Innovative architectures are also presented by Sanchez-Gordon and Luján-Mora (2016) as ways to improve the accessibility of MOOCs and OER.
Finally, some researchers have focused on developing authoring tools for accessible OER. For instance, Mulwa et al. (2016) developed an OER authoring tool to facilitate the creation of OER for students with visual disabilities by selecting the navigation methods and text sizes. As shown in Table 4, only two papers focused on authoring tools to develop accessible OER. This might explain the limited number of fully accessible OER. Therefore, more focus should be put on developing tools that can help educators create and publish OER for disabled students. Additionally, no reviewed paper discussed the accessibility of OER from the assistive technology perspective. Given that different assistive technologies for disabled persons exist within different Operating Systems (OS), OER designers should try to make their resources compatible with as many assistive technologies and OS as possible in order to ensure high accessibility.
Assessment methodologies used
Based on the review of the 31 identified studies, 16 papers conducted assessments to evaluate the accessibility of OER, while the 15 remaining papers did not conduct any assessment (Coughlan et al., 2016; Hejer et al., 2017; Iniesto and Rodrigo, 2018; Iniesto & Rodrigo, 2016; Iniesto et al., 2017; Iniesto et al., 2019; Kourbetis & Boukouras, 2014; Kourbetis et al., 2016; Morales and Benedi, 2017; Moreno et al., 2018; Navarrete et al., 2016; Politis et al., 2014; Sanchez-Gordon & Luján-Mora, 2015; Yalcinalp & Emiroglu, 2012; Zervas et al., 2014). Specifically, to assess the accessibility of OER, three different methods were used, as shown in Table 5: automatic tools, simulator tools and manual assessment. Simulator tools (Iniesto & Rodrigo, 2014; Navarrete & Luján-Mora, 2015a) enable the designer to better understand the problems and requirements of people with impairments. For instance, the simulator named aDesigner, used by Iniesto and Rodrigo (2014), aimed to simulate the use by people with visual disabilities in order to help the designer assess the extent to which a given content is accessible to users with that particular disability. Finally, manual assessment is mostly based on users' questionnaires (Avila Garzon, 2018; Avila Garzon et al., 2016; Caruso & Ferlino, 2009; Mulwa et al., 2016; Navarrete et al., 2019; Navarrete & Luján-Mora, 2015a; Navarrete & Luján-Mora, 2018; Sanchez-Gordon & Luján-Mora, 2016). In these cases, the purpose of the questionnaire is to obtain a qualitative analysis to appreciate the users' experience of the process of using a given OER (Navarrete et al., 2019), based on questions like 'Is it easy to learn how to use the website?' or 'Can the user resolve the tasks on the website without unnecessary effort?' (Navarrete & Luján-Mora, 2018). Several researchers, however, claimed that using questionnaires may not be motivating for learners, since they are typically too long. Additionally, learners may not fully reveal their experiences and may try to respond optimistically when they feel that they are being assessed by others (Okada & Oltmanns, 2009). To counterbalance these attitudes, given the rapid growth of technology and the era of big data and learning analytics, researchers should focus more on using the data generated by learners to obtain insights about the accessibility of OER-based learning processes. If we consider that the accessibility of OER and OEP should aim at enabling all users, including disabled ones, to have equitable learning opportunities, this focus on technical accessibility suggests that the research on OER and OEP for disabled learners is still in its infancy, since most researchers have focused on a rather superficial analysis that does not rely on rich datasets. Therefore, further research should be conducted to investigate how OER and OEP facilitate the deployment of accessible and inclusive learning from a more holistic perspective. WCAG 2.0 provides guidelines on how to make web content more accessible to people with disabilities and four principles to lay the foundation of Web accessibility (W3, 2008). Table 6 presents the results of the review along with the four accessibility attributes presented in the Background section: perceivable, operable, understandable and robust.
Table 6 shows that the general OER accessibility level could be improved: among the 16 papers which reported accessibility assessment results, 15 generally agreed that there was much room for improvement in the accessibility of OER, especially for disabled users. For instance, the accessibility evaluation results by Iniesto and Rodrigo (2014) show a low degree of compliance of the analysed OER with the WCAG 2.0 accessibility guidelines. Navarrete et al. (2019) also conclude that neither the OER website interface nor the educational resources are fully accessible.
If we analyse the accessibility attributes individually, one reviewed evaluation reported that more errors are found under the attributes 'robust' and 'perceivable', which account for 50% and 31.81%, respectively, of the errors detected when using the automatic tool TAW, whereas for the attributes 'operable' and 'understandable' the percentages of errors are 20% and 17.64%, respectively. After an accessibility evaluation with TAW of four OER platforms, including MERLOT, OCW UPM, OER COMMONS and OLI, similar results were reported in Navarrete and Luján-Mora (2015c), which showed that the greatest number of warnings are annotated under the attributes 'robust' and 'perceivable'; all of these warnings may be related to issues that need to be addressed.
Conclusion, recommendations and future directions
This study presented a systematic review of the use of OER and OEP to provide accessible learning. The final notes based on the results discussed above (in the three presented research questions) are as follows: A limited number of countries (nine) were involved in the investigation of the use of OER and OEP for accessible learning (research question 1). Therefore, researchers worldwide should be encouraged to get involved in this research field. This can be changed by raising awareness about the new opportunities that OER and OEP could provide to disabled students for effective accessible learning, or by launching new projects or policies (e.g., governmental or institutional) that encourage the use of OER and OEP for inclusive learning.
Only two papers discussed the development of authoring tools with features to create accessible content, which might explain the reasons for having limited online OER and OEP for disabled students (research question 2). This should be changed by developing more inclusive authoring tools (that work with different functional diversities) that educators can use to create and publish open content.
Most assessments conducted focused only on the accessibility of the provided OER (research question 3). Therefore, more research should also be conducted to investigate the effectiveness of OER and OEP in providing accessible learning experiences and enhancing disabled students' learning achievements. There is still much room for improvement in OER accessibility (research question 3). Therefore, researchers and practitioners should consider different accessibility guidelines (e.g., WCAG 2.0) while developing their OER platforms, tools and devices. This helps provide an effective approach to accessibility, functional diversity and e-inclusion in educational settings. Only three assessment methods are used: automatic tools, simulator tools and manual tools (research question 3). Therefore, in the era of big data, researchers and practitioners should also begin applying learning analytics for more accurate assessment of the accessible learning experience provided to disabled and functionally impaired students. Among the four accessibility attributes, 'robust' has the highest percentage of errors (research question 3). Therefore, OER developers should place more emphasis on OER's compatibility with most assistive devices, as well as operating systems (Windows, Mac OSX and Linux).
In addition, the authors consider direct support to educators a key issue, so that they learn the foundations of functional diversity, develop the skill set to operate learning resources under these terms and are fully aware of the significance of and need for specific actions around the topic. Indeed, providing specific competencies and training for educators is a challenge but nonetheless a required measure to improve the impact of functional diversity and accessibility on the educational system.
"Education",
"Computer Science"
] |
Fundamental limits to optical response in absorptive systems
At visible and infrared frequencies, metals show tantalizing promise for strong subwavelength resonances, but material loss typically dampens the response. We derive fundamental limits to the optical response of absorptive systems, bounding the largest enhancements possible given intrinsic material losses. Through basic conservation-of-energy principles, we derive geometry-independent limits to per-volume absorption and scattering rates, and to local-density-of-states enhancements that represent the power radiated or expended by a dipole near a material body. We provide examples of structures that approach our absorption and scattering limits at any frequency; by contrast, we find that common "antenna" structures fall far short of our radiative LDOS bounds, suggesting the possibility for significant further improvement. Underlying the limits is a simple metric, $|\chi|^2 / \operatorname{Im} \chi$ for a material with susceptibility $\chi$, that enables broad technological evaluation of lossy materials across optical frequencies.
Introduction
At optical frequencies, metals present a tradeoff: their conduction electrons enable highly subwavelength resonances, but at the expense of potentially significant electron-scattering losses [1][2][3][4][5][6][7][8][9][10]. In this article we formalize the tradeoff between resonant enhancement and loss, deriving limits to the absorption within, the scattering by, and the local density of states (LDOS) [11][12][13][14][15][16] near a lossy, absorptive body of arbitrary shape. Given a material of susceptibility χ(ω), the limits depend only on a material enhancement factor |χ(ω)|^2 / Im χ(ω) and on the incident-beam energy density (leading to a potential 1/d^3 LDOS enhancement for a metal-emitter separation d). The power scattered or dissipated by a material body must be smaller than the total power it extracts from an incident beam; we show that this statement of energy conservation yields limits to the magnitudes of the internal fields and polarization currents that control the scattering properties of a body. Unlike previous bounds [6,9,[17][18][19][20][21][22][23][24][25][26][27], our limits do not depend on shape, size, or topology, nor do they diverge for zero bandwidth. The crucial ingredient is that our bounds depend on χ and are finite only for realistic lossy materials; for idealized lossless materials such as perfect conductors, arbitrarily large optical responses are possible. We provide examples of structures that approach the theoretical limits, and also specific frequency ranges at which common structures fall far short. Our bounds apply to any absorptive system, and thus provide benchmarks for the response of metals, synthetic plasmonic materials (doped semiconductors) [3,5,[28][29][30], and surface-phonon-polariton materials across visible and infrared wavelengths, resolving a fundamental question [1][2][3][4][5][6][7][8][9][10] about the extent to which resonant enhancement can overcome intrinsic dissipation.
There has been intense interest in exploiting "plasmonic" [2] effects, which arise for materials with permittivities that have negative real parts, in metals at optical frequencies. Geometries incorporating such materials are capable of supporting highly subwavelength surface resonances [1,2]. Yet such a material has inherent loss arising from the typically significant imaginary part of χ. Even for applications in which absorption is the goal, material loss dampens resonant excitations and reduces the overall response. This tradeoff between resonant enhancement and absorption has been investigated for specific geometries amenable to semianalytical methods, leading to a variety of geometry-dependent material dependences. For example, in the quasistatic limit, coated spheres absorb energy at a rate proportional to |χ|/ Im χ [31], whereas spheroids absorb energy at a rate [32] proportional to |χ| 2 / Im χ. Surface modes at planar metal-insulator interfaces exhibit propagation lengths proportional to (Re ε) 2 / Im ε at very low frequencies [33] (in the Sommerfeld-Zenneck regime [2]), but near their surface-plasmon frequencies their propagation lengths are approximately proportional to √ Im ε (cf. App. E). In electron energy loss spectroscopy (EELS), the electron scattering cross-section is proportional to a "loss function" Im(−1/ε) = (Im ε)/|ε| 2 that enables experimental measurement of bulk plasmon frequencies [34,35].
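To make these competing figures of merit concrete, the following sketch evaluates them for a simple Drude susceptibility. The plasma frequency and damping rate are illustrative placeholders, not fits to any material in the paper:

```python
# Hedged illustration: comparing |chi|/Im chi (coated spheres), |chi|^2/Im chi
# (spheroids and, as shown below, the general limits), and the EELS loss
# function Im(-1/eps) for a Drude model with made-up parameters.
import numpy as np

wp, gamma = 1.0, 0.05                     # plasma frequency and damping (arbitrary units)
omega = np.linspace(0.2, 1.2, 6) * wp

eps = 1.0 - wp**2 / (omega**2 + 1j * gamma * omega)   # Drude permittivity
chi = eps - 1.0

fom_sphere   = np.abs(chi) / chi.imag      # coated-sphere absorption scaling
fom_spheroid = np.abs(chi)**2 / chi.imag   # spheroid / general-limit metric
loss_fn      = np.imag(-1.0 / eps)         # EELS loss function

for w, a, b, c in zip(omega, fom_sphere, fom_spheroid, loss_fn):
    print(f"omega/wp={w/wp:4.2f}  |chi|/Im chi={a:8.2f}  "
          f"|chi|^2/Im chi={b:8.2f}  Im(-1/eps)={c:6.3f}")
```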
We show that |χ|^2 / Im χ is a universal criterion for evaluating the optimal response of a metal, with a suitable generalization in Sec. 3 for more general media that may be anisotropic, magnetic, chiral, or inhomogeneous. For most materials, |χ|^2 / Im χ increases as a function of wavelength (as demonstrated in Sec. 3.2), suggesting that if an optimal structure is known, the plasmonic response of a metal can potentially be much greater away from its bulk- and surface-plasmon frequencies. For effective-medium metamaterials, our bounds apply to both the underlying material parameters as well as to the effective medium parameters, with the smaller bound of the two controlling the maximum response. Thus effective-medium approaches cannot circumvent the bounds arising from their constitutive materials; however, they may find practical application if they can achieve resonances at frequencies that are otherwise difficult to achieve with the individual materials.
The limits derived here arise from basic energy considerations. An incident field E_inc interacting with a scatterer generates polarization currents P that depend on both the incident field and on the shape and susceptibility of the body. A lossy scatterer dissipates energy at a rate proportional to the squared magnitude of the currents, |P|^2. At the same time, the total power extracted from the incident beam, i.e. the "extinction" (absorption plus scattering), is proportional to the imaginary part of the overlap integral of the polarization currents with the incident field, ∼ Im ∫_V E_inc^* · P, which is known as the electromagnetic optical theorem [32,[36][37][38] and can be understood physically as the work done by the incident field to drive the induced currents. The overlap integral is only linear in P whereas the absorption depends quadratically on P. If the magnitude of P could increase without bound, then, the absorption would become greater than extinction, resulting in a physically impossible negative scattered power. Instead, there is a limit to the magnitude of the polarization field, and therefore to the scattering properties of any body comprising the lossy material. We make this argument precise in Sec. 3, where we employ variational calculus to derive general limits for a wide class of materials, and we also present limits specific to metals, which are typically homogeneous, isotropic, and nonmagnetic at optical frequencies. We consider here only bulk susceptibilities, excluding nonlocal or quantum effects [39][40][41]. The key results are the limits to absorption and scattering in Eqs. (29a,29b,31a-32b) and the limits to LDOS enhancement in Eqs. (34a,34b). Before deriving the limits, we present the volume-integral expressions for absorption, scattering, and radiative and nonradiative LDOS in Sec. 2.
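As a compact restatement of the energy-conservation argument above, for a scalar susceptibility and following the volume-integral conventions of Sec. 2 (a schematic sketch; the exact prefactors and the general tensor case are derived in Sec. 3):

$$P_{\mathrm{abs}} = \frac{\omega}{2\varepsilon_0}\,\frac{\operatorname{Im}\chi}{|\chi|^{2}}\int_V |\mathbf{P}|^{2}, \qquad P_{\mathrm{ext}} = \frac{\omega}{2}\operatorname{Im}\int_V \mathbf{E}_{\mathrm{inc}}^{*}\cdot\mathbf{P},$$

$$P_{\mathrm{scat}} = P_{\mathrm{ext}} - P_{\mathrm{abs}} \ge 0 \;\Longrightarrow\; \frac{\operatorname{Im}\chi}{|\chi|^{2}}\int_V |\mathbf{P}|^{2} \le \varepsilon_0 \operatorname{Im}\int_V \mathbf{E}_{\mathrm{inc}}^{*}\cdot\mathbf{P},$$

so the quadratic left-hand side caps the current magnitude at roughly $|\mathbf{P}| \lesssim \varepsilon_0\,(|\chi|^{2}/\operatorname{Im}\chi)\,|\mathbf{E}_{\mathrm{inc}}|$, which is how the material factor enters every limit below.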
In Sec. 4 we compare the response of a number of structures towards achieving the various limits. For far-field absorption and scattering, we find that ellipsoidal nanoparticles are ideal and can reach the limits across a wide range of frequencies. For near-field enhancement of power expended by a dipole into radiation or absorption, we find that it is much more difficult to reach the limits. The nonradiative LDOS near a planar metal surface reaches the limit at the "surface-plasmon frequency" of the metal. At lower frequencies, common structures (thin films, metamaterials) fall far short. At all frequencies, common designs for radiative LDOS enhancement exhibit suboptimal response. These results suggest the possibility for significant design improvement if the limits are achievable (i.e. "tight").
Previous limits to the electromagnetic response of metals have emphasized a variety of limiting factors. At RF and millimeter-wave frequencies, where the response can be bounded relative to that of a perfect electric conductor (PEC) [42], the Wheeler-Chu-McLean limit [23][24][25]42] bounds the radiative Q factor of an electrically small antenna. At optical frequencies, absorption loss increases and often dominates relative to radiative loss. There are known lower bounds on the absorptive Q for low-loss, quasistatic structures [26], and more generally for metals of any size with susceptibilities comprising Lorentz-Drude oscillator terms [9].
Limits to frequency-integrated extinction are also known. Purcell derived the first such limit, using the Kramers-Kronig relations [38] to bound the integrated response of spheroidal particles to their electrostatic (ω = 0) induced dipole moments [17]. Recently the limits have been extended to arbitrary shapes [18][19][20], but one obtains a different limit for each shape. Moreover, it is important in many applications to disentangle the single-frequency response and the bandwidth, and to do so separately for absorption and scattering.
Single-frequency absorption and scattering limits have primarily been derived via spherical-harmonic decompositions, originally for spherically symmetric scatterers [43,44] and later for generic ones [21,22]. This approach has been generalized recently, yielding limits in terms of the inverse of a scattered-field operator [27], although the inverse of such an operator is seemingly difficult to bound without resorting to spherical harmonics. In Sec. 3 we show that the scattered-field-operator approach and our material-dissipation approach share a common origin in volume-integral equations. The key distinction is that the scattered-field operator and its corresponding limits are independent of material but dependent on structure, whereas our limits incorporate material properties and are independent of structure. Both classes of limits apply to any linear body. In Sec. 5 we provide a more detailed comparison, finding that the spherical-harmonic limits may provide better design criteria at lower (e.g. rf) frequencies, whereas our limits should guide design at higher frequencies, especially in the field of plasmonics.
This work was partly inspired by our recent bounds [45] on extinction by quasistatic nanostructures. In that work we derived bounds via sum rules of quasistatic surface-integral operators [46,47]; equivalently, we could have [48] derived the bounds via analogous constraints in composite theory [49,50]. The key distinction between this work and our previous work [45] is that here we find limits in the full Maxwell regime, such that our bounds apply to any structure at any size scale, and they apply to functions of the scattered fields (e.g. scattered power and radiative LDOS), which have zero amplitude in quasistatic electromagnetism. An additional benefit of our simplified energy-conservation approach is that we can bound the responses of anisotropic, magnetic, and/or inhomogeneous media, whereas the surface-integral sum-rule approach only works for isotropic and nonmagnetic materials. In this work we also consider the local density of states, which we did not consider previously and which represents an important design application.
Absorption, scattering, and LDOS expressions
We consider lossy media interacting with electromagnetic fields incident from fixed external sources (e.g. plane waves or dipole sources). Figure 1 illustrates the conceptual setup: a generic scatterer with susceptibility tensor χ absorbs, scatters, and extinguishes (extinction defined as absorbed plus scattered power) incident radiation at rates proportional to volume integrals over the scatterer. In this section we present the known volume-integral expressions for absorption and scattering, and we also derive volume-integral expressions for the power expended by a dipole near such a scatterer, which is either radiated to the far field or absorbed in the near field. Relative to a dipole in free space, the enhancement in power expended is given by the relative increase in the local density of states (LDOS) [15]. The scatterer is taken to consist of a lossy, local material that is possibly inhomogeneous, electric, magnetic, anisotropic, or bianisotropic (chiral). We assume the scatterer is in vacuum, with permittivity ε_0, permeability µ_0, impedance Z_0 = √(µ_0/ε_0), and speed of light c = 1/√(ε_0 µ_0). Extending the limits of the following section to non-vacuum and possibly inhomogeneous backgrounds is relatively straightforward and is discussed in Sec. 5. The response of the scatterer can be described by induced electric and magnetic polarization currents, P(x) and M(x), which satisfy the constitutive field relations. For the general class of materials considered here, the currents P and M each depend on both E and H through a unitless 6×6 susceptibility tensor χ [15,51], with F a generalized vector field containing both the electric and magnetic fields. For isotropic media with relative permittivity ε_r and relative permeability µ_r, the susceptibility tensor comprises only the diagonal elements ε_r − 1 and µ_r − 1. Lossy media as considered here have susceptibilities that satisfy the positive-definiteness condition (for an e^{−iωt} time convention) [51,52] ω Im χ > 0, where Im χ = (χ − χ†)/2i (and † represents the conjugate transpose).
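A minimal numerical sketch of the passivity condition just stated, for a generic 6×6 susceptibility; the example tensor is a made-up lossy diagonal susceptibility, not a material from the paper:

```python
# Check omega * Im(chi) > 0 with Im(chi) = (chi - chi^dagger)/(2i), the
# positive-definiteness condition quoted above for lossy media.
import numpy as np

omega = 1.0                                            # positive frequency, e^{-i omega t}
chi = np.diag([-5 + 0.3j] * 3 + [0.2 + 0.01j] * 3)     # electric + magnetic blocks (illustrative)

im_chi = (chi - chi.conj().T) / 2j                     # Hermitian "loss" part of chi
eigvals = np.linalg.eigvalsh(im_chi)                   # real eigenvalues of a Hermitian matrix

assert np.all(omega * eigvals > 0), "material is not strictly passive/lossy"
print("smallest eigenvalue of Im(chi):", eigvals.min())
```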
When light impinges on the scatterer, the absorbed, scattered, and extinguished powers can be written as overlap integrals of the internal currents and fields [53]. The absorption (dissipation) within such a medium is the work done by the total fields on the induced currents, given by the expression of Eq. (4), where the asterisk denotes complex conjugation. Equation (4) reduces to the usual (ε_0 ω/2) Im χ ∫_V |E|^2 for homogeneous, isotropic, nonmagnetic media. The total power extracted from the incident fields, the extinction, is the sum of the absorbed and scattered powers and can be computed by the optical theorem [38,55]. Although commonly written as an integral over fictitious effective surface currents [38], the optical theorem can also be written as a volume integral over the polarization currents [36,53,56], representing the work done by the incident fields on the induced currents, Eq. (5), where, as for F, we define the generalized incident field F_inc in Eq. (6). The scattered power is the difference between extinction and absorption, Eq. (7). In addition to absorbing and scattering light, structured media can also alter the spontaneous emission rates of nearby emitters. Increased spontaneous emission shows exciting potential for surface-enhanced Raman scattering (SERS) [57,58], fluorescent imaging [59,60], thermophotovoltaics [61,62], and ultrafast light-emitting diodes (LEDs) [63]. The common metric for the enhanced emission rate is the (electric) local density of states (LDOS), which represents the density of modes weighted by the relative energy density of each mode's electric field at a given position [15]. Equivalently, and more generally, the LDOS enhancement represents the enhancement in the total power expended by an electric dipole radiator [15,64,65], into either radiation or dissipation. Similar to extinction, the total LDOS is the imaginary part of a linear functional of the induced electric fields [12], Eq. (8), where E_{s_j} denotes the field from a dipole source at x_0 polarized in the ŝ_j direction, with a dipole moment p_0 = ε_0 ŝ_j, and the sum over j accounts for all possible orientations (the conventional LDOS corresponds to a randomly oriented dipole [12]).
To connect the LDOS to the material properties, we rewrite it as a volume integral over the fields within the scatterer. The total field at the source position, E s j (x 0 ), consists of an incident field and a scattered field: The incident field is known-it is the field of a dipole in vacuum-and can be left as-is (note that the imaginary part of a dipole field does not diverge at the source location [66]). The scattered field arises from interactions with the scatterer and is the composite field from the induced electric and magnetic currents, radiating as if in free space: where G EP and G EM are free-space dyadic Green's functions [51,67]. We could at this point insert Eqs. (9,10) into Eq. (8) and have a volume-integral equation for the total LDOS. However, note that the Green's functions in Eq. (10) represent the fields of free-space dipoles and the excitation in this case is also a dipole. We follow this intuition to replace the Green's functions by the incident fields. Whereas the source dipole at x 0 generates incident fields at points x within the scatterer, the Green's functions in Eq. (10) are the fields at x 0 from a dipole at x. By reciprocity [68] one can switch the source and destination points of vacuum Green's functions where for clarity we indexed the Green's function tensors. Now one can see that the product s j · G EP P equals ε −1 0 E inc · P. The magnetic Green's function yields the incident magnetic field, with a negative sign arising from reciprocity, as in Eq. (11b). Equation (9) can be written where we have defined Inserting the field equation, Eq. (12), into the LDOS equation, Eq. (8), yields the total LDOS as the sum of free-space and scattered-field contributions: where ρ 0 = ω 2 /2π 2 c 3 is the free-space electric LDOS [69]. It is typically more useful to normalize ρ to ρ 0 : where k = ω/c. Equation (15) relates the total LDOS to a volume integral over the scatterer, which will enable us to find upper bounds to the response in the next section. For many applications it is important to distinguish between the power radiated by the dipole into the far-field (where it may be imaged, for example) and the power absorbed in the near field (which may productively transfer heat, for example). Absorbed power is given by Eq. (4), and thus the nonradiative LDOS enhancement ρ nr /ρ 0 is given by Eq. (4) divided by the power radiated by a dipole (of amplitude ε 0 ) in free space, P rad = ε 0 ω 4 /12πc 3 (Ref. [38]): Finally, just as the scattered power in Eq. (7) is the difference between extinction and absorption, the radiative part of the LDOS is the difference between the total and nonradiative parts:
Limits
Given the power and LDOS expressions of the previous section, upper bounds to each quantity can be derived by exploiting the energy conservation ideas discussed in the introduction (Sec. 1). The extinction is the imaginary part of a linear function of the polarization currents, whereas absorption is proportional to their squared magnitude (and scattered power is the difference between the two), and thus energy conservation yields finite optimal polarization currents and fields for each quantity. Just as one can use gradients to find stationary points in finite-dimensional calculus, one can use variational derivatives [70] to find stationary points of a functional (i.e. a function of a function). It is sufficient here to consider functionals of the type P = ∫ f^* g, which arise in the power expressions, Eqs. (4,7,16,17). The variational derivative of P with respect to g is given by δ/δg ∫ f^* g = f^*, analogous to the gradient in vector calculus: ∇_x (a† x) = a†. The primary distinction is the dimensionality of the space and thus the appropriate choice of inner product.
The optimal fields for the various response functions P are therefore those for which P is stationary under small variations of the field degrees of freedom. The field F is complex-valued, such that one could take the real and imaginary parts of F as independent (P is a nonconstant real-valued functional and therefore not analytic [71] in F), but a more natural choice is to formally treat the field F and its complex conjugate F^* as independent variables [71,72]. Then a necessary condition for an extremum of a functional P[F] is for the variational derivatives with respect to the field degrees of freedom to equal zero, δP/δF = 0 and δP/δF^* = 0 (which are the Euler-Lagrange equations [70] for functionals that do not depend on the gradients of their arguments). Because our response functions are real-valued, the derivatives with respect to F and F^* are redundant (they are complex conjugates of each other [71]), and the condition for the extremum can be found from the single equation δP/δF^* = 0, where we have chosen to vary F^* instead of F for its slightly simpler notation going forward. We apply this variational calculus approach to bound each response function of interest. First, we derive limits for the most general class of materials under consideration. Then we specialize to metals, an important class of lossy media that are typically homogeneous, isotropic, and nonmagnetic at optical frequencies.
General lossy media
We consider first Eq. (7), for the scattered power. Setting the variational derivative of P_scat to 0 yields the stationarity condition of Eq. (19). The optimal field that satisfies Eq. (19) is F_opt(x) = (i/2) (Im χ)^{-1} χ† F_inc(x) for all points x within the scatterer volume V (Eq. (20)). The optimal field is guaranteed to exist because ω Im χ is positive-definite, per Eq. (3), and therefore invertible. We have only shown that this is an extremum, not a maximum, but because ω Im χ is positive-definite, the scattered power in Eq. (7) is a concave functional, for which any extremum must be a global maximum [73].
One can see that the optimal fields within the scatterer are related to the incident fields (directly proportional for homogeneous media), which conforms intuitively with the scatteredpower expression in Eq. (7). The internal field should strongly overlap with the incident field, to increase the power extracted from the incident beam, while the susceptibility dependence balances between maximizing extinction and minimizing absorption.
A similar procedure yields the optimal internal fields for maximum absorption within a scatterer. Although the absorbed power as given in Eq. (4) is unbounded with respect to F, adding the constraint that absorption must be smaller than extinction (i.e. the scattered power must be nonnegative) imposes an upper bound. Because Eq. (4) is unbounded, the Karush-Kuhn-Tucker (KKT) conditions [74] require that the constraint P_scat ≥ 0 must be active, i.e. P_scat = 0. Following standard constrained-optimization theory [74], we define a Lagrange multiplier ℓ and the Lagrangian functional L = P_abs + ℓ P_scat. The extrema of L satisfy the corresponding stationarity condition. To simultaneously ensure that the scattered power also equals 0, one can verify that the Lagrange multiplier is given by ℓ = 2. Then the optimal internal fields, Eq. (22), are precisely double the optimal scattering fields of Eq. (20). Maximizing P_abs subject to P_scat ≥ 0 is a problem of maximizing a convex functional subject to a convex quadratic constraint, such that the solution in Eq. (22) must be a global maximum [75]. The limits to the scattered and absorbed powers, Eqs. (23a,23b), are given by substituting the optimal fields in Eq. (20) and Eq. (22) into Eq. (7) and Eq. (4), respectively; extinction has the same limit as absorption, which can be derived by maximizing P_ext subject to P_scat ≥ 0. The limits depend only on the intensity of the incident field and the material susceptibility χ over the volume of the scatterer. The product χ (Im χ)^{-1} χ†, discussed further below, sets the bound on how large the induced currents can be in a dissipative medium. Whereas optimal per-volume scattering occurs under a condition of equal absorption and scattering, optimal per-volume absorption occurs in the absence of any scattered power and can be larger by a factor of four. This ordering is reversed in the spherical-multipole limits [21,22], where the scattering cross-section (not normalized by volume) can be four times larger than the absorption cross-section. Note that Eqs. (23a,23b) look superficially similar to the absorbed- and scattered-power limits in Ref. [27]. As discussed in Sec. 1, they share a common origin as energy-conservation principles applied to integral equations. The key distinction is which quantity serves as a nonnegative quadratic (in F) constraint. We treat absorption as the quadratic quantity, given by Eq. (4), with the scattered power as the difference between extinction and absorption. The scattered-field-operator approach rewrites the scattered power via a volume integral equation (VIE) [51]. This yields a non-negative, quadratic scattered power that is of the same form as Eq. (4) except with the replacement Im χ → Im G, where G is a scattered-field integral operator with the homogeneous Green's function as its kernel (the electric component of G is used in Eq. (10)). Energy conservation leads to the limits of Ref. [27], which take a similar form to Eqs. (23a,23b), except Im χ → Im G and the factors of four are reversed. The limits in Ref. [27] are a generalization of the spherical-multipole limits of antenna theory [21,22], which treat the special case of the scattered field decomposed into spherical harmonics.
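A quick numerical check of the factor-of-four relation above, for a scalar susceptibility and per unit volume (prefactors kept only up to the common factor ω/2; the susceptibility value is illustrative):

```python
# Hedged check: at the scattering optimum absorption equals scattering, and the
# absorption optimum (doubled current) has zero scattered power and four times
# the scattered-power maximum, as stated in the text.
import numpy as np

chi = -10 + 1j          # illustrative metal-like susceptibility
E = 1.0                 # incident field amplitude (arbitrary units), eps0 = 1

def p_abs(P):           # absorption ~ (Im chi / |chi|^2) |P|^2
    return (chi.imag / abs(chi)**2) * abs(P)**2

def p_ext(P):           # extinction ~ Im(E_inc^* P)
    return np.imag(np.conj(E) * P)

fom = abs(chi)**2 / chi.imag

P_scat_opt = 0.5j * fom * E      # optimal current for scattering
P_abs_opt  = 1.0j * fom * E      # optimal current for absorption (twice as large)

print("scattering optimum: abs =", p_abs(P_scat_opt),
      " scat =", p_ext(P_scat_opt) - p_abs(P_scat_opt))   # equal, fom/4 each
print("absorption optimum: abs =", p_abs(P_abs_opt),
      " scat =", p_ext(P_abs_opt) - p_abs(P_abs_opt))     # abs = fom, scat = 0
```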
Our limits have very different characteristics from those of [21,22,27]. Our approach, via absorption as the quadratic constraint, yields limits that incorporate the material properties and are independent of structure. This naturally results in per-volume limits, a normalization of inherent interest to designers. The scattered-field-operator approach yields limits that are independent of material but depend on the structure, in a way that can be difficult to be quantified because the inverse of the scattered-field operator is not known except for the simplest cases (e.g. dipoles). A spherical-harmonic decomposition of the operator yields analytical limits, but only to the cross-sections, without normalization. The cross-section is inherently unbounded (increasing linearly with the geometric cross-section at large sizes) and thus difficult to use from a design perspective. The different normalizations are responsible for the different orderings of the absorbed-and scattered-power limits. In Sec. 5 we extend this comparison to show that our material-dissipation approach provides better design criteria at optical frequencies.
The same derivations lead to optimal fields and upper bounds for the radiative and nonradiative LDOS. The optimal fields, Eqs. (24a,24b), are nearly identical in form, with an additional complex conjugation that arises due to the lack of conjugation in the LDOS expressions (which itself arises because open scattering problems in electromagnetism are complex-symmetric rather than Hermitian [15]). Substituting the optimal fields into the LDOS expressions, Eqs. (16,17), gives the LDOS limits, Eqs. (25a,25b). As for extinction, the limit to the total LDOS is identical to the limit to the nonradiative LDOS, which can be proven by maximizing ρ_tot subject to ρ_rad ≥ 0. The absorption, scattering, and LDOS limits in Eqs. (23a,23b,25a,25b) depend on the overlap integral of the material susceptibility and the incident field. We can simplify the limits further by separating the dependencies, which is simple for homogeneous, isotropic media but can also be done for more general media through induced matrix norms [76]. The integrand in Eq. (23a) (and in each of the other limits) is of the form z† A z, a quantity related to the norm (i.e. "magnitude") of a matrix A. The induced 2-norm of a matrix A, denoted ||A||_2, is given by the maximum value of the quantity z† A z / (z† z) over all z ≠ 0. The integral in Eq. (23a) can then be bounded for general media and arbitrary incident fields, Eq. (26), where the dependence on the material susceptibility is now separated from the properties of the incident field F_inc. The field intensity |F_inc|^2 is proportional to the energy density of the incident field, where U_E,inc and U_H,inc are the (spatially varying) incident electric and magnetic energy densities [38]. Generally the incident fields relevant to P_scat and P_abs are beams with nearly constant intensity and infinite total energy, for which one should bound the scattered or absorbed power per unit volume of material. Given the operator definition and energy-density relation just discussed, the scattering and absorption limits in Eqs. (23a,23b) simplify to Eqs. (28a,28b). Plane waves are incident fields of general interest. They have equal electric and magnetic energy densities and constant intensities I_inc = c U_E,inc, where c is the speed of light in vacuum. The cross-section of a scatterer is defined as σ = P/I_inc, representing the effective area the scatterer presents to the plane wave. Because plane waves are constant in space, the absorption and scattering bounds are tighter. In the first line of Eq. (26), |F_inc|^2 can be taken out of the integral, which then simplifies to the average value of the norm of χ† (Im χ)^{-1} χ. With this modification to Eqs. (28a,28b), the bounds on absorption and scattering cross-sections per unit volume, Eqs. (29a,29b), apply for general 6×6 electric and magnetic susceptibility tensors. For susceptibilities that are only electric or only magnetic, and therefore 3×3 tensors, the bound is smaller by a factor of two, since the incident magnetic field cannot drive magnetic currents (or vice versa). The LDOS analogue of Eqs. (29a,29b) is not straightforward, because the incident fields are inhomogeneous. Consequently, we leave Eqs. (25a,25b) as the general LDOS limits for inhomogeneous media, and derive a simpler version for metals in the next subsection. Nanoparticle scattering and absorption are often written in terms of electric/magnetic polarizabilities and higher-order moments [22,32,43,77], whereas Eqs. (29a,29b) are bounds in terms of only the material susceptibility. One implication is that Eqs. (29a,29b) imply restrictions on the number of moments that can be excited, or the strengths of the individual excitations, in a lossy scatterer. A lossy scatterer of finite size cannot have arbitrarily many spherical-multipole moments excited, nor can a single scatterer of very small size achieve full coupling to the lowest-order electric and magnetic dipole moments. Scatterers for which Im χ/|χ|^2 ≫ V/λ^3 cannot achieve ∼ λ^2 cross-sections per "channel," even on resonance.
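As a small illustration of the material factor entering these general limits (the uniaxial susceptibility below is invented for demonstration):

```python
# Sketch: evaluating the induced 2-norm ||chi^dagger (Im chi)^{-1} chi||_2 for
# an anisotropic electric susceptibility; for a scalar chi this reduces to
# |chi|^2 / Im chi, the metric used for metals in the next subsection.
import numpy as np

chi = np.diag([-20 + 0.5j, -20 + 0.5j, -5 + 0.2j])   # made-up uniaxial metal

im_chi = (chi - chi.conj().T) / 2j
A = chi.conj().T @ np.linalg.inv(im_chi) @ chi
norm2 = np.linalg.norm(A, 2)                         # induced 2-norm = largest singular value

scalar_check = abs(chi[0, 0])**2 / chi[0, 0].imag
print("||chi^dag (Im chi)^-1 chi||_2 =", norm2)
print("scalar |chi|^2/Im chi for the strongest axis =", scalar_check)
```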
Metals
Metals represent an important and prevalent example of lossy media. (We define a material to behave as a "metal" at a given frequency ω if Re χ(ω) < −1, thus including materials such as SiC [78,79] and SiO_2 [79] that support surface-phonon polaritons at infrared wavelengths.) At optical frequencies, common metals have homogeneous, isotropic, and nonmagnetic susceptibilities, enabling us to write the matrix norm of the previous subsection as the simple scalar quantity |χ(ω)|^2 / Im χ(ω), where χ(ω) is the electric susceptibility. Another alteration in the metal case is that the incident magnetic energy density, U_H,inc, drops out of the limits because the magnetic polarization currents are zero in Eq. (2) (and therefore one can simplify F, F_inc → E, E_inc). This is not a quasistatic restriction to small objects that only interact with the incident electric field; the bounds remain valid for larger objects that may interact strongly with the magnetic field. But their optimal response can be written in terms of only the incident electric field, since absorption and extinction by nonmagnetic objects can also be written only in terms of electric fields, per Eqs. (4,5). The limits to per-volume absorption and scattering, simplifying Eqs. (28a,28b), are given in Eqs. (31a,31b). Similarly, the cross-section limits (reduced by a factor of two relative to Eqs. (29a,29b) because U_H,inc does not appear), Eqs. (32a,32b), are σ_scat/V ≤ (k/4) |χ|^2/Im χ and σ_abs/V ≤ k |χ|^2/Im χ, where as before k = ω/c. Whereas the optimal per-volume scattering occurs at a condition of equal absorption and scattering, in App. C we also derive limits under a constraint of suppressed absorption, as may be desirable e.g. in a solar cell enhanced by plasmonic scattering [4]. The limits to the power expended by a nearby dipole emitter can be similarly simplified for metals. The incident field is the field of an electric dipole in free space, proportional to the product of the homogeneous Green's function and the dipole polarization vector, F_inc,s_j = E_inc,s_j = G^{EP} ŝ_j (because the metal is nonmagnetic, only the incident electric field is relevant). The integral over the incident field in Eqs. (25a,25b), summed over dipole orientations, is given by Σ_j ∫_V |E_inc,s_j|^2 = ∫_V ||G^{EP}||_F^2, where ||·||_F denotes the Frobenius norm [76]. For the homogeneous photon Green's function the squared Frobenius norm is shown in App. G to comprise terms proportional to 1/r^6, 1/r^4, and 1/r^2 (Eq. (33)), where the 1/r^6 and 1/r^4 terms arise from near-field nonradiative evanescent waves, and the 1/r^2 term corresponds to far-field radiative waves. Inserting Eq. (33) into the LDOS limits, Eqs. (25a,25b), yields a complicated integral that depends on the exact shape of the body. The integrand is positive, though, so one can instead calculate a limit by integrating over a larger space that encloses the body (we will show that most of the potential for enhancement occurs very close to the emitter, such that the exact shape of the enclosure is usually irrelevant). We consider in detail the case in which the scatterer is contained within a half-space, but we also note immediately after Eqs. (34a,34b) the necessary coefficient replacement if the enclosure is a spherical shell. All structures separated from an emitter must fit into a spherical shell, and thus we have not imposed any structural restrictions (in particular, there is no need for a separating plane between the emitter and the scatterer). We consider a finite-size approximation of the half-space: a circular cylinder enclosing the metal body, a distance d from the emitter and with equal height and radius, L (ultimately we are interested in the limit L → ∞). The volume integrals are straightforward in cylindrical coordinates, yielding ∫_V 1/r^6 = π/(6d^3), ∫_V 1/r^4 = π/d, and ∫_V 1/r^2 = π ln(2) L, for L ≫ d (discarding the contributions ∼ d/L for the evanescent-wave terms). The limits to the radiative and nonradiative LDOS then follow as Eqs. (34a,34b), where O(·) signifies "Big-O" notation [80]. Note that the O(kL) terms, which arise from the far-field excitation, diverge as the size L of the bounding region goes to ∞, whereas one would expect the near-field excitation to be most important. The O(kL) divergence as L → ∞ is unphysical: it represents a polarization current that is proportional to the 1/r incident field, according to Eqs. (24a,24b), over the entire half-space, maintaining a constant energy flux within a lossy medium. Hence, this O(kL) term, while a correct upper bound, is overly optimistic, and the attainable radiative contribution must be non-diverging in L. One could attempt to separately bound the evanescent and radiative excitations. However, L also represents the largest interaction distance over which polarization currents contribute to the LDOS (in Eq. (10), for example); in App. B we show that for reasonable interaction lengths L and near-field separations d, the contribution of the O(kL) terms is negligible compared to the 1/d^3 terms (because the divergence is slow). Thus in the near field, where the possibility for LDOS enhancement is most significant, the limits are dominated by the 1/d^3 terms, yielding the near-field limits of Eqs. (35a,35b).
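The enclosure integrals quoted above can be checked by direct quadrature; a minimal sketch (the separation d and truncation L are arbitrary illustrative values, so the agreement is only up to the stated O(d/L) corrections):

```python
# Verify int_V 1/r^6 = pi/(6 d^3) and int_V 1/r^4 = pi/d for a half-space
# approximated by a large cylinder (radius and height L >> d), in cylindrical
# coordinates with the emitter at the origin.
import numpy as np
from scipy import integrate

d, L = 0.1, 50.0

def shell_integral(power):
    # integrand 2*pi*rho / (rho^2 + z^2)^(power/2), z in [d, d+L], rho in [0, L]
    f = lambda rho, z: 2 * np.pi * rho / (rho**2 + z**2) ** (power / 2)
    val, _ = integrate.dblquad(f, d, d + L, 0, L)
    return val

print("1/r^6:", shell_integral(6), "vs pi/(6 d^3) =", np.pi / (6 * d**3))
print("1/r^4:", shell_integral(4), "vs pi/d      =", np.pi / d)
```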
We consider a finite-size approximation of the half-space: a circular cylinder enclosing the metal body, a distance d from the emitter and with equal height and radius, L (ultimately we are interested in the limit L → ∞). The volume integrals are straightforward in cylindrical coordinates, yielding V 1/r 6 = π/6d 3 , V 1/r 4 = π/d, and V 1/r 2 = π ln(2)L, for L d (discarding the contributions ∼ d/L for the evanescent-wave terms). Then the limits to radiative and nonradiative LDOS rates are where O · signifies "Big-O" notation [80]. Note that the O kL terms, which arise from the far-field excitation, diverge as the size L of the bounding region goes to ∞, whereas one would expect the near-field excitation to be most important. The O kL divergence as L → ∞ is unphysical: it represents a polarization current that is proportional to the 1/r incident field, according to Eqs. (24a,24b), over the entire half-space, maintaining a constant energy flux within a lossy medium. Hence, this O kL term, while a correct upper bound, is overly optimistic, and the attainable radiative contribution must be non-diverging in L. One could attempt to separately bound the evanescent and radiative excitations. However, L also represents the largest interaction distances over which polarization currents contribute to the LDOS (in Eq. (10), for example); in App. B we show that for reasonable interaction lengths L and near-field separations d, the contribution of the O kL terms is negligible compared to the 1/d 3 terms (because the divergence is slow). Thus in the near field, where the possibility for LDOS enhancement is most significant, the limits are dominated by the 1/d 3 terms: A spherical-shell enclosure of solid angle Ω yields the same result but with the replacement 1/8 → Ω/4π in each limit. Again we see the possibility for enhancement proportional to |χ| 2 / Im χ. There is the additional possibility of near-field enhancement proportional to 1/(kd) 3 , which arises from the increased amplitude of the incident field at the metal scatterer. Figure 2 depicts |χ| 2 / Im χ as a function of wavelength for many natural and synthetic [3, 5, 28-30, 79, 81-83] metals. Three recent candidates for plasmonic materials in the infraredaluminum-doped ZnO (AZO), and silicon-doped InAs-are included using Drude models of recent experimental data from Naik et. al. [29], Law et al. [30], and Sachet et al. [83], respectively. For the conventional metals, data from Palik [79] was used; high-quality silver, consistent instead with the data from Johnson and Christy [85] and Wu et al. [86], would have smaller losses and a factor of three improvement in |χ| 2 / Im χ. A broadband version of the metric can be computed for extinction or LDOS by evaluation at a single complex frequency [16,56].
The material enhancement factor |χ|^2 / Im χ appears in the absorption cross-section of quasistatic ellipsoids [32]; here we have shown that it more generally bounds the scattering response of a metal of any shape and size. It arises in the increased amplitude of the induced polarization currents; for example, the optimal scattering fields of Eq. (20) simplify in metals to the optimal currents P_scat,opt = (i/2) ε_0 (|χ|^2/Im χ) E_inc, with similar expressions for the optimal currents for maximum absorption and LDOS. The factor |χ|^2 / Im χ provides a balance between absorption and scattering: in terms of the polarization currents, the absorption in a metal is proportional to (Im χ/|χ|^2) ∫_V |P|^2, whereas the extinction is proportional to Im ∫_V E_inc^* · P, thus requiring currents proportional to |χ|^2 / Im χ for absorption and extinction to have the same order of magnitude. The expression is intuitively appealing because a large |χ| signifies the possibility to drive a large current, while a large Im χ dissipates such a current. Our bounds suggest that epsilon-near-zero materials [87,88], with |χ| ≈ 1, require a very small Im χ to generate scattering or absorption as large as can be achieved with more conventional metals.
Fig. 2. A comparison of the metric |χ|^2 / Im χ, which limits absorption, scattering, and spontaneous emission rate enhancements, for conventional metals (Ag, Al, Au, etc.) [79] as well as alternative plasmonic materials including aluminum-doped ZnO (AZO) [29], highly doped InAs [30], SiC [79], TiN [81], ITO [82], and dysprosium-doped cadmium oxide (CdO:Dy) [83]. Silver, aluminum, and gold are the best materials at visible and near-infrared wavelengths, although at higher wavelengths the structural aspect ratios needed to achieve the limiting enhancements may not be possible. The dotted lines indicate wavelengths at which resonant nanorods would require aspect ratios greater than 30, approximating the highest feasible experimental aspect ratios [84]. Despite having lower maximum enhancements, AZO, doped InAs, and SiC should be able to approach optimal enhancements in the infrared with realistic aspect ratios.
A similar, alternative understanding can be attained by considering the currents J = dP/dt = −iωP. Defining the complex resistivity of the metal as ρ = i/(ε_0 ω χ), the analogue of the optimal-current expression for J is J_opt = E_inc/(2 Re ρ). The enhancement factor |χ|^2 / Im χ thus corresponds to the inverse of the real part of the metal resistivity, which corroborates the idea that small metal resistivities enable large field enhancements, as discussed recently for circuit [41] and metamaterial [6] models of single-mode response. The limits in Eqs. (32a,32b,35a,35b) can be applied additively to multiple bodies: the per-volume absorption and scattering limits of Eqs. (32a,32b) are equally valid for a single particle, multiple closely spaced particles, layered films, or any other arrangement. Similarly, the LDOS limits in Eqs. (35a,35b) can be extended to e.g. a structure confined within two half-spaces, but with a prefactor of 2 × 1/8 = 1/4, and any other arrangement in space is similarly possible.
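A short check of this resistivity correspondence, using only the definition ρ = i/(ε_0 ω χ) given above:

$$\rho = \frac{i}{\varepsilon_0 \omega \chi} = \frac{i\,\chi^{*}}{\varepsilon_0\omega|\chi|^{2}} \;\Longrightarrow\; \operatorname{Re}\rho = \frac{\operatorname{Im}\chi}{\varepsilon_0\omega|\chi|^{2}}, \qquad \frac{|\chi|^{2}}{\operatorname{Im}\chi} = \frac{1}{\varepsilon_0\,\omega\,\operatorname{Re}\rho},$$

so the material enhancement factor is indeed the inverse of the real (dissipative) part of the resistivity, up to the factor ε_0 ω.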
There are two asymptotic limits in which our bounds diverge: lossless metals (Im χ → 0), and, for the LDOS bound, the limit as the emitter-metal separation distance d → 0. In each case, the divergence is required, as there are structures that exhibit arbitrarily large responses. For example, as the loss rate of a small metal particle goes to zero, it is known that the absorption per unit volume increases until, for a given size, the radiative loss rate equals the absorptive loss rate [43,44,89,90]. However, if the size (and therefore the radiation loss rate) is decreased concurrently with the material loss, the cross-section per unit volume can be made arbitrarily large. Thus, for a small enough particle, any σ_ext/V is possible, and the limit must diverge as material loss approaches zero (regularized physically by both nonzero loss and nonlocal polarization effects [39][40][41]). Similarly, the LDOS can diverge in the limit of zero emitter-scatterer separation, both for lossy materials where absorption diverges [91] and for lossless materials with sharp corners [92]. The latter case can be reasoned as follows: the fields at a sharp tip, either dielectric or metal, diverge for any nonzero source (of compatible polarization) [38,93,94]. By reciprocity, for a source infinitesimally close to the tip, the LDOS must diverge.
Optimal and non-optimal structures
We turn now to the design problem: are there structures that approach the limiting responses set forth by Eqs. (32a,32b,35a,35b)? We show that optimal ellipsoids can approach both the absorption and scattering limits across many frequencies by tuning their aspect ratios. For the LDOS limit, however, the optimal designs are not as clear. At the resonant ("surface-plasmon") frequency ω sp of a given material the prototypical planar surface exhibits a nonradiative LDOS enhancement approaching the limit of Eq. (35b). However, at lower frequencies, neither thin films [95] nor common metamaterial approaches for tuning the resonant frequency achieve the |χ| 2 / Im χ enhancement, thereby falling short of the limit. Similarly, representative designs for increased radiative LDOS are shown to fall orders of magnitude short of the limits. These structures fall short because the near-field source excites higher-order, non-optimal "dark" modes that reduce the LDOS enhancement.
Absorption and scattering
Small ellipsoids approximated by their dipolar response can approach the limits of Sec. 3 across a wide frequency range by tuning their aspect ratios. Ellipsoids reach the absorption limits for small (ideally quasistatic) structures, and the scattering limits for larger structures that are still dominated by their electric dipole moment. The quasistatic absorption cross-section of an ellipsoid, for a plane wave polarized along one of the ellipsoid's axes, is [32] σ_abs/V = k Im[χ/(1 + Lχ)], where L is the "depolarization factor" (a complicated function of the aspect ratio) [32] along the axis of plane-wave polarization. The optimal response is achieved for the aspect ratio such that L = Re(−1/χ(ω)), which yields a polarization field within the particle of [32] P = i ε_0 (|χ|^2/Im χ) E_inc, which is exactly the optimal absorption condition of Eq. (22), as can be seen by comparison with Eq. (36). For this optimal depolarization factor, the peak absorption cross-section per unit volume is σ_abs/V = k |χ|^2/Im χ (Eq. (41)), thereby reaching the general limit given by Eq. (32b).
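A minimal numerical check of the quasistatic expression just quoted, sweeping the depolarization factor for an illustrative (invented) susceptibility:

```python
# Sketch: sigma_abs/V = k * Im[chi/(1 + L*chi)] peaks at L = Re(-1/chi) with
# the value k*|chi|^2/Im chi, matching the general absorption limit.
import numpy as np

chi = -8 + 0.4j                      # metal-like susceptibility, Re(chi) < -1
k = 2 * np.pi / 0.6                  # wavenumber for a 600 nm-class wavelength (1/um)

L = np.linspace(0.0, 0.5, 200001)    # depolarization factor along the polarization axis
sigma_per_V = k * np.imag(chi / (1 + L * chi))

L_opt = np.real(-1 / chi)
print("numerical peak :", sigma_per_V.max(), "at L =", L[sigma_per_V.argmax()])
print("predicted peak :", k * abs(chi)**2 / chi.imag, "at L =", L_opt)
```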
For this optimal depolarization factor, the peak absorption cross-section per unit volume is given by Eq. (41), (σ_abs/V)_max = k|χ|²/Im χ, thereby reaching the general limit given by Eq. (32b). Equation (41) is valid for both oblate (disk) and prolate (needle) ellipsoids. Here we have considered the cross-section for a single incident plane wave; if one were interested in averaging the cross-section over all plane-wave angles and polarizations (as appropriate for randomly oriented particles), then it is possible to find a bound that is tighter, by 33% for most materials, in the quasistatic regime. In that case the bounds are achieved by disks but not needles. Whereas the absorption cross-section is maximized for very small particles approaching the quasistatic limit-necessary to exhibit zero scattered power, a prerequisite for reaching the absorption bounds-the optimal scattering cross-section is achieved for larger, non-quasistatic particles that couple equally to radiation and absorption channels. One can show either through a modified long-wavelength approximation [100-102] or by coupled-mode theory [43,44] that the dimensions of a small particle can be tuned such that the absorption and scattering cross-sections are equal, at which point the scattering cross-section per volume is a factor of four smaller than the optimal quasistatic absorption. We validate this result with computational optimization and show that it enables the design of metallic nanorods with nearly optimal performance. Figure 3 shows the per-volume absorption and scattering cross-sections of (a) gold and (b) Si-doped InAs [30] nanorods designed for maximum response across tunable frequencies. As in Ref. [30] we employed a Drude model for the doped InAs, with plasma frequency ω p = 2πc/5.5µm and damping coefficient γ ≈ 0.058ω p , as is appropriate for a doping density on the order of 10 20 cm −3 (Ref. [103]). We employed a free-software implementation [104] of the controlled random search [105,106] optimization algorithm to find globally optimal ellipsoid radii.
[Figure 4 caption (partial): For each metal except gold-which has significant losses-the nonradiative LDOS at the surface-plasmon frequency ω sp (dotted line) approaches the limit given by Eq. (35b). The emitter-metal separation distance is fixed at d = 0.1c/ω sp . The limits are equally attainable for conventional metals such as Al and Ag as for synthetic metals such as AZO [29] and highly doped InAs [30], and for SiC. (b) Enhancement as a function of metal-emitter separation distance d, with the frequency fixed at the surface-plasmon frequency ω sp for each metal. The limiting enhancements are asymptotically approached as the separation distance is decreased, because the quasistatic approximation of Eq. (42) becomes increasingly accurate.]
For the gold nanorods a minimum radius of 5 nm was imposed as representative of experimental feasibility [39,107] and a size scale at which nonlocal effects are expected to remain small [39-41,108]. The gold particles optimized for absorption fall slightly short of the limits due to the minimum-radius constraint; in the quasistatic limit (dashed), the absorption cross-section reaches the limit, as expected from Eq. (41). Both gold and doped-InAs nanoparticles closely approach the scattering and absorption limits of Eqs. (32a,32b). For Drude models, the factor k|χ| 2 / Im χ that appears in both limits simplifies to ω 2 p /γc, a material constant independent of wavelength, clearly seen in Fig. 3(b). The increase in σ /V as a function of wavelength in Fig. 3(a), for gold, can be seen as a measure of the deviation of the material response [79] from a Drude model. A constant response for Drude models is not universal: near-field interactions, in particular the local density of states (LDOS), instead depend only on |χ| 2 / Im χ ∼ ω 2 p /γω, thereby increasing at longer wavelengths, away from the bulk and flat-surface plasmon frequencies.
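The Drude-model statements just made, and the depolarization-factor tuning of the previous paragraphs, are easy to check numerically. The sketch below is a minimal illustration (not the optimization used for Fig. 3): it scans L in the quasistatic expression σ_abs/V = k Im[χ/(1 + Lχ)], confirms the peak sits at L = Re(−1/χ), and verifies that the peak value k|χ|²/Im χ reduces to the wavelength-independent constant ω_p²/(γc) for a Drude susceptibility. The plasma frequency and damping are the doped-InAs values quoted above; the operating frequency is an arbitrary choice.

```python
import numpy as np

c = 3e8
wp = 2 * np.pi * c / 5.5e-6        # Drude plasma frequency quoted for Si-doped InAs
gamma = 0.058 * wp                  # Drude damping coefficient quoted for Si-doped InAs

def chi(omega):
    # Drude susceptibility chi = eps - 1 = -wp^2 / (omega^2 + i*gamma*omega)
    return -wp**2 / (omega**2 + 1j * gamma * omega)

omega = 0.3 * wp                    # arbitrary operating frequency below the plasma frequency
k = omega / c
x = chi(omega)

# Quasistatic per-volume absorption of an ellipsoid versus depolarization factor L
L = np.linspace(1e-3, 0.5, 20001)
sigma_abs_per_V = k * np.imag(x / (1 + L * x))

print("optimal L from scan      :", L[np.argmax(sigma_abs_per_V)])
print("optimal L = Re(-1/chi)   :", np.real(-1 / x))
print("peak sigma_abs/V  (1/m)  :", sigma_abs_per_V.max())
print("limit k|chi|^2/Im chi    :", k * abs(x)**2 / x.imag)
print("Drude constant wp^2/(g c):", wp**2 / (gamma * c))
```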
Not all small particles reach the limiting cross-sections. Coated spheres are common structures for photothermal applications [31,109,110], but their absorption cross-section per unit particle volume is proportional to (2/3)|χ|/ Im χ instead of |χ| 2 / Im χ (cf. App. D). Their enhancement does not scale proportional to |χ| 2 due to the small metal volume fractions required to tune the resonant frequency.
LDOS
Designing optimal structures for the local density of states (LDOS) enhancement limits, Eqs. (34b,34a), is not as straightforward as designing optimal particles for plane-wave absorption or scattering. Because the structure is typically in the near field of the emitter, it is difficult to design a resonant mode that exactly matches the rapidly varying field profile of the emitter. We show that it is possible to reach the nonradiative LDOS limits at the surface-plasmon frequency of a given metal, but that away from these frequencies typical structures fall short. Similarly, for the radiative LDOS limits, common structures fall short of the limits, especially at longer wavelengths.
Planar layered structures support bound surface plasmons that do not couple to radiation, and thus only improve the nonradiative LDOS. As discussed in Sec. 1, this is potentially useful for radiative heat transfer applications, where near-field emission and absorption have been extensively studied [111-115]. Moreover, adding either periodic gratings or even random textures can couple the bound modes to the far field [116-118] and potentially result in substantial increases to the radiative LDOS. Thus we first study how closely surface modes in planar structures can approach the nonradiative limits, and then we analyze the performance of representative cone- and cylindrical-antenna structures relative to the radiative LDOS limits.
At the surface-plasmon frequency, the prototypical metal-semiconductor interface that supports a surface plasmon exhibits a nonradiative LDOS approaching the limiting value of Eq. (34b). In the small-separation limit (kd ≪ 1) and at the surface-plasmon frequency ω sp , the local density of states near a planar metal interface reduces to the quasistatic expression of Eq. (42) [12], thereby approaching exactly the nonradiative LDOS limit of Eq. (34b). (Note that we define ρ 0 as the electric-only free-space LDOS, different by a factor of two from the electric+magnetic LDOS in [12].) Although the surface-plasmon frequency is typically defined [2] as the frequency at which Re ε = −1, this is the frequency of optimal response only in the zero-loss limit. More generally, we define the surface-plasmon frequency ω sp such that Re ξ(ω sp ) = Re[−1/χ(ω sp )] = 1/2. For gold, which never satisfies Re ξ = 1/2 due to its high losses, we define the surface-plasmon wavelength to be λ sp = 510 nm, where Re ξ(ω) is at its maximum. Figure 4(a) compares semianalytical computations of the nonradiative LDOS near a flat, planar metallic interface to the nonradiative LDOS limits given by Eq. (34b). Six metals are included: Al (black), Ag (blue), Au (red), AZO (green), InAs (teal), and SiC (purple), with the surface-plasmon frequency ω sp of each in a dotted line and a fixed emitter-metal spacing of d = 0.1c/ω sp . Every metal except gold-which is too lossy-reaches its respective limit; it is possible that a different, nonplanar gold structure, with the correct "depolarization factor" (VIE eigenvalue, cf. App. A), could approach the limit. Figure 4(b) shows the emitter-metal separation-distance dependence for ω = ω sp . The limits are approached-again, except for gold-as the emitter-metal separation decreases and the approximate 1/d 3 dependence of Eq. (42) becomes more accurate.
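The Re ξ(ω_sp) = 1/2 definition above is straightforward to apply numerically. The toy sketch below does so for a Drude metal, where it recovers the familiar flat-interface result ω_sp = ω_p/√2 independent of the damping rate; with tabulated permittivity data one would instead root-find on an interpolated Re(−1/χ). The Drude parameters are placeholders, not any of the metals in Fig. 4.

```python
import numpy as np
from scipy.optimize import brentq

wp, gamma = 1.0, 0.1                     # placeholder Drude parameters (arbitrary units)

def chi(omega):
    return -wp**2 / (omega**2 + 1j * gamma * omega)

# Surface-plasmon frequency defined (as in the text) by Re[xi] = Re[-1/chi] = 1/2
f = lambda omega: np.real(-1.0 / chi(omega)) - 0.5
omega_sp = brentq(f, 1e-3 * wp, wp)

print("omega_sp / omega_p =", omega_sp / wp)     # -> 0.7071..., i.e. omega_p / sqrt(2)
print("1/sqrt(2)          =", 1 / np.sqrt(2))
```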
There are a few common approaches to tune the resonant frequency below ω sp . A standard approach is to use a thin film [2,119], coupling the front- and rear-surface plasmons to create lower- and higher-frequency resonances. Other approaches include highly subwavelength structuring, to create hyperbolic [120-122] or elliptical metamaterials with reduced effective susceptibilities. We show here that such structures do not exhibit the material enhancement factor, |χ| 2 / Im χ, and thus do not approach the limit to ρ nr given by Eq. (34b).
The nonradiative LDOS of a thin film can be computed by decomposing the dipole excitation into plane waves (including evanescent waves), which reflect from the layers according to the usual Fresnel coefficients. The LDOS near a thin film is well-known as an integral over the surface-parallel wavevector [69]; for a dipole with fixed frequency ω and height d above the film, and a film of optimal thickness t, one can show (cf. App. F) that ρ nr is given by Eq. (43), which is valid for ω ≪ ω sp (otherwise a bulk is optimal and Eq. (42) describes the response) and for relatively small loss, Im χ ≪ |Re χ| (as is typical at optical frequencies), to ensure the large-wavevector modes are resolved. Unlike a planar interface, an optimal thin film does not exhibit the |χ| 2 / Im χ material enhancement. The thin film falls short because it relies on near-field interference to couple the front- and rear-surface plasmons, yielding a resonance that couples strongly to the dipole emitter over only a small bandwidth of wavevectors (∆k p ∼ Im χ/|χ|) that cancels the resonant enhancement from decreased loss. An alternative understanding arises from viewing a thin film as the single-unit-cell limit of a layered hyperbolic metamaterial (HMM) [95]. Hyperbolic metamaterials exhibit anisotropic effective susceptibilities such that the resonant frequency can be tuned, but their resonances occur within the bulk rather than along the surface, such that they cannot yield infinite LDOS even in the limit of zero loss. In Ref. [95] we show that the LDOS near an optimal HMM is nearly identical to the LDOS near an optimal thin film, as verified in Fig. 5.
[Figure 5 caption: Away from the surface-plasmon frequency of a given metal-taken here to be silver [79]-it is more difficult to reach the radiative and nonradiative LDOS limits, Eqs. (34b,34a). The emitter-metal separation d is fixed at d = 10 nm for (a) and (b). (a) Nonradiative LDOS enhancements, ρ nr /ρ 0 , for thin films (red) of various silver thicknesses (t), and for (type-I) hyperbolic metamaterials (HMMs, purple) for two silver fill fractions (ff). (b) Radiative LDOS enhancements, ρ rad /ρ 0 , for cone and cylinder antennas, with dimensions optimized at wavelengths from λ = 450 nm to λ = 850 nm. (c) Scaling of ρ nr and ρ rad for optimized thin films and cone antennas, respectively, as a function of d (inset: log-log scale). The scaling of the optimal designs appears to be 1/d 3 , with the structures falling short of their respective limits [the dashed line is (ρ rad /ρ 0 ) max ] because they do not exhibit a |χ| 2 / Im χ enhancement.]
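For readers who want to reproduce the kind of curves shown in Fig. 5(a), the wavevector-integral computation described above can be sketched in a few lines. The snippet below evaluates the evanescent-wave (s = k_∥/k > 1) contribution to the normalized decay rate of a z-oriented dipole above a metal film, using the standard p-polarized Fresnel and film reflection coefficients; the 3/2 prefactor follows the Novotny-Hecht convention for the normalized decay rate, which may differ from the paper's electric-only ρ_0 by an overall constant, and the silver-like permittivity is an illustrative constant rather than tabulated data.

```python
import numpy as np

def kz(eps, s):
    """z-wavevector (in units of k) in a medium of permittivity eps, branch Im >= 0."""
    out = np.sqrt(eps - s**2 + 0j)
    return np.where(out.imag < 0, -out, out)

def rp_film(eps_film, s, k, t):
    """p-polarized reflection coefficient of a film (vacuum / film / vacuum)."""
    kz1, kz2, kz3 = kz(1.0, s), kz(eps_film, s), kz(1.0, s)
    r12 = (eps_film * kz1 - kz2) / (eps_film * kz1 + kz2)
    r23 = (kz2 - eps_film * kz3) / (kz2 + eps_film * kz3)
    phase = np.exp(2j * k * kz2 * t)
    return (r12 + r23 * phase) / (1 + r12 * r23 * phase)

def ldos_nonrad(eps_film, lam, d, t, s_max=400.0, n=200_000):
    """Evanescent-wave contribution to the normalized decay rate of a z-oriented dipole."""
    k = 2 * np.pi / lam
    s = np.linspace(1.0 + 1e-6, s_max, n)          # surface-parallel wavevector, s = k_par/k > 1
    sz = kz(1.0, s)                                 # = i*sqrt(s^2 - 1) above the film
    integrand = (s**3 / sz) * rp_film(eps_film, s, k, t) * np.exp(2j * k * d * sz)
    return 1.5 * np.trapz(integrand.real, s)

eps_ag = -15.0 + 0.5j            # illustrative silver-like permittivity near 600 nm
lam, d = 600e-9, 10e-9
for t in (10e-9, 20e-9, 1e-6):   # thin films versus an effectively bulk film
    print(f"t = {t*1e9:7.1f} nm   rho_nr/rho_0 ~ {ldos_nonrad(eps_ag, lam, d, t):.3e}")
```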
Achieving the radiative LDOS limit is a similarly challenging problem. To reach the radiative LDOS limits, the polarization field must exhibit the 1/r 3 spatial dependence of the incident field, while also coupling to far-field radiation channels. Here we consider two representative structures for tuning the LDOS resonant frequency: mirror-image cones (akin to bowtie antennas) and cylindrical antennas (scaled [66] shorter than λ/2), and we show that each falls short of the limits across optical frequencies. Figure 5 shows the nonradiative and radiative LDOS near silver thin films, effective-medium hyperbolic metamaterials (HMMs), cones, and cylinders. In Fig. 5(a) and Fig. 5(b), the emitter-metal separation distance is fixed at d = 10 nm, and the structures are optimized at four wavelengths, λ = [450, 600, 725, 850] nm, using a standard local optimization algorithm [123]. The optimal lengths and thicknesses are included in the figure.
[Figure 6 caption: A schematic comparison of absorption and scattering limits: multipole limits [21,22] to the total cross-section can provide design guidelines at low frequencies, where it is difficult to achieve "plasmonic" resonances, but at higher frequencies our dissipation-based limits provide tighter limits to the per-volume response. The frequencies at which our bounds can be reached range from the bulk plasma frequency (ω = ω p ) down to ω ∼ ω p /AR max , where AR max is the maximum achievable aspect ratio. Included is the relevant range for silver ellipsoids (red text), assuming AR max = 30. Plasmonic behavior at longer wavelengths is possible with materials such as AZO and doped InAs (cf. Fig. 2).]
The optimal cone half-angles and cylinder radii vary slightly but are typically on the order of θ = 20° and r = 10 nm, respectively. Away from ω sp , the optimal nonradiative LDOS of the thin films in Fig. 5(a) is very well described by Eq. (43), which shows that the structures fall short of the limits by the material enhancement factor. A similar effect is seen for the radiative LDOS in Fig. 5(b), as the cones also fall increasingly short of the limits as the frequency is decreased (≈ 250× at λ = 850 nm). The quantum efficiencies (ρ rad /ρ tot ) of the cone antennas vary from 50-70%, near the optimal ratio of 50% to reach our bounds, suggesting that they fall short because of a mismatch between the emitter and resonance field profiles, not because of coupling to nonradiative channels. Figure 5(c) shows the LDOS dependencies as a function of d for λ = 600 nm, in linear (main) and logarithmic (inset) scale, with the structure parameters optimized for each d. The optimal structures appear to exhibit the 1/d 3 scaling of the limits in Eqs. (34a,34b). An important open question is the extent to which structures can be designed to approach the limits, thereby improving over current designs by two to three orders of magnitude.
Extensions and discussion
The limits derived in Secs. (2,3) apply to a general class of materials embedded in vacuum, without any scatterers. They can be generalized for non-vacuum backgrounds. For a metal in a homogeneous, lossless background with permittivity ε bg , the limits are of the same form but with replacements k → √ ε bg k and χ → (ε(ω) − ε bg )/ε bg . A similar generalization applies for general lossy media in non-vacuum backgrounds. If there are background scatterers present (possibly periodic [124-126]), then the derivation is identical except that the Green's functions are the Green's functions in the presence of the background scatterers, and the "incident" fields therefore incorporate the effects of the background scatterers. The minimum thickness of a metal absorber on a substrate [127, 128], for example, could be computed with Eq. (32b), with the replacement E inc = E 0 e ikz + re −ikz (for substrate reflectivity r), or more generally by bounding the incident field via |E inc | < 2|E 0 |.
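The non-vacuum-background substitution stated above is mechanical enough to wrap in a small helper. The function below is a hypothetical convenience wrapper (not from the paper): it applies k → √ε_bg k and χ → (ε − ε_bg)/ε_bg and then evaluates the per-volume material factor k|χ|²/Im χ that appears in the absorption and scattering limits.

```python
import numpy as np

def limit_factor_per_volume(eps, omega, eps_bg=1.0, c=3e8):
    """k |chi|^2 / Im(chi) with the background replacements k -> sqrt(eps_bg)*k
    and chi -> (eps - eps_bg)/eps_bg, for a lossless background permittivity eps_bg."""
    k = np.sqrt(eps_bg) * omega / c
    chi = (eps - eps_bg) / eps_bg
    return k * np.abs(chi)**2 / chi.imag

eps_metal = -20.0 + 1.0j                         # illustrative metal permittivity
omega = 2 * np.pi * 3e8 / 800e-9                 # 800 nm
print(limit_factor_per_volume(eps_metal, omega))              # vacuum background
print(limit_factor_per_volume(eps_metal, omega, eps_bg=2.25)) # embedded in glass, eps_bg = n^2
```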
Previous approaches to general limits via energy conservation have bounded the response of a structure via its scattered-field operator [27]. This has yielded limits to the cross-section, σ , for spherically symmetric [43,44] or more general [21,22] scatterers whose response has been decomposed into spherical multipoles. The cross-section limits are proportional to λ 2 N 2 + N , where N is the number of excited multipoles.
From a design perspective, there are serious impediments to using such cross-section limits (as opposed to our σ /V limits). The cross-section itself is unbounded, increasing with the size of a large particle [32]. Thus implicit in any use of such bounds is a size normalization, but this can only be done at very small scales, where the number of multipoles is 1 (or 2 for perfect conductors) and our bounds are tighter, and at very large size scales, approaching the geometric-optics regime. At intermediate sizes, it is difficult to estimate the number of multipoles without further modeling of a given structure. We have presented a new approach, using material dissipation, instead of the scattering operator, as the binding constraint. Our limits have the unique feature that they incorporate the material properties and are independent of structure. From a design viewpoint this is a significant advantage, since for any problem there are infinitely many possible structures but typically only a handful of relevant materials. Furthermore, the normalization to geometric volume, e.g. σ /V , emerges naturally in our limits.
Reaching our absorption and scattering limits likely requires significant polarization currents throughout the scatterer volume, as can be seen in the optimal field profiles in Sec. 3. At low frequencies, it is difficult to fabricate structures with the sizes or aspect ratios necessary to achieve such resonances. For a Drude-metal (appropriate at low frequencies) ellipsoidal nanorod, the optimal aspect ratio [32] for maximum absorption scales as ω p /ω, where ω p is the bulk plasma frequency. Thus, if we define a maximum feasible aspect ratio AR max , the minimum frequency at which a plasmonic resonance can be achieved is proportional to ω p /AR max . Below this frequency, the multipole limits can serve as a design guide, although the uncertainty about the number of multipoles, and the potential mismatch between a non-spherical object and its bounding sphere, remain as barriers. Figure 6 schematically depicts which bound provides better design criteria as a function of frequency. Included in Fig. 6 is red text corresponding to the limiting frequencies at which silver nanorods (using experimental material data [79] and making no Drude approximation) can approach our limits, assuming a realistic [84] maximum aspect ratio of 30.
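The ω_p/AR_max scaling can be checked with the textbook depolarization factor of a prolate spheroid (e.g. Bohren & Huffman). For a lossless Drude metal, the long-axis dipole resonance condition 1 + Lχ = 0 gives ω_res = ω_p√L, so evaluating L versus aspect ratio shows the roughly 1/AR trend (up to a slowly varying logarithmic factor). This is a scaling illustration only, not a reproduction of the silver data behind Fig. 6.

```python
import numpy as np

def L_long(aspect_ratio):
    """Depolarization factor along the long axis of a prolate spheroid (standard textbook form)."""
    e2 = 1.0 - 1.0 / aspect_ratio**2
    e = np.sqrt(e2)
    return ((1 - e2) / e2) * (np.log((1 + e) / (1 - e)) / (2 * e) - 1)

# Lossless Drude metal: eps(w) = 1 - wp^2/w^2.  Long-axis dipole resonance: 1 + L*chi = 0,
# i.e. eps = 1 - 1/L, giving w_res = wp * sqrt(L).
for AR in (2, 5, 10, 20, 30):
    L = L_long(AR)
    w_res = np.sqrt(L)                                   # in units of wp
    print(f"AR = {AR:2d}   L = {L:.4f}   w_res/wp = {w_res:.4f}   AR*(w_res/wp) = {AR*w_res:.2f}")
```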
Similarly, it may be difficult to reach our limits with larger, wavelength-scale solid particles that are much larger than the skin depth. One of the conclusions from our work is that such particles are particularly inefficient scatterers, and thus should be avoided, because currents cannot be excited throughout such a large portion of their volume. At optical frequencies, any technology must ultimately incorporate some collection of ordered or disordered scatterers, whether in planar arrays [4,57,129-132], aqueous environments [133-135], or some other configuration. Thus even if the individual scatterers have small cross-sections, there can be many of them (due to their small volumes), providing a large collective cross-section [107] while maintaining the per-volume response of the individual scatterers.
Aside from particle scattering, our limits extend to situations that do not have multipole counterparts. They yield meaningful limits for extended structures (whose large size would excite many spherical harmonics), and for the LDOS (where a near-field source would excite many spherical harmonics).
An interesting aspect of our limits is the fact that they apply to any open scattering problem. Open systems-in which energy can enter and exit-are typically described by non-Hermitian operators. Non-Hermitian operators are not guaranteed to be diagonalizable, and therefore may not have a complete basis of eigenfunctions (technically, even for Hermitian operators, rigorous eigendecomposition in infinite-dimensional spaces is subtle and subject to obscure counter-examples [136,137]). Breakdown of diagonalizability only occurs at "exceptional points" that must be forced [138,139] and which occur by chance with zero probability. Near an exceptional point, however, it is possible to have eigenfunctions that are nearly "self-orthogonal," with exceptionally large modal overlap (or in theory highly nonnormal and ill-conditioned Maxwell operators [140]) leading to effects such as destructive interference in scattering "dark states" [141] (e.g. Fano resonances [142]) and the Petermann factor for noise enhancement in lasers [143,144]. Nevertheless, passive systems near such exceptional points cannot exceed our limits, which impose only conservation of energy.
The limits presented here suggest new design opportunities with metals. Nearly lossless metals [8] could manifest unprecedented responses. Even for conventional lossy metals, large-area structures that achieved the absorption or scattering limits presented here could potentially do so with thicknesses approaching a single atomic layer. Nonlocal interactions, for which it may not be possible to separate material and structural properties, would be important in such a structure. Finding limits incorporating nonlocal effects would represent an important extension to this work. Designing structures to approach the LDOS limits could impact applications such as imaging, where there are potentially orders of magnitude improvement to be gained. We derive limits for the problem of near-field radiative heat transfer, where the sources are embedded within the designable media, in an upcoming publication [145], and it would be interesting to extend the limits derived here to other figures of merit, potentially finding new metrics or structures for optimal light-matter interactions.
A. Alternative understanding of the limits: VIE approach
Here we present an alternative viewpoint for understanding our limits, as arising from sum rules over eigenmodes of the volume integral equations of electromagnetism. This approach appears to only work for materials with a scalar χ (either electric or magnetic), and thus is less general than the derivation in the text (we assume a nonmagnetic medium). We include this appendix because higher-order sum rules may yield tighter limits in certain scenarios, e.g. angle-averaged incident fields. First we show how limits arise from eigenmodes of the volume integral equations (VIEs), which can be considered "material resonances." This connection was partially recognized by Rahola [146], and may be related to eigenvalues in SALT laser theory [147], but has not since been pursued any further. Material resonances are common in quasistatic electromagnetism [148,149], where there is no frequency; here, we show how to extend the concept to fixed, nonzero frequencies.
Inherent to the concept of a resonance in physics is the resonant frequency: intuitively, the frequency at which an electromagnetic, elastic, quantum mechanical, or any other type of wave oscillates without external forcing in a specific, predefined structure. In a closed or periodic structure, these resonances correspond mathematically to eigenvalues of the underlying differential equations. For photonic structures, defined by a spatially dependent permittivity ε(x), resonant frequencies ω n are eigenvalues of the eigenequation (1/ε(x)) ∇ × ∇ × E_n = (ω_n/c)² E_n (Eq. (44)), as depicted in Fig. 7(a).
[Figure 7 caption: The conventional resonant-frequency approach is depicted in (a): the operating frequency is real-valued and can in theory be approached arbitrarily closely by a resonance with small imaginary part −Im ω n (i.e. a high-Q resonance). Conversely, volume integral equations yield the resonant-susceptibility approach depicted in (b): metal losses, which correspond to Im ξ = Im(−1/χ) > 0, inherently impose a minimum separation q on how closely a material resonance-restricted to lie on or below the real line-can approach the real system parameters. Moreover, quasistatic structures have real-valued eigenvalues, and thus have the potential to achieve the minimum eigenvalue separation and maximum optical response.]
Frequency resonances in electromagnetism are well understood, but material resonances-which arise as eigenvalues in integral equations-have hardly been explored at all. The electric field integral equation (EFIE) formulation of Maxwell's equations is derived through the use of Green's functions [51,152]. As depicted in Fig. 1, we consider generic scattering problems in which a structure with susceptibility χ interacts with an externally imposed incident field E inc . The response of the scatterer is given by a convolution of the free-space Green's function G with the induced polarization currents P = χE. As in the text we assume a scatterer embedded in vacuum, with straightforward generalizations. The total field E is the sum of the incident and scattered fields [51,153], E(x) = E inc (x) − ∫ V G(x, x′, ω) χ(x′) E(x′) dx′ (Eq. (45)), for all points in space, where we choose a negative sign convention for the Green's function (opposite that of Eq. (10) in the text). Equation (45) can be desingularized [154], but our treatment depends only on an abstract spectral decomposition that does not require us to grapple with such details. A similar integral equation arises in quantum mechanics, where it is known as the Lippmann-Schwinger equation and the susceptibility is replaced by the scattering potential [155]. For a scatterer with homogeneous susceptibility, χ is constant and can be taken out of the integrand in Eq. (45). Homogeneous scatterers are thus defined by the single material parameter χ. Resonances of Eq. (45), even in open systems, are true eigenvalues because the integral equation has unknowns E(x′) defined over the finite scatterer domain V, and thus the corresponding eigenfunctions are normalizable. Eigenfunctions E n and eigenvalues ξ n of the Green's function integral operator satisfy ∫ V G(x, x′, ω) E n (x′) dx′ = ξ n E n (x) (Eq. (46)) for all points x in V. Given the eigenvalue ξ n , if we choose a material χ = χ n = −1/ξ n then by comparison with Eq. (45) we see that a "standing-wave" E ≠ 0 is possible even for E inc = 0. Equation (46) is the integral-equation analogue of Eq. (44) (for a homogeneous scatterer), and yet we see that the integral operator on the left-hand side of Eq. (46) depends on the structure and the frequency but not on the susceptibility. Instead, the eigenvalue of the mode yields a resonant value χ n -a resonant material susceptibility, for a fixed frequency. Just as "leaky modes" in Eq. (44) are not actually physical solutions of Maxwell's equations, the resonances χ n are not physically realizable materials; we will see below that Im χ n < 0 for ω > 0, as shown in Fig. 7(b), corresponding to gain required to overcome modal radiation loss. For both frequency and material resonances, the separation between the system parameter (e.g. operational frequency) and the resonance defines the magnitude of the response: the smaller the separation, the larger the response. In the resonant-frequency framework it is difficult to provide a lower bound on the imaginary part of the resonant frequency ω n (thereby bounding the maximum Q). In the resonant-material approach, however, the system parameter-the susceptibility, instead of the frequency-has a nonzero imaginary part for a lossy system. By causality [156] (or passivity [52]), frequency resonances for a fixed structure and material lie in the lower half of the complex-ω plane.
Similarly, for a fixed frequency ω > 0, the material resonances ξ n must reside in the lower half of the complex-ξ plane (otherwise, one could construct a passive linear material that violates the condition on the resonant frequencies [157]). Thus, as depicted in Fig. 7, there is a minimum separation q = Im ξ (ω) between the material parameter ξ (ω) = −1/χ(ω) and the eigenvalue ξ n . There are further benefits to the resonant-material approach: the solutions to the integral equation are defined only over the scatterers, rather than all space, and quantities like the extinguished power and the local density of states can be written as volume integrals, ideally suited to a VIE framework.
To simplify further analysis, we rewrite the VIE of Eq. (45) as (I + G χ) e = e inc (Eq. (47)), where the fields are e and e inc (with lower-case e denoting vector fields restricted to the volume V, forming a Hilbert space, as opposed to the fields E defined everywhere in space), I is the identity operator, and G is the Green's function integral operator defined by G e = ∫ V G(x, x′, ω) E(x′) dx′. The eigenfunctions of G are solutions of Eq. (46) and are given in vector notation by G e n = ξ n e n , where n is the mode index, ξ n = −1/χ n , and Im ξ n ≤ 0. There are advantages to considering the integral operator G rather than the Maxwell operator M = (1/ε)∇ × ∇×. For the operator M, metals are difficult to treat: material dispersion renders the eigenvalue problem nonlinear in ω 2 , and material loss yields a non-Hermitian operator even for closed or periodic structures [158-160]. The Green's function operator avoids these difficulties because it assumes a fixed frequency and is independent of material, and therefore of material loss. An additional advantage of the VIE approach is the compact domain, which sidesteps the subtle normalization [159,161-164] required in the resonant-frequency approach, where the "leaky" fields diverge as they extend to infinity. We consider open systems-in which energy can enter and exit-such that typical operators, including the curl-curl operator M of Eq. (44) and the Green's function integral operator G of Eq. (47), are not Hermitian. By reciprocity G is complex-symmetric [11], such that if it is diagonalizable, its generic U ΞU −1 eigendecomposition can be written [165] as G = U Ξ U T (Eq. (49)), where Ξ is a diagonal operator with entries ξ n and U is the basis of eigenfunctions e n .
The assumption of diagonalizability (the existence of a "spectral" eigendecomposition of the operator) is commonplace in physics. Technically, even for Hermitian operators, rigorous eigendecomposition in infinite-dimensional spaces is subtle and subject to obscure counterexamples [136,137]. However, if one makes the reasonable conjecture that the system can be simulated on a computer, i.e. that one has a convergent finite-dimensional discretization, then one should be able to apply the eigendecomposition in this finite-dimensional system to arbitrary accuracy; this can be viewed as a justification of the commonplace assumption of diagonalizability. Even in a finite-dimensional problem, of course, diagonalizability of a non-Hermitian matrix is not guaranteed [165], but breakdown of diagonalizability only occurs at "exceptional points" that must be forced [138,139]. Exceptional points occur by chance with zero probability; near an exceptional point, the operator G is highly nonnormal [140] and ill-conditioned (i.e. cond(U U † ) ≫ 1), but it is diagonalizable, and we show in the text that the response of such structures cannot surpass our limits. By continuity our limits also apply at non-diagonalizable exceptional points, thereby justifying our assumption of diagonalizability. Our decomposition is similar in spirit to the singularity and eigenmode expansion methods, SEM and EEM, respectively, for surface integral equations in electromagnetic scattering [68,166-169].
The complex-symmetry of the operator leads to an atypical (unconjugated, indefinite) "inner product" under which the eigenfunctions are orthogonal: e i T e j = ∫ V E i · E j = δ ij . Because modes are not orthogonal under the typical inner product e † e = ∫ V E * · E, they are not power-orthogonal [162]: it is possible for energy in one mode to mix into another, leading to effects such as destructive interference in scattering "dark states" [141] (e.g. Fano resonances [142]) and the Petermann factor for noise enhancement in lasers [143,144]. Another possibility is that ∫ V E i · E i = 0, i.e. "self-orthogonality," which renders the eigenfunction basis incomplete [170] and corresponds to exceptional points [139,171]. As shown in the text, energy conservation prevents any of these phenomena from surpassing our limits.
A modal decomposition of the electric field e follows from the modal decomposition of G in Eq. (49). In order to isolate G, it is easier to work with the polarization currents, χe, which from Eq. (47) are given by χe = [G − ξ(ω)] −1 e inc = U [Ξ − ξ(ω)] −1 U T e inc (Eq. (50)), where ξ(ω) = −1/χ(ω) and the inverse of I + χG is guaranteed to exist (for ε ≠ 0, 1) because it is a Fredholm operator [172]. Because the extinguished power and the total LDOS are both imaginary parts of linear functionals of the induced fields, their modal decompositions are single sums over the resonances. Given the electric field from Eq. (50), the extinction of Eq. (5) and total LDOS of Eq. (15) are determined by overlap integrals between the incident field and the VIE-basis eigenfunctions (Eq. (51)). Because (Ξ − ξ(ω)) −1 is a diagonal operator, it can be written as a single sum over the modes, simplifying the extinction and LDOS (Eqs. (52a,52b)); here the ξ n are the VIE eigenvalues and the diagonal entries of Ξ, and the p n and ρ n are normalized "oscillator strengths" representing the per-mode extinguished power and per-mode total density of states, respectively. Adding up only the oscillator strengths yields structure-independent sum rules for each quantity (Eqs. (53a,53b)), where the sum over all modes exploits the eigenbasis orthogonality, U T U = I, and as before j indexes the polarization of the dipole emitter. For each quantity, the sum of the oscillator strengths p n or ρ n is given by the intensity of the electric field originally incident upon the volume occupied by the scatterer. Ideally, the sum rules for the extinction and the total LDOS would lead directly to limits, with the numerators in Eqs. (52a,52b) bounded above by the sum rules and the denominators bounded below by Im ξ(ω) = Im χ(ω)/|χ(ω)| 2 , which is nonzero due to material losses. Because our system is non-Hermitian, such an argument is not valid. If we had a Hermitian system with a conjugated orthogonality relationship ∫ V E i * · E j = 0, then we would have used U † instead of U T in Eq. (49) and the resulting amplitudes p n and ρ n would have been real and positive. Due to radiation losses (though not metal losses), we have complex p n and ρ n with possibly negative real parts, and hence it is possible to have e.g. |p n | ≫ | ∑ n p n |. Such a response is a general feature of nonnormal dynamics, where there can be significant amplification beyond what one would expect from the resonances, as the pseudospectral level curves [140] may look very different from typical circles centered at the eigenvalues.
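The unconjugated orthogonality and the sum-rule structure described above are easy to see in a finite-dimensional toy model. The snippet below builds a random complex-symmetric matrix as a stand-in for a discretized G, normalizes its eigenvectors under the unconjugated product, and checks that oscillator strengths formed from unconjugated overlaps sum to the incident-field intensity even though individual terms can have negative real parts. It is purely illustrative linear algebra, not a discretization of Eq. (45).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
G = (A + A.T) / 2                              # complex-symmetric stand-in for the VIE operator

xi, U = np.linalg.eig(G)                       # G = U diag(xi) U^{-1}
U = U / np.sqrt(np.sum(U * U, axis=0))         # normalize so that u_n^T u_n = 1 (unconjugated)

print("||U^T U - I|| =", np.linalg.norm(U.T @ U - np.eye(n)))   # ~0: unconjugated orthogonality

e_inc = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Toy "oscillator strengths" from unconjugated overlaps; their sum gives |e_inc|^2
p = (U.T @ e_inc) * (U.T @ np.conj(e_inc))
print("sum of oscillator strengths:", np.sum(p))
print("incident-field intensity   :", np.vdot(e_inc, e_inc))
print("Re(p_n):", np.round(p.real, 3))          # individual terms may be negative
```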
Energy conservation prevents such responses from surpassing the sum rule limits that are obtained when all of the oscillator strength in Eqs. (53a,53b) is concentrated at a single resonance (with the caveat that E inc,s j → E * inc,s j , although for the dominant 1/r 3 quasistatic term this makes no difference). The optimal resonance is given by ξ n = Re ξ(ω) (Eq. (54)), which is on the real line, as close to the material parameter ξ(ω) as possible. This choice of eigenvalue leads to the extinction limit in Eq. (32b). Imposing energy conservation on the absorption, scattering, and radiative and nonradiative LDOS integrals in the VIE approach yields the limits of Eqs. (32a,32b,34a,34b), but we will not prove that here. An interesting possibility that arises from the VIE approach is the potential existence of further sum rules. In addition to considering the sum of oscillator strengths in Eq. (53a), one can consider the sum of eigenvalues, weighted by the oscillator strengths (Eq. (55)), where we used the resolution of the identity ∑ n E n,i (x) E n,k (x′) = δ ik δ(x − x′) to simplify the third line. In the surface-integral representation of quasistatic electromagnetism it can be shown that the oscillator strengths and relevant eigenvalues for extinction averaged over all angles are constrained to satisfy ∑ n ξ n p n / ∑ n p n = 1/3, reducing the possibility for all-angle response relative to the single-angle response [45,47]. Equation (55) and its higher-order counterparts (e.g. ∑ n ξ n 2 p n ∼ e inc † G G e inc ) may yield stricter sum rules under various incident fields, reducing the possible response.
B. Bounds on the O(kL) term in the LDOS limits
The bounds on the radiative and nonradiative LDOS, Eqs. (34a,34b), take into account the 1/r 3 , 1/r 2 , and 1/r terms in the free-space dyadic Green's function. Integrating the 1/r term over a half-space (or a spherical shell, or any other structure separated some distance d from the source) yields the O(kL) term in Eqs. (34a,34b), which diverges as the size L → ∞. As discussed in the text, this divergence is unphysical. It results from deriving the optimal current as proportional to the incident field, P ∼ E inc , which is appropriate and feasible for the evanescent waves, but which for the 1/r term yields a physically impossible fixed energy density over infinite space within a lossy medium.
Despite the divergence of this term, for finite object sizes its contribution to the limits of Eqs. (34a,34b) is actually very small. One does not even have to consider a finite object but a finite interaction distance: L represents the largest distance over which polarization currents in the metal generate nonzero scattered fields at the dipole source. Even wavelength-scale lengths are upper bounds to reasonable interaction distances in a lossy medium. Table 1 compares the near-field limit in Eq. (35a) to the full limit with all terms in Eq. (34a) as well as the full limit without the O(kL) term, ρ nr /ρ 0 ≤ (|χ| 2 / Im χ)[1/(32(kd) 3 ) + 1/(16kd) + 1].
We take |χ| 2 / Im χ = 1 for simplicity. One can see from Table 1 that the near-field limit is a very good approximation to the overall limit for realistic separations, and that the far-field term contributes at most 0.03% for the cases considered.
C. Suppressing absorption
As discussed in the text, the optimal scattering limits are reached when absorption and scattering are exactly equal. For some applications (e.g. solar cells enhanced by plasmonic particle scatterers [4]), however, parasitic absorption is detrimental and needs to be avoided, even if the per-volume scattering is reduced. Here we present alternate limits to Eq. (32a) to account for absorption suppression. We define a fraction f that is the ratio of absorption to extinction, f = P abs /(P abs + P scat ).
Suppose we define our figure of merit as the maximum scattering cross-section per unit volume subject to the condition that f is smaller than some maximum ratio f max (Eq. (57)). As in Sec. 3, we use standard Lagrangian optimization techniques and the fact that the constraint on f is active (f = f max ) by the KKT conditions. By the same steps as in Sec. 3, it is straightforward to show that the per-volume scattering cross-section is limited by Eq. (58a), which is maximized at f max = 0.5, corresponding to the unconstrained optimum in Eq. (29a). For significantly reduced absorption (say < 2%), there is a significant penalty (> 10×) to the maximum per-volume scattering.
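A rough feel for that penalty can be had by assuming the constrained bound scales as f_max(1 − f_max), normalized to the unconstrained optimum at f_max = 0.5. That functional form is an assumption inferred from the two stated properties of Eq. (58a) (maximum at 0.5, >10× penalty below 2%), not the equation itself; the sketch below only evaluates that assumed scaling.

```python
# Assumed scaling g(f) = f*(1 - f), normalized so that g(0.5) recovers the unconstrained optimum.
def scattering_penalty(f_max):
    return 0.25 / (f_max * (1.0 - f_max))

for f in (0.5, 0.2, 0.1, 0.05, 0.02):
    print(f"f_max = {f:4.2f}   penalty relative to unconstrained optimum ~ {scattering_penalty(f):5.1f}x")
```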
D. Quasistatic cross-section of a coated sphere
Also shown in Fig. 3 are the cross-sections of coated spheres that have been optimized by the same procedure as the ellipsoids. Coated spheres, with dielectric cores and metallic shells, are a common structure for tunable resonances across visible and infrared frequencies [31,110,173-175], but one can see that their response falls short of the limits. To understand why the performance falls short, we consider the quasistatic absorption cross-section, which is known analytically [32] and can be compared to Eq. (39). For simplicity we assume the particle core has the same permittivity as the shell, which has only a small effect on the response but enables us to write the typical [32] coated-sphere cross-section σ cs in a form in which f V is the metal volume fraction, L 0 = 1/2 − (1/6)√(1 + 8 f V ) and L 1 = 1/2 + (1/6)√(1 + 8 f V ) are structural depolarization factors, and as in Sec. 3 we define ξ(ω) = −1/χ(ω). One can see that there are two quasistatic resonances that arise from coupling the plasmons at the interior and exterior interfaces. At visible and infrared frequencies, the material parameters of typical metals satisfy |Re χ| ≫ 1, such that a single resonance dominates the response and the volume fraction f V of the metal must be small (as in Fig. 3). The cross-section is then given by
| 18,620 | 2015-03-12T00:00:00.000 | [ "Physics" ] |
Isolation and Screening of Dye Degrading Micro-organisms from the Effluents of Dye and Textile Industries at Surat
Textile dyes have been used since the Bronze Age. They also constitute a prototype 21st-century speciality chemicals market. Effluent and soil samples were collected from a textile industry at Surat. The pH, temperature, BOD, COD, nitrate and nitrite values were compared with the values given by the Bureau of Indian Standards. The culture medium was designed and standardized in the laboratory for the isolation and degradation of the dyes. Pure cultures were screened on the basis of colony morphology. Three different types of unique cultures were selected and named as isolates S1, S2 & S3. Out of 12 dyes used, isolate S1 showed degradation on the maximum number of dyes (five) in comparison to the other isolates (isolates S2 and S3). Thus, isolate S1 was used for the further studies. The isolate S1 was used for the study of the amount of dye to be degraded. For this study the Red BB dye was chosen, because isolate S1 showed maximum degradation of Red BB dye within less incubation time in comparison with other dyes. Almost all isolates showed positive results in some of the biochemical tests. Thus most of the isolates can have the capacity to produce the enzyme tryptophanase, indole production, citrate permease (citrate as carbon and energy source), catalase enzyme, degradation of glucose oxidatively as well as fermentatively, urease, gelatinase, production of acid and gas (ability to ferment lactose and/or sucrose) and fermentation of the sugars lactose, sucrose, mannitol and glucose. Total cellular fatty acid profiling has been considered to be one of the important and ideal tools for identification of microorganisms. On the basis of fatty acid profiling of isolate S1, the similarity index indicated Bacillus cereus GC subgroup A (similarity index 0.825), B. thuringiensis subsp. israelensis (similarity index 0.552) and B. thuringiensis subsp. kurstakii (similarity index 0.511). The isolate S1 was assumed to be B. cereus GC subgroup A. Thus this isolate can be used to degrade harmful azo dyes utilized by the dye, textile, paper, ink industries etc.
Introduction
The colored effluents discharged from textile processing and dye-manufacturing industries contain a significant amount of unreacted dyes. During dyeing processes, up to 15% of the dyestuff does not bind to the fibers and is therefore released into the environment [1]. The world annual production of dyestuffs amounts to more than 7 × 10⁵ tonnes [2]. Azo dyes, being the largest group of synthetic dyes, constitute up to 70% of all the known commercial dyes produced [3]. Textile processing wastewaters with dye contents in the range of 10-200 mg l⁻¹ are highly colored.
The chemical structures of coloured dyes are characterized by highly substituted aromatic rings joined by one or more azo groups (-N=N-). These substituted ring structures make these molecules recalcitrant and, thus, they are not degraded by conventional wastewater treatment processes [4]. These dyes are therefore released into the environment and lead to acute toxic effects on the flora and fauna of the ecosystem. In addition to being aesthetically displeasing, the release of colored effluents in water bodies reduces photosynthesis as it impedes the penetration of light in water [5,6]. Moreover, many azo dyes and their metabolites are mutagenic and carcinogenic [7]. A review of the mutagenicity of effluents showed that textile and other dye-related industries produce consistently more potent wastewaters when compared to other industrial discharges [8].
Recent studies by Rajaguru et al. [9] and Umbuzeiro et al. [10] have shown that azo dyes contribute to the mutagenic activity of ground and surface waters polluted by textile effluents. Thus, the color removal of textile wastewater is a major environmental concern. Therefore, industrial effluents, like textile wastewater containing dyes, must be treated before their discharge into the environment. The dye wastewater from the textile industry is one of the most difficult wastewaters to treat [11,12]. Because of their commercial importance, the impact and toxicity of dyes that are released in the environment have been extensively studied [13]. Colour can be removed from wastewater by chemical and physical methods including absorption, coagulation-flocculation, oxidation and electrochemical methods. These methods are quite expensive, have operational problems [14], and generate huge quantities of sludge [15]. Among low-cost, viable alternatives available for effluent treatment and decolourization, biological systems are recognized for their capacity to reduce biochemical oxygen demand (BOD) and chemical oxygen demand (COD) by conventional aerobic biodegradation. There is large variability in the quality of industrial effluents, which varies with industrial processes. The effluents discharged by different industries contain a high range of physico-chemical parameters like temperature, pH, conductivity, hardness, alkalinity, COD, TSS, nitrates, nitrites, cations (Na⁺, K⁺, Ca²⁺ and Mg²⁺) and anions (Cl⁻, CO₃²⁻). These effluents from different industries also contain heavy metals and trace metals including chromium, cadmium, copper, lead, nickel, zinc, cobalt, magnesium, iron and arsenic [16].
The treatment systems based on using microorganisms capable of decolorizing/degrading these recalcitrant compounds are environment-friendly and can lead to mineralization of the target compounds. The effectiveness of these treatment systems depends upon the survival and adaptability of microorganisms during the treatment processes [2,17]. Many microorganisms belonging to different taxonomic groups of bacteria [2], fungi [18], actinomycetes [19] and algae [20] have been reported for their ability to decolorize azo dyes.
The use of a pure-culture system ensures the reproducibility of data and interpretation of the detailed mechanism of dye degradation. However, a higher degree of biodegradation and mineralization can be expected when metabolic activities of mixed cultures within a microbial community complement each other. The advantages of mixed cultures are apparent as some microbial consortia can collectively carry out biodegradation that cannot be achieved by pure cultures [21,22]. Azo dyes are also degraded efficiently under aerobic conditions by wood-rotting fungi (e.g. Phanerochaete chrysosporium, Trametes spp., etc.), which are in nature responsible for the degradation of lignin [23]. While fungal treatment of dye-containing effluents is usually time-consuming and difficult to control [24], the potential of enzymes for this purpose has clearly been demonstrated. Thus an effort has been made to isolate bacteria capable of degrading the azo dyes present in the effluents of textile industries located at Surat.
Materials and Methods
For the present study, effluent from the dye and textile industries was used. The soil samples were also collected from the same site for the study of the microbial flora in the adjoining area. Nutrient medium (introduced by Robert Koch) was used with slight modification for the enrichment of the cultures from the effluent and the soil samples.
Collection of Effluent and Soil Sample
The effluent sample was collected from GIDC, Pandesara, Surat, India. The pH and the temperature of the sample (10.3 and 24°C, respectively) were measured at the time of collection. The chemical oxygen demand (COD) was estimated by titration of the effluent samples and found to be 7507.20 mg l⁻¹. The biological oxygen demand (BOD) value could not be determined for the effluent samples. The values for nitrates and nitrites were found to be 1893 mg l⁻¹ and 70 mg l⁻¹, respectively. The soil samples were also collected from the nearby bank, about 50 to 100 cm from the effluent channel, by digging the soil up to 5 cm.
Isolation of Dye Degrading Microorganism
The effluent and a combination of distilled water & effluent (v/v) were inoculated into N-broth medium. To each flask containing 100 ml of medium, 10 g of soil sample was added and the flasks were incubated at 28°C for 72 hours. Serial dilutions from 10⁰ to 10⁻³ were made from the upper phase of the culture containing microorganisms for each of the media separately. From each dilution, 100 μl was spread over the solid plate medium-1 (Peptone 5 g; Yeast extract 2.46 g; NaCl 5 g; Agar 20 g; pH 7.00) and medium-2 (Glucose 30 g; KH₂PO₄ 6 g; Na₂CO₃ 10 g; MgSO₄·7H₂O 0.2 g; Yeast extract 6 g; Agar 20 g; pH 7.00), each containing the appropriate dyes (all different dyes separately), using a sterile glass spreader, and the plates were incubated at 37°C for up to 100 hours. After incubation, the observations for the zone of clearance/decolorization on the respective plates were made and recorded.
Optimization of Concentration of Dye Degradation
The concentration of dye degraded by the microorganism was optimized for one dye and by isolate no. S1. The Red BB dye was taken in a concentration of 0.05 %, 0.
Biochemical Tests
Biochemical tests were performed to check for the presence of a particular substance or enzyme produced by the bacterial isolate.
Tests for Utilization of Carbohydrates and Organic Acids
Tests for utilization of carbohydrates and organic acids were carried out by the carbohydrate fermentation test, oxidation-fermentation test, methyl red test, Voges-Proskauer test and citrate utilization test as per the method described by Patel [25].
Tests for Nitrogenous Compounds
Tests for nitrogenous compounds were carried out by performing the indole production test, H₂S production test, deamination test, urea hydrolysis test, nitrate reduction test and ammonia production test as per the method described by Patel [25].
Triple Sugar Iron Test
A combined test using composite media was performed using the triple sugar iron agar test as per the method described by Patel [25].
Sample Processing
The pure culture was inoculated onto a TSBA solid plate and incubated for 48 hours at 28°C. The culture so obtained was harvested for cellular fatty acid profiling in the following steps using gas chromatography.
Harvesting
A loop of cultured microorganism (about 40 mg of bacterial cells, cultured on a TSBA plate) was taken in a 13 x 100 mm culture tube.
Saponification
To the above harvested culture tube, 1.0 ml of Reagent 1 was added. The tube was tightly sealed with a Teflon-lined cap, vortexed briefly and heated in a boiling water bath for 30 minutes. The tube was vigorously vortexed for 5-10 seconds at intervals of 5 minutes over the entire incubation period.
Methylation
After incubation the tube was cooled to room temperature, uncapped, and 2 ml of Reagent 2 was added. The tube was capped again and briefly vortexed. After vortexing, the tube was heated for 10 ± 1 minutes at 80 ± 1°C.
Extraction
After methylation, 1.25 ml of Reagent 3 was added to the cooled tube, followed by recapping and gentle tumbling on a clinical rotator for about 10 minutes. The tube was uncapped and the aqueous (lower) phase was pipetted out and discarded.
2.6.6. Base Wash
About 3 ml of Reagent 4 was added to the organic phase remaining in the tube. The tube was recapped and tumbled for 5 minutes. Following uncapping, about 2/3 of the organic phase was pipetted into a GC vial, which was capped and ready for GC analysis.
The RTSBA6 6.00 library method was used to find the similarity index of the isolate on the basis of total cellular fatty acid profiling. The table, graph and result so obtained were recorded.
Results & Discussion
The effluent sample collected had a pH near the permissible range. The average and permissible pH for the effluents of azo dye industries is 9 [26]. Hence, the effluents will not have an adverse impact on the aquatic ecosystem after being discharged. The permissible limit of BOD is 3000 mg l⁻¹ and of COD is 15000 mg l⁻¹ as set by the Bureau of Indian Standards [27]. Wastes containing high BOD and COD are responsible for a heavy depletion of oxygen levels in the particular sector of the stream or soil [28]. The values for nitrates (1893 mg l⁻¹) and nitrites (70 mg l⁻¹) were found to be higher than the permissible limits. The microorganisms present in sewage reduce nitrate into nitrite and then to ammonia, sulphates into sulphides and ferric iron into ferrous iron at very low concentrations of oxygen. Therefore, they create a great nuisance for the environment [29]. The data for COD (7507.20 mg l⁻¹) revealed that the effluents in their present condition are fit for discharge to land/water bodies, as they would not be hazardous for human and aquatic life due to the lower concentration of toxicants.
Isolation of Dye Degrading Microorganism
The result of the dye degradation by the isolates is shown in figure 1(a-f). An analysis of the data reveals that, out of the four solid media used, three did not show degradation of dye by the growing microorganisms. All the media used differ in their capacity to support the growth of dye degrading microorganisms. Degradation was seen with the Red BB, purple H3R, BHE 81, Dir Black and blue 171 dyes.
The rest of the 7 dyes used were not degraded by any of the isolates obtained from the enrichment cultures. Out of the above five, Red BB was found to be degraded the most by the isolates. The change in color is due to the dye utilized by the isolates. The isolates were named on the basis of colony morphology from dye degradation as S1, S2 and S3. Out of the 3 isolates, one isolate (S1) showed degradation on all five dyes used. Thus, S1 was subjected to further study.
The growth of the culture on medium-2 could be attributed to the lower concentration of glucose in the previous medium (medium-1) used. With the addition and/or substitution of the above factors, the bacteria could grow on the medium plate, which was conducive for the organisms. Bacterial decolorization of azo dyes under methanogenic conditions is non-specific [30]. The requirement for yeast extract or peptone makes the process economically unviable for industrial-scale application unless alternative cheaper sources are identified [2,21,31]. Glucose is known to enhance the decolorization activity of biological systems [14,32,33]. However, there are reports that glucose inhibits the decolorizing activity [34]. The variability may be due to the different microbial characteristics. Chen et al. [2] found that a 10 g l⁻¹ concentration of glucose led to the decolorization of RED RBN by A. hydrophila and that glucose concentrations higher than 15 g l⁻¹ appreciably inhibited the azo reduction of the azo dye by the same bacteria.
Optimization of Concentration of Dye Degradation
Isolate S1 showed degradation of Red BB dye at varying concentrations (Table 1). Out of ten different concentrations used, the 0.05% concentration was degraded most efficiently, within 24 hours, while the 0.10%-0.25% concentrations were degraded in 48 hours. Degradation was found after 72 hours at the 0.30% and 0.35% concentrations. Moderate degradation was seen at the 0.40% and 0.45% dye concentrations, but degradation was very low or meagre even after 96 hours of incubation at 0.50%.
All microorganisms have the ability to grow on different media. Hence, different types of media have been introduced for the growth of different types of microorganisms. An analysis of the result obtained for the degradation of dye shows that only five different dyes were degraded. This could have been due to the fact that the microorganisms present in the isolates might have the efficiency to degrade only five dyes but not the rest (i.e. 7 dyes) used. Studies on 4-ABS degrading strains have also shown that the different dye degrading microorganisms are highly specific, as they can utilize only 4-ABS and not other benzenesulfonates [35]. The 2-ABS degrading Alcaligenes sp. strain O-1 can utilize two other aromatic sulfonates, benzene and toluene sulfonate, for growth. However, cell extracts of this strain can desulfonate at least six substrates [36]. This suggests the presence of highly specific transport systems for the uptake of aromatic sulfonates in these cultures. Thus the isolated bacteria may still have restricted substrate specificity.
In order to test the activity for the degradation of dye, all the isolates were tested for Red BB dye degradation. A pattern of the lower the concentration, the higher the degradation efficiency, and vice versa, was obtained. The biodegradation capability of the dyes varies from organism to organism [37]; that study found that out of 15 isolates, 4 had the maximum decolorizing capability after 72 hours of incubation. A similar finding has also been reported by Chen et al. [2].
Biochemical Test
The biochemical tests for all three isolates are depicted in Table 2. The results of the biochemical tests (Table 3) for all three isolates reveal that isolate S1 can produce the enzyme tryptophanase (indole production), citrate permease (citrate as carbon and energy source) and catalase, can degrade glucose oxidatively as well as fermentatively, and can produce acid and gas (fermenting lactose and/or sucrose, and fermenting the sugars glucose, sucrose and mannitol). Isolate S2 showed the presence of the enzyme tryptophanase (indole production) and urease. Isolate S3 showed the presence of the enzyme tryptophanase (indole production) and gelatinase, degradation of glucose oxidatively as well as fermentatively, production of acid and gas (ability to ferment lactose and/or sucrose) and fermentation of the sugars lactose, sucrose, mannitol and glucose. Altogether, isolate S1 showed positive tests for all the sugars used. Biochemical tests have been done by several workers [38] on bacterial communities.
Conclusions
The textile, dyeing and finishing industry uses a wide variety of dyestuffs due to rapid changes in customer demands. Thus, by using the above isolates, sustainable biodegradation of the harmful azo dyes used by the dye, textile, paper and ink industries may be possible. These methods are not only eco-friendly but also commercially viable, even for small-scale industries. A thorough large-scale investigation, taking into consideration parameters such as optimization of dye concentration for each isolate and for the dye to be degraded, and the effect of physicochemical parameters on degradation, is necessary to provide unequivocal evidence for the usefulness of these isolates in sustaining dye degradation capability. | 4,344.2 | 2012-12-01T00:00:00.000 | [
"Environmental Science",
"Chemistry"
] |
Antibody levels and protection after Hepatitis B vaccine in adult vaccinated healthcare workers in northern Uganda
Hepatitis B vaccine has contributed to the reduction in hepatitis B virus infections and chronic disease globally. Screening to establish the extent of vaccine-induced immune response and provision of booster doses are limited in most low- and middle-income countries (LMICs). Our study investigated the extent of protective immune response and breakthrough hepatitis B virus infections among adult vaccinated healthcare workers in selected health facilities in northern Uganda. A cross-sectional study was conducted among 300 randomly selected adult hepatitis B vaccinated healthcare workers in Lira and Gulu regional referral hospitals in northern Uganda. Blood samples were collected and qualitative analysis of Hepatitis B surface antigen (HBsAg), Hepatitis B surface antigen antibody (HBsAb), Hepatitis B envelope antigen (HBeAg), Hepatitis B envelope antibody (HBeAb) and Hepatitis B core antibody (HBcAb) conducted using the ELISA method. Quantitative assessment of anti-hepatitis B antibody (anti-HBs) levels was done using a COBAS immunoassay analyzer. Multiple logistic regression was done to establish factors associated with protective anti-HBs levels (≥10 mIU/mL) among adult vaccinated healthcare workers at the 95% level of significance. A high proportion, 81.3% (244/300), of the study participants completed all three hepatitis B vaccine dose schedules. Two (0.7%, 2/300) of the study participants had active hepatitis B virus infection. Of the 300 study participants, 2.3% (7/300) had positive HBsAg; 88.7% (266/300) had detectable HBsAb; 2.3% (7/300) had positive HBeAg; 4% (12/300) had positive HBeAb and 17.7% (53/300) had positive HBcAb. The majority, 83% (249/300), had protective hepatitis B antibody levels (≥10 mIU/mL). Hepatitis B vaccine provides protective immunity against hepatitis B virus infection regardless of whether one gets a booster dose or not. Protective immune response persisted for over ten years following hepatitis B vaccination among the healthcare workers.
Introduction
Elimination of hepatitis B virus (HBV) transmission is an achievable public health goal, particularly in the light of proven effectiveness and safety of hepatitis B vaccine [1]. Studies conducted in areas with high HBV endemicity have demonstrated declines in the prevalence of chronic HBV among children to < 2% after routine infant vaccination [1]. A substantial decline in HBV-related disease burden and prevalence of chronic HBV infection has been observed among children following introduction of universal infant hepatitis B vaccination [2]. However, the vaccine may not provide protection from exposure to hepatitis B virus later on in life due to waning of immune memory over time [3].
Persistence of hepatitis B antibodies (anti-HBs) and the ability of the immune system to mount a response to HBV exposure later in life are necessary for long-term protection against hepatitis B virus infection [4]. Some studies have confirmed persistence of antibodies and immune memory following hepatitis B vaccination [5], while others confirm waning of antibody concentrations 13-15 years after primary vaccination among those vaccinated at birth [6]. The current vaccination of adult individuals against hepatitis B virus is premised on the assumption that sufficient anti-HBs concentrations and immune memory are formed against HBV. However, unless routine post-vaccination serological testing is performed, it remains unclear what proportion of individuals who complete all 3 dose schedules of hepatitis B vaccine actually become protected, as was found in an earlier study where 22.9% of vaccinated children had undetectable antibody levels [7].
Annually, 5.9% of healthcare workers are exposed to HBV, corresponding to 66,000 preventable HBV infections globally [8]. Healthcare workers thus represent an important group in the population that needs to be protected against HBV infection. While hepatitis B is a disease of public health concern in Uganda [9], healthcare workers (HCWs) in the country are at higher risk of hepatitis B virus (HBV) transmission compared to the general population [10]. They have a higher risk of contracting the disease from exposure (eye, oral mucosa and skin) to potentially infectious patient blood and percutaneously from contaminated sharp objects such as needles [11].
Northern Uganda has a disproportionately high prevalence of hepatitis B virus infection compared to other parts of the country [12]. As a preventive measure, all healthcare workers in Uganda are required to take the hepatitis B vaccine as adults. A recent study by Ssekamatte et al. [13] reported a rather low hepatitis B vaccine dose completion rate of only 57.8% among healthcare workers in central Uganda. In spite of the low hepatitis B vaccine dose completion rate, no evaluation of hepatitis B vaccination has been done to assess immune response to the vaccine since the introduction of hepatitis B vaccination in Uganda in 2002. This could be due to the limited funding of the health sector, a common occurrence in most LMICs [14]. There is a paucity of information on the extent of immune protection against hepatitis B virus infection among adult vaccinated individuals in Uganda despite the high risk of exposure. We therefore set out to determine the proportion of healthcare workers with protective levels of anti-HBs and also evaluate the prevalence of hepatitis B virus infection among those without protective anti-HBs post hepatitis B vaccination.
Ethical considerations
Ethical review and approval of the protocol was done by the Makerere University School of Biomedical Science Research and Ethic Review Committee (#SBS-REC 798). Additionally, administrative clearance was obtained from the hospitals prior to study initiation. A written informed consent was obtained from potential study participants prior to enrollment into the study.
Study design, setting and population
This was a cross-sectional study done in Lira and Gulu regional referral hospitals in northern Uganda from October to December 2020. In Uganda, regional referral hospitals offer specialist clinical services such as psychiatry, Ear, Nose and Throat (ENT), ophthalmology, higher-level surgical and medical services, and ancillary services (laboratory, medical imaging and pathology). They also provide general healthcare services including preventive, promotive, curative, maternity and in-patient health services, surgery, blood transfusion, laboratory and medical imaging services. The regional referral hospitals are also involved in teaching and research. Hepatitis B screening and management services are also offered in regional referral hospitals in Uganda. Gulu and Lira regional referral hospitals have 320 and 420 healthcare workers, respectively. The study was conducted among adult vaccinated healthcare workers in Lira and Gulu regional referral hospitals in northern Uganda. We enrolled healthcare workers who had previously received the primary hepatitis B vaccine series, irrespective of when the vaccination was received.
Sample size determination
The sample size was calculated using the Kish Leslie formula [15], applying a prevalence of non-immune protection of 22.9% [7], a 95% level of significance and 10% non-response, giving a sample size of 300 study participants.
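For reference, the calculation above can be reproduced with the short sketch below. It assumes an absolute margin of error of 5% (not stated explicitly in the text) and assumes the non-response adjustment divides the base size by (1 − 0.10); under those assumptions the result is close to the 300 participants reported.

```python
# Minimal sketch of the Kish Leslie sample size calculation.
# The 5% margin of error (d) and the form of the non-response adjustment
# are assumptions, since the text does not state them explicitly.
from math import ceil

def kish_leslie_n(p, d=0.05, z=1.96, non_response=0.10):
    """n = z^2 * p * (1 - p) / d^2, then inflated for expected non-response."""
    n = (z ** 2) * p * (1 - p) / (d ** 2)   # base sample size
    return ceil(n / (1 - non_response))      # adjust for 10% non-response

print(kish_leslie_n(p=0.229))  # ~302, close to the 300 participants reported
```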
Data collection
Interview data collection: The interview was conducted by two research assistants, a laboratory technologist and a nurse. The two research assistants were trained on the survey tool prior to conducting the interviews. Using simple random sampling, healthcare workers in Lira and Gulu regional referral hospitals were approached for inclusion in the study. A sampling frame of healthcare workers in each hospital was obtained from the hospital administrator. The name of each healthcare worker was written on a separate small piece of paper, which was folded and placed in a basket. After shaking the basket, one piece of paper was picked at a time without replacement until the required sample size was obtained. A healthcare worker whose name was picked was then approached for recruitment into the study. A written informed consent was obtained prior to enrollment into the study. All healthcare workers who reported having taken a hepatitis B vaccine and consented to the study were recruited. Healthcare workers who were under hepatitis B treatment were excluded. Interview data were collected using interviewer-administered questionnaires, which had been pretested on 10 healthcare workers at Mulago national referral hospital. The study tool collected data on (i) socio-demographic characteristics, (ii) risk of exposure to HBV, (iii) HBV testing, (iv) HBV vaccine awareness, and (v) HBV vaccination (S1 Appendix).
Laboratory data collection: For each consenting healthcare worker, 4 mL of venous blood was collected using ethylene diamine tetra acetic acid (EDTA) vacutainer tubes. The blood samples were immediately centrifuged (3000 rpm) for five minutes and plasma separated from blood cells into cryovials. The plasma was then screened using the HBV Combo Rapid test (Vaxpert Inc, Suite 355 Two South Biscayne Blvd, Miami, FL, USA). This is a rapid test for qualitative detection of: Hepatitis B surface antigen (HBsAg), a protein on the surface of hepatitis B virus whose presence in serum is an indicator of acute or chronic hepatitis B virus infection; Hepatitis B surface antibody (HBsAb or Anti-HBs), a protein produced by the body's immune system in response to the presence of Hepatitis B surface antigen; Hepatitis B envelope antigen (HBeAg), a viral protein made by the hepatitis B virus that is released from infected liver cells into the blood and is an indicator of active HBV replication; Hepatitis B envelope antibody (HBeAb), a protein produced by the body's immune system in response to HBeAg and a marker of resolution of illness; and Hepatitis B core antibody (HBcAb or Anti-HBc), a protein produced by the body's immune system in response to hepatitis B virus and an indicator of previous hepatitis B virus infection. The test was done following the manufacturer's guidelines. Briefly, the test cassette was removed from the sealed foil pouch and placed on a clean, levelled work surface in the laboratory. Holding the dropper vertically, three (3) full drops (approximately 75 μl) of plasma were transferred to each sample well and the timer started. The results were read after 15 minutes. The appearance of a colored line in the control region (C) confirmed the viability of the test. For the HBsAg, HBsAb and HBeAg tests, a positive result was confirmed by the presence of two distinct colored lines on the test cassette, one in the test region and the other in the control region, while for the HBeAb and HBcAb tests, a positive result was confirmed by the presence of one colored line in the control region (C) and no colored line in the test region (T). Sample tests with no colored line in the control region (C) were repeated. In addition, known positive and negative control samples were run alongside the test samples for quality control. The plasma samples with detectable HBsAb were then transferred to the Uganda Blood Bank Nakasero laboratory under 20°C for analysis of HBsAb concentration. The COBAS Elecsys 2010 immunoassay analyzer was used in the analysis of HBsAb concentration. The analyzer was calibrated using control and sample plasma prior to the analysis. 50 μl of plasma for each sample was processed in duplicate following the manufacturer's guidelines; the reaction mixture was then aspirated into the measuring cell. The HBsAb concentration was measured by comparing the electrochemiluminescence signal obtained to that from the calibration. Hepatitis B antibody levels ≥10 mIU/mL were considered protective [16].
Data management and analysis
At the end of each data collection day all the questionnaires were checked for completeness. Double data entry was done by two data entrants (OC and KR) into Epi-Data ver 3.1. Data was transferred to STATA ver 23 and cleaned prior to analysis. Categorical variables were analyzed using a modified Poisson regression with robust standard errors [17,18]. Bivariate analysis was performed for each of the independent variables to determine whether they were independently associated with immune response to hepatitis B vaccine using prevalence ratios (PR) and p-values at 95% level of significance. All variables were entered and carried to the multivariate logistic regression using a backward elimination method. Confounding was assessed by comparing crude and adjusted PR, with a difference between crude and adjusted PR of greater than 10% considered as confounding.
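As a rough illustration of the modified Poisson approach with robust ("sandwich") standard errors, the snippet below fits such a model in Python's statsmodels and exponentiates the coefficients into prevalence ratios. The data frame and column names are hypothetical placeholders, not the study's actual dataset or variable names.

```python
# Minimal sketch of a modified Poisson regression with robust standard errors
# for a binary outcome (protective anti-HBs >= 10 mIU/mL).
# Column names ('protective', 'booster_dose', 'years_since_vaccine') are
# hypothetical placeholders, not the study's variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "protective": rng.integers(0, 2, 300),
    "booster_dose": rng.integers(0, 2, 300),
    "years_since_vaccine": rng.integers(1, 20, 300),
})

model = smf.glm(
    "protective ~ booster_dose + years_since_vaccine",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC0")           # robust "sandwich" standard errors

print(np.exp(model.params))      # exponentiated coefficients = prevalence ratios
```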
Factors associated with hepatitis B virus immune response among healthcare workers in Gulu and Lira regional referral hospitals
In bivariate analyses, the factors significantly associated with protective anti-HBs levels included the year of the last hepatitis B vaccine dose (p<0.001), the hepatitis B vaccine time schedule (0-1-6 months) (p<0.001), and a booster hepatitis B vaccine dose (p<0.001) (Table 2).
The majority of the participants, 50% (133/266), who were 15-25 years old at the time of vaccination had a sufficient immune response (≥10 mIU/mL). After adjusting for other covariates in the multivariable analysis, there was no predictor of participants having protective anti-HBs titres (>10 mIU/mL). There was no significant difference in the odds of having a protective anti-HBs titre (>10 mIU/mL) between participants who received the vaccine in 2000-2009 and those who received it in 2010-2020 (aPR = 0.94, p<0.001). The odds of having a protective anti-HBs titre (>10 mIU/mL) also did not significantly differ by vaccine schedule (aPR = 1.07, p<0.001) or receipt of a booster dose (aPR = 1.07, p<0.001) (Table 2).
Discussion
In this study we found a fairly high hepatitis B vaccine three-dose completion rate (81.3%) among healthcare workers in northern Uganda. This was an improvement compared to a previous report of 57.8% among healthcare workers in central Uganda [13] and reports from other LMICs with completion rates of 40-90% (16)(17)(18)(19). The high hepatitis B vaccine completion rate found in this study could be due to the higher prevalence of hepatitis B virus infection in northern Uganda compared to the rest of the country [12]. Whereas there is increased public awareness through promotion of the hepatitis B vaccine and the requirement by the Ministry of Health for all healthcare workers, including all health professional students, to receive the hepatitis B vaccine prior to engagement in clinical work [9], our findings indicate that the vaccination rate still falls short of the 100% completion among healthcare workers recommended by the WHO [19]. The likelihood of spread of hepatitis B virus infection between healthcare workers and their patients becomes more apparent especially given the unknown hepatitis B vaccine coverage in the general population in Uganda. Of concern was the increased risk of transmission: we found that the majority of the healthcare workers administered intramuscular/intravenous injections or conducted surgical procedures and accidentally had blood splashed on their bodies during those procedures, further emphasizing the need for completion of hepatitis B vaccination doses.
On qualitative analysis, we found a high proportion of healthcare workers with detectable anti-HBs, indicative of either recovery from hepatitis B infection or immunity secondary to hepatitis B vaccination. After quantification of the anti-HBs, we found that over 90% of the healthcare workers with detectable anti-HBs had protective antibody concentrations (≥10 mIU/mL). These findings are similar to reports from previous studies [20] that reported the presence of a protective immune response among individuals vaccinated with hepatitis B vaccine. The high proportion of healthcare workers with detectable anti-HBs is a confirmation of hepatitis B vaccination among the study participants. In addition, the protective anti-HBs titres found in the majority (72%) of the study participants are an indicator of an effective immune response to the hepatitis B vaccine.
The finding that healthcare workers who received the hepatitis B vaccine as adults 20 years ago still had protective anti-HBs levels is an indicator of long-term protection against hepatitis B virus infection, even though the majority never received a booster vaccine dose. While the Uganda Ministry of Health does not conduct periodic screening of healthcare workers to assess immune protection, our findings are in line with evidence from previous studies [20,21] that reported more than 30 years of immune protection from hepatitis B vaccine among individuals vaccinated as children and young adults. Our findings show that regular screening to assess the extent of hepatitis B immune protection among individuals who have completed all three hepatitis B vaccine dose schedules may not be necessary given the limited resources in most LMICs. However, for the few individuals who had a weak or no protective immune response (<10 mIU/mL) against HBV following complete vaccination, provision of a challenge dose of the vaccine is recommended, as this has previously been shown to prompt development of immunity [22].
While only three of the healthcare workers had received a booster dose of the hepatitis B vaccine, we found no significant difference in the presence of a protective immune response between individuals who received a booster dose and those who did not. A similar finding was reported in a previous study by Bruce et al. [20], which found development of a protective immune response (≥10 mIU/mL) among fully vaccinated individuals who did not receive a booster hepatitis B vaccine dose. In our study we included individuals who reported having received hepatitis B vaccines in the past 20 years. Although there is currently no screening to establish immune response to hepatitis B vaccine in Uganda, it is likely that, irrespective of a booster dose of the vaccine, vaccinated individuals already had sufficiently high levels of anti-HBs in the body. This could be due to the persistence of anti-hepatitis B antibodies in vaccinated individuals [22]. Our findings indicate that completion of all three hepatitis B vaccine doses is sufficient for the development of protective anti-HBs titres.
A low proportion (2.3%) of healthcare workers in this study had detectable HBsAg, an indicator of acute or chronic hepatitis B virus infection, while 1 in every 6 healthcare workers had a positive HBcAb, indicative of previous or ongoing hepatitis B virus infection, possibly due to non-response to the hepatitis B vaccine, as all study participants had been vaccinated. This is supported by the finding that a low proportion of hepatitis B virus infection occurred among healthcare workers who had undetectable anti-HBs on qualitative screening. The lack of a protective anti-HBs titre following vaccination found in our study is similar to findings from a previous study [7].
While our study had some limitations in that the HBV Combo Rapid test was used for qualitative screening of HBsAg, HBsAb, HBeAg, HBeAb and HBcAb, we were able to transfer samples with positive HBsAb to the national blood bank laboratory (Nakasero blood bank) for quantitative measurement of HBsAb levels in the plasma samples. Additionally, we referred participants with positive HBsAg for further tests to confirm hepatitis B virus infection. It was also not possible to ascertain from the study participants what kind of hepatitis B vaccine they had received, whether plasma-derived or recombinant vaccine. However, results from previous studies have shown that the antibody response and effectiveness of the plasma-derived vaccine are similar to those of the recombinant hepatitis B vaccine [19,23].
Conclusion and recommendation
Most vaccinated healthcare workers in northern Uganda developed protective anti-HBs levels, and the protective immune response persisted for over ten years following vaccination. Some vaccinated healthcare workers had weak or no immunity and developed breakthrough hepatitis B virus infection, which would require booster hepatitis B vaccination. There may be a need for post-vaccination testing to assess immunological priming from hepatitis B vaccine among healthcare workers.
Supporting information S1 Appendix. Study data collection tool. (PDF) | 4,715.2 | 2022-01-21T00:00:00.000 | [
"Medicine",
"Biology"
] |
Revolutionizing hysteroscopy outcomes: AI-powered uterine myoma diagnosis algorithm shortens operation time and reduces blood loss
Background The application of artificial intelligence (AI) powered algorithm in clinical decision-making is globally popular among clinicians and medical scientists. In this research endeavor, we harnessed the capabilities of AI to enhance the precision of hysteroscopic myomectomy procedures. Methods Our multidisciplinary team developed a comprehensive suite of algorithms, rooted in deep learning technology, addressing myomas segmentation tasks. We assembled a cohort comprising 56 patients diagnosed with submucosal myomas, each of whom underwent magnetic resonance imaging (MRI) examinations. Subsequently, half of the participants were randomly designated to undergo AI-augmented procedures. Our AI system exhibited remarkable proficiency in elucidating the precise spatial localization of submucosal myomas. Results The results of our study showcased a statistically significant reduction in both operative duration (41.32 ± 17.83 minutes vs. 32.11 ± 11.86 minutes, p=0.03) and intraoperative blood loss (10.00 (6.25-15.00) ml vs. 10.00 (5.00-15.00) ml, p=0.04) in procedures assisted by AI. Conclusion This work stands as a pioneering achievement, marking the inaugural deployment of an AI-powered diagnostic model in the domain of hysteroscopic surgery. Consequently, our findings substantiate the potential of AI-driven interventions within the field of gynecological surgery.
Introduction
Uterine fibroids, also known as myomas, are the most common benign tumors affecting the female reproductive system. They are most prevalent among patients aged 30-50 years (1). By the age of 50, the cumulative incidence of myomas in women can reach up to 70%-80%. Among all myomas, submucosal myomas account for approximately 5.5%-10% (2). Submucosal myomas often lead to abnormal uterine bleeding, infertility, recurrent pregnancy loss, and pelvic pain (3).
Submucosal myomas are divided into three subtypes. In 1993, Wamsteker et al. introduced a classification system for submucosal myomas based on the degree of intramural extension during hysteroscopic myomectomy (4). According to this system, submucosal myomas are categorized as type 0, 1, or 2. This classification system was adopted by the European Society of Hysteroscopy and served as the basis for the myoma subclassification system established by the International Federation of Gynecology and Obstetrics (FIGO) (5). Hysteroscopic myomectomy is considered the optimal method for type 0 myomectomy, and the slicing technique is now widely accepted as the standard approach (6). However, the International Society for Gynecologic Endoscopy (ISGE) suggests intrauterine morcellation (IUM) as an alternative option due to its advantages in terms of learning curve and operative time (7,8).
However, the FIGO subclassification system also has clinical limitations. Uterine myomas usually distort the uterine structure, which increases the difficulty of distinguishing the extent of myometrial invasion and reduces the accuracy of the FIGO system (9). Magnetic resonance imaging (MRI) has become an essential imaging modality for assessing myomas. In the clinical management of myomas, MRI T2-weighted images, particularly sagittal images, are commonly utilized. MRI has demonstrated significant advantages in accurately determining the number, size, and location of uterine myomas (10). Conventional MRI can clearly differentiate among uterine myomas, the myometrium, and the endometrial layer. Myomas exhibit low signal intensity in T2-weighted images, allowing for a clear demarcation of their boundaries from the surrounding myometrium. A study by Wilde and Scott-Barrett indicated that MRI exhibits higher sensitivity in detecting small submucosal myomas measuring less than 5 mm (11). Moreover, MRI images are obtained in a more objective way, not dependent on the operator's experience.
Traditional imaging diagnosis heavily relies on the expertise and experience of physicians. However, artificial intelligence (AI), based on machine learning and deep learning techniques, offers significant advantages in terms of image feature extraction, repeatability, and objectivity, thus aiding the decision-making process (12).
Approaches powered by AI can enhance the efficiency and reliability of diagnoses, ultimately benefiting patient care and outcomes. AI has demonstrated promising results in the diagnosis of breast cancer (13), prostate cancer (14), and brain glioma (15) using MRI. However, its application in the diagnosis of gynecological tumors is still at an early stage. Robin Wang et al. conducted a study in which they utilized AI and MRI data to differentiate between benign and malignant ovarian tumors (16). They developed a deep learning model that outperformed primary radiologists in terms of accuracy and specificity. Moreover, the AI model enhanced the specificity of diagnosis for both junior and senior radiologists. Tang et al. proposed a new segmentation network using deep learning based on T2-weighted sagittal MRI data, exhibiting high sensitivity and specificity in the diagnosis of uterine diseases (17). They modified a convolutional neural network to achieve automatic segmentation of uterine MRI images based on T2-weighted signatures, covering uterine endometrial cancer, uterine cervical cancer, and uterine myomas. This approach demonstrated the feasibility of diagnosing uterine myomas through deep learning (17, 18). However, existing studies have mostly focused on semantic segmentation, which limits the ability to identify the myomas, uterine wall and uterine cavity as separate instances.
Our team has successfully constructed a large-scale uterine myoma MRI dataset that covers all FIGO types. This dataset comprises a substantial number of T2-weighted sagittal images of myomas, along with corresponding annotation files. Additionally, we have developed an instance MRI segmentation model based on deep learning, which significantly contributes to myoma classification and facilitates surgical decision-making through precise instance segmentation of myomas, uterine wall, and cavity (19).
This research represents the first endeavor to introduce AI technology into the realm of operative decision-making for submucosal myomas. By utilizing an AI model based on MRI, surgeons can be better prepared. Consequently, patients can benefit from various advantages, such as less bleeding and a shorter operation duration, highlighting the potential of AI applications in hysteroscopic myomectomy.
Participants and study design
Participants in this study were enrolled from January 2022 to January 2023 at Beijing Shijitan Hospital. 56 patients were included, with ages ranging from 39 to 46 years and myoma size of 2.43 ± 0.77 cm. This study was conducted in accordance with the World Medical Association's Declaration of Helsinki and was approved by the scientific research ethics committee of Beijing Shijitan Hospital, Capital Medical University (code: sjtkyll-lx-2022 (1)). The study did not violate the rights and interests of patients, and the ethics committee clearly stated that specific consent procedures were not required for this study.
Participants met the following inclusion criteria: 1) symptoms such as abnormal uterine bleeding, infertility, or recurrent pregnancy loss; 2) diagnosis of submucosal myomas by magnetic resonance imaging (MRI); 3) postoperative pathology confirming submucosal myoma. The exclusion criteria were as follows: 1) severe comorbidities; 2) acute pelvic inflammation or vaginal inflammation, or body temperature >37.5°C; 3) active massive uterine bleeding or severe anemia; 4) normal pregnancy status; 5) history of uterine perforation within 3 months; 6) invasive cervical cancer; 7) genital tuberculosis without anti-tuberculosis treatment; 8) MRI contraindications, such as febrile convulsions, active foreign bodies in the eyes, cardiac pacemakers, metal intrauterine devices, metal joints or metal dentures; 9) postoperative pathology excluding uterine myomas.
All eligible subjects underwent MRI examination and were equally divided into two groups using a random number table. Half of them were assigned to group MRI-AI, and the other half to group MRI. A bipolar electrosurgical system was employed and intrauterine pressure was set at 100 mmHg during hysteroscopic myomectomy. The surgical procedures in both groups were performed by the same experienced surgeon.
MRI image acquisition
MRI examination in this study was completed on a PHILIPS INGENIA magnetic resonance imaging system with a 3.0 T ultra-high field. The MRI scan parameters were as follows: repetition time 4200 ms, echo time 130 ms, voxel 0.8 x 0.8 x 4.0 cm3, field of view 24 x 24 cm, flip angle 90°. MRI provided multiple images from the sagittal, coronal and axial scans and from various sequences including T1W, T2W, mDIXON and DWI. The image resolution was larger than 512 x 512 pixels. T2W sagittal images were finally collected for the subsequent image processing.
MRI image instance segmentation
MRI image was processed based on the instance segmentation model which has been published by our team (19).
MRI images are characterized by the presence of offset (bias) fields, low contrast and blurred uterine tissue boundaries, which increases the difficulty of automatic AI segmentation. To address this problem, adaptive histogram equalization was used to adjust the contrast between uterine tissues, especially for uterine myomas and uterine wall with similar signal intensity. The N4ITK method was used to correct the offset field problem, and the Z-score method was used to normalize the MRI images to the same range.
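The snippet below is a minimal sketch of such a preprocessing pipeline (N4 bias-field correction, adaptive histogram equalization, Z-score normalization) using SimpleITK and scikit-image. The specific parameter values and the Otsu-based mask are illustrative assumptions, not the authors' exact settings.

```python
# Sketch of the MRI preprocessing pipeline described above.
# Parameters (CLAHE clip limit, Otsu mask) are illustrative assumptions.
import numpy as np
import SimpleITK as sitk
from skimage import exposure

def preprocess_slice(path):
    img = sitk.ReadImage(path, sitk.sitkFloat32)

    # 1) N4ITK bias-field correction to remove the low-frequency offset field.
    mask = sitk.OtsuThreshold(img, 0, 1, 200)
    img = sitk.N4BiasFieldCorrection(img, mask)

    arr = sitk.GetArrayFromImage(img).squeeze()

    # 2) Adaptive histogram equalization (CLAHE) to raise contrast between
    #    myoma and uterine wall, which have similar signal intensity.
    arr = exposure.equalize_adapthist(arr / (arr.max() + 1e-8), clip_limit=0.02)

    # 3) Z-score normalization so every image lies in a comparable range.
    return (arr - arr.mean()) / (arr.std() + 1e-8)
```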
A specialized network architecture was designed for image processing. First, HRNetv2p was used for high-resolution feature extraction and multi-scale feature fusion in the backbone section, so that small-scale targets in the uterine region could also be extracted effectively. DCN (deformable convolution) was used to address the issue of diverse organ shapes, extract true feature information from different shapes, and reduce the loss of shape information. CBAM modules were used to assist feature extraction, filter out irrelevant and interfering feature information, and enhance the feature expression ability of the AI model. An anchor-based approach was used to assist target localization.
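For orientation, the following is a minimal PyTorch sketch of a generic CBAM-style block (channel attention followed by spatial attention), the kind of attention module referred to above. It is a textbook illustration, not the authors' actual implementation or hyperparameters.

```python
# Generic CBAM-style block: channel attention, then spatial attention.
# Illustrative only; not the authors' exact module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # Channel attention: shared MLP over global avg- and max-pooled features.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: conv over channel-wise mean and max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size,
                                 padding=kernel_size // 2, bias=False)

    def forward(self, x):
        ca = torch.sigmoid(self.mlp(F.adaptive_avg_pool2d(x, 1))
                           + self.mlp(F.adaptive_max_pool2d(x, 1)))
        x = x * ca
        sa = torch.sigmoid(self.spatial(
            torch.cat([x.mean(1, keepdim=True),
                       x.max(1, keepdim=True).values], dim=1)))
        return x * sa

feats = torch.randn(1, 64, 128, 128)      # e.g. a backbone feature map
print(CBAM(64)(feats).shape)              # torch.Size([1, 64, 128, 128])
```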
The sizes of the myoma, uterine wall and uterine cavity within the uterine region vary considerably, so conventional anchor size settings are no longer suitable. Our previous work computed distribution statistics on the length, width, and aspect ratio of the minimum bounding box of each target in our dataset, providing a reference for MR image processing. The K-Means clustering method was used to cluster the target bounding boxes and output the anchor box sizes, which were then assigned to different feature layers for better detection of small-scale targets in the shallow layers and large-scale targets in the deep layers. Finally, the PointRend module was introduced in the segmentation task to continuously optimize the segmentation edges between adjacent targets using an iterative segmentation strategy. This algorithm reduced jaggies and rough edges, resulting in smoother and more detailed edges for the various objects within the uterine region. Since the model contains multiple subtasks, the loss function also consists of multiple parts. The classification loss measures the accuracy of target classification using cross-entropy loss. The bounding box loss measures the accuracy of target localization using smooth L1 loss. The segmentation loss consists of two parts, CoarseMaskHead and MaskPointRend, which are mainly calculated with binary cross-entropy loss.
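A small sketch of the anchor-size clustering step is shown below: K-Means is run on ground-truth bounding-box widths and heights, and the sorted cluster centers serve as anchor sizes for shallow-to-deep feature levels. The box data here are synthetic stand-ins; the authors clustered statistics from their own dataset, and the number of clusters is an assumption.

```python
# Deriving anchor-box sizes by K-Means clustering of bounding-box (w, h) pairs.
# Synthetic data and n_clusters=5 are assumptions for illustration only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# (width, height) of minimum bounding boxes in pixels -- synthetic stand-ins
boxes = np.column_stack([rng.uniform(10, 200, 500), rng.uniform(10, 200, 500)])

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(boxes)
# Sort anchors by area: small anchors -> shallow layers, large -> deep layers
anchors = kmeans.cluster_centers_[np.argsort(kmeans.cluster_centers_.prod(axis=1))]
print(np.round(anchors, 1))
```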
Measurement methods
The clinical data, including age, weight, height, number of pregnancies, number of abortions, clinical symptoms, preoperative hemoglobin value, operation time, bleeding, and fluid deficit, were analyzed in this study. The size, type and position of submucosal myomas were measured using MRI and the AI model we built, and the final hysteroscopic myomectomy served as the gold standard according to the FIGO system. A graduated plastic bag was placed under the participant to collect the effluent fluid during the operation, which was used to calculate the fluid deficit. Cervical dilatation time was not included in the operation time.
Statistical analysis
Statistical analysis was performed using SPSS software (version 29.0, SPSS Inc., Chicago, IL, USA). Quantitative data conforming to a normal distribution are expressed as mean ± standard deviation (SD), and comparisons between groups were performed with the t test. Quantitative data not fitting a normal distribution are expressed as percentiles, and comparisons between groups were performed with the Mann-Whitney U test. Qualitative data are expressed as number and percentage, and the chi-square test was used to analyze differences between the two groups. Probability values of p<0.05 were considered significant.
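For readers who prefer open-source tooling, the same three comparisons could be reproduced with SciPy as sketched below. The arrays are random placeholders generated to roughly match the reported group sizes and means, not the study data.

```python
# Sketch of the group comparisons described above using SciPy.
# All arrays are placeholders, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
op_time_mri = rng.normal(41.3, 17.8, 28)     # operation time, group MRI
op_time_ai = rng.normal(32.1, 11.9, 28)      # operation time, group MRI-AI

# Normally distributed quantitative data: independent-samples t test
print(stats.ttest_ind(op_time_mri, op_time_ai))

# Non-normal quantitative data (e.g. blood loss): Mann-Whitney U test
print(stats.mannwhitneyu(rng.exponential(10, 28), rng.exponential(8, 28)))

# Qualitative data (e.g. type-consistency counts): chi-square test
table = np.array([[24, 4], [26, 2]])
print(stats.chi2_contingency(table))
```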
General clinical characteristics
The clinical characteristics were similar in groups MRI and MRI-AI. No significant differences were found in terms of age, height, weight, BMI, number of pregnancies, or hormone levels (p>0.05, Table 1). The symptoms caused by the myoma, such as abnormal menstruation and anaemia, were also similar in the two groups (p>0.05). In addition, there were no significant differences in myoma size (2.44 ± 0.64 cm vs. 2.43 ± 0.90 cm, p=0.99) or type ((type 0:
MRI image instance segmentation
Figure 1 shows the results of the instance segmentation by the AI model. MRI images of submucosal myomas, including type 0, type 1 and type 2, were segmented by the AI model. Inference masks were overlaid on the original MRI images, representing myomas, uterine cavity and uterine wall. The left side shows the original MRI image, the middle shows the inference masks generated by our AI model, and the right side shows the intraoperative view. The inference masks clearly showed the positions of the myoma, uterine cavity and uterine wall in relation to each other, and this was verified by the intraoperative view. A, B and C show a myoma attached to the lower part of the uterine cavity by a stalk (green arrow), representing a type 0 myoma. D, E and F show a myoma whose intrauterine penetration exceeded 1/2, representing a type 1 myoma. G, H and I show a larger myoma that is <50% submucosal and ≥50% intramural, representing a type 2 myoma.
Myoma type matching and perioperative data
Table 2 shows the myoma type consistency and perioperative data in group MRI and group MRI-AI. The type consistency rate in group MRI-AI was higher than that in group MRI, but not statistically significantly so (24 [85.71%] vs. 26 [92.86%], p=0.34). Although no significant differences were found in terms of fluid in, fluid out and fluid deficit (p>0.05), the differences in operation time (41.32 ± 17.83 min vs. 32.11 ± 11.86 min, p=0.03) and bleeding (10.00 (6.25-15.00) ml vs. 10.00 (5.00-15.00) ml, p=0.04) were statistically significant.
Discussion
The application of AI-powered algorithms in clinical decision-making is globally popular among clinicians and medical scientists (20). Artificial intelligence, especially machine learning, reinforcement learning and deep learning, is particularly well-suited to dealing with challenges in the healthcare industry. Decision trees are common tools used by clinicians, which aim to support accurate decisions in daily practice (21). Convolutional neural networks, as deep learning models, aim to decipher various clinical images to help make diagnoses (22). Machine learning models that can risk-stratify patients in preparation for surgery will help clinicians identify high-risk factors and optimize the healthcare process (23,24). There are still many challenges. The adoption of AI has the potential to produce unsound decisions and inadvertent bias, which means that training curricula renewal and knowledge updates are necessary (25). Moreover, the black-box nature of most AI models makes it hard for clinicians to trust them and explain them to patients, which restricts their application in healthcare practice. In this article, to make it more acceptable in practice, we explain our AI algorithm in detail rather than simply giving the figures and data. We aim to construct an accurate, locally calibrated and clinically accessible AI-powered algorithm to risk-stratify patients with submucosal uterine myoma and to optimize the patient care process. Uterine myomas, occurring in 70% of women in their reproductive years, are the most common benign, solid, pelvic tumors in women. The minimally invasive operative choices for uterine myomas include laparoscopy and hysteroscopy, with hysteroscopy being the common indication for submucosal myomas. A meta-analysis showed that, compared with infertile women without submucosal myomas, infertile women with submucosal myomas had significantly lower pregnancy and delivery rates (26). Consequently, the resection of submucosal myomas led to a significant increase in the pregnancy rate.
Research also indicates that the operation effectively decreases patients' blood loss (27). Submucosal myomas, which are grouped as types 0, 1 and 2, are almost exclusively removable by hysteroscopy, which means that preoperative recognition and classification of the myoma is meaningful and decisive in clinical practice.
In a retrospective cohort study designed by the Mayo Clinic, preoperative MRI FIGO myoma staging read by experts was not completely consistent with the surgical description, and the variation was clinically significant. The authors concluded that additional validation of FIGO staging is needed; viewed another way, this means that even experts' ability to map the lesion is not always stable and reliable (9).
However, preoperative myoma mapping alone is not enough; image characteristics also make a difference. In a prospective study, myomas were classified into three types according to the signal intensity of T2-weighted MR images: low intensity, intermediate intensity and high intensity, a classification method entirely different from the one mentioned earlier, which classifies myomas by the positional relationship between the uterine myoma and the myometrium (28). The researchers concluded that myomas with high intensity on T2-weighted MR images should be exempted from a specific type of myoma surgery, magnetic resonance-guided focused ultrasound surgery, because the postoperative outcome is unfavorable. This suggests that not only the positional relationship between the uterine myoma and the myometrium is meaningful; the MR signal intensity also makes a difference, which our AI algorithm will take into account.
The operation time influences postoperative recovery as well as blood loss. Our data firmly demonstrated that the AI algorithm group experienced a shorter operation time and less blood loss. The concept of prehabilitation, or enhanced recovery after surgery (ERAS), is frequently mentioned and quoted by clinicians, and it aims to speed up postoperative recovery. The AI algorithm can also help patients who undergo hysteroscopy recover faster, because its application shortens the operation time. In a prospective randomized controlled trial, researchers reported that intracervical vasopressin injection during hysteroscopy reduces intraoperative bleeding (29). In another RCT, an oxytocin drip during hysteroscopy showed a similar effect. However, hypotension, arrhythmia and hyponatremia might occur after administration of oxytocin and vasopressin, which seriously restricts their administration. The AI algorithm achieved a similar result without any side effects.
In addition to fluid overload and hyponatremia, hysteroscopy is associated with several other immediate complications, including uterine perforation, air embolism, transient blood oxygen desaturation, hypercapnia, and coagulopathy (29)(30)(31). However, it is noteworthy that none of these complications were observed among the participants in our study. The absence of these complications could be attributed, in part, to the relatively modest sample size of our study.
Our prior research introduced a deep learning-based instance segmentation model capable of automatically generating output encompassing the class, location, and masks of the uterine wall, uterine cavity, and uterine myomas (19). Although MRI has been proven to be a useful technique to diagnose uterine myoma and enable clinicians to select appropriate management (32, 33), there are few similar studies, and their transformation into application has been limited. In 2017, Korean researchers proposed a 3D reconstruction method with uterine MRI templates that enables 3D visualization of myomas (34). That article exclusively focuses on the methodology, without delving into real-world applications or providing a comparison with operative findings; as a result, its practical applicability might be viewed with skepticism. Furthermore, a three-dimensional printed uterine model from MRI was constructed to guide the operation and choose the uterine incision in a pregnant woman at the time of caesarean section (35), but this is only a case report with limited clinical evidence. The authors assert that the expense and complexity associated with producing the three-dimensional model were comparatively modest, which emphasizes the attainability of employing 3D models in myoma operations. In another study, a 3D MRI model was drawn preoperatively and applied in clinical practice (36). The conclusion that this approach surpasses conventional 2-dimensional MRI in accurately identifying the locations of uterine myomas and endometrium was based on a web-based survey distributed to gynecologists; however, this conclusion lacks precise data on factors such as blood loss or operation time. Building on our current accomplishments, our future research endeavors will extend beyond refining the methodology of 3D reconstruction. In addition to methodological advancements, we aim to harness the visual representations derived from MRI images of uterine myomas for widespread clinical applications. We intend to explore advanced algorithms, machine learning techniques, or innovative image processing methodologies that could further elevate the quality of our 3D models.
Our goal is to bridge the gap between research and practical clinical applications. To achieve this, we plan to collaborate with healthcare institutions and practitioners to implement our 3D uterine myoma models in various clinical scenarios. This integration will encompass a wide range of applications, including but not limited to: Preoperative Planning: We aim to provide surgeons with comprehensive 3D models that assist in surgical planning and decision-making, including the precise localization of myomas, estimation of surgical complexity, and selection of optimal incision sites.
Intraoperative Guidance: We will explore the real-time use of 3D models during surgeries, particularly in scenarios such as cesarean sections and myomectomy procedures, to aid surgeons in navigating complex anatomical structures.
Patient Education: We intend to develop educational tools that utilize 3D models to communicate with patients, helping them understand their condition and the proposed surgical interventions.
In summary, our forthcoming research endeavors will encompass a multifaceted approach, encompassing methodological enhancements, clinical integration, and rigorous validation. We aspire to translate our current achievements into tangible benefits for both patients and healthcare providers in the realm of uterine myoma diagnosis and treatment.
Conclusion
This study, applying an AI-powered uterine myoma diagnosis algorithm created by our team based on MRI, revealed a promising prospect for improving the efficiency of hysteroscopic myomectomy. Further work with more patients is needed to refine the diagnostic algorithm and translate these achievements into wide clinical application.
FIGURE 1
FIGURE 1 Visualization of the instance segmentation of our AI model. The left side represents the original MRI image, and the right side represents the inference masks generated by our AI model. In the masks of model inference, yellow represents myomas, red represents the uterine wall, and green represents the uterine cavity. (A, B) and (C) represent a type 0 myoma. (D, E) and (F) represent a type 1 myoma. (G, H) and (I) represent a type 2 myoma.
TABLE 1
General clinical characteristics.
TABLE 2
Type matching and Perioperative data. | 4,914 | 2023-12-08T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Jieyu Anshen Granule, a Chinese Herbal Formulation, Exerts Effects on Poststroke Depression in Rats
Jieyu Anshen granule (JY) is a traditional Chinese medicine formula for treating depression and anxiety. The aim of this study was to observe the effects of JY on poststroke depression (PSD) and investigate the underlying mechanism. A PSD rat model was developed by middle cerebral artery occlusion followed by chronic unpredictable mild stress in conjunction with isolation rearing. We performed behavioral tests, Western blot, ELISA, and BrdU/NeuN staining. Treatment with JY showed a significant antidepressant effect in the open-field and sucrose preference tests, as well as significant improvement in the beam-walking, cylinder, grip strength, and water maze tests. In addition, treatment with JY restored the levels of neurotransmitters and decreased the levels of hormones and inflammatory cytokines in serum and brain. Treatment with JY also showed significant regulation of the expression of neurotransmitter receptors and NF-κB/IκB-α signaling in the prefrontal cortex and hippocampus. Moreover, the number of newborn neurons in the hippocampus was increased by treatment with JY. Our results suggest that JY could ameliorate PSD and improve neurological and cognitive functions. The antidepressive effect may be associated with the modulation by JY of the monoamine system, neuroendocrine function, neuroinflammation, and neurogenesis.
Introduction
Poststroke depression (PSD), which is different from general depression, is an extremely frequent neuropsychiatric disorder following ischaemic stroke, and common mood symptoms after stroke include anxiety, feelings of despair, and anhedonia [1]. One-third of stroke patients suffer from depression, and depression negatively affects patients' ability to engage in rehabilitation therapies [2]. The pathophysiology of PSD is associated with a complex network of interrelated regulatory factors including the hypothalamic-pituitary-adrenal (HPA) axis, monoamine neurotransmitters, neuroinflammation, and neurogenesis [3]. Selective serotonin reuptake inhibitors (SSRIs) are the first choice for the pharmacological prevention and therapy of PSD [2,4]. However, patients taking antidepressant drugs often experience high relapse rates and a variety of side effects, such as nausea, headaches, somnolence, and dry mouth [5,6].
Traditional Chinese medicine (TCM) is one of the commonly used complementary and alternative medicine therapies for depression; a formula contains several herbs (Chaihu, Gancao, Fuling, Suanzaoren, Yujin, Baizhu, Yuanzhi, Shichangpu, Banxia, etc.) in specific proportions that interact with each other to improve therapeutic effects and reduce toxicity [7,8]. TCM or its major constituents are often used in clinical practice in China for treating depression; for example, polyphenols may exert antidepressant effects through normalizing HPA hyperactivity [9]. Jieyu Anshen granules (JY) are a classical TCM preparation with antidepressant activity that has been recognized by the China Food and Drug Administration [10]. The use of JY alone or in combination with additional antidepressants has already been widely implemented in China as a means of treating anxiety and depression [11]. However, it is not clear whether JY can be effective for PSD. Thus, a rat model of PSD was established using chronic unpredictable mild stress (CUMS) following middle cerebral artery occlusion (MCAO); the effects of JY were then observed and the underlying mechanism investigated.
Animals.
Male Sprague-Dawley rats (240-260 g, 7-9 weeks) provided by Pengyue Animal Co. Ltd (License No. SCXK 20140007) had ad libitum access to food and water while housed at 22 ± 2°C with 55 ± 5% humidity and a 12-hour light/dark cycle. All animals had 1 week to acclimatize prior to study initiation. All studies were consistent with the guide from the National Institutes of Health and received approval from the Ethical Committee of Yantai University.
CUMS consisted of a total of 7 unique stressors: swimming in 4°C water (5 min), 45° cage tilt (17 h), water deprivation (18 h, after which empty water bottles were given for 1 h), deprivation of food and water (20 h, followed by a sucrose preference test), soiled cages (200 ml water mixed into 100 g sawdust bedding; 21 h), paired caging (2 h), and overnight illumination (no darkness for 36 h). Neurological scores were determined 24 h after MCAO, and the stressors were then administered to rats in a random order for the following 18 days [12]. The rats were isolation-reared (one per cage) except for the CON group.
From the second day after MCAO, rats in the JY groups were administered JY (1 or 3 g/kg) daily for 4 weeks, while rats in the CIT10 group were administered citalopram (10 mg/kg) on the same schedule. Both JY and citalopram were administered intragastrically in a volume of 1 mL/kg. The study design is illustrated in Figure 1. The sucrose preference test was performed both at baseline and following treatment, and sucrose preference was calculated as follows: sucrose preference = sucrose intake (g)/[sucrose intake (g) + water intake (g)] [12].
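The preference formula above amounts to the tiny helper below; the intake values shown are purely illustrative, not study data.

```python
# Sucrose preference = sucrose intake / (sucrose intake + water intake).
# Illustrative intake values only.
def sucrose_preference(sucrose_g, water_g):
    return sucrose_g / (sucrose_g + water_g)

print(sucrose_preference(sucrose_g=12.0, water_g=4.0))  # 0.75
```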
Open-Field Test.
Despair of rats was evaluated by the open-field test (OFT). For this examination, rats were put into an open-topped cylindrical box with black walls (height, 40 cm; diameter, 75 cm). The bottom of the box was marked into 25 equally sized block sections. Rat locomotion and rearing activity were then monitored, with locomotor activity being quantified based on the number of blocks crossed and rearing activity being assessed based on how many times rats stood on their hind legs. Rats were each assessed one time for 3 min (in isolation) [13].
Procedure for Evaluation of Neurological and Cognitive Function in Rats
2.5.1. Beam-Walking Test. Rat coordination and motor movement integration were evaluated by the beam-walking test (BWT) [14]. Before MCAO, rats underwent two training sessions per day for 2 days. During all sessions, a wooden beam (2.5 cm × 120 cm) was used, with a 30 × 30 × 30 wooden box placed at the other end to encourage rats to cross the beam. For training, rats were positioned at the center or starting point of the beam during the 1st and 2nd trials, respectively. Cushions were present under the beam to prevent any fall-induced injury to animals. Crossing time was determined as the time between when a forepaw was first extended onto the beam and when a forepaw was first extended into the wooden box, with a maximum cut-off time of 60 s.
Cylinder Test. Asymmetry in forelimb use was evaluated by the cylinder test (CT). For this assay, a transparent glass cylinder (30 cm high, 20 cm in diameter) was used. Rats were placed into the cylinder, and 20 movements were recorded for each animal, after which forelimb asymmetry was calculated as follows: 100 × (ipsilateral forelimb use + 1/2 × both)/total forelimb use [15].
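For clarity, the forelimb-use score defined above can be expressed as the small helper below; the movement counts are illustrative only, not study data.

```python
# Forelimb asymmetry score from the cylinder test; counts are illustrative.
def forelimb_use_score(ipsilateral, contralateral, both):
    total = ipsilateral + contralateral + both
    return 100.0 * (ipsilateral + 0.5 * both) / total

print(forelimb_use_score(ipsilateral=8, contralateral=7, both=5))  # 52.5
```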
Forelimb Grip Force Test. Muscular strength was evaluated by the forelimb grip force test (FGFT) using a grip strength meter [16]. To measure this strength, rats were held by the tail over a grid in a position where they were able to reach the strength meter using their forepaws. Maximal grip strength was recorded as the force at which rats were no longer able to grip the meter when they were pulled backwards away from it.
Memory Water Maze Test. Spatial memory was evaluated by the memory water maze test (MWMT), which is a standard approach for evaluating cognitive function [17]. For this test, we filled a pool, 1.2 m in diameter, with water and placed the study animals into the pool for 120 s, during which time they sought to locate a hidden platform positioned 1 cm beneath the water surface. Surrounding the walls of the pool were specific visual cues to guide rats, including a horizontal tube on the north wall, an "X" on the south wall, an entrance door on the east wall, and a vertical tube on the west wall. Upon finding the hidden platform, rats were given 15 s during which they could rest on it. If they did not locate the platform within 60 s, they were placed onto it for 15 s. Over a period of 4 days, rats received 4 total attempts to locate this platform. On the 5th day the platform was removed and rats were given a 60 s performance assessment. The time spent in the quadrant where the platform had been located was used as a means of evaluating the degree to which rats were able to remember and navigate to the platform location.
2.6. Western Blotting. Prefrontal cortex and hippocampal tissue samples were washed with PBS and then lysed via the use of lysis buffer containing phenylmethylsulfonyl fluoride. Lysates were thoroughly homogenized and then spun for 10 minutes (12,000 ×g at 4°C), after which a Bradford assay was utilized to measure protein concentrations in supernatants. These samples were then boiled in 5x loading buffer, electrophoretically separated using 12% SDS-PAGE gels, and transferred to PVDF membranes that were blocked for 4 h with 5% nonfat dried milk. Membranes were then probed overnight at 4°C using 1:1000 dilutions of appropriate primary antibodies against 5-HT1AR, ADRα2, NF-κB, IκB-α, GR, and β-actin. Blots were then probed using appropriate secondary antibodies and were developed with ECL reagents and a gel imager (GE, LAS4000). Densitometric quantification was utilized as a means of assessing protein levels, and samples were normalized to levels of β-actin contained therein [18]. Monoamine neurotransmitter levels in brain tissue samples were measured by HPLC using an ultraviolet detector, with an approach adapted from previous work [19]. Briefly, we suspended brain tissue in ice-cold 0.6 mol/L perchloric acid supplemented with 0.5 mmol/L EDTA as well as 0.8 mmol/L L-cysteine. These tissue homogenates then underwent 4°C centrifugation at 16,000 rpm for 20 minutes, and supernatants were filtered via 0.25 filtration into the chromatographic system. A quaternary pump HPLC system (SHIMADZU: LC-20A, Shimadzu, Kyoto, Japan) and a Cosmosil C18 column (25 cm long by 4.6 mm in diameter, 5 mm particles) (Inertsil ODS-2) were used for all HPLC. The mobile phase contained 90% (v/v) 0.1 mol/L sodium acetate solution (0.1 mmol/L EDTA-2Na, pH 5.1) and 10% (v/v) methanol. A 20 μL injection volume was used, with a 1 mL/min flow rate and a 275 nm detection wavelength. DA, 5-HT, and NE retention times were, respectively, 5.02, 10.08, and 3.37 minutes.
Statistical Analysis
SPSS v.16.0 was used for all analyses, and data are expressed as means ± SEM. Two-way ANOVAs were used to compare SPT results, with treatment and day as the two factors. Other behavioral tests, Western blotting, and biochemical assays were analyzed by one-way ANOVA with Tukey's post hoc test when comparing more than two groups. P < 0.05 was considered significant.
JY Improved Neurological and Cognitive Functions in PSD Rats
The MCAO + CUMS rats spent significantly less time [F(4,35) = 2.71, P < 0.05] and traveled a significantly shorter distance [F(4,35) = 3.50, P < 0.05] in the target quadrant compared to CON rats. Treatment of rats with JY or CIT10 significantly increased the distance and time in the target quadrant compared to MCAO + CUMS rats (all P < 0.05) (Figure 4). The results suggested that JY could significantly improve neurological and cognitive functions in PSD rats.
JY Increased the Levels of Neurotransmitters in Prefrontal Cortex, Hippocampus, Hypothalamus, and Striatum
As shown in Table 2, there was a significant decrease in the levels of DA and 5-HT in the prefrontal cortex of MCAO + CUMS rats compared to CON rats (all P < 0.01). Compared to MCAO + CUMS rats, the levels of DA and 5-HT were significantly increased in the prefrontal cortex of the CIT10 and JY1 groups (P < 0.05, P < 0.01). There were also significant differences in the levels of NE, DA, and 5-HT in the hippocampus between MCAO + CUMS and CON rats (P < 0.05, P < 0.01). JY1 treatment significantly increased the levels of these neurotransmitters in the hippocampus compared to MCAO + CUMS rats (P < 0.05, P < 0.01), whereas CIT10 significantly increased the level of 5-HT (P < 0.05). Levels of these neurotransmitters in the hypothalamus of MCAO + CUMS rats were significantly decreased compared to CON rats (all P < 0.01). JY1 treatment significantly increased the levels of these neurotransmitters in the hypothalamus compared to the MCAO + CUMS rats (P < 0.05, P < 0.01), whereas CIT10 increased the levels of DA and 5-HT (P < 0.05, P < 0.01). The levels of these neurotransmitters in the striatum of MCAO + CUMS rats were significantly decreased compared to CON rats (all P < 0.05). Compared to MCAO + CUMS rats, the level of NE in the striatum of CIT10- and JY1-treated rats was significantly increased (P < 0.05, P < 0.01). These results suggested that the levels of neurotransmitters in the brain could be increased by administration of JY in the PSD rat model.
JY Regulated the Expression of 5-HT1AR and ADRα2 in Prefrontal Cortex and Hippocampus.
The expression of 5-HT1AR was significantly reduced in the prefrontal cortex of MCAO + CUMS rats compared to CON rats (P < 0.05). CIT10 and JY1 treatment significantly increased the expression of 5-HT1AR (P < 0.05, P < 0.01) (Figure 5(a)). The expression of ADRα2 was significantly reduced in the prefrontal cortex of MCAO + CUMS rats compared to CON rats (P < 0.01). JY1 and JY3 treatment significantly increased the expression of ADRα2 (P < 0.01) (Figure 5(b)). The expression of hippocampal 5-HT1AR was significantly reduced in MCAO + CUMS rats compared to CON rats (P < 0.01). CIT10, JY1, and JY3 treatment significantly increased the expression of 5-HT1AR (P < 0.01) (Figure 5(c)). The expression of hippocampal ADRα2 was significantly increased in MCAO + CUMS rats compared to CON rats (P < 0.01), and CIT10, JY1, and JY3 significantly increased the expression of ADRα2 (P < 0.01) (Figure 5(d)). These results suggested that the expression of 5-HT1AR and ADRα2 in the prefrontal cortex and hippocampus could be regulated by administration of JY in the PSD rat model.
JY Regulated the HPA Axis Activity and GR Expression in Hippocampus.
The level of serum ACTH was significantly increased in MCAO + CUMS rats compared to CON rats (P < 0.01). CIT10, JY1, and JY3 treatment significantly decreased the level of serum ACTH (P < 0.05, P < 0.01) (Figure 6(a)). The level of serum CORT was significantly increased in MCAO + CUMS rats compared to CON rats (P < 0.01). JY1 and JY3 treatment significantly decreased the level of serum CORT (P < 0.05, P < 0.01) (Figure 6(b)). The expression of hippocampal GR was significantly decreased in MCAO + CUMS rats compared to CON rats (P < 0.01). CIT10, JY1, and JY3 treatment significantly increased the expression of GR (P < 0.01) (Figure 6(c)). The results suggested that the dysfunction of the HPA axis could be reversed by administration of JY in the PSD rat model.
The results suggested that the excess levels of cytokines could be inhibited by administration of JY in the PSD rat model.
Quantifying the BrdU+/NeuN+ Cells in Dentate Gyrus of Hippocampus.
To determine the effect of JY on newly formed neurons in the dentate gyrus of the hippocampus, newly formed neurons were assessed by colabeling of BrdU with the neuronal marker NeuN (Figure 9(a)). The number of BrdU+/NeuN+ cells was significantly decreased in the dentate gyrus of the hippocampus in MCAO + CUMS rats compared to CON rats (P < 0.05). CIT10 and JY1 treatment significantly increased the number of BrdU+/NeuN+ cells in the dentate gyrus of the hippocampus (P < 0.05) (Figure 9(b)). The results suggested that JY could significantly improve hippocampal neurogenesis in the PSD rat model.
Discussion
PSD is the most frequent neuropsychiatric consequence of stroke, and experimental models may also pave the way for the discovery of novel therapeutic strategies [20]. The PSD model, which was developed by MCAO plus CUMS in our study, showed significant depressive behaviors, including decreased sucrose preference and motivation. Decreased sucrose preference in the SPT is consistent with anhedonia, the major symptom of depression [21,22], whereas the OFT measures the horizontal and vertical movement of rats for the evaluation of activity and curiosity [23]. The SPT and OFT are widely accepted behavioral paradigms for assessing pharmacological antidepressant activity [24]. Treatment with JY reversed the decreased sucrose preference and motivation in PSD rats, as did treatment with citalopram, indicating that JY could exert an antidepressant-like activity in PSD rats. PSD is strongly associated with further worsening of physical and cognitive recovery, functional outcome, and quality of life [20]. The impairment of physical and cognitive function is thought to be the factor most closely associated with PSD development and severity [25]. It has been shown that physical and cognitive impairments can be reversed by treatment with TCM via targeting of multiple pathways [26]. In our study, we found that PSD rats exhibited neurological impairments, including increased forelimb asymmetry, reduced grip strength, and increased beam walking time, as well as impaired spatial memory in the MWMT. The BWT is a behavioral test assessing motor coordination after a stroke, and the CT and FGFT are behavioral tests assessing subjects for motor impairments [4,27,28]. Impairment of neurological and cognitive functions was observed in our PSD rat model, and treatment with JY significantly decreased these impairments, including improving grip force and decreasing asymmetry and beam crossing time. Lesions in the hippocampus are known to disrupt memory and spatial awareness [17]. It was shown that treatment with JY could improve the disrupted spatial memory in PSD rats.
Pathophysiology of PSD is complex and multifactorial, resulting from the combination of ischaemia-induced neurobiological dysfunctions and psychosocial distress [29]. An integrated view of the etiopathology of PSD posits the interlinking between monoamines, neuroinflammation, the HPA axis, and neurogenesis as the common denominator [30,31]. The central monoamine hypothesis proposes that imbalances in 5-HT, NE, and DA could result in depression [32]. Axons between the brainstem and cerebral cortex which contain these neurotransmitters can be disrupted during PSD, thus leading to imbalances in 5-HT, NE, and DA synthesis throughout the brain [33,34]. In this study, neurotransmitters such as 5-HT, NE, and DA were assessed in the prefrontal cortex, hippocampus, hypothalamus, and striatum of rats, which showed that the levels of these neurotransmitters were suppressed by CUMS exposure, consistent with previous studies. However, treatment with JY upregulated the levels of these neurotransmitters. Certain 5-HT1AR ligands are currently used as a means of treating depression, such as vilazodone and vortioxetine (SSRIs and partial 5-HT1AR agonists), or generalized anxiety disorder, such as buspirone (a partial 5-HT1AR agonist) [26]. Regulating ADRα2, associated with stress-dependent depression, may also improve treatment of a range of neuropsychiatric disorders [35]. We observed abnormal expression of 5-HT1AR and ADRα2 in the prefrontal cortex and hippocampus of PSD rats compared to CON rats, which could also be reversed by treatment with JY. Depression is known to be linked with hyperfunctionality of the HPA axis [36], which primarily regulates physiological and psychological reactions to environmental changes [37]. Elevated stress hormone levels can be released as a result of chronic stress [38]. Impaired negative feedback in the HPA axis can cause levels of these stress hormones to continuously climb in a manner correlated with 5-HT system activity [39]. In this study, the levels of serum CORT and ACTH were significantly increased in PSD rats. Hippocampal structure, acquisition of memory, and regulation of emotion can be affected by glucocorticoids [38]. We found that treatment with JY decreased the levels of CORT and ACTH in the serum of PSD rats, consistent with reduced HPA axis hyperfunctionality. Glucocorticoids also have the potential to influence the structure of the hippocampus, as well as memory acquisition and emotional regulation [40]. Thus, JY could regulate the functions of memory and emotion through the HPA axis. GR is also most highly expressed in the hippocampus and can therein negatively regulate the HPA axis [41]. Altered HPA axis functionality is associated with both increased CORT levels and decreased GR expression [42]. Chronic stress in animals is also known to reduce the plasticity and long-term potentiation of CA1 hippocampal neurons in a GR-dependent fashion, resulting in impairments to both learning and adaptation [43]. We found that CUMS treatment significantly decreased GR expression in the hippocampus of PSD rats, which could be reversed by JY treatment.
Increased levels of cytokines are also known to be linked to the pathophysiology of PSD [43]. Cerebral ischaemia leads to increased production of proinflammatory IL-1β and TNF-α, which can further result in decreased 5-HT production, additionally promoting depression [44,45]. These cytokines can modulate 5-HT metabolism and HPA axis functionality, further modulating the pathophysiology of depression [46]. Reducing CORT levels and inflammation can enhance the synthesis of 5-HT [47]. We observed increased levels of TNF-α and IL-1β in the serum, prefrontal cortex, and hippocampus of PSD rats, in addition to elevated CORT levels, and these elevations were attenuated by JY treatment. That is to say, JY treatment could significantly inhibit neuroinflammation in the PSD rat model. Moreover, we examined the expression of NF-κB and IκB-α, which regulate inflammatory cytokine production, and found that JY was able to inhibit neuroinflammation in the prefrontal cortex and hippocampus of PSD rats through the inhibition of NF-κB/IκB signaling.
Newly formed neurons in the dentate gyrus can be detected as an increase in BrdU+/NeuN+ cells [48]. In this study, the number of BrdU+/NeuN+ cells was decreased in PSD rats, while treatment with JY was associated with a significant increase of BrdU+/NeuN+ cells in the dentate gyrus of the hippocampus.
Conclusion
Our results indicate that JY exerts marked antidepressant effects and improves neurological and cognitive functions in the PSD rat model. These beneficial effects of JY may involve the modulation of the levels of neurotransmitters and their receptors, the restoration of HPA axis function, the inhibition of neuroinflammation, and the improvement of neurogenesis in the hippocampus.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
All authors declare that they have no conflicts of interest.
Authors' Contributions
Yuan Du contributed to data curation, formal analysis, and methodology and wrote the original draft. Jian Ruan contributed to data curation and methodology. Leiming Zhang wrote, reviewed, and edited the manuscript. Fenghua Fu was responsible for conceptualization, funding acquisition, and supervision, in addition to manuscript writing, reviewing, and editing.
Physics-Informed Neural Networks for Cardiac Activation Mapping
A critical procedure in diagnosing atrial fibrillation is the creation of electro-anatomic activation maps. Current methods generate these mappings from interpolation using a few sparse data points recorded inside the atria; they neither include prior knowledge of the underlying physics nor uncertainty of these recordings. Here we propose a physics-informed neural network for cardiac activation mapping that accounts for the underlying wave propagation dynamics and we quantify the epistemic uncertainty associated with these predictions. These uncertainty estimates not only allow us to quantify the predictive error of the neural network, but also help to reduce it by judiciously selecting new informative measurement locations via active learning. We illustrate the potential of our approach using a synthetic benchmark problem and a personalized electrophysiology model of the left atrium. We show that our new method outperforms linear interpolation and Gaussian process regression for the benchmark problem and linear interpolation at clinical densities for the left atrium. In both cases, the active learning algorithm achieves lower error levels than random allocation. Our findings open the door toward physics-based electro-anatomic mapping with the ultimate goals to reduce procedural time and improve diagnostic predictability for patients affected by atrial fibrillation. Open source code is available at https://github.com/fsahli/EikonalNet.
INTRODUCTION
Atrial fibrillation is the most common arrhythmia in the heart, affecting between 2.7 and 6.1 million people in the United States alone [1]. A standard procedure to diagnose and treat atrial fibrillation is the acquisition of electrical activation maps, where a catheter is inserted into the cardiac chamber and the electrode at the tip records the activation time of the tissue at a given location. This process is repeated at multiple sites to cover the entire atrium. Finally, these measurements are interpolated to create a complete electro-anatomic map of the chamber [2]. The most common approach to interpolate the data is to use linear functions [2,3] or radial basis functions [4]. There is a recent focus on incorporating the uncertainty associated with these maps [3,5]. This is relevant since there are multiple sources of noise that can pollute the activation map, such as the position of the electrode and the difficulty of determining the activation time from an electrical signal. There is also uncertainty in the interpolation method that is used, which can be relevant particularly in regions of low data density. However, most current techniques [2,3] ignore the underlying physics of the electrical wave propagation [6]. This can result in unrealistic interpolations with artificially high conduction velocities. The strategies that do include the physics assume either a constant conduction velocity field [7] or a fixed number of activation sources [8]. Additionally, there are no recommendations on the best strategy to acquire new measurements toward reducing the procedure time and improving its accuracy.
Deep learning has revolutionized many fields in engineering and the medical sciences. However, in the context of activation mapping, we have to rely on a few sparse activation measurements. To address the limitations of deep learning associated with sparse data, recent techniques have emerged to incorporate the underlying physics, as governed by partial differential equations, into neural networks [9][10][11][12][13]. This powerful framework allows one to train a neural network that simultaneously approximates the data and conforms to the partial differential equations that represent the physical knowledge of the system [14], a concept known as physics-informed neural networks.
Here, we propose to use a physics-informed neural network to create activation maps of the atria [15], more accurately and efficiently than with linear interpolation alone. We incorporate the physical knowledge using the Eikonal equation, which describes the behavior of the activation times for a conduction velocity field. We estimate the uncertainty in our predictions using randomized prior functions [16]. In addition, we take advantage of these estimates to create an active learning algorithm that, for a given set of initial measurements, recommends the location of the next measurement to systematically reduce the model error [17]. We highlight the advantages of this method for both a two-dimensional benchmark problem and a three-dimensional personalized geometry of the left atrium.
This manuscript is organized as follows: In section 2 we introduce the physics-informed neural network, the method to estimate uncertainties and the active learning algorithm. In section 3, we show two numerical experiments to test the accuracy of the method. We end this manuscript with a discussion and future directions in section 4.
METHODS
In this section we introduce a physics-informed neural network [9,18] to interpolate activation times for cardiac electrophysiology. We also present the methodology to estimate uncertainties and an active learning algorithm to efficiently sample the data acquisition space.
Physics Informed Neural Network for Activation Times
The electrical activation map of the heart can be related to a traveling wave, where the wavefront represents the location of cells that are depolarizing [19]. The time at which cells depolarize is referred to as the activation time and corresponds to an increase in transmembrane potential above a certain threshold and the initiation of the cell contraction. The activation times of the traveling wave must satisfy the Eikonal equation:

V(x) ‖∇T(x)‖ = 1, (1)

where T(x) is the activation time at a point x and V(x) represents the local speed of the wave at the same location, which is referred to as the conduction velocity. We can rewrite (1) in residual form as

R(x) = V(x) ‖∇T(x)‖ − 1. (2)

Further, we approximate both the activation time T(x) and conduction velocity V(x) by

T(x) ≈ NN_T(x; θ_T), V(x) ≈ NN_V(x; θ_V), (3)

where NN_T and NN_V are neural networks with parameters θ_T and θ_V, respectively, that need to be trained in order to obtain a good approximation. Since the conduction velocity is strictly positive and is bounded in a physiological range, we pass the output of the last layer through a sigmoid function σ, so that the conduction velocity neural network reads

V(x) = V_max σ(NN_V(x; θ_V)), (4)

where V_max represents the maximum conduction velocity, specified by the user. Finally, we define a loss function to train our model:

L(θ_T, θ_V) = (1/N_T) Σ_{i=1}^{N_T} [NN_T(x_i; θ_T) − T̂_i]² + (1/N_R) Σ_{i=1}^{N_R} R(x_i)² + α_TV (1/N_R) Σ_{i=1}^{N_R} ‖∇V(x_i)‖ + α_L2 ‖θ_T‖². (5)

Figure 1 illustrates the first two terms of the loss function: the first term enforces that the output of the neural network coincides with the N_T available activation time measurements T̂_i; the second term enforces that the output of the networks satisfies the Eikonal equation at N_R collocation points. The third and fourth terms serve as regularization for the inverse problem. The third term, which we evaluate at the N_R collocation points, is a total variation regularization for the conduction velocity, which allows discrete jumps in the solution. We select this term to model slow regions of conduction, such as fibrotic patches. Finally, we use L2 regularization on the weights of the activation time neural network, for reasons we explain in the following section.

FIGURE 1 | Physics-informed neural networks for activation mapping. We use two neural networks to approximate the activation time T and the conduction velocity V. We train the networks with a loss function that accounts for the similarity between the output of the network and the data, the physics of the problem using the Eikonal equation, and the regularization terms.

We solve the following minimization problem to train the neural networks and find the optimal parameters:

θ_T*, θ_V* = arg min_{θ_T, θ_V} L(θ_T, θ_V). (6)
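As a concrete illustration of how such a loss can be assembled, the sketch below writes the terms of Eq. (5) in TensorFlow 2. It is only a minimal sketch under our own assumptions (network sizes, tanh activations, and the hyperparameter values are placeholders), not the authors' released implementation.

```python
import tensorflow as tf

def make_mlp(n_out=1, width=20, depth=5):
    """Small fully connected network; sizes are illustrative placeholders."""
    layers = [tf.keras.layers.Dense(width, activation="tanh") for _ in range(depth)]
    layers.append(tf.keras.layers.Dense(n_out))
    return tf.keras.Sequential(layers)

nn_T, nn_V = make_mlp(), make_mlp(width=5)
V_max, alpha_tv, alpha_l2 = 1.0, 1e-7, 1e-9   # placeholder hyperparameters

def loss_fn(x_data, t_data, x_coll):
    """x_data, t_data, x_coll: float32 tensors (measurements and collocation points)."""
    # Data term: activation-time network vs. measured activation times.
    data_term = tf.reduce_mean(tf.square(nn_T(x_data) - t_data))
    # Physics term: Eikonal residual V * ||grad T|| - 1 at collocation points.
    with tf.GradientTape(persistent=True) as tape:
        tape.watch(x_coll)
        T = nn_T(x_coll)
        V = V_max * tf.sigmoid(nn_V(x_coll))
    grad_T = tape.gradient(T, x_coll)
    grad_V = tape.gradient(V, x_coll)
    residual = V * tf.norm(grad_T, axis=1, keepdims=True) - 1.0
    physics_term = tf.reduce_mean(tf.square(residual))
    # Total-variation regularization on V and L2 regularization on the T-network weights.
    tv_term = tf.reduce_mean(tf.norm(grad_V, axis=1))
    l2_term = tf.add_n([tf.reduce_sum(tf.square(w)) for w in nn_T.trainable_variables])
    return data_term + physics_term + alpha_tv * tv_term + alpha_l2 * l2_term
```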
Uncertainty Quantification
We will be interested in quantifying the uncertainty in our predictions, both to inform physicians about the quality of the estimates and to use active learning techniques to judiciously acquire new measurements. Given the large number of parameters in neural networks, using gold-standard methods for Bayesian inference, such as Markov Chain Monte Carlo, is prohibitively expensive. Instead, we borrow ideas from deep reinforcement learning and use randomized prior functions [16] to quantify the epistemic/model uncertainty associated with our neural network predictions. The key idea is to introduce an additional neural network with the same architecture, such that the prediction becomes the sum of a trainable network and a fixed prior network, NN(x; θ) + NN(x; θ̃), for both the activation time and the conduction velocity. We draw the parameters θ̃_T, θ̃_V from a prior distribution and keep them fixed during the training process. In this approach, a mean squared loss is equivalent to a normal likelihood for the data, and the L2 regularization is equivalent to a zero-mean normal prior for the neural network parameters. It can be shown that training the parameters to minimize the loss is equivalent to generating samples from a posterior distribution p(θ|D) when using a linear regressor [16]. To account for uncertainty in our predictions, we use an ensemble of neural networks initialized with different prior functions defined by the parameters θ̃_T, θ̃_V, which we randomly sample with Glorot initialization [20]. Additionally, we perturb our data with Gaussian noise with variance σ²_N to train each network of the ensemble with a slightly different dataset. Our final prediction is obtained as the mean output of the ensemble of neural networks.
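A minimal sketch of this construction is shown below, assuming small fully connected networks and a noise level σ_N = 0.01 purely for illustration; it mirrors the randomized-prior idea (trainable network plus a frozen prior network) but is not the authors' code.

```python
import numpy as np
import tensorflow as tf

def mlp(width=20, depth=5):
    return tf.keras.Sequential(
        [tf.keras.layers.Dense(width, activation="tanh") for _ in range(depth)]
        + [tf.keras.layers.Dense(1)])

def make_member(t_data, sigma_n=0.01):
    """One ensemble member: a trainable network plus a frozen prior network."""
    trainable, prior = mlp(), mlp()
    prior.trainable = False                       # prior weights stay fixed during training
    noisy_t = t_data + sigma_n * np.random.randn(*np.shape(t_data))  # member-specific data copy

    def predict(x):
        return trainable(x) + prior(x)            # randomized-prior prediction
    return predict, trainable, noisy_t

# Training each member on its own noisy copy of the data and averaging the ensemble
# outputs gives the final prediction; the spread across members is the
# epistemic-uncertainty estimate used later for active learning.
```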
Active Learning
We take advantage of the uncertainty estimates described in the previous section to create an adaptive sampling strategy. We start with a small number of randomly located samples, fit our model, and then acquire the next measurement at the point that we estimate will reduce the predictive model error the most. We iteratively fit the model and acquire samples until we reach a user-defined convergence criterion or until we exceed our budget or time to acquire new data. Since the exact predictive error is unknown, there are several heuristics to determine where to place the next sample. A common approach is to select the location where the uncertainty is the highest, which we can quantify by evaluating the entropy of the posterior distribution p(T|D) [21]. We can view the entropy as negative information and, in the case of a Gaussian posterior, it is a monotonic function of the variance, which has also been proposed for active learning [22]. In our computational experiments, we observe that the predictive posterior distribution p(T|D) is generally not Gaussian, and we opt to use a nonparametric estimator for the entropy [23,24]. This non-Gaussianity is likely induced by the discrete jumps in conduction velocity that result from the total variation regularization term. Algorithm 1 summarizes the procedure. Since the initial predictions will be inaccurate due to the lack of data, it is not necessary to train the neural network completely to obtain the uncertainty estimates and start the active learning process. We can train the model and acquire data in parallel, as the prediction step and the entropy computation are of negligible computational cost. Here, we iteratively refine the predictions and the uncertainty estimates as more data become available and the model is trained.
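To make the acquisition criterion concrete, the snippet below scores each candidate location by a nonparametric entropy estimate of its ensemble predictions and returns the maximizer. SciPy's sample-spacing estimator (available in SciPy ≥ 1.6) is used purely as one convenient choice; it is not necessarily the estimator of references [23, 24], and the array layout is our own assumption.

```python
import numpy as np
from scipy.stats import differential_entropy

def next_measurement(ensemble_predictions, candidate_ids):
    """ensemble_predictions: array of shape (n_networks, n_candidates) holding the
    predicted activation time at each candidate location for every ensemble member."""
    entropies = np.array([differential_entropy(ensemble_predictions[:, j])
                          for j in candidate_ids])
    return candidate_ids[int(np.argmax(entropies))]
```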
Algorithm 1: Active learning algorithm to iteratively identify the most efficient sampling points
Given: number of initial samples N_init, number of active learning samples N_AL, set of candidate locations X_cand, number of initial training iterations M_init, number of active learning training iterations M_AL, and empty sets X and T that contain locations and activation times:
Randomly select N_init samples from X_cand
Remove the N_init samples from X_cand and add them to X
Acquire the values of the activation times at the N_init locations and add them to T
Initialize the model and train it using the ADAM optimizer [25] for M_init iterations
for each of the N_AL active learning steps do
  find the new location of maximum entropy: x = arg max_{x ∈ X_cand} H(x)
  remove x from X_cand and add it to X
  acquire the activation time at x and add it to T
  train the model using ADAM [25] for M_AL iterations
end for
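The same procedure can be written compactly in Python, as below. The helper callables `train`, `predict_ensemble`, `measure`, and `entropy` are placeholders standing in for model training, ensemble prediction, the catheter measurement, and the entropy estimate; they are not functions from the released code.

```python
import numpy as np

def active_learning(x_cand, n_init, n_al, m_init, m_al,
                    train, predict_ensemble, measure, entropy, seed=0):
    rng = np.random.default_rng(seed)
    cand = list(range(len(x_cand)))
    chosen = list(rng.choice(cand, size=n_init, replace=False))
    cand = [i for i in cand if i not in chosen]
    times = {i: measure(x_cand[i]) for i in chosen}
    train(chosen, times, m_init)                       # initial training (M_init iterations)
    for _ in range(n_al):
        samples = predict_ensemble(x_cand[cand])       # shape (n_networks, len(cand))
        scores = [entropy(samples[:, j]) for j in range(len(cand))]
        new = cand.pop(int(np.argmax(scores)))         # location of maximum entropy
        chosen.append(new)
        times[new] = measure(x_cand[new])
        train(chosen, times, m_al)                     # continue training (M_AL iterations)
    return chosen, times
```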
Application to Surfaces From Electro-Anatomic Mapping
During electro-anatomic mapping, data can only be acquired on the cardiac surface, either of the ventricles or the atria. We thus represent the resulting map as a surface in three dimensions and neglect the thickness of the atrial wall. This is a reasonable assumption since the thickness-to-diameter ratio of the atria is on the order of 0.05. Our assumption implies that the electrical wave can only travel along the surface and not perpendicular to it. To account for this constraint, we include an additional loss term that penalizes the component of the activation time gradient along the surface normal,

L_N = α_N (1/N_R) Σ_{i=1}^{N_R} [∇T(x_i) · N_i]².

This form favors solutions where the activation time gradients are orthogonal to the surface normals N_i. To implement this constraint, we assume a triangular discretization of either the left or right atrium, which we obtain from magnetic resonance imaging or computed-tomography imaging. We can then compute a surface normal N_i for each triangle. We define the N_R collocation points as the centroids of the triangles in the mesh, x_i. We enforce this constraint weakly through the factor α_N. If the gradient of the activation times were forced to be exactly orthogonal to the triangle normals, it would force a linear interpolation between mesh nodes in the neural network, which is not desirable and unlikely attainable.
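A small sketch of the geometric ingredients follows: centroids and unit normals computed from a triangular mesh, and the weak penalty on the normal component of ∇T. Variable names and the mean-squared form of the penalty are our assumptions for illustration.

```python
import numpy as np

def centroids_and_normals(verts, faces):
    """verts: (n_vertices, 3) array; faces: (n_triangles, 3) integer array."""
    tri = verts[faces]                                          # (n_tri, 3, 3)
    centroids = tri.mean(axis=1)
    normals = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)   # unit normals
    return centroids, normals

def normal_penalty(grad_T, normals, alpha_n):
    """Weak penalty on the component of the activation-time gradient along the normal."""
    return alpha_n * np.mean(np.sum(grad_T * normals, axis=1) ** 2)
```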
Implementation and Training
We implement all models in Tensorflow [26] and use the Tensorflow ADAM optimizer [25] with default parameters and a learning rate of 0.001. For the N R collocation points, we use a minibatch implementation, in which we use a subset of all available collocation points to compute the loss function and its gradient. For the two-dimensional benchmark problem, we randomly sample N mb points using a Latin hypercube design [27] and use them as collocation points. For the three-dimensional left atrium, we shuffle the order of the triangles in the mesh and divide them into batches of size N mb . For each iteration, we use the centroid locations of the triangles of one of these batches as collocation points and loop through them as the optimization progresses.
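The two collocation-point strategies described above can be sketched as follows; the batch size, random seeds, and the unit-square domain are illustrative assumptions, and these helpers do not come from the released code.

```python
import numpy as np
from scipy.stats import qmc

def lhs_collocation(n_points, seed=0):
    """Latin hypercube collocation points in the unit square (2D benchmark)."""
    return qmc.LatinHypercube(d=2, seed=seed).random(n=n_points)

def triangle_batches(centroids, batch_size, seed=0):
    """Shuffle mesh-triangle centroids and yield mini-batches (3D atrium)."""
    order = np.random.default_rng(seed).permutation(len(centroids))
    for start in range(0, len(order), batch_size):
        yield centroids[order[start:start + batch_size]]
```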
NUMERICAL EXPERIMENTS
In this section, we explore our method using a two-dimensional benchmark problem and a three-dimensional personalized left atrium. We also quantify the effectiveness of the active learning algorithm.
Benchmark Problem
To characterize the performance of the proposed model, we design a synthetic benchmark problem that analytically satisfies the Eikonal equation. We introduce a discontinuity in the conduction velocity and a collision of two wavefronts in the following form: T(x, y) = min(…), with x, y ∈ [0, 1]. Figure 2, left, illustrates the exact mapping of the activation times and conduction velocity. We generate N = 50 samples with a Latin hypercube design and train our model. We only have data on the activation times, and we predict both the activation times and the conduction velocity. We use 5 hidden layers with 20 neurons each for the activation time neural network and 5 hidden layers with 5 neurons each for the conduction velocity neural network. We perform a sensitivity study for α_TV and α_L2 and then set them to α_TV = 10⁻⁷ and α_L2 = 10⁻⁹ while keeping all other parameters fixed. We train the network for 50,000 ADAM iterations with a batch size N_mb = 100 and then train with the L-BFGS method [28]. We compare our method against three other methodologies: a neural network with the same architecture and parameters except without including the physics, linear interpolation [2], and Gaussian process regression [3,29]. In the neural network without physics, we compute the conduction velocity analytically as V = 1/‖∇T‖. In the linear interpolation case, we use the scatteredInterpolant function from MATLAB with linear extrapolation [2]. We compute the conduction velocity by approximating the gradient of the activation time with finite differences on a regular grid across the domain. Then, we calculate the conduction velocity as V = 1/‖∇T‖. For the Gaussian process regression, we use our open-source implementation [30] with a squared exponential kernel and automatic relevance determination. We compute the gradient of the resulting Gaussian process analytically and use this value to compute the conduction velocity. We use the root mean squared error (RMSE) for the activation times and the mean absolute error (MAE) for the conduction velocity. We make this distinction to avoid the artificially high errors that the root mean squared error would report near the discontinuity of the conduction velocity. Figures 2, 3 and Table 1 compare the results of the different methods against the exact solution. Qualitatively and quantitatively, the physics-informed neural network presents the best results for activation times and conduction velocity. It captures the wavefront collision and detects the two distinct regions of conduction velocity. The discontinuity in the conduction velocity is smoother in this method than in the data. Nonetheless, the neural networks outperform the other methods, which show problematic representations of the conduction velocity. Since the neural network without physics and the Gaussian process regression are smooth, they are not capable of reproducing the discontinuity in the gradient that occurs when two wavefronts collide. This inevitably results in a region of zero activation time gradient, which results in an infinite conduction velocity. The linear interpolation suffers from a similar problem, although it will depend heavily on the location of the data. Figure 3 illustrates the numerical artifacts of these two methods compared to the Gaussian process regression by means of the solution along the x = y line.
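For reference, recovering the conduction velocity from a gridded activation-time map and computing the two error metrics can be sketched as below; the grid layout (rows along y, columns along x) is an assumption, and near wavefront collisions the gradient can vanish, which produces the artificially high velocities discussed above.

```python
import numpy as np

def conduction_velocity(T, dx, dy):
    """T: activation times on a regular grid (rows along y, columns along x)."""
    grad_y, grad_x = np.gradient(T, dy, dx)        # grid spacings given per axis
    grad_norm = np.sqrt(grad_x ** 2 + grad_y ** 2)
    return 1.0 / grad_norm                          # diverges where the gradient vanishes

def rmse(pred, true):
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(true)) ** 2))

def mae(pred, true):
    return np.mean(np.abs(np.asarray(pred) - np.asarray(true)))
```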
To conclude, we evaluate the robustness of all four methods to noise. We introduce Gaussian noise with a standard deviation of 1, 5, and 10% of the maximum value of the activation time and run all methods 30 times with the same datasets. In this case, we train the physics-informed neural network and the regular neural network for 100,000 ADAM iterations. Table 1 summarizes the results. We can see that the physics-informed neural network outperforms the other methods except for the 5% noise case in the activation times. For the conduction velocity, the physics-informed neural network performs better in all cases by a large margin. Gaussian process regression is as robust to noise as our approach, with similar levels of error for the activation times.
TABLE 1 | Performance of the physics-informed neural network (PINN), the neural network without physics, Gaussian process regression, and linear interpolation in the presence of noise. The root mean squared error (RMSE) for the activation times is normalized by 1 ms and the mean absolute error (MAE) is normalized by 1 m/s. Errors are presented as mean and range.
Remarkably, adding physics to the neural network reduces the error in both the activation time and the conduction velocity.
Uncertainty Quantification
To study the accuracy of our uncertainty estimates, we vary the number of neural networks trained in our randomized prior ensemble. We set the noise parameter to σ_N = 0.01, based on reported uncertainties [3]. We compute the entropy for a case with 30 samples from a Latin hypercube design and 5, 10, 30, 50, and 100 neural networks. We train the networks for 50,000 ADAM iterations with a mini-batch size of N_mb = 96. We define the entropy estimated by 100 neural networks as our ground truth and compute the root mean square error for the remaining cases. We also compute the time it takes to train the networks. Figure 4 summarizes the results of this study. We observe a trade-off between accuracy and cost: the error reduction rate decreases when more than 30 neural networks are used; however, we observe a linear trend in the time it takes to train multiple networks. Yet, the time it takes to train 100 networks is only four times that needed to train a single network. The baseline wall-clock time to train one network was 404 s using a laptop with 8 CPUs. Combining these observations, we set the number of neural networks used to estimate the uncertainty to 30. We also test the hypothesis that the maximum entropy is co-located with the predictive error of the model. If this is true, the active learning approach will be effective, since placing a new sample at the point of maximum entropy will reduce the error. We run the two-dimensional example with the same parameters specified in the previous sections with 30 samples drawn from a Latin hypercube design and train 30 neural networks in parallel. Figure 5 reveals that regions of high entropy are co-located with regions of high error and that the point of maximum entropy is close to the point of maximum error.
Active Learning
To test the effectiveness of the proposed active learning method, we train models with 30 different initial conditions. We start with N_init = 10 samples and follow Algorithm 1, acquiring N_AL = 40 additional samples. We also train 30 models where N samples are placed using a Latin hypercube design for N = {20, 30, 40, 50}. Figure 6 summarizes the performance of our active learning algorithm. We observe that the active learning strategy quickly reduces the error until a total of 20 samples is obtained. Then, the error reduction is slower and reaches an asymptote. However, when we compare it to the Latin hypercube design, we see that the error is smaller in all cases. We test this hypothesis with the Mann-Whitney test [31] and obtain a significant difference for all cases, both in activation time (p < 10⁻⁷) and in conduction velocity (p < 0.015). The error in the conduction velocity is higher than for the activation time in all cases. This difference may be explained by the difficulty of capturing the discontinuity of the conduction velocity prescribed in the example. A small difference in where the different conduction velocity regions are identified can cause a large error.
Left Atrium
To test our model in three dimensions, we study the electrophysiology of the left atrium. We obtain the mesh from one of the examples of the MICCAI challenge dataset [32], created from magnetic resonance imaging. We use the monodomain model for the tissue and the Fenton-Karma model for the cells under the MLR1 conditions [33]. We use the open-source software cbcbeat [34] to perform the simulation. We consider two cases, one where the conductivity is homogeneous at 0.1 mm²/ms in the entire domain and one where it is heterogeneous such that half of the domain has a reduced conductivity of 0.05 mm²/ms. In both cases, we initiate the activation at the center of the septum. We define the activation time as the time at which the transmembrane potential reaches 0 mV. We then compute the conduction velocity as V = 1/‖∇T‖, and approximate the gradient of the activation time with a finite-element approximation constructed on the triangular mesh. For the neural network, we use the same parameters as before, except that we now use seven layers of 20 neurons for the activation time network and a maximum conduction velocity of V_max = 1 m/s. We consider two experiments.
In the first experiment, we set the number of samples by the optimal density [2], which corresponds to 1.05 samples/cm². We randomly choose nodes in the mesh and use their activation times as data points. We repeat this process 30 times and compare the error to that of a linear interpolation computed with the same data, as detailed in section 3.1, using a Wilcoxon test [35]. Figure 7, left, shows that the error for the physics-informed neural network is significantly lower than for the linear interpolation (p < 10⁻⁵) for both the homogeneous constant conductivity case (median 1.53, range 1.34-1.85 ms vs. median 3.92, range 3.42-5.34 ms) and the heterogeneous case with reduced conductivity in half of the domain (median 2.23, range 1.84-2.84 ms vs. median 4.77, range 3.90-6.02 ms).
In the second experiment, we explore the performance of the active learning algorithm by running 30 cases that start with N_init = 10 randomly selected samples, which corresponds to a density of 0.081 samples/cm². We acquire samples until we reach 90 measurements, which corresponds to a density of 0.734 samples/cm². Figure 7 shows the accuracy of the method, and Figures 8, 9 illustrate the evolution of the active learning algorithm. We observe that in both cases, homogeneous conductivity and reduced conductivity in half of the domain, the active learning algorithm reduces the error and converges to the same value, irrespective of the initial conditions. This is reflected in the small range of errors in activation times at 0.73 samples/cm² for the homogeneous case (1.01-1.88 ms) and the heterogeneous case (1.38-2.28 ms). The mean absolute errors in conduction velocity are relatively high for both cases (constant: median 0.086, range 0.083-0.101 m/s; half: median 0.085, range 0.081-0.095 m/s). This can be explained, at least in part, by the difficulty of computing conduction velocities from the cardiac electrophysiology simulation. As Figures 8, 9 show, there are regions of artificially high conduction velocities, especially when two wavefronts collide, which may bias the reported error. Nonetheless, the method can delineate the two regions of conduction velocity in the heterogeneous case, as Figure 9 confirms. We also compute the density required by the active learning algorithm to achieve the same median error in activation time as the linear interpolation at the optimal density of 1.05 samples/cm², which is 3.92 ms in the constant conductivity case and 4.77 ms in the reduced conductivity case. The results show that a density of median 0.20, range 0.18-0.26 samples/cm² is needed for the homogeneous case and a density of median 0.24, range 0.16-0.30 samples/cm² for the heterogeneous case. Finally, we compare the performance of the active learning algorithm against randomly choosing samples: Figure 7, bottom center and right panels, shows that the active learning algorithm significantly reduces the error in activation times (p < 0.002).
DISCUSSION
In this work, we present a novel framework to create activation maps in cardiac electrophysiology by incorporating prior physical knowledge of the underlying wave propagation. To this end, we employ neural networks to represent the spatial distribution of activation times and conduction velocity, subject to physics-informed regularization prescribed by the Eikonal equation. In particular, we show that our method is able to capture the collision of wavefronts, a situation that is not captured by other state-of-the-art methods. This is a critical step toward reliably estimating conduction velocities and avoiding artificially high conduction velocity values. Our methodology directly predicts conduction velocities, without the need to create ad hoc techniques [36]. Further, it allows us to quantify the uncertainty in our predictions via randomized prior functions, which represents a useful tool in the clinical setting [3]. Notably, this uncertainty quantification comes at small computational cost: we can train 30 networks in parallel, increasing the training time to only 1.75× the time needed to train one network. The uncertainty estimates become the cornerstone of our active learning algorithm, which reduces the predictive error in both two- and three-dimensional cases. With this algorithm, we need fewer samples to achieve the same accuracy as random allocation. This implies that the activation time sampling procedure can be significantly shortened. In other words, for the same procedure time, the clinician can obtain more accurate estimates of the activation times and conduction velocity. In the way we designed the active learning algorithm, we can simultaneously train the neural networks and predict points for active learning, which makes the real-time application of this methodology feasible in a clinical setting. In our experiments, we observed that 1 min of training per sample on a laptop with 8 cores was more than enough to obtain accurate results. In the future, we could easily accelerate this process by using graphics processing units or other dedicated hardware to train neural networks. If this is still not sufficient and samples arrive faster than the training speed, we could extend the algorithm to make multiple recommendations for sample locations or simply gather more samples randomly in the vicinity of the current recommendation. Even though our method is computationally more expensive than other alternatives, the gains in accuracy could lead to a reduction in procedural times for the patient, which outweighs the cost of training the model. Our methodology displays remarkable consistency and robustness, achieving similar error levels irrespective of the initial set of samples.
Even though our method shows promising results when compared to existing solutions, it displays some limitations. First, we have ignored the anisotropy in conduction of cardiac tissue [19,37]. However, we could easily use the anisotropic Eikonal equation in our loss and estimate fiber and cross-fiber conduction velocities. This would require information on the fiber orientations in the atria and ventricles. There are several methodologies to incorporate this information, including rule-based approaches [38,39] and mapping techniques [40,41]. On the estimation of uncertainties, we see two limitations: first, we have not included the noise that is generated by the acquisition of the activation times with the electrode. We can incorporate this source of uncertainty by estimating some variance in the activation times [3] and including it in the Gaussian perturbation σ_N that we use in the randomized prior functions. Second, our uncertainty estimates are only approximations, since the true uncertainties depend on the geodesic distance between points on the manifold and not on the Euclidean distance in R³, which we have used to parametrize this problem [3,42]. We can address this limitation by using more complex architectures, such as convolutional neural networks on graphs [43], which we plan to explore in the future. However, empirically, we see that active learning works well with this approximation. Finally, we have only tested our method with synthetic data, and additional challenges could arise when applying it to real clinical data. We expect that the method will perform well for focal activations and macro re-entry tachycardia [2]. For localized re-entry or fibrillation, however, we expect that the method will not work, as the conduction velocity of the spiral wave depends on both time and space and the Eikonal equation does not hold. As a next step, we plan to test this methodology in a cohort of patients with focal activations or macro re-entry tachycardia [15].
In summary, we have presented a new and efficient methodology to acquire and create activation maps in cardiac electrophysiology. We believe that our approach will enable faster and more accurate acquisitions that will benefit the diagnosis of patients with cardiac arrhythmias.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author. Source code is available at https://github.com/fsahli/EikonalNet.
Role of the Interplay Between the Internal and External Conditions in Invasive Behavior of Tumors
Tumor growth, which plays a central role in cancer evolution, depends on both the internal features of the cells, such as their ability for unlimited duplication, and the external conditions, e.g., supply of nutrients, as well as the dynamic interactions between the two. A stem cell theory of cancer has recently been developed that suggests the existence of a subpopulation of self-renewing tumor cells to be responsible for tumorigenesis, and is able to initiate metastatic spreading. The question of abundance of the cancer stem cells (CSCs) and its relation to tumor malignancy has, however, remained an unsolved problem and has been a subject of recent debates. In this paper we propose a novel model beyond the standard stochastic models of tumor development, in order to explore the effect of the density of the CSCs and oxygen on the tumor’s invasive behavior. The model identifies natural selection as the underlying process for complex morphology of tumors, which has been observed experimentally, and indicates that their invasive behavior depends on both the number of the CSCs and the oxygen density in the microenvironment. The interplay between the external and internal conditions may pave the way for a new cancer therapy.
It appears that such requirements are met for some internal cells, as well as for those on the border. The thickness of the active-cell region reaches 3 mm in some areas. The unit of proliferation activity at each site is the number of divisions in the previous 96 hours at that site, which can be a noninteger number because we consider proliferation as a continuous process.
The nutrient concentration is held fixed in the medium, and a CSC is placed at the central plaquette of the lattice. A lattice site is then randomly chosen, and the governing equation, Eq. (4), for the diffusion and consumption of the nutrient by the living cells (if any) is first solved numerically. If there are living cells at the chosen site (or plaquette), Eq. (1) for the internal energy is also solved numerically. If the internal energy of the cell exceeds the threshold u_p, the cell proliferates according to the rules summarized in Fig. 3 of the text. In the case of duplication of a CSC, Eq. (2) is solved numerically; otherwise, the CSC concentration is governed only by the first term of Eq. (2). After the first round of proliferation, the first generation of the cancerous cells is produced [the second term in Eq. (3)]. Each "time step" is defined by the completion of 201 × 201 (the total number of sites of the lattice) trials, i.e., every site on average has the chance to be updated at least once. By increasing the internal energy of the cancerous cells, the next generations may emerge, leading ultimately to the production of dead cells. Repeating the procedure over time, a tumor forms whose morphological properties are the main subject of the paper. The main parameters of the model are (i) p_s, the cancerous stem cell proliferation rate, and (ii) n, the nutrient density in the medium. The competition between the two parameters, representing the internal and external factors, can lead to completely different structures for the tumors, an issue that was addressed in the past. We believe that this might pave the way to devise an experiment to test the CSC hypothesis, as well as help control the tumor growth by controlling the external parameters.
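A high-level sketch of one such Monte Carlo "time step" is given below. The lattice representation, the cell object with an `energy` attribute, and the three helper functions standing in for the nutrient update (Eq. (4)), the internal-energy update (Eq. (1)), and the proliferation rules of Fig. 3 are placeholders of our own; the snippet only illustrates the control flow described in the text.

```python
import numpy as np

L = 201  # the lattice has 201 x 201 sites, as in the text

def time_step(cells, nutrient, p_s, u_p, rng,
              nutrient_update, energy_update, proliferate):
    """One Monte Carlo time step: L*L random-site trials, so every site is
    updated once on average. `cells[i, j]` is None for an empty site or an
    object with an `energy` attribute for a living cell."""
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nutrient_update(nutrient, i, j)          # diffusion + consumption, Eq. (4)
        cell = cells[i, j]
        if cell is None:
            continue
        energy_update(cell, nutrient[i, j])      # internal energy, Eq. (1)
        if cell.energy > u_p:
            proliferate(cells, i, j, p_s)        # duplication rules of Fig. 3
```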
Note that the main idea of the paper is based on many experimental observations in which a qualitative relation between the complexity of the morphology of tumors and their malignancy was reported. In this paper we have quantified the relation in terms of two possible main parameters that may play major roles in the tumor growth.
As emphasized in the paper, proliferation is not confined to a tumor's border, as was previously suggested [1, 2]. About 200 layers of cells on the border contribute to proliferative activity, which can hardly be considered "surface" growth.
The morphology of the tumors under the effect of various numbers of CSCs and various nutrient availabilities: in contrast to the previous studies [3][4][5][6][7], internal features of the tumors (in this case the number of the CSCs) can increase tumor malignancy. In addition to the fractal analysis, Figure 9 depicts the malignancy of tumors as a result of both the internal features and the external environment of the tumor.
Malignancy may be the result of distinct conditions. We chose the tumors with fractal dimension D_f ≈ 1.82 as malignant. This type of tumor can arise for various values of p_s and n; see Fig. 10.
Circularity is another method for classifying irregular shapes based on their space-filling feature. It is a measure of how close a shape is to a circle, and is defined by

Circularity(r) = (area filled by the shape between r and r + dr) / (area of the ring with radius r and thickness dr).

Based on previous studies, tumors with larger values of the probability p_s should have more regular shapes than those with lower values [8]. Our results indicate, however, a completely different behavior. In addition to the fractal analysis, circularity indicates that irregularities increase for each tumor during its growth, and for tumors with the same area, those with larger p_s have more irregular shapes. Circularity in tumors of different sizes, as well as for different tumors with the same mass, is shown in Fig. 13. We also present the cells' distribution in various tumors under various conditions. To test the various assumptions of the model and their effect on the main results, we carried out extensive simulations with various scenarios. In what follows we describe the results.
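As an illustration of how this measure can be evaluated on a simulated tumor, the sketch below computes the occupied fraction of thin rings around the tumor's center of mass on a binary lattice mask; the centering choice and the ring width are our assumptions, not details given in the paper.

```python
import numpy as np

def circularity(mask, dr=1.0):
    """mask: 2D boolean array of tumor sites; returns ring radii and occupied fractions."""
    cy, cx = np.argwhere(mask).mean(axis=0)          # tumor center of mass
    yy, xx = np.indices(mask.shape)
    r = np.hypot(yy - cy, xx - cx)
    radii = np.arange(0.0, r.max(), dr)
    fractions = []
    for r0 in radii:
        ring = (r >= r0) & (r < r0 + dr)
        fractions.append(mask[ring].sum() / max(ring.sum(), 1))  # filled area / ring area
    return radii, np.array(fractions)
```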
(a) Clearly, alternative boundary conditions and oxygen supply systems can be considered, and the model is fully capable of accommodating them. Thus, we carried out simulations in which we varied the density of oxygen at the perimeter of the circle with a radius of 1 cm. Figure 8 shows that the main results for the relation between D_f and p_s are preserved.
(b) Regarding the radius of the circles (see the text of the paper), we varied the size of the medium in which the tumor grows. The results are shown in Fig. 9, indicating that they do not depend on the radius of the circle.
(c) As for the structure of the oxygen supply system, we carried out simulations in which the same disc (circle) was considered, but instead of supplying the oxygen from the outside of the disc, we used a lattice of vessels, separated by 0.2 mm, that supplied the oxygen to the tumor. The density of oxygen at such units was then updated to 1 at a fixed rate. Equal numbers of such units with a random spatial distribution in the disc were used. We also simulated the case in which each unit directly acquires its oxygen. In all the cases, the other rules for nutrient evolution, such as oxygen diffusion, remained unchanged. But, as Fig. 10 shows, the main results remained unchanged.
(d) In the main simulations we used the reported value of the oxygen diffusion coefficient β [3, 10], but the results do not change for lower values of β. Figure 11 shows that our results do not depend on β.
(e) The mobility of different kinds of cells is not the same, of course. We assumed the diffusion coefficient of the cells to be the same, but as Fig. 12 indicates, the main results do not depend on the differences between the diffusion coefficients of various cells.
(f) Regarding the oxygen consumption rate α: the CSCs and CCs are assumed to have the same rate of oxygen consumption, but when we changed the rates for every kind of cell, the results remained unchanged, as Fig. 13 demonstrates.
(g) The CSCs and CCs are assumed to have the same internal energy threshold u_p for duplication, and equal rates of crossing the S, G2, and M phases in the cell cycle, R_m. But changing the proliferation activity of the cells does not alter the main results.
(h) We assume that the dead cells remain inactive in the medium. But even if we eliminate them after their death, the main results would be unchanged. This is shown in Fig. 15.
Giant amplification of Berezinskii-Kosterlitz-Thouless transition temperature in superconducting systems characterized by cooperative interplay of small-gapped valence and conduction bands
Two-dimensional superconductors and electron-hole superfluids in van der Waals heterostructures having tunable valence and conduction bands in the electronic spectrum are emerging as rich platforms to investigate novel quantum phases and topological phase transitions. In this work, by adopting a mean-field approach considering multiple-channel pairings and the Kosterlitz-Nelson criterion, we demonstrate giant amplifications of the Berezinskii-Kosterlitz-Thouless (BKT) transition temperature and a shrinking of the pseudogap for small energy separations between the conduction and valence bands and small density of carriers in the conduction band. The presence of the holes in the valence band, generated by intra-band and pair-exchange couplings, contributes constructively to the phase stiffness of the total system, adding up to the phase stiffness of the conduction band electrons that is boosted as well, due to the presence of the valence band electrons. This strong cooperative effect avoids the suppression of the BKT transition temperature for low density of carriers, that occurs in single-band superconductors where only the conduction band is present. Thus, we predict that in this regime, multi-band superconducting and superfluid systems with valence and conduction bands can exhibit much larger BKT critical temperatures with respect to single-band and single-condensate systems.
Introduction
Two-dimensional multi-band superconductivity can generate rather interesting physics [1], especially in the case of electronic systems with valence and conduction bands both participating in the superconducting condensate. In such configurations, a complex redistribution of electrons occurs between the valence and conduction bands, leading to new physics with respect to single-band superconductors, such as density-induced and band-selective BCS-BEC crossover [2], topological quantum phase transitions and hidden criticalities [3][4][5][29]. Furthermore, at finite temperature, this phenomenon is also responsible for the nonmonotonic behavior of the superconducting gaps, resulting in a superconducting-normal state reentrant transition [6].
A peculiar feature of two-dimensional systems regards the nature of the superconducting transition. Indeed, in thin films the transition to the superconducting state with decreasing temperature occurs in two stages: at first, a finite amplitude of the order parameter forms around the mean-field critical temperature T_MF but without ordering of the phase; after that, the true superconducting transition occurs with phase ordering at a lower temperature, which is expected to belong to the universality class of the BKT transition of the two-dimensional XY model [7][8][9]. In contrast to the power-law dependence of the coherence length ξ_c predicted by the Ginzburg-Landau theory [10,11], this transition is characterized by an exponential divergence at the BKT critical temperature. In order to observe the BKT phenomenon, a condition is that d << ξ_c, where d is the sample thickness, though some experiments have reported the BKT transition also outside the expected ranges [12]. In this regime, the transition to the normal state is driven by the vortex-antivortex dissociation instability, which is connected with the logarithmic dependence of the interaction energy on the separation between vortices. This leads to the discontinuous jump of the phase stiffness J from a finite value right below T_BKT to zero above it [13,14]. The value of J at the BKT critical temperature can be inferred via measurements of the London penetration depth or from the nonlinear exponent of the I-V characteristics [15,16]. The study of the BKT transition in valence and conduction band systems can be relevant for recently discovered 2D superconducting bilayer graphene systems [17,18], where the energy shift between the conduction and valence bands can be precisely tuned by an external electric field perpendicular to the graphene layers, and for electron-hole superfluid systems, such as the double-bilayer graphene (2BLG) [19], consisting of two conducting bilayer graphene sheets, one containing electrons and the other holes. The two bilayer sheets are separated by an insulating barrier in order to prevent tunneling between the sheets. This system can be studied by means of a BCS mean-field theory by performing a standard particle-hole transformation [20]. In this way, the intra-band couplings create Cooper pairs made up of a positively charged hole in the conduction/valence band of the p-doped bilayer and an electron in the conduction/valence band of the n-doped bilayer, while the pair-exchange couplings transfer electrons and holes involved in the formation of Cooper pairs from the conduction band to the valence band (and vice versa) of their respective graphene bilayer. Furthermore, materials where charge density wave (CDW) and superconducting orders coexist can be an interesting platform for studying the valence and conduction band configuration, such as underdoped cuprates [21][22][23][24] and transition metal dichalcogenides [25][26][27]. In fact, the presence of CDWs and their fluctuations can modify the energy spectrum by opening (pseudo)gaps and at the same time can mediate Cooper pairing, splitting single bands into two branches that in wave-vector space behave locally as valence (hole-like) and conduction (electron-like) bands. Finally, in FeSe it is possible to tune the position of the valence band with respect to the conduction band, and thus the carrier density, through the chemical potential alignment with the trilayer graphene substrate, where the local work function is spatially inhomogeneous [28]. In this work, we have
investigated the superconducting state properties of a 2D electronic system with a valence and a conduction band, considering the case of different intra-band and pair-exchange couplings, which is the typical configuration in bilayer graphene superconductors and electron-hole multilayer systems. We have obtained numerical results for the phase stiffness and the BKT critical temperature as functions of the energy band gap, the intra-band, and the pair-exchange couplings, which can be tuned in bilayer graphene systems by applying external electric fields through metal gates and, in the case of electron-hole multilayers, by tuning the insulating barrier width. We have found that there exists a giant enhancement of the Kosterlitz-Thouless critical temperature and a consequent shrinking of the pseudogap region, with a different mechanism from the one found in [30], in the regime of a small energy gap between the bands and of a low density of carriers in the conduction band, the latter being the optimal condition for observing superfluidity in electron-hole systems [19,20]. Moreover, we have found a minimum of the pseudogap region in the intermediate regime of the pair-exchange couplings, which shifts to the weak-coupling regime upon reducing the level of filling of the conduction band. We have also made a comparison with single-band 2D superconducting systems, in which the phase stiffness is directly proportional to the electron density, resulting in small values of the BKT transition temperature in the low-density regime. It turns out that the presence of the valence band acts as a reservoir of electrons for the two-band system, contributing constructively to the total stiffness of the system. Thus, we predict that in this regime, multi-band superconducting and superfluid systems with valence and conduction bands can have enhanced Kosterlitz-Thouless critical temperatures with respect to single-band and single-condensate systems.
The manuscript is organized as follows. In Section 2 we describe the model for the physical system considered and the theoretical approach for the evaluation of the superconducting state properties. In Section 3 we report our results. The conclusions of our work are reported in Section 4.
Model and Methods
We consider a two-dimensional (2D) two-band superconductor with a valence and a conduction electronic band on a square lattice. This model can be applied to 2D electron-hole layered superfluids as well. The valence and the conduction band have tight-binding dispersions given, respectively, by Eqs. (1) and (2), where t is the nearest-neighbour hopping parameter, a the lattice constant, and E g the energy band gap between the conduction and valence bands; the wave-vectors belong to the first Brillouin zone −π/a ≤ k x,y ≤ π/a. The band dispersions are reported in Fig. 1. [Fig. 1: Band dispersions for the two-band system. The energy and the wave-vectors are measured in units of t and a, respectively. E g is the energy gap between the valence and the conduction bands.] We assume that Cooper-pair formation is due to an attractive interaction between opposite-spin electrons. The two-particle interaction has been approximated by a separable potential V ij (k, k′) with an energy cutoff ω 0 , given by Eq. (3), where V^0_ij > 0 is the strength of the potential and i, j label the bands. V^0_11 and V^0_22 are the strengths of the intra-band pairing interactions (Cooper pairs are created and destroyed in the same band). V^0_12 and V^0_21 are the strengths of the pair-exchange interactions (Cooper pairs are created in one band and destroyed in the other band, and vice versa), so that superconductivity in one band can induce superconductivity in the other band. We set ℏ = 1 and k B = 1 throughout the manuscript.
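Since the displayed equations did not survive text extraction, we note, purely as a hedged reconstruction, that Eqs. (1)-(3) presumably take the standard forms below; the overall sign and band-offset conventions are our assumptions, chosen to be consistent with the band-edge density of states N = 1/(4πa²t) quoted later in the text:

\epsilon_1(\mathbf{k}) = -2t\left(2 - \cos k_x a - \cos k_y a\right), \qquad (1)
\epsilon_2(\mathbf{k}) = +2t\left(2 - \cos k_x a - \cos k_y a\right) + E_g, \qquad (2)
V_{ij}(\mathbf{k},\mathbf{k}') = V^0_{ij}\,\Theta\!\left(\omega_0 - |\xi_i(\mathbf{k})|\right)\Theta\!\left(\omega_0 - |\xi_j(\mathbf{k}')|\right), \qquad (3)

with \xi_i(\mathbf{k}) = \epsilon_i(\mathbf{k}) - \mu and \Theta the Heaviside step function.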
The same energy cutoff ω 0 of the interaction is considered for the intra-band and pair-exchange terms, and it is taken much larger than the bandwidth so that the interaction can be considered contact-like. The terms corresponding to Cooper pairs forming from electrons associated with different bands (inter-band or cross-band pairing) are not considered in this work (see [32]). The dispersion entering Eq. (3) is measured with respect to the chemical potential µ. The index i = 1, 2 labels the bands, where i = 1 denotes the valence band and i = 2 the conduction band.
The superconducting state of the two-band system is examined within a mean-field theory. We consider the two-band generalization of the superconducting gap equation at finite temperature T, Eq. (4), where E i (k), i = 1, 2, are the excitation branches in the superconducting state, and Ω is the area occupied by the 2D system. Note that for a separable interaction of the form of Eq. (3), the superconducting gaps assume a correspondingly simple expression, momentum-independent within the cutoff. We point out that all the coupling configurations considered in this work led to solutions of the gap equations in Eqs. (4) which are global minima of the free energy and thus stable physical solutions for the two-gap superconducting state, as discussed in Ref. [33]. The total electron density of the two-band system is fixed and is given by the sum of the densities of the single bands, n tot = Σ i n i , which can instead vary individually. The electronic density n i in band i at a temperature T is defined by Eq. (5), where f is the Fermi-Dirac distribution function, and the BCS coherence weights v i (k) and u i (k) take the standard BCS form. The low-temperature physics of a 2D attractive Fermi gas turns out to be different from that of a 3D one. This is a consequence of the Mermin-Wagner theorem, which prohibits the spontaneous breaking of a continuous symmetry at finite temperature, allowing off-diagonal long-range order only at T = 0 in two dimensions. However, in the low-temperature phase of these kinds of 2D systems there exists a "quasi-long-range order" in which the phase correlations decay algebraically, i.e. ⟨e^{i(θ(r)−θ(0))}⟩ ∼ |r|^{−η}, where η is a T-dependent exponent and θ is the phase of the order parameter. This algebraic decay of the correlation function is observed up to a finite temperature T KT , known as the Berezinskii-Kosterlitz-Thouless (BKT) critical temperature, which separates the low-temperature from the high-temperature phase. The high-temperature phase is characterized instead by an exponential decay of the phase correlations, ⟨e^{i(θ(r)−θ(0))}⟩ ∼ e^{−r/ξ}, where ξ is a characteristic length of the system and depends on temperature.
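For readability, since the displayed equations were lost in extraction, the elided Eqs. (4)-(5) and the coherence weights presumably take the standard BCS forms below (a hedged reconstruction; prefactor and sign conventions may differ in detail from the original):

\Delta_i = \frac{1}{\Omega}\sum_{j,\mathbf{k}'} V^0_{ij}\,\Theta\!\left(\omega_0-|\xi_j(\mathbf{k}')|\right)\frac{\Delta_j}{2E_j(\mathbf{k}')}\tanh\!\frac{E_j(\mathbf{k}')}{2T}, \qquad E_j(\mathbf{k})=\sqrt{\xi_j(\mathbf{k})^2+\Delta_j^2}, \qquad (4)

n_i = \frac{2}{\Omega}\sum_{\mathbf{k}}\left[v_i^2(\mathbf{k})\bigl(1-f(E_i(\mathbf{k}))\bigr)+u_i^2(\mathbf{k})\,f(E_i(\mathbf{k}))\right], \qquad (5)

u_i^2(\mathbf{k}) = \frac{1}{2}\left(1+\frac{\xi_i(\mathbf{k})}{E_i(\mathbf{k})}\right), \qquad v_i^2(\mathbf{k}) = \frac{1}{2}\left(1-\frac{\xi_i(\mathbf{k})}{E_i(\mathbf{k})}\right).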
The transition temperature can be determined through the Kosterlitz-Nelson relation, Eq. (9), where J TOT (T BKT ) is the total phase stiffness at the Kosterlitz-Thouless transition temperature, which in our case is given by the sum of the stiffnesses of the single bands. Following the approach used in Appendix A of Ref. [34], the phase stiffness of a superconductor can be computed from the current density induced by an external electromagnetic field, given by Eq. (10), where the A α are the components of the vector potential in a gauge where the scalar potential ϕ vanishes and ∇ · A = 0, and K iαα′ (q, ω) is the response function. The index α refers to the Cartesian axes and i is the band index. The phase stiffness is related to the static limit of the response function. The response function in Eq. (10) is given by the sum of a paramagnetic and a diamagnetic contribution. The off-diagonal elements (α ≠ α′) of the response function vanish since the dispersions are symmetric in k x and k y because of the square symmetry of the considered lattice. Furthermore, the response function does not depend on the direction, that is, K iα = K i is independent of α. The phase stiffness J i is independent of the direction as well, and is given by the corresponding static, long-wavelength limit of the response function. The energies are normalized in units of the hopping parameter t and the dimensionless couplings λ ij are defined as λ ij = N V^0_ij , where N = 1/(4πa²t) is the density of states at the top/bottom of the valence/conduction band, which coincide since the density of states is not modified by the concavity of the band.
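The explicit relations were also lost in extraction; as a hedged reconstruction (with ℏ = k B = 1), Eq. (9) is presumably the Nelson-Kosterlitz universal-jump condition, and the band-resolved stiffness presumably reduces to the standard diamagnetic-minus-paramagnetic expression (prefactor conventions may differ from the original):

T_{BKT} = \frac{\pi}{2}\,J_{TOT}(T_{BKT}), \qquad J_{TOT}(T)=J_1(T)+J_2(T), \qquad (9)

J_i(T) = \frac{1}{4\Omega}\sum_{\mathbf{k}}\left[\frac{\partial^2\xi_i(\mathbf{k})}{\partial k_x^2}\left(1-\frac{\xi_i(\mathbf{k})}{E_i(\mathbf{k})}\tanh\frac{E_i(\mathbf{k})}{2T}\right)-\left(\frac{\partial\xi_i(\mathbf{k})}{\partial k_x}\right)^{2}\frac{1}{2T\cosh^{2}\!\bigl(E_i(\mathbf{k})/2T\bigr)}\right].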
Results
In this section, we present results for the phase stiffness calculated at the BKT transition temperature and for the BKT transition temperature itself, which are closely connected by the Kosterlitz-Nelson criterion in Eq. (9), for the two-dimensional system made up of a valence and a conduction band. The superconducting order parameters ∆ 1 and ∆ 2 , which enter the expressions for the total phase stiffness J TOT derived in Sec. 2, are computed from Eqs. (4) coupled with Eq. (5), which gives the total density of the system, considering a contact-type interaction (ω 0 /t = 20).
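As an illustration of how such numbers can be extracted in practice, the short sketch below locates T_BKT from the Kosterlitz-Nelson condition by root finding. It is only a minimal sketch: it assumes a routine total_stiffness(T) obtained from the self-consistent solution of Eqs. (4)-(5), which is replaced here by a toy placeholder, and all names and numerical values are hypothetical rather than taken from the paper.

# Minimal sketch: find T_BKT from T = (pi/2) * J_TOT(T) by bracketing the root.
import numpy as np
from scipy.optimize import brentq

def total_stiffness(T, J0=0.1, T_mf=0.35):
    # Toy stand-in for J_TOT(T): monotonically decreasing, vanishing at T_MF.
    # In a real calculation this would come from the self-consistent gap and
    # density equations evaluated at temperature T.
    return J0 * max(0.0, 1.0 - (T / T_mf) ** 2)

def kn_condition(T):
    # Positive below T_BKT, negative above it (universal-jump condition).
    return 0.5 * np.pi * total_stiffness(T) - T

T_bkt = brentq(kn_condition, 1e-6, 0.35)   # bracket between ~0 and T_MF
print(f"T_BKT ~ {T_bkt:.4f} (in units of t)")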
We report the phase stiffness of the single bands and the total phase stiffness as functions of the band-gap energy E g in Fig. 2a. The presence of the valence band contributes to enhancing the total stiffness of the system, especially in the region of small band-gap energy, where the transfer of electrons from the valence to the conduction band is favoured. The consequent presence of holes in the valence band contributes constructively to the total stiffness of the system, adding up to the stiffness of the conduction band electrons; furthermore, there is a boost in the conduction band stiffness itself due to the electrons coming from the valence band. When the band-gap energy becomes too large the transfer of electrons is reduced, and superconductivity cannot exploit the presence of a proximate valence band, the two condensates being essentially decoupled. In this regime, when the density of the conduction band is small (top panel of Fig. 2a), superconductivity is strongly suppressed, since very few holes and electrons that can form Cooper pairs are present in the valence and conduction bands, respectively, thus leading to a suppression of the total phase stiffness J TOT . When the density in the conduction band is higher (bottom panel of Fig. 2a), for large E g superconductivity is sustained only by the condensate in the conduction band, which is the only significant condensate in the system. This means that the enhancement of the total phase stiffness due to the valence band condensate is lost, since in the absence of holes the filled valence band does not contribute any stiffness and is inactive for superconductivity. Thus, in this regime the system acts as a single-component condensate. In Fig. 2b we report the phase stiffness of the single bands and the total phase stiffness as functions of the pair-exchange couplings λ 12 = λ 21 . For higher values of λ 12 = λ 21 the total phase stiffness is enhanced, since the pair-exchange interactions facilitate the mixing of electrons between the two bands. For small values of the conduction band electron density (top panel of Fig. 2b), even in the limit of zero pair-exchange interactions the presence of the valence band is enough to enhance the total stiffness with respect to a system having only the conduction band. In fact, the transfer of electrons from the valence band to the conduction band is not only due to the pair-exchange couplings but to all the channels of the interaction, including the intra-band channels. Moreover, since we are dealing with a system at finite temperature, thermal effects can also contribute to the redistribution of the electrons between the two bands. However, for higher values of the conduction band electron density (lower panel of Fig. 2b), for zero pair-exchange couplings the intra-band channels alone are not enough to transfer electrons into the conduction band, and the valence band condensate is not superconducting. Also in this regime the system behaves as a single-component condensate, with only the conduction band contributing to the total phase stiffness J TOT . When the pair-exchange couplings are turned on, the transfer of Cooper pairs from the valence band to the conduction band can start, leaving holes in the valence band, which can then host a superconducting condensate and give its contribution to the total stiffness. The physics is very similar to the previous case in the conduction band low-density regime when the intra-band couplings are tuned (top panel of Fig. 2c).
For higher values of the conduction band electron density (bottom panel of Fig. 2c), even though the stiffness of the conduction band condensate is again the only relevant contribution to the total stiffness in the weak-coupling regime, the stiffness of the valence band condensate is not completely zero as in the previous case, due to the presence of finite pair-exchange interactions that couple the two condensates. In the intermediate- and strong-coupling regimes, the stiffness of the conduction band condensate decreases, but the total stiffness is sustained by the valence band condensate stiffness, which increases instead. The behavior of the phase stiffness described so far is reflected in the BKT transition temperature. In Fig. 3a the mean-field and the Kosterlitz-Thouless critical temperatures are reported as functions of the band-gap energy E g . Note that the mean-field critical temperature T MF has been evaluated from the linearized form of Eqs. (4) in the limit of vanishing superconducting gaps. While the mean-field critical temperature remains finite for low filling of the conduction band when E g becomes large, the Kosterlitz-Thouless critical temperature is strongly suppressed, since the total phase stiffness goes to zero in this regime. Conversely, when E g is small, there is a giant enhancement of the BKT critical temperature due to the additive contributions of the valence and conduction band condensates to the total system stiffness. In Fig. 3b the mean-field and the Kosterlitz-Thouless critical temperatures are reported as functions of the pair-exchange couplings λ 12 . Both are enhanced for increasing λ 12 , since the pair-exchange couplings induce the transfer of electrons from the valence band to the conduction band. However, while T BKT is almost insensitive to the level of filling of the conduction band, for small pair-exchange coupling values T MF is not. The situation is reversed when the intra-band couplings are tuned, as shown in Fig. 3c, with T MF being insensitive to the level of filling of the conduction band while, in the weak-coupling regime, T BKT is not, resulting in a suppression of superconductivity for decreasing values of the conduction band electron density. Having a general overview of the phase diagram of the system, we now focus on the pseudogap region. In Fig. 4a, we show the ratio between the mean-field critical temperature T MF and the BKT transition temperature T BKT as a function of the energy band gap E g . The ratio is almost constant in the region of small energy gap and for low density of carriers in the conduction band, while for larger E g the ratio increases, evidencing an enhancement of the pseudogap region due to the decoupling of the two condensates. For higher values of the carrier density, the behavior of the ratio is almost insensitive to the value of E g , since the decoupling between the condensates occurs also for an increasing number of electrons in the conduction band.
In Fig. 4b, we show the ratio between the mean-field critical temperature T MF and the BKT transition temperature T BKT as a function of the pair-exchange couplings λ 21 = λ 12 . In this case, for large values of the total density the ratio exhibits a non-monotonic behavior in the intermediate-coupling regime, with the presence of a minimum in the pseudogap region. For larger values of the couplings the ratio keeps increasing, as occurs in conventional BCS theory, since T BKT saturates while T MF grows without bound. For smaller values of the total density, instead, the ratio is monotonically increasing for all values of λ 21 = λ 12 , and the minimum of the pseudogap region shifts toward the weak-coupling regime upon further reducing the density. The ratio between T MF and T BKT as a function of the intra-band couplings, shown in Fig. 4c, is instead always monotonically increasing for the different levels of filling of the conduction band, so that the minimum of the pseudogap region is always in the weak-coupling regime.
The next step is to study in detail the effect of the valence band on the BKT critical temperature. In Fig. 5a we show the ratio T BKT /T BKT 1b between the BKT transition temperature of the two-band system and the BKT transition temperature of a system made up of a conduction band only, as a function of the band-gap energy. In the region of small E g , the two condensates interfere constructively and the BKT critical temperature is dramatically enhanced in the low-density regime when the valence band is present, with respect to the system made up of a conduction band only. The enhancement of the Kosterlitz-Thouless critical temperature can be observed also when the pair-exchange coupling is tuned, as shown in Fig. 5b. Surprisingly, we found that the amplification of T BKT with respect to the single-band case is important even when the pair-exchange couplings are very small, for low levels of filling of the conduction band. This strong cooperative effect between the valence and conduction band condensates avoids the suppression of the BKT transition temperature at low carrier density, which occurs in single-band, single-gap superconductors. In Fig. 5c, we show the ratio T BKT /T BKT 1b between the BKT transition temperature of our two-band system and the BKT transition temperature of the system made up of a conduction band only, as a function of the intra-band couplings. We have found again an amplification of the BKT critical temperature, which becomes huge in the regime of low filling of the conduction band. Moreover, in the weak-coupling regime the amplification presents a minimum, after which it increases monotonically in the intermediate- to strong-coupling regime.
Conclusions
In this work, we have studied the BKT transition in a 2D superconducting electronic system with valence and conduction bands separated by a tunable energy gap. The electrons form Cooper pairs in the s-wave channel by interacting through an attractive potential with a large energy cutoff, which has been chosen to model electronic interactions for electronic bands with relatively small bandwidths. We have analyzed the behaviour of the phase stiffness and of the BKT critical temperature as functions of the energy band gap, intra-band, and pair-exchange couplings for different levels of filling of the conduction band. We have found a giant enhancement of the BKT critical temperature in the regime of a small energy gap between the bands and of a small density of carriers in the conduction band.
Alongside the amplification of the BKT transition temperature, the pseudogap region between the mean-field temperature scale and the BKT transition temperature is suppressed in the conduction band low-density regime by tuning the energy gap between the bands to small values, while it is left unchanged for higher density values. Looking at the pair-exchange couplings instead, the pseudogap region has a non-monotonic behavior, showing the presence of a minimum in the intermediate-coupling regime that shifts to the weak-coupling regime upon reducing the filling of the valence band.
The pair-exchange and the intra-band couplings, favoured by small energy band gaps, induce the transfer of electrons from the valence band to the conduction band. The consequent presence of holes in the valence band contributes constructively to the stiffness of the total system, adding up to the stiffness of the conduction band electrons, which is boosted by the valence band electrons as well. In the absence of holes, the filled valence band does not contribute any stiffness and is inactive for superconductivity. This strong cooperative effect avoids the suppression of the BKT transition temperature at low carrier density, which occurs in single-band superconductors where only the conduction band is present.
Despite the simplified nature of the pairing potential, our work gives qualitative insight into the BKT transition in 2D superconducting systems with valence and conduction bands and in electron-hole superfluid systems, pointing toward optimal parameter ranges for the amplification of the BKT transition temperature. | 5,734.4 | 2023-11-29T00:00:00.000 | [
"Physics"
] |
Narrowly avoided spin-nematic phase in BaCdVO(PO 4 ) 2 : NMR evidence
INTRODUCTION
One good place to look for exotic quantum phases in magnetic insulators is in frustrated antiferromagnets close to a classical ferromagnetic (FM) instability [1, 2]. On very general grounds, in an applied magnetic field these systems either enter saturation through a (weakly) discontinuous transition or exhibit some kind of purely quantum "presaturation phase." One of the most famous examples is the S = 1/2 Heisenberg square-lattice model with FM nearest-neighbor (NN) coupling J 1 and frustrating antiferromagnetic (AFM) next-nearest-neighbor (NNN) interaction J 2 . Near saturation, it has been predicted to support the so-called spin-nematic state [1, 3, 4] for a wide range of frustration ratios J 2 /J 1 . The simple physical picture is the condensation of bound magnon pairs stabilized by the FM bonds. The energy of such pairs is lower than that of two free magnons. Thus, upon lowering the field in the fully polarized state, they condense before the conventional one-magnon Bose-Einstein condensation, responsible for the Néel (dipolar) order, can take place.
Of all these compounds the most frustrated and the most promising spin-nematic candidate is BaCdVO(PO 4 ) 2 [6, 7]. Indeed, recent studies provided thermodynamic and neutron diffraction evidence that this material may have a novel exotic quantum phase in applied fields above µ 0 H c1 ≈ 4.0 T, where Néel order disappears [12, 13]. Even though the spin Hamiltonian of BaCdVO(PO 4 ) 2 is, by now, very well characterized [14], the origin of this high-field phase remains unclear.
In particular, it persists in the magnetic field all the way up to H c2 = 6.5 T. Such a wide field range is difficult to reconcile even with predictions based on the simplistic magnon-pair mechanism [1, 3]. The most recent theoretical studies take magnon-pair interactions into account and conclude that the stability range of the spin-nematic phase must be smaller by yet another order of magnitude [15]. Another concern is the subtle but relevant magnetic anisotropies in the different AA′VO(PO 4 ) 2 crystal structures, such as complex Dzyaloshinskii-Moriya (DM) interactions and patterns of g-tensor canting.
Here, we report the results of an NMR study aimed at better understanding the saturation process. We show that, unlike the presaturation phases of Pb 2 VO(PO 4 ) 2 and SrZnVO(PO 4 ) 2 , the high-field phase in BaCdVO(PO 4 ) 2 has no dipolar order of any kind. At the same time, measurements of NMR relaxation rate independently confirm the high-field continuous phase transition at H c2 . Finally, we show that the in-between phase is characterized by very unusual spin relaxation that is dominated by two-magnon processes.
MATERIAL AND EXPERIMENTAL DETAILS
The crystal structure (insets in Fig. 1) and geometry of magnetic interactions in BaCdVO(PO 4 ) 2 are discussed in detail elsewhere [14]. [FIG. 1. Temperature dependence of 31 P NMR spectra of BaCdVO(PO4)2 in a field of 7 T applied parallel to the c axis, from 235 to 2.6 K. Each spectrum is normalized to its maximum and vertically offset according to the temperature on a logarithmic scale. Insets show the crystallographic structure [14] and define the labeling of the P sites and NMR lines: on the left (right) is a side (top) view of the vanadium spin-1/2 planes. In the top view, thick arrows show the low-temperature up-up-down-down spin order (where only up-down-down spins are visible) [16] and the corresponding labeling of the two P1 NMR lines.] Here, we recall only that at room temperature the space group is orthorhombic, Pbca, with lattice parameters a = 8.84 Å, b = 8.92 Å, and c = 19.37 Å. There are 2 × 4 = 8 magnetic S = 1/2 V 4+ ions per cell, arranged in proximate square-lattice layers in the (a, b) plane, two layers per cell. The symmetry is further lowered to Pca2 1 in a structural transition at T s ≈ 240 K, resulting in eight distinct NN and NNN superexchange paths within each layer. Magnetic order sets in at T N ≈ 1.05 K and is of an "up-up-down-down" character (right inset in Fig. 1) [16], naturally following the alternation of NN interaction strengths along the crystallographic b axis [14].
NMR experiments were performed on a green transparent single-crystal sample (3.2 × 2.5 × 0.35 mm 3 ) from the same batch as those studied in Refs. [12][13][14]. The sample was oriented to have the c axis parallel to the applied magnetic field H, and was placed inside the mixing chamber of a 3 He- 4 He dilution refrigerator for the measurements below 1 K and in a standard cold-bore variable-temperature insert (VTI) for temperatures ≥ 1.4 K.
The 31 P NMR spectra were taken by the standard spin-echo sequence and the frequency-sweep method. The nuclear spin-lattice relaxation rate T −1 1 was measured by the saturation-recovery method, and the time-t recovery of the nuclear magnetization M(t) after a saturation pulse was fitted by the stretched-exponential function M(t) = M 0 [1 − C exp(−(t/T 1 )^βs)], where M 0 is the equilibrium nuclear magnetization, C ≅ 1 accounts for the imperfection of the excitation (saturation) pulse, and β s is the stretching exponent that accounts for a possible inhomogeneous distribution of T −1 1 values [17,18]. We find β s values close to 1 above 1 K, indicating a homogeneous system, and we find that they decrease somewhat at low temperature, e.g., leading to values of 0.7−0.85 at 0.27 K in the 3.2−7.4 T magnetic field range, as shown in Fig. 5(b). Only one point in this figure has β s = 0.5, corresponding to the obviously inhomogeneous phase mixture at the phase transition point.
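As an illustration only, a fit of this kind can be performed as in the sketch below; the arrays and starting values are hypothetical placeholders, not data from this study.

# Sketch: stretched-exponential fit of saturation-recovery data.
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, M0, C, T1_inv, beta_s):
    # M(t) = M0 * (1 - C * exp(-(t * T1_inv)**beta_s))
    return M0 * (1.0 - C * np.exp(-(t * T1_inv) ** beta_s))

t_delay = np.logspace(-4, 1, 30)                  # delay times in seconds (placeholder)
m_data = recovery(t_delay, 1.0, 0.95, 50.0, 0.8)  # synthetic "measurement"
m_data += 0.01 * np.random.default_rng(0).normal(size=t_delay.size)

popt, _ = curve_fit(recovery, t_delay, m_data, p0=[1.0, 1.0, 10.0, 1.0])
M0, C, T1_inv, beta_s = popt
print(f"T1^-1 = {T1_inv:.1f} s^-1, beta_s = {beta_s:.2f}")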
NMR spectra
Figure 2 shows the evolution of the 31 P NMR spectra across the previously mentioned structural phase transition. At 248 K we see two NMR lines corresponding to the two phosphorus sites of the structure (left inset in Fig. 1), as also observed by NMR in the paramagnetic and fully polarized phases of Pb 2 VO(PO 4 ) 2 [11,19] and in SrZnVO(PO 4 ) 2 [9] vanadates: The P 1 site is localized inside the planes of V 4+ spins; it is thus more strongly coupled to them, and corresponds to the broader NMR line at lower frequency. The P 2 site is localized between these planes; it is less coupled to the spins and corresponds to the narrower NMR line at higher frequency. Below the structural phase transition appears a modulation in the b-axis direction which doubles the number of all crystalline sites (see Supplemental Material of Ref. [13]). In particular, this creates an alternation of two different V 4+ spin sites (see Fig. 1 in Ref. [14]) as well as an alternation of the two P 1a and P 1b sites in the planes of the spins and the two P 2a and P 2b sites in between these planes. In the spectra at 235 and 237 K we see the four corresponding NMR lines: there is a low-frequency pair of broader P 1a and P 1b lines and a high-frequency pair of narrower P 2a and P 2b lines, where by index "a" ("b") we label the outer (inner) lines. Finally, as expected for a first-order phase transition, the NMR spectra between 239 and 246 K present a mixture of the high- and low-temperature spectra, reflecting the mixed-phase region.
As the temperature is lowered, the spectra taken at 7 T (Fig. 1) retain the same shape, whereas their width grows proportionally to the magnetization [5,12] and approaches the low-temperature limit below 3 K: Compared to the spectrum at 2.6 K, the corresponding low-temperature spectrum measured at T = 0.27 K is only 8% wider, as shown in Fig. 3, which presents the low-T magnetic field dependence of the spectra. In Fig. 3 we see that the shape of the spectra does not change with the field down to 4.1 T, which is a clear signature that above µ 0 H c1 ≈ 4.05 T the system is homogeneously polarized, without any dipolar magnetic order. A quantitative analysis of the spectra collected above H c1 was obtained in a rather straightforward four-peak fit, where the slightly asymmetric shape of each NMR line was described by a Gaussian-based function that was modified ad hoc by a damped third-order term. The asymmetry-defining parameters were optimized on the 5 T spectrum to the values b = 0.30 and c = 0.0075 (fit shown in Fig. 3) and then kept fixed to ensure the same line asymmetry for all fits. The thus obtained peak positions x c are shown in the figure by symbols. They carry local information on the spin polarization, that is, the magnetization M: the latter stands in a linear relation with the frequency shift of each line. In order to get the most precise data, the temperature dependence of M was monitored by following the separation between the highest-frequency (P 2a ) and lowest-frequency (P 1a ) lines, i.e., the relative frequency shift of the two lines that are most strongly coupled to the magnetization (see Fig. 1). In addition, using the relative shift avoids the uncertainty of the magnetic field calibration. In order to better approach the zero-temperature values and thus exclude thermal effects [20], the line positions of these two NMR lines were remeasured at 0.10 K, and the field dependence of their relative shift for H > H c1 is compared to the Faraday-balance magnetization data from Ref. [13] in Fig. 4(a), where the two y axes are scaled by 1.52 MHz/µ B [21]. The agreement is very good, proving the previous observation that H c1 corresponds to a transition towards full saturation and that the magnetization process continues to higher fields. Below H c1 and at low temperature, a spontaneous AFM ordering occurs [13,16], so that each NMR line is expected to be split in two, where the difference in the coupling strength (the hyperfine coupling tensor) is expected to give a much stronger splitting for the P 1 lines than for the P 2 lines, as was observed in Pb 2 VO(PO 4 ) 2 [11,19]. In BaCdVO(PO 4 ) 2 (Fig. 3) this creates quite complicated spectra with overlapping lines, from which
we can reasonably recognize that the P 1a line is indeed strongly split, P 1b is unexpectedly only strongly broadened without visible splitting, and the two P 2 lines are weakly split as expected. In regard to the absence of AFM splitting of the P 1b NMR line, it has a simple explanation in the up-up-down-down stripe type of AFM order [16] (right inset in Fig. 1): P 1 phosphorus sites are approximately centered within 4 neighboring V 4+ spins, meaning that one type of P 1 site "sees" either 4 up or 4 down spins while the other type always "sees" 2 up and 2 down spins. It is then obvious that the former sites observe a strong AFM local field, providing a strong splitting of the P 1a NMR line, while for the latter ones the local field approximately cancels out, leading to the absence of splitting of the P 1b NMR line.
In order to distinguish the overlapping lines, we have selected the NMR spectrum at 3.9 T, which visibly provides the best line separation, enabling us to perform a reliable fit with 7 independent line positions (fit shown in Fig. 3) and thus to fully define all three line splittings. As the splittings are all proportional to the Néel order parameter (OP) of the AFM phase, which is the staggered transverse spin component, their relative size (ratio) should be field independent. We used this additional constraint to stabilize the fits of all the other, less well resolved low-field spectra: in these fits we fixed the two ratios of the AFM line splittings to the values obtained at 3.9 T. The line positions thus obtained are shown in Fig. 3 by symbols, and the corresponding splitting of the P 1a line, providing an NMR image of the OP, is presented in Fig. 4(b) and compared with the corresponding information given by neutrons: the structure factor (square root of intensity) of the (0, 1/2, 0) magnetic Bragg reflection reported in Ref. [13]. One obvious discrepancy between the two data sets is that some residual neutron intensity persists above the transition. That is a known effect due to the finite wave-vector and energy resolution of a neutron instrument, which is unable to differentiate between infinitely sharp Bragg peaks due to long-range order and broader scattering around the Bragg peak positions due to critical fluctuations. It is more interesting that, unlike the neutron measurement, our NMR experiment indicates a non-monotonic behavior of the order parameter. Here, we observe that both techniques are sensitive to a conceivable rotation of the OP (direction of canting), which is quite possible in the presence of complex DM interactions: the NMR line splitting is affected by the angle dependence of the hyperfine coupling constant, and the neutron intensities are affected by the so-called polarization factor. Nevertheless, the data shown in Fig. 4(b) rather point to a quite flat field dependence of the Néel OP, which is also what has been theoretically predicted for quantum spin-nematic candidates close to the nematic phase (see Fig. 5 in Ref. [15]).
To better understand the low-energy spin dynamics below and above saturation, we performed measurements of the T −1 1 relaxation rate vs field and temperature. The field dependence at two fixed temperatures is shown in Fig. 5(a) on a logarithmic scale. The most obvious features in the T −1 1 (H) plot at T = 0.27 K are the cusps corresponding to the phase transitions at H c1 and at µ 0 H c2 ≈ 6.50 T. While the latter appears weak on the logarithmic scale, it still corresponds to a twofold increase of the relaxation rate, a clear sign of a continuous phase transition. Its position exactly corresponds to the H c2 transition previously detected as a λ anomaly in calorimetry experiments [13]. The big T −1 1 (H) peak obviously corresponds to the critical spin fluctuations related to the phase transition at H c1 and tells us that this transition is essentially of the second order. However, in Fig. 3 we see that the spectrum taken at H c1 corresponds to a mixture of the two phases, and we could even measure the two different T −1 1 rates corresponding to each of these two phases [Fig. 5(a)]. [FIG. 5. (a) The T −1 1 relaxation rate measured in BaCdVO(PO4)2 at 1.6 K (squares) and 0.27 K (circles) as a function of magnetic field applied along the c axis. Experimental error bars are smaller than the symbol size. Straight dashed lines show activated behavior with energy gaps that scale with the single-magnon gap ∆(H), and red solid curves are the global three-exponential fit given by Eq. (1), as explained in the text. (b) Field dependence of the stretching exponent βs from the fits to the measured relaxation curves at 0.27 K.]
A similar behavior was also observed in Pb 2 VO(PO 4 ) 2 [11] and in other quantum spin systems at the phase boundary of the low-temperature AFM phase, reflecting the weak first-order character of the transition, as predicted for a spin system that presents some magnetoelastic coupling [22,23].
The observed behavior of the relaxation rate above H c1 is superficially similar to that previously seen in SrZnVO(PO 4 ) 2 [9]; however, a quantitative analysis reveals substantial discrepancies. In all cases we consider relaxation via thermal activation across an energy gap that scales proportionally to the single-magnon gap ∆(H) = gµ B µ 0 (H − H c1 )/k B (here in kelvin units), where g = 1.92 is the g factor [12] and µ B /k B = 0.67171 K/T. Furthermore, the log scale of Fig. 5(a) converts an exponential dependence ∝ e^−α∆/T into its exponent, which depends linearly on field, and α is defined by the magnitude of the slope of the apparent linear dependence. While for single-magnon condensation in the three-dimensional (3D) regime just above the saturation field this slope corresponds to α 0 ≈ 3 [9], from Fig. 5(a) we find that the initial value of the slope is very close to 2, which is expected above the saturation field of a spin-nematic phase. In more detail, fitting the initial slope at 0.27 K in the 4.05−4.3 T range, we get α = 1.90(9), and at 1.6 K for the 4.05−6.0 T range we get the identical value α = 1.91(2). In order to cover with the fit all the 0.27 K data up to the highest field (excluding the peak centered at H c2 ), we used the three-activated-term fit: A(e^−α0∆/T + a 1 e^−∆/T + a x e^−αx∆/T ), (1) where the second term accounts for the expected presence of the single-magnon relaxation and the third term additionally allows for an unexpected spin dynamics showing up at high field values (6.0−7.2 T). Using a global, 6-parameter [A(T 1 ), A(T 2 ), α 0 , a 1 , a x , α x ] fit over both data sets shown in Fig. 5(a) as a function of ∆(H) ∝ H − H c1 ≥ 0, we indeed confirm that α 0 = 2.05(4) is equal to 2 within its experimental error. Equivalently, fixing α 0 = 2 leads to a nearly indistinguishable fit, which is then preferred because it can be related to the two-magnon spin-nematic fluctuations. For this latter fit we find that the relative size of the single-magnon term amplitude (compared to the leading two-magnon term) is only a 1 = 6.8(7)% and that the low-temperature high-field term is still much smaller, a x = 0.15(3)%, and has a strongly reduced gap α x = 0.32(2).
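Restated in display form for readability (no new content beyond the quantities quoted above), the activation gap and the fit function of Eq. (1) read

\Delta(H) = \frac{g\mu_B\mu_0\,(H - H_{c1})}{k_B}, \qquad T_1^{-1}(H) = A\left(e^{-\alpha_0\Delta/T} + a_1\,e^{-\Delta/T} + a_x\,e^{-\alpha_x\Delta/T}\right). \qquad (1)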
This analysis suggests the overwhelming dominance of the two-magnon processes. This is quite unusual and very different from what is seen in SrZnVO(PO 4 ) 2 , where it is the three- and single-magnon processes, expected for the single-magnon Bose-Einstein condensate (BEC) state, that provide the main relaxation mechanisms. Further confirmation of the prevalence of two-magnon relaxation in the high-field phase of BaCdVO(PO 4 ) 2 is found in temperature-dependent measurements at constant fields, as shown in Fig. 6. To analyze these data, we recall that, for a gapped system and a purely parabolic d-dimensional magnon dispersion, the preexponential factor of the T −1 1 rate is the power law A(T) = A 0 T^β, where β = d − 1. For the real dispersion relation of magnons in a specific material, the exponent β becomes temperature dependent, reflecting the "effective" dimension of the system (see the Supplemental Material of Ref. [24]). We have thus fitted T −1 1 (T) using the same three-exponential function given by Eq. (1) with the parameters defined in the previous fit, where the common prefactor is taken to be an ad hoc modified power law, A(T) = A 0 T^β(T), that allows for a suitable β(T) = β 0 − β 1 log(T /[K]) dependence. This provided a remarkably precise global fit of the data for all field values ≥ H c1 , covering as much as 1.5 orders of magnitude in temperature and nearly 5 orders of magnitude in T −1 1 for the 5.5 T data in particular. [FIG. 6. The T −1 1 relaxation rate measured in BaCdVO(PO4)2 as a function of temperature at several values of magnetic field applied along the c axis (symbols). Experimental error bars are smaller than the symbol size. In a wide range above Hc1, the relaxation can be globally fit using the previously defined three-exponential fit [Eq. (1)], where the amplitude is taken to be a modified power law A ∝ T^β(T), as explained in the text (red solid curves). At the lowest temperatures, T −1 1 (T) ∝ T (dash-dotted lines). In the Néel phase below Hc1, T −1 1 has a characteristic ∝ T^5 behavior (dashed line).]
The fit defines three parameters, A 0 = 76(1) s −1 , β 0 = 1.40(3), and β 1 = 0.82(3), leading to the expected β(T) dependence: at low temperature the system approaches the 3D regime, and β(0.3 K) = 1.8 is indeed close to the expected β 3D = 2 value, while the expected 2D-regime value β 2D = 1 is reached at 3.1 K. The validity of the fit at its high-temperature end is limited by the validity of the employed A(T) dependence and by the "T < gap" condition for a description as an activated process.
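Collecting the pieces described above, the global fit function presumably takes the form

T_1^{-1}(H,T) = A_0\,T^{\beta(T)}\left(e^{-2\Delta/T} + a_1\,e^{-\Delta/T} + a_x\,e^{-\alpha_x\Delta/T}\right), \qquad \beta(T) = \beta_0 - \beta_1\log\!\left(T/[\mathrm{K}]\right),

with the quoted values A 0 = 76(1) s −1 , β 0 = 1.40(3), β 1 = 0.82(3), a 1 = 6.8(7)%, a x = 0.15(3)%, and α x = 0.32(2); this is only our assembly of the quantities given in the text, not an equation reproduced from the original.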
The available low-temperature data show that at 5.5 T there is a continuity of the T −1 1 (T) dependence down to the lowest temperature. This points to the continuity of the phase and the absence of any low-temperature ordering. There is only a crossover into an apparent linear dependence, observed below 0.2 K. A similar crossover seems to be observed below 0.1 K at 3.5 T, where the system is in the antiferromagnetically ordered phase, so such behavior may be associated with the Goldstone mode that is expected to show up at the low-temperature end if the axial symmetry is not broken [25]. However, considering the similarity with the 5.5 T data and the probable presence of the (subdominant) symmetry-breaking anisotropic terms of the Hamiltonian, such an interpretation is questionable. One might rather think of some common low-temperature physics that might be related to subleading terms of the system's Hamiltonian. Finally, inside the antiferromagnetically ordered phase, for the 3.5 T data in the intermediate temperature range (0.4−0.7 K), we find a strong power-law-like behavior, with the power exponent close to 5, which is usually the case in the BEC-type low-temperature phases of quantum antiferromagnets [11,[26][27][28], reflecting a high-order relaxation process [29].
DISCUSSION AND CONCLUSION
In regard to the phase that appears above µ 0 H c1 ≈ 4.05 T in BaCdVO(PO 4 ) 2 , our NMR results provide, in addition to a solid independent validation of the main findings and conjectures of Ref. [13], microscopic information that is crucial for defining the nature of these phases: (i) The low-temperature NMR spectra have exactly the same four-peak structure as those in the paramagnetic regime (Figs. 1 and 2) and are nearly field independent from H c1 up to the full polarization saturation above 7 T (Fig. 3). This provides clear evidence that the local spin polarization is homogeneous and lacks any transverse dipolar order, which is indeed one of the necessary (but not sufficient) characteristics of the spin-nematic phase [30]. The field dependence of the NMR line shift confirms and refines the previously observed field dependence of the magnetization: the magnetization at H c1 is slightly reduced (by ≈ 6%) compared to full saturation and continues to increase in a wide range of fields above H c1 [Fig. 4(a)]. With the measurements being carried out at 0.10 K, this dependence is certainly not of thermal origin [20], but it is not necessarily a consequence of some hidden order. It can also originate from a small anisotropy of the main exchange couplings. For an anisotropy of the J xx ≠ J yy ≠ J zz type, this has been discussed in the context of the LiCuVO 4 compound [30,31]. In the following paragraph we focus on the terms of the DM type, which are expected to provide the largest effect in BaCdVO(PO 4 ) 2 . We give the simplest estimate of this effect in the classical approximation and show that the observed field dependence is compatible with this mechanism [Fig. 4(a)]. Finally, we observe that recent theoretical predictions for the size of the magnetization variation in the spin-nematic phase [15] speak of a sizable variation, definitely much bigger than the observed 6%. Altogether, the static information is not really favorable to the existence of the spin-nematic phase in BaCdVO(PO 4 ) 2 .
The DM interactions can be rather complex in AA′VO(PO 4 ) 2 structures [10]. To get a ballpark estimate, let us consider a toy model: we take the classical energy of two sublattices coupled by a single DM vector that is normal to the applied field. In the saturated state, the DM coupling leads to a small canting of spins by an angle θ away from the field direction. This is driven by the gain in the DM energy per spin, E DM (θ) = −DS 2 sin(2θ)/2. It is stabilized by the corresponding loss of the Zeeman energy per spin, E Z (θ) = −gSµ B µ 0 H cos θ, reduced by the modification of the exchange energy, E J (θ) = J̄S 2 cos(2θ)/2, where J̄ is some combination of exchange constants that will depend on how the two sublattices become canted. For small canting, the angle dependence of the total energy is quadratic, E(θ) = e 0 − e 1 θ + e 2 θ 2 , and the equilibrium angle θ 0 is defined by its minimum, θ 0 = e 1 /(2e 2 ), where e 1 = DS 2 and e 2 = gSµ B µ 0 H/2 − J̄S 2 . The depolarization (1 − cos θ 0 ) is then given by Eq. (2), where gµ B µ 0 H̄ = 2 J̄S. Since the classical Néel state is the one that minimizes the exchange energy between its two sublattices, we can expect that H̄ < H c , with H c being the classical saturation field. The latter is equal to the single-magnon instability field and was previously shown to more or less coincide with H c1 in BaCdVO(PO 4 ) 2 [14]. This simplest classical model can, indeed, reproduce the magnetization data in Fig. 4(a) (solid line) with the very reasonable values H̄ = 3.75 T and DS 2 = 0.006 meV. The latter corresponds to 1.6% of the average |J 1 | and is comparable to the crude estimate based on the value of the spin-flop field [13]: D/J ∼ (H SF /H c1 ) 2 ∼ 1.5%.
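Since the displayed Eq. (2) did not survive extraction, a plausible reconstruction from the definitions just given (small-angle expansion of 1 − cos θ 0 with θ 0 = e 1 /(2e 2 )) is

1 - \cos\theta_0 \simeq \frac{\theta_0^2}{2} = \frac{1}{2}\left[\frac{D S}{g\mu_B\mu_0\,(H - \bar{H})}\right]^2, \qquad (2)

which, with H̄ = 3.75 T and DS 2 = 0.006 meV, indeed gives a depolarization of about 6% at H c1 ≈ 4.05 T, decreasing at higher fields, consistent with Fig. 4(a); this expression is our reconstruction and should be checked against the original.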
(ii) The most important contribution of NMR is certainly the insight into the low-energy spin fluctuations, measured by the T −1 1 relaxation rate. First, the continuity of the T −1 1 (H) dependence measured at 0.27 K definitely confirms the continuity of the phase between H c1 and H c2 [Fig. 5(a)]. The peak of T −1 1 (H) centered at H c2 is certainly a signature of some second-order phase transition, but the apparent continuity of the data below and above this peak indicates that the nature of the spin fluctuations is probably the same on both sides, suggesting that this might as well be the same phase, whereas the peak is signaling a phase transition appearing at lower temperature. This is strongly reminiscent of the observation of the low-temperature impurity-induced BEC phases in the doped Ni(Cl 1−x Br x ) 2 •4SC(NH 2 ) 2 (DTNX) compound [28,[32][33][34], although in BaCdVO(PO 4 ) 2 it is not clear what the source of such impurities might be. Next, the continuity of the T −1 1 (T) dependence down to 75 mK measured at 5.5 T (Fig. 6) confirms the absence of any phase transition into an ordered state; only a crossover from an essentially activated to a low-temperature linear regime is observed at 0.2−0.3 K. This suggests that above H c1 we see only a nearly fully polarized phase, presenting, at very low temperature, a crossover into some low-temperature physics, probably related to subdominant terms of the Hamiltonian and/or some defects and/or impurities.
The T −1 1 data and fits shown in Figs. 5(a) and 6 provide a clear and quantitatively precise identification of the largely dominant two-magnon relaxation process. Such a relaxation, expected above the second critical (= saturation) field of the spin-nematic phase, should be regarded as one of the key signatures of this phase. This type of relaxation was already observed above the saturation field of volborthite [35], below which a putative spin-nematic phase was clearly detected [36], but the experimental precision of those NMR data was not sufficient to distinguish between the two- and the three-magnon process. In any case, the two-magnon relaxation is not expected above the first critical field of such a phase, which would be the case if such a phase existed in BaCdVO(PO 4 ) 2 between H c1 and H c2 .
Finally, recent theoretical simulations of a spin-nematic phase [15] point to a very narrow field range where such a phase is expected, which is clearly not the case for the two critical fields of BaCdVO(PO 4 ) 2 , where H c2 = 1.60 H c1 . Furthermore, as already pointed out, H c1 coincides with the theoretical prediction for the single-magnon instability field [14]. Altogether, all the evidence points to the fact that H c1 is, in fact, the saturation field of the system, whereas what is observed at H c2 is related to some very-low-temperature ordered phase of undefined origin, unrelated to the dominant terms of the system's Hamiltonian. The observed two-magnon relaxation process above H c1 then implies that the system is very close to the spin-nematic instability, so that the corresponding fluctuations dominate the spin dynamics, but the spin-nematic phase is, in fact, not stabilized in BaCdVO(PO 4 ) 2 . This situation appears to be archetypal for other spin-nematic candidate compounds, and our results establish NMR-based criteria to define the true nature of their high-field phases.
We are grateful to N. Shannon, T. Momoi and M. Zhitomirsky for stimulating discussions and to R. Nath for providing the original magnetic susceptibility data.This work was supported, in part, by the Swiss National Science Foundation, Division II.
FIG. 2. Temperature dependence of 31 P NMR spectra of BaCdVO(PO4)2 in a field of 7 T applied parallel to the c axis, focusing on the structural phase transition at Ts ∼ 240 K.
FIG. 3. Magnetic field dependence of the low-temperature (0.27 K) 31 P NMR spectra, plotted as a function of the frequency shift with respect to the Larmor frequency 31 γµ0H ( 31 γ = 17.236 MHz/T) in order to reflect the local spin polarization value. Each spectrum is normalized to its integral and offset vertically according to the field value. Symbols are the peak positions obtained by fitting individual spectra, as described in the text. A few representative fits are shown by dotted magenta lines. The thick curves drawn through the symbols are guides for the eye.
FIG. 4. (a) Field dependence of the sample magnetization above Hc1 at 0.10 K, deduced from the relative frequency shift of the two external NMR lines (triangles), in comparison to direct Faraday-balance magnetometry data (squares) from Ref. [13] and the prediction (solid curve) given by Eq. (2) (see the text). (b) Circles present the field dependence of the magnetic order parameter at 0.27 K, deduced from the line splitting of the P1a NMR line below Hc1 (Fig. 3). The curve is a guide for the eye. Squares present the structure factor of the magnetic Bragg reflection (0, 1/2, 0) at 0.04 K, obtained from the neutron data in Ref. [13]. | 7,315 | 2024-01-10T00:00:00.000 | [
"Physics"
] |
Superconducting String Theory (Gravity Explanation)
Gravity is explained by a new theory, 'Superconducting String Theory', inspired by early string theories and completely opposite to the current field-based ones. Tensions are decomposed to make strings behave as one-dimensional objects, with the universe acting as a superconductor where the resistance is near 0 and matter moves inside it. The strong nuclear force, with an attraction of 10,000 newtons, is what makes space curve, generating acceleration: more matter, more acceleration. Electromagnetism moves within 8 decimals; gravity is moved from 3 to more than 30 decimals to work as a superconductor.
INTRODUCTION
The 'Theory of Everything' is a hypothetical theory of physics that explains and connects all known physical phenomena into one. There is a possible solution to the origin of the gravitational force, postulated as a cornerstone of this theory; this solution removes gravity as one of the fundamental forces of nature and unifies it with the strong nuclear force.
Let us analyze the forces that occur in the universe by transforming string theory. String theory makes it possible to explain many physical behaviors that would otherwise be practically impossible to understand; even so, these strings have not been discovered and remain only a theory that serves as an important support to the world of physics. One of the best-known theoretical applications is how their vibration can provoke the creation of matter, but this work is not about theories already written: we are going to place these strings in a simpler way to answer some doubts about the subatomic world.
This theory uses 4 dimensions in space, with strings behaving as one-dimensional objects with superconducting capacities. Like an elastic band between V-shaped sticks, where the elastic band moves down, the strong nuclear force forces these strings to curve in order to move down.
It is not directly related to electromagnetism.
String theory
String theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. Each string that we cross would be the minimum distance that can be traversed during a displacement.
It is impossible for these strings not to act as a superconductor of matter: remember that the distance to the most distant object detected by human beings is more than 30 billion light-years, which means there are beams of light able to travel that distance without decreasing their speed (they modify only their wavelength). Like light, an object can move in space indefinitely across the universe from one end to the other as long as it does not find a force to stop it. If strings exist, they act as a superconductor of matter with a resistance near 0.
Gravitational waves behave like the ocean's surface, which is similar to a taut net; these tensions can be decomposed into a one-dimensional structure for their study. Strings could have one dimension or 0 dimensions, like points with binding forces, but in order to generate waves it is easier to work with a strongly linked structure. Think of these strings as something tenser than any cable that holds the heaviest bridge in the world.
The picture we have drawn would be a set of extremely tense strings, with a practically infinite matter conduction capacity.
Strong nuclear force
The strong nuclear force is another variable. This force allows the atomic nucleus to remain together, being the strongest of the so-called fundamental interactions (gravitational, electromagnetic, strong, and weak). The gluon is in charge of this interaction; it has a range not greater than 10 to the power −15 meters, preventing matter from separating and exerting a constant attractive force between quarks of at most 10,000 N (F).
Fig. 2: Gluon in the vacuum
This picture illustrates the four-dimensional structure of gluon-field configurations averaged over in describing the vacuum properties. The volume of the box is 2.4 by 2.4 by 3.6 fm. Contrary to the concept of an empty vacuum, it induces chromo-electric and chromo-magnetic fields in its lowest-energy state. The speed of this example is billions of billions of fps (frames per second).
Fundamentals
We have created a scenario with a superconductor of matter interacting with a force that makes that matter hold together; but how can they interact? The simplest way is to think of two V-shaped sticks (simulating the strings) and an elastic band that tightens them on the most open side (it would simulate the gluon, with a size of 10 to the power −15 meters). What does the elastic band do if the sticks are sufficiently lubricated and tense? It will slide to the thinnest side. The more elastic bands, the more force will be exerted on the sticks to join them, so the next bands will slide even faster (equally, more mass causes more gravity).
We are talking about unknown limits in this world, such as infinite conduction or tensions never seen in materials.
Calculations
Formulas from inclined planes can be used. Friction is imperceptible, and we know the acceleration down the plane needed to simulate this force on our planet. The vertical force is not gravity but the gluon force; it could be calculated considering the vertical angle, but this is negligible and the gluon force is an estimate, so we keep 10,000 N. This amounts to a unified field theory between gravity and the strong nuclear force.
A new behavior appears in dark matter because of differences in the density of the superconductor. A lower density generates a bigger angle, which implies a bigger attractive force. If the threads are separated, matter (m2) becomes energy (F1). Some places in the universe could have bigger accelerations because of this effect; this means less dark matter than expected. Einstein field equations. We can apply Hooke's law, as for a rubber band, in the calculations: F1 = k∆L, where k is a tensor which can be related to the golden ratio, the object's speed inside the universe, ...
∆L = Unknown displacement
Now we know where the matter is transformed into energy.
E = m2 * F1
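As a purely illustrative sketch of the relations just written (F1 = k∆L and E = m2 * F1), the snippet below simply evaluates them for hypothetical placeholder values of k, ∆L, and m2; these numbers are not taken from the text and carry no physical claim.

# Illustrative evaluation of the text's relations with placeholder values.
k = 2.0e4        # string "tensor" (spring constant), N/m -- hypothetical
dL = 5.0e-16     # unknown displacement of the string, m -- hypothetical
F1 = k * dL      # Hooke's law as written in the text: F1 = k * dL
m2 = 1.0e-27     # mass appearing in the text, kg -- hypothetical
E = m2 * F1      # the text's relation E = m2 * F1
print(f"F1 = {F1:.3e} N, E = {E:.3e} (in the text's units)")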
The motion of planets or galaxies, with behaviors like spherical movements, is due to the string-matter interaction in movement curving space (the hypotenuse from the calculations could be insignificantly curved).
Why the gluon size is bigger far from Earth.
"Physics"
] |
QCD next-to-leading-order predictions matched to parton showers for vector-like quark models
Vector-like quarks are featured by a wealth of beyond-the-Standard-Model theories and are consequently an important goal of many LHC searches for new physics. Those searches, as well as most related phenomenological studies, however, rely on predictions evaluated at the leading-order accuracy in QCD and consider well-defined simplified benchmark scenarios. Adopting an effective bottom-up approach, we compute next-to-leading-order predictions for vector-like-quark pair production and single production in association with jets, with a weak boson or with a Higgs boson, in a general new physics setup. We additionally compute vector-like-quark contributions to the production of a pair of Standard Model bosons at the same level of accuracy. For all processes under consideration, we focus both on total cross sections and on differential distributions, most of these calculations being performed for the first time in our field. As a result, our work paves the way to a precise extraction of experimental limits on vector-like quarks thanks to an accurate control of the shapes of the relevant observables, and emphasises the extra handles that could be provided by novel vector-like-quark probes never envisaged so far.
Introduction
The Standard Model of particle physics is a successful theory of nature, although it exhibits many conceptual issues, like the hierarchy problem or the strong CP problem, and practical limitations, like the absence of a viable candidate for explaining the dark matter pervading our universe. As a result, it is commonly acknowledged as an effective theory stemming from a more fundamental setup that has still to be observed and confirmed experimentally. This effective description has recently been strengthened by the 2012 discovery of a Higgs boson with properties very similar to those expected from the Standard Model. The null results of all collider searches for new particles predicted by most beyond-the-Standard-Model theories are, however, at the same time pushing the limits on the mass of these potential new particles to higher and higher energy scales.
Among the viable options for new physics, many extensions of the Standard Model predict the existence of additional quark species that should be observable during the next runs of the Large Hadron Collider (LHC) at CERN. One common feature of such theories is the predicted vector-like nature of the additional quarks, i.e. their left-handed and right-handed components lie in the same representation of the electroweak symmetry group. These quarks appear, for instance, in models with extra space-time dimensions, in models exhibiting an extended gauge symmetry, or in models with a new strong dynamics giving rise to massive composite states [1][2][3][4][5]. As a result, vector-like quark searches play an important role in the ATLAS and CMS experimental programs.
Current searches, relying on signatures induced by both the vector-like quark pair-production and single-production modes, impose strong constraints on the masses of the heavy quarks, which are now bounded to be above about 750-1500 GeV [6][7][8][9][10][11][12], this wide range reflecting the wealth of options for describing how the new states decay into a pair of Standard Model particles. Care must, however, be taken with the interpretation of these limits, as they are extracted under various simplifying assumptions for the new physics signal. Most of the bounds indeed assume that the quark partners decay, with a branching ratio of 100%, into third-generation top or bottom quarks, while more general situations, where decays into second- or first-generation quarks are possible, remain less explored. Sizeable couplings to light quarks are still allowed by indirect constraints [13][14][15], which could have a severe impact on electroweak vector-like quark processes at the LHC [16]. The latter, which induce vector-like quark decays, are driven by the couplings of the extra quarks to the weak and Higgs bosons. They admit a simple model-independent parameterisation [17] regardless of the representation of the new physics particles under the electroweak symmetry group, and this bottom-up approach is now adopted by the experimental collaborations. Most LHC searches for heavy quarks hence turn out to be agnostic of the ultraviolet completion of the model and can rather easily be reinterpreted in any framework, and options for combining different searches can also be considered.
Current vector-like quark searches, as well as most associated theoretical work, are, however, based on Monte Carlo simulations of the new physics signals in which hard-scattering matrix elements are evaluated at the leading-order (LO) accuracy in QCD. In addition, event samples featuring different final-state jet multiplicities are sometimes also merged in order to get a better control of the shapes of the key differential distributions. The formal precision of these calculations is nonetheless rather limited, which directly impacts the extraction of any limit on the properties of the new particles or the corresponding measurements in the case of a discovery. In this work, we build up a new procedure that allows us to make use of the MadGraph5_aMC@NLO framework [18] to compute total rates and differential distributions at the next-to-leading-order (NLO) accuracy in QCD for processes involving vector-like quarks. This more precisely concerns vector-like quark production on the one hand, and the production of Standard Model particles when vector-like quark diagrams contribute on the other hand, regardless of the strong or electroweak nature of the Born process.
Our methodology relies on the joint use of the FeynRules [19] and NLOCT [20] packages, the latter making use of FeynArts [21], for automatically generating a UFO library [22] that contains all tree-level vertices and counterterms necessary for NLO QCD computations. This UFO library can then be further used by MadGraph5_aMC@NLO for event generation, at the LO and NLO accuracies in QCD as well as for loop-induced processes. Virtual loop contributions are numerically evaluated through the MadLoop module [23] and combined with the real-emission diagrams following the FKS subtraction method as implemented in MadFKS [24,25], and the matching to parton showers is finally achieved according to the MC@NLO prescription [26].
In Sect. 2, we detail how we have modified the modelindependent parameterisation of Ref. [17] to make it suitable for NLO calculations in QCD matched to parton showers for vector-like quark processes. We also define a series of benchmark scenarios for the phenomenological studies performed in Sect. 3. We investigate vector-like quark pair production and single production (in association with either a jet or a weak gauge or Higgs boson), as well as diboson production. We summarise our work in Sect. 4.
A model-independent parameterisation for vector-like quark models
Model description
Vector-like quarks appear in many extensions of the Standard Model. They are usually included as fields lying in the fundamental representation of SU(3)_c and carry colour charges similar to those of their Standard Model counterparts. However, they can lie in various representations of the weak-interaction symmetry group SU(2)_L and be assigned different hypercharge U(1)_Y quantum numbers. Focusing on phenomenologically viable minimal models that comprise a single Standard Model Higgs field Φ, only weak triplets, doublets and singlets of vector-like quarks are allowed [27]. Consequently, the particle content of the theory can solely include four species of extra quarks, which we denote by X, T, B and Y, their respective electric charges being Q = 5/3, 2/3, −1/3 and −4/3. Although vector-like and Standard Model quarks with the same electric charge mix, the mixing pattern and the resulting phenomenology can be simplified when minimality requirements are imposed. Since we consider that the Higgs sector contains a single Standard-Model-like scalar Higgs field Φ, quark mixings are solely generated by its Yukawa interactions. The mass splitting between the vector-like quarks is consequently also constrained to be small and connected to the Higgs vacuum expectation value v, so that the extra quarks will always directly decay into a gauge or Higgs boson and one of the Standard Model quarks [27]. A model-independent effective parameterisation apt to describe the phenomenology of this vector-like quark setup has recently been proposed [17], but it is not suitable for higher-order QCD calculations. The reason is that, in the latter parameterisation, the strengths of the interactions of the vector-like and Standard Model quarks with a single Higgs boson depend on the masses of the model particles. As a result, the renormalisation of the quark masses and that of the couplings are related, which prevents all ultraviolet divergences that arise at the NLO from cancelling. We therefore modify the modelling of Ref. [17] so that all the couplings of the vector-like quarks to a gauge or a Higgs boson are free parameters, and we obtain the effective Lagrangian L_VLQ of Eq. (1), which supplements the Standard Model Lagrangian L_SM. The terms in its first and second lines consist of gauge-invariant kinetic and mass terms for the vector-like quark fields (taken in the mass eigenbasis) after restricting the covariant derivatives to their QCD component. The coupling parameter g_s denotes the strong coupling constant, G_μ the gluon field and T (and f for further references) the fundamental representation matrices (respectively the structure constants) of SU(3). Although the electroweak pieces of the covariant derivative could have been included, they have been omitted in order to simplify our model description, since they are model-dependent and are expected to yield a negligible effect with respect to the strong-interaction part. The third and fourth lines of Eq. (1) collect the effective interactions of the physical Higgs boson h with one Standard Model quark and its vector-like partner, generation indices being understood for clarity. As mentioned above, such interactions are yielded by the (flavour-changing) Yukawa couplings of the Higgs doublet Φ with the up-type (q_u), down-type (q_d) and vector-like quarks that induce a mixing of the Standard Model and the new physics quark sectors. The relevant elements of the mixing matrices have been included in the strengths of the effective interactions κ̃.
In the last six lines of Eq. (1), we include the weak interactions of the Z-boson and W-boson with one Standard Model quark and one vector-like quark. In our conventions, we have factorised out the weak coupling g, which represents the overall interaction strength, and the κ and κ̂ parameters include the relevant elements of the quark mixing matrices, as for the Higgs interactions. Moreover, the c_W parameter stands for the cosine of the electroweak mixing angle.
The L_VLQ Lagrangian above is equivalent, at the tree level, to the one of Ref. [17] once we impose the identification of Eq. (3). In this notation, f stands for a generation index and Γ^Q_X denotes the kinematic factor of the partial decay width of the extra quark Q into a final state containing an X boson. The κ_Q parameter encodes the magnitude of the coupling of the extra quark Q to the different electroweak bosons, while the ζ parameters refer to the mixing of the vector-like quarks with the Standard Model quarks; in this expression, we have represented the left-handed and right-handed 4 × 4 mixing matrices between the new quarks and the three Standard Model quarks by V_L,R. Finally, the ξ parameters of Eq. (3) determine the relative importance of the various decay modes of the vector-like quarks, their sum being equal to one. The calculation of differential and total cross sections for LHC processes at the NLO accuracy in QCD requires evaluating, on the one hand, real-emission squared amplitudes and, on the other hand, interferences of tree-level with virtual one-loop diagrams. The ultraviolet divergences that arise in the latter case are absorbed through the renormalisation of the fields and parameters appearing in L_VLQ. This is achieved by replacing all fermionic and non-fermionic bare fields Ψ and Φ and bare parameters y by the corresponding renormalised quantities, where we truncate the renormalisation constants δZ and δy at the first order in the strong coupling α_s = g_s^2/(4π). While the wave-function renormalisation constants of the Standard Model quarks are not modified by the presence of the vector-like quarks, the one of the gluon field, when the on-shell renormalisation scheme is adopted and when we include n_f = 5 massless flavours of quarks, is expressed in terms of the B_{0,1} functions and their derivatives B'_{0,1}, the standard two-point Passarino-Veltman integrals [28]. The left-handed and right-handed wave-function renormalisation constants δZ^{L,R}_Q and the mass renormalisation constants δm_Q of a vector-like quark Q (with Q = T, B, X, Y) are similar to the top-quark ones. In the corresponding expressions, C_F = (n_c^2 − 1)/(2n_c) is the quadratic Casimir invariant associated with the fundamental representation of SU(3), with n_c = 3. Finally, in order to fix the renormalisation-group running of α_s so that it originates from gluons and the n_f = 5 active light quark flavours, we renormalise α_s by subtracting, from the gluon self-energy, the contributions of all massive particles evaluated at zero-momentum transfer, the ultraviolet-divergent pieces being written in terms of 1/ε̄ = 1/ε − γ_E + log 4π, with γ_E the Euler-Mascheroni constant and ε connected to the number of space-time dimensions D = 4 − 2ε. The first term in the right-hand side of Eq. (8) results from the Standard Model massless parton contributions, while the second term is connected to the massive states, namely the top quark and the four considered vector-like quark species. In Sect. 3, we will compute predictions at the NLO accuracy in QCD for processes involving vector-like quarks. We will rely on a numerical evaluation of the loop integrals in four dimensions, which necessitates the calculation of rational terms associated with the ε-dimensional pieces of the loop integrals. There exist two sets of such rational terms that are, respectively, connected to the loop-integral denominators (R_1) and numerators (R_2).
While the former are universal, the latter are model-dependent and can be seen as a finite number of counterterm Feynman rules derived from the bare Lagrangian [29]. Starting from the L_VLQ Lagrangian of Eq. (1), several R_2 counterterms with external gauge bosons are modified with respect to the Standard Model case, as given in Eq. (9); in addition, new R_2 counterterms involving external vector-like quarks appear, as given in Eq. (11). In the conventions of Eqs. (9) and (11), c_i, μ_i, and p_i indicate the colour index (which can be associated either with the adjoint or the fundamental representation of SU(3)), the Lorentz index, and the four-momentum of the ith particle incoming to the R_2 vertex, respectively. Moreover, an explicit summation over q, Q and f implies a summation over all quark species, the extra quark species and the Standard Model quark species, respectively.
In the phenomenological study undertaken in Sect. 3, the virtual one-loop contributions to the NLO predictions are evaluated with the MadLoop module [23], and then combined with the real contributions by means of the FKS subtraction method [24] as implemented in the MadFKS package [25]. Both MadLoop and MadFKS being part of MadGraph5_aMC@NLO [18], the calculation is entirely automated from the knowledge of the bare Lagrangian of Eq. (1) and the specification of the process of interest [30]. Technically, the translation of the model Lagrangian into a UFO library [22] that contains ultraviolet and R_2 counterterms and that can be used by MadGraph5_aMC@NLO is automatically performed with the FeynRules [19] and NLOCT [20] packages, the latter program taking care of the calculation of the one-loop ingredients of the model files. The corresponding FeynRules and UFO models have been made publicly available and can be downloaded from the webpage http://feynrules.irmp.ucl.ac.be/wiki/NLOModels.
Benchmark scenarios
Throughout our phenomenological analysis, we adopt several series of benchmark scenarios in which a single vector-like quark is light enough to be reachable at the LHC. Moreover, for the sake of simplicity, we enforce its decay to proceed via a single channel. We denote each class of scenarios by the acronym QVi, where the symbol Q can be either T, B, X or Y and refers to the nature of the relevant extra quark, the symbol V refers to the nature of the boson into which the vector-like quark Q decays, and i is a generation number related to the family with which the quark Q mixes. For instance, the scenario TW2 would correspond to a setup in which the Standard Model is supplemented by an extra up-type quark T that decays into a final state made of a W-boson and a strange quark with a branching ratio equal to 1.
These types of scenarios are motivated by several considerations. The mixings of the extra quark with the Standard Model sector are severely constrained by flavour-changing neutral current probes [17,31,32], LEP data [33,34] and atomic parity violation measurements [35]. Within the parameterisation of the κ, κ̂ and κ̃ parameters of Eq. (3), sizeable mixings with all three generations are only allowed when the κ_Q parameters are below 10^-2 to 10^-3 [17]. Those bounds can, however, be relaxed when the mixing pattern is restricted to involve one or two quark generations. In our study, we enforce the vector-like quark mixing to only involve one specific generation of Standard Model quarks, and we fix the values of the κ_Q parameters to their current experimental limits of 0.07, 0.2 and 0.1 for mixings involving the first, second and third generation, respectively.
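As a small illustration of these conventions (a hypothetical helper, not part of the released model files; the only inputs taken from the text are the QVi naming scheme and the κ_Q limits per generation):

# kappa_Q upper limits adopted in this work, per Standard Model generation.
KAPPA_LIMITS = {1: 0.07, 2: 0.2, 3: 0.1}

def benchmark(quark: str, boson: str, generation: int) -> dict:
    """Build a QVi-style scenario, e.g. benchmark('T', 'W', 2) -> the TW2 setup."""
    assert quark in {"T", "B", "X", "Y"} and boson in {"W", "Z", "H"}
    return {"name": f"{quark}{boson}{generation}",
            "kappa_Q": KAPPA_LIMITS[generation],   # current experimental limit
            "branching_ratio": 1.0}                # single decay channel enforced

print(benchmark("T", "W", 2))

For example, benchmark('T', 'W', 2) returns the TW2 setup with κ_Q = 0.2 and a branching ratio of 1 into a W-boson and a second-generation quark.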
LHC phenomenology
In this section, we compute total cross sections and differential distributions both at the LO and NLO accuracy for several processes involving vector-like quarks. We study the genuine effects of the NLO corrections, as well as the induced reduction of the theoretical uncertainties. We then investigate the effects of matching fixed-order calculations to parton showers, both at LO and NLO. In Sects. 3.1 and 3.2, we focus on vector-like quark pair and single production, respectively. For the considered processes, the central (total and differential) cross-section values are computed after setting the renormalisation and factorisation scales to the average transverse mass of the final-state particles and by using the NLO set of the NNPDF 3.0 parton density functions (PDF) [36] accessed via the LHAPDF 6 library [37]. Scale uncertainties are derived by varying both scales independently by a factor of two up and down, the PDF uncertainties are extracted following the recommendations of Ref. [38], and both contributions to the theoretical uncertainties are added in quadrature.
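The following minimal sketch illustrates this uncertainty prescription (the function names and cross-section numbers are placeholders, not values from this work): the scale envelope is built from the independent variation of the two scales by a factor of two, and then combined in quadrature with the PDF uncertainty.

import itertools

def scale_envelope(sigma):
    """Return (up, down) deviations of the cross section from the central scale
    choice, scanning the nine (muR, muF) variations by factors of 0.5, 1, 2."""
    values = [sigma[(r, f)] for r, f in itertools.product((0.5, 1.0, 2.0), repeat=2)]
    central = sigma[(1.0, 1.0)]
    return max(values) - central, central - min(values)

def combine_in_quadrature(a, b):
    """Add two uncertainty sources in quadrature."""
    return (a**2 + b**2) ** 0.5

# Placeholder cross sections (pb) for the nine scale choices.
sigma = {(r, f): 1.0 + 0.05 * (r - 1.0) + 0.03 * (f - 1.0)
         for r in (0.5, 1.0, 2.0) for f in (0.5, 1.0, 2.0)}
scale_up, scale_down = scale_envelope(sigma)
pdf_err = 0.04  # placeholder PDF uncertainty (pb)
print(combine_in_quadrature(scale_up, pdf_err), combine_in_quadrature(scale_down, pdf_err))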
3.1 Vector-like quark pair production at the LHC

Vector-like quark pair production is in general dominated by QCD contributions, which have the advantage of being independent of the model details. Model-dependent electroweak diagrams induced by the last four lines of the Lagrangian of Eq. (1) may, however, be non-negligible, in particular when the final state of interest can be produced from the scattering of one or two valence quarks. We focus on the production of a pair of vector-like quarks, given by the three subprocesses of Eq. (12) and including the cases where they have the same electric charge, with Q being either T, B, X or Y. While the first of these three subprocesses receives both strong (diagrams of the first line of Fig. 1) and electroweak (first diagram of the second line of Fig. 1) contributions, the latter two subprocesses can only be mediated by the t-channel exchange of a weak or Higgs boson (last two diagrams of the second line of Fig. 1). NLO corrections to the strong contributions to the production of a pair of vector-like quarks (the diagrams of the first line of Fig. 1) can be automatically calculated within the MadGraph5_aMC@NLO framework. Thanks to the upgrade of the model to NLO as explained in the previous section, it is now sufficient to type in the program shell, taking the example of TT production,

import model VLQ_NLO_UFO
generate p p > tp tp~ [QCD]
output
launch

With this set of instructions, we first import the UFO model associated with the Lagrangian of Eq. (1) and then start the calculation of the cross section and the generation of Monte Carlo events, at the NLO accuracy in QCD, for the production of a T quark and antiquark pair (whose UFO names are tp and tp~). Other vector-like quark processes can be obtained by replacing the tp symbol by bp (for a B quark), x (for an X quark) and y (for a Y quark). LO event generation can be achieved in the same way once the [QCD] tag is omitted. We recall that the syntax is case insensitive.
For the electroweak channels (the diagrams of the second line of Fig. 1), the command to be typed in reads, still for the same example of TT production,

generate p p > tp tp~ QCD=0 [QCD]

Other channels (including the production of a pair of heavy quarks carrying the same electric charge) can be addressed similarly. Care must, however, be taken as mixed electroweak and QCD loops appear at NLO. In our approach, we focus on NLO calculations in QCD, and not in QED or in the context of the electroweak theory. As a result, MadGraph5_aMC@NLO automatically discards Feynman diagrams tagged as an electroweak correction to a QCD graph, although in our case all diagrams should be kept for a proper cancellation of all divergences. One possible way to cure this issue would be to include in the UFO model library all ultraviolet and R_2 counterterms that would be necessary for undertaking mixed NLO calculations in QCD and QED and to implement in MadFKS the necessary subtraction terms. This, however, goes beyond the scope of this work, so that, in order to maintain automation from the user standpoint, we have instead released a public script that should be called at diagram-generation time in order to prevent MadGraph5_aMC@NLO from discarding any loop diagram that would be tagged as an electroweak correction to a QCD Born diagram. More precisely, the script allows for the inclusion of all box diagrams containing at least two strong interaction vertices, but it removes weak-boson or Higgs-boson loop contributions that consist of an electroweak correction to a QCD Born process. Additionally, diagrams exhibiting the t-channel exchange of a weak or Higgs boson but with an additional gluon are kept. Not using the script would instead lead to the removal of several necessary box-diagram contributions.
Finally, QCD and electroweak diagrams can interfere. Because of the mixing of QCD and electroweak interaction orders at the one-loop level and the missing counterterms and subtraction terms mentioned above, MadGraph5_aMC@NLO cannot currently be used for the calculation of these interferences beyond the LO accuracy. We therefore rely on LO simulations and instead reweight the results with a K-factor approximating the effect of the QCD corrections, denoted by K^(int)_NLOQCD and defined as the ratio of the NLO to the LO (differential) results. We choose this K^(int)_NLOQCD factor to be the geometric average of the K^(QCD)_NLOQCD and K^(EW)_NLOQCD K-factors obtained in the context of the QCD and electroweak diagrams taken independently, where our notation explicitly indicates the nature of the corresponding underlying Born process. We have additionally checked that, for the central value, our procedure yields numerical differences of at most 2%. The related MadGraph5_aMC@NLO command allowing for the generation of interference events, again for the example of TT production, is a standard command for LO event generation in MadGraph5_aMC@NLO.
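A minimal sketch of this reweighting procedure (illustrative numbers only; this is not code released with this work):

import math

def interference_k_factor(k_qcd: float, k_ew: float) -> float:
    """Geometric mean of the K-factors of the pure-QCD and pure-EW channels."""
    return math.sqrt(k_qcd * k_ew)

def reweight_lo(sigma_lo_int: float, k_qcd: float, k_ew: float) -> float:
    """Approximate the NLO interference rate by rescaling the LO one."""
    return interference_k_factor(k_qcd, k_ew) * sigma_lo_int

# Example with placeholder values: K_QCD = 1.5, K_EW = 1.2, sigma_LO(int) = 0.01 pb.
print(reweight_lo(0.01, k_qcd=1.5, k_ew=1.2))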
All MadGraph5_aMC@NLO scripts necessary for differential and total cross-section calculations and event generation are available from the webpage http://feynrules.irmp.ucl.ac.be/wiki/NLOModels, together with the UFO and FeynRules models.
In Fig. 2 and Table 1, we present LO and NLO total cross sections for the production of a pair of vector-like T quarks for the different scenarios introduced in Sect. 2.2, and we depict the dependence of the cross sections on the vector-like quark mass. We first focus on the pure QCD contribution, which is independent of the vector-like quark nature (diagrams of the upper line of Fig. 1). The genuine NLO contributions are found to be important, as they first induce a shift in the cross section of about 50% within the entire probed m_Q range and then reduce the dependence of the rate on the unphysical factorisation and renormalisation scales. The uncertainty band is indeed significantly reduced when NLO effects are accounted for, the scale dependence being reduced to the level of about ±10% over the entire mass range. At the NLO, the scale dependence induced by the virtual contributions indeed partially compensates for the one stemming from the Born and real-emission diagrams. Our results for the pure QCD case (strong-production Born diagrams only) agree with the literature, and we recall that this corresponds to the production of a top-quark pair with a different top-quark mass [39]. In contrast, this is the first calculation including the impact of the electroweak diagrams at the NLO accuracy in QCD.
In the considered scenarios, electroweak contributions to the production of a pair of vector-like quarks possibly carrying the same electric charge are found to be important only for the TH1 scenario, the associated results being shown in Fig. 2 as dashed and dotted bands. These bands are, respectively, related to the production of a vector-like quark-antiquark pair (dashed) and to the production of a pair of heavy quarks regardless of their electric charge (dotted). In the latter case, the contributions from the three processes of Eq. (12) are summed over. The TH1 scenario features an extra quark that mixes with the first generation of Standard Model quarks, so that parton density effects can lead to an enhancement of the production rate due to quark-antiquark, quark-quark and antiquark-antiquark (electroweak) scattering diagrams involving one or two initial valence quarks. This is particularly pronounced for setups featuring heavy vector-like quarks that require probing large Bjorken-x phase-space regions. As a consequence, the central cross-section values and the scale and parton density uncertainties are different from the pure QCD case, since non-QCD diagrams (featuring a different initial state) dominate. This is illustrated in Table 1 for a few mass choices. Whereas the production rates are always larger, the uncertainties can be either smaller or larger than in the QCD case. We observe a huge gain in cross section for TH1 scenarios with a very heavy extra quark. This stems from Eq. (3), which shows that the coupling of the extra quark to the Higgs boson and a lighter quark has been taken proportional to the vector-like quark mass, and is thus enhanced for large values of m_Q. In principle, such a coupling should also be proportional to the related mixing-matrix element ζ that compensates this enhancement, as shown in Eqs. (3) and (4). Setting ζ = 1, this feature is translated into the adopted value for the κ_Q parameters.
Similar properties can be found in the context of the production of B, X and Y vector-like quarks, as illustrated in Appendix A.
Accurate differential distributions are often helpful for setting more precise exclusion limits and refining the experimental search strategies. Our implementation in the MadGraph5_aMC@NLO platform can be used to this aim, and we present in Fig. 3 differential distributions in several observables, including NLO and parton-shower effects. We have chosen the BH2 class of benchmark scenarios with a vector-like quark mass set either to 500 GeV (upper series of curves on the figure) or 1500 GeV (lower series of curves on the figure). In our calculations, we have made use of the MadSpin [40] and MadWidth [41] programs to automatically handle the heavy-quark decays in a way that retains both off-shell and spin-correlation effects, we have matched the fixed-order calculation to the parton-shower and hadronisation infrastructure as modelled by the Pythia 8 package [42], and we have reconstructed all final-state jets by means of the anti-k_T algorithm [43] with a radius parameter set to 0.5 as implemented in FastJet [44]. As shown in Figs. 3 and 4, our predictions confirm the total cross-section findings of Table 4 (see Appendix A). The contributions of the electroweak diagrams are, respectively, negligible and significant for light and heavy vector-like quarks. Moreover, the parton density uncertainties dominate for setups exhibiting a large M_B value, rendering the theoretical predictions barely reliable.
In the two upper panels of Fig. 3, we study the properties of the first produced B-quark and show its transverse momentum p_T(B_1) and pseudorapidity η(B_1) distributions. In the case of a light B-quark, the parton-shower effects (green and red solid lines) slightly affect the shapes of the spectra predicted by the fixed-order calculations (blue and brown solid lines), both at the LO and NLO accuracies. For heavier vector-like quarks, slight modifications can be observed in the small-p_T and large-|η| regions, although the accurate modelling of the first extra jet does not yield any impact at the level of the individual B-quarks. These differences are largely covered by the theoretical uncertainties stemming from the poor knowledge of the parton densities in the relevant phase-space regions. These PDF uncertainties are dominant, so that the reduction of the scale uncertainties has a small impact. However, PDF uncertainties are expected to improve in the coming years thanks to new LHC data, so that it is mandatory to have NLO calculations available to get a better control on the predictions. In contrast, parton-shower effects are directly visible in distributions related to the two B-quarks taken as a pair (Fig. 3). Focusing on the related transverse-momentum distributions, the fixed-order predictions (for which only the p_T(B_1 + B_2) = 0 bin is populated at LO) diverge at small p_T due to soft and collinear radiation giving rise to large logarithms that must be resummed to all orders to obtain reliable predictions. This resummation is effectively achieved by matching the fixed-order calculations to parton showers, and the resulting distribution exhibits a reliable behaviour with a peak for p_T(B_1 + B_2) of about 10-20 GeV. Uncertainties originating from the choice of the shower algorithm and its inherent free parameters are, however, not estimated. The magnitude of the electroweak diagrams is also studied (dashed lines). In the case of lighter B-quarks, the theoretical predictions are essentially driven by the QCD contributions, so that no differences can be noticed. This contrasts with the heavy B-quark case, where electroweak diagrams enhance the total rate by about 30% (see Appendix A) and also impact the differential distributions both in terms of normalisation and shape. This originates from the different topologies of the electroweak diagrams, which feature a t-channel colourless boson exchange. We now include the vector-like quark decays into a Higgs boson and a strange quark and reconstruct the final-state jets as detailed above. Considering only hard central jets for which |η| < 2.4 and p_T > 30 GeV, we present in Fig. 4 the transverse-momentum distributions of the three leading jets as well as the spectrum of the H_T variable, defined as the scalar sum of the transverse momenta of all final-state jets and leptons, the generated events being inclusive in the Higgs-boson decays. Leptons are included only if their transverse momentum is larger than 30 GeV, their pseudorapidity smaller than 2.4 in absolute value and if they are isolated from any jet by an angular distance in the transverse plane ΔR of at least 0.5. Focusing on fixed-order predictions matched to parton showers, we observe that the first two leading jets are in general hard, as they result from the decay of two heavy coloured particles.
In contrast, the structure of the p_T dependence of the third jet is more representative of the one expected from a radiation jet, as this jet arises most of the time from initial-state or final-state radiation. Turning to the H_T distribution (lower right panel of the figure), one notices a peak at very small H_T values when the B-quark mass is fixed to 500 GeV. This feature arises from events where the two jets originating from the heavy-quark decays are mis-reconstructed, the leading jet thus being the radiation jet, so that the associated jet activity in the events is not significantly large.
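For concreteness, here is a minimal sketch of the jet/lepton selection and of the H_T definition used above (the event representation and helper names are assumptions made for illustration, not the analysis code of this work):

import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in the (eta, phi) plane."""
    dphi = math.remainder(phi1 - phi2, 2 * math.pi)
    return math.hypot(eta1 - eta2, dphi)

def compute_ht(jets, leptons, pt_min=30.0, eta_max=2.4, iso_dr=0.5):
    """Scalar sum of jet and isolated-lepton transverse momenta.

    `jets` and `leptons` are lists of dicts with keys 'pt', 'eta', 'phi'.
    """
    sel_jets = [j for j in jets if j["pt"] > pt_min and abs(j["eta"]) < eta_max]
    sel_leps = [l for l in leptons
                if l["pt"] > pt_min and abs(l["eta"]) < eta_max
                and all(delta_r(l["eta"], l["phi"], j["eta"], j["phi"]) > iso_dr
                        for j in sel_jets)]
    return sum(o["pt"] for o in sel_jets + sel_leps)

# Toy event: two hard jets from the heavy-quark decays and one soft lepton.
print(compute_ht([{"pt": 250., "eta": 0.3, "phi": 1.0},
                  {"pt": 180., "eta": -1.1, "phi": -2.0}],
                 [{"pt": 20., "eta": 0.5, "phi": 0.2}]))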
Single vector-like quark production in association with jets
Single vector-like quark production mechanisms are of a purely electroweak nature. The associated predictions are thus model-dependent, as the sizes of the electroweak vector-like quark couplings are free parameters of the model. Comparing vector-like quark single and pair production, the latter gets an enhancement originating from the presence of strong diagram contributions (first line of Fig. 1) together with a phase-space suppression for large vector-like quark mass values. In contrast, electroweak single vector-like quark production is less suppressed for large vector-like quark masses, which could compensate the weakness of the involved interaction vertices and make this channel the main LHC discovery mode for a heavy vector-like quark. As a result, several ATLAS and CMS vector-like quark searches also target the single-production mode [10,[45][46][47][48]].
In this section, we focus on vector-like quark single production in association with jets. Other single-production mechanisms exist, with, for instance, a final-state gauge or Higgs boson, but we refer to Sect. 3.3 for the latter. A representative set of Feynman diagrams related to single vector-like quark production with jets is shown in Fig. 5, and NLO cross-section calculation and event generation can be achieved with MadGraph5_aMC@NLO by typing the relevant command in the program interpreter for single T production. Other processes with a different final-state vector-like quark can be accounted for with a similar syntax, and LO event generation only necessitates removing the [QCD] tag. As for electroweak contributions to vector-like quark pair production, mixed QCD and electroweak loop diagrams appear at the NLO level and must be treated consistently to obtain ultraviolet-finite results. This step not being automated, we provide scripts that steer the event-generation process on the UFO model webpage. In Fig. 6 and Table 2, we present LO and NLO total cross sections for single B-quark production for the BZ1, BZ2, BW1 and BW2 scenarios introduced in Sect. 2.2, for which single vector-like B-quark production occurs via Z-boson or W-boson exchanges. These are the first NLO predictions for a single vector-like quark production process. Although all depicted total cross sections exhibit a similar order of magnitude, a small hierarchy is observed. It is driven by an interplay of the parton densities, which enhance mechanisms involving first-generation quarks, and of the new physics coupling strengths, which are much larger for vector-like quark mixings with second-generation (κ_Q = 0.2) than with first-generation quarks (κ_Q = 0.07). For vector-like B-quark masses smaller than 500 GeV, single-production cross sections are suppressed by a factor of 2 or 3 with respect to the strong production of a pair of B quarks (see Appendix A), but feature a similar order of magnitude for m_B ∈ [500, 800] GeV. In contrast, single-B production always dominates for heavier vector-like quarks, by up to two orders of magnitude for the largest values of m_B. For scenarios with smaller κ_Q values, the changes in the relative importance of the two production channels can, however, be shifted towards higher masses.
In Fig. 7, we turn to the study of differential distributions related to inclusive single T-quark production at the LHC, focusing on the TZ1 scenario as an illustrative benchmark point and for a vector-like quark mass m_T fixed to 500 GeV or 1500 GeV. Event generation and reconstruction are performed following the guidelines mentioned in the previous subsection.

Fig. 7 Differential distributions depicting the properties of a singly produced vector-like T (or T̄) quark. We present its transverse momentum (upper), rapidity (lower left) and pseudorapidity (lower right) spectrum and compare fixed-order predictions at the LO accuracy (purple curve) and NLO accuracy (blue curve), as well as predictions including the matching of these two calculations to parton showers (green and red curves for the LO and NLO cases, respectively). The heavy quark mass is fixed either to 500 GeV (upper series of curves) or to 1500 GeV (lower series of curves).

In Fig. 7 we present the transverse momentum p_T(T), rapidity y(T) and pseudorapidity η(T) spectra of the vector-like quark. We show predictions both at fixed order (purple and blue curves for the LO and NLO accuracy, respectively) and after matching the results to parton showers (green and red curves for the LO and NLO accuracy, respectively). Theoretical uncertainties originating from scale variations and parton densities are included for the matched predictions. In general, NLO effects only moderately affect the shape of the different spectra but drastically reduce the theoretical scale uncertainties and allow for a better prediction of the spectrum normalisation. Similarly to the pair-production case, matching to parton showers only mildly impacts fixed-order predictions for the properties of the produced heavy quarks.
Regardless of m_T, the transverse-momentum distribution of singly produced T quarks exhibits a typical steeply falling behaviour with increasing values of p_T(T) (with a peak at low p_T, which is invisible due to the binning choice), and the vector-like quark is quite forward by virtue of the process topology, with |η| > 2 on average. In Fig. 8, we study the properties of the decay products of the heavy quarks, which consist of a Z-boson and an up quark in the TZ1 scenario. We first present the transverse-momentum spectrum of the three leading jets with a pseudorapidity satisfying |η| < 2.5 and a transverse momentum larger than 30 GeV. Being inclusive in the Z-boson decay, the jets can originate either from the heavy T-quark decay, from the Z-boson decay or from initial- or final-state radiation.

Fig. 8 ... isolated lepton transverse momenta. We compare LO (green) and NLO (red) predictions after matching the fixed-order calculations to parton showers. The heavy quark mass is fixed either to 500 GeV or to 1500 GeV.

Focusing on the leading jet p_T distribution, we observe that it peaks at half the T-quark mass, which shows that the leading jet often originates from the heavy-quark decay. In contrast, the second jet transverse-momentum distribution exhibits a plateau extending up to half the T mass, from which we conclude that it could originate either directly from the T-quark decay or from the Z-boson decay. The third jet finally shows a different behaviour, its distribution peaking this time at a lower p_T value, so that it is likely to be connected to the hard process.
In addition to the leading-jet transverse-momentum distributions, we also present the distribution in the H_T variable, defined as the scalar sum of the transverse momenta of the final-state jets and isolated leptons, the latter being considered only if their pseudorapidity fulfils |η| < 2.4 and their transverse momentum p_T > 30 GeV. Moreover, we require the leptons to be well separated from any jet, imposing the angular distance ΔR to be larger than 0.5. The H_T variable exhibits a non-trivial structure which peaks both at the T-quark mass m_T and at m_T/2, the second peak arising from cases where the Z-boson decays invisibly.
In all cases, the shapes turn out to be only slightly affected by the NLO effects and the uncertainties are drastically reduced.

Other processes impacted by vector-like quarks

The corresponding MadGraph5_aMC@NLO commands can be issued for di-Higgs and diboson production, respectively, using the $$ symbol to remove any possible intermediate resonance. In the last case, the pair of symbols v v stands either for w+ w- or z z according to the weak boson under consideration. LO event generation can finally be achieved by removing the [QCD] tag.
Additional single vector-like quark production processes, where the heavy quark is produced in association with a Standard Model boson, can be considered as extra mechanisms useful for seeking vector-like quarks (diagrams shown in the rightmost part of Fig. 9). Such processes can be simulated with MadGraph5_aMC@NLO by typing the commands

generate p p > tp h $$ tp~ [QCD]
add process p p > tp~ h $$ tp [QCD]

for T/T̄ production, as an example. Once again, the $$ symbol is used in order to avoid intermediate resonances. VQ production can be undertaken similarly.
As an illustrative example, we show in Table 3 and Fig. 10 total rates at LO and NLO for Higgs-boson (upper) and Z-boson (lower) pair production, respectively, for scenarios featuring a T quark interacting with the first generation of Standard Model quarks and either the Higgs boson or the Z-boson. We observe gigantic K-factors in the case of di-Higgs production. This enhancement is connected to an interplay of two effects. Turning back to the observations made in Sect. 3.1, the vector-like quark coupling to light quarks and a Higgs boson has a strength that is proportional to the heavy-quark mass, and thus becomes stronger and stronger with increasing vector-like quark masses. In addition, a new channel where the initial state is comprised of a gluon and a quark opens up at NLO. This component of the NLO cross section turns out to dominate due to the large gluon density in the proton. As a result, the total cross section for vector-like-quark-mediated di-Higgs production is more or less constant with the heavy-quark mass at the NLO QCD accuracy, which contrasts with the LO case.
Conclusions
We have modified a previously introduced model-independent parameterisation for the study of vector-like quark phenomenology at the LHC so that it is now suitable for NLO calculations in QCD matched to parton showers within the MadGraph5_aMC@NLO framework. We have illustrated its usage in the context of vector-like quark pair production and vector-like quark single production in association either with a jet or with a weak or Higgs boson. For all showcased processes, we have considered QCD and electroweak diagram contributions and investigated NLO and parton-shower effects on the normalisation and shapes of the associated kinematical distributions.
We have found that NLO K-factors are important, both globally (at the total-rate level) and at the differential-distribution level, and could hence potentially impact limits currently extracted from vector-like quark search results of the ATLAS and CMS collaborations. We have in particular noticed the existence of potentially huge K-factors for new physics scenarios involving the coupling of a heavy vector-like quark to first-generation quarks and a Higgs boson, due to new production channels that open at the NLO accuracy. This motivates further investigations, in particular to assess how an experimental analysis including detector effects could benefit from the gain in cross section stemming from the new topologies that dominate at NLO, and to determine the impact on the current vector-like quark limits and LHC discovery potential.

Appendix A: Total cross sections for vector-like quark pair production

We present LO and NLO total cross sections for the pair production of vector-like quarks for the different scenarios introduced in Sect. 2.2. We depict the dependence of the cross sections on the vector-like quark mass and recall that the pure QCD contributions can be found in Table 2 as they are independent of the vector-like quark nature. Results for X and Y pair production are not shown as non-QCD contributions are negligible for all considered scenarios.
Appendix B: Total cross sections for the single production of a T, X or Y quark

In Fig. 12, Table 5 and Table 6, we present LO and NLO total cross sections related to the production of a single vector-like T, X and Y quark for the different scenarios introduced in Sect. 2.2. In each case, we depict the dependence of the cross sections on the vector-like quark mass and study the uncertainties stemming from scale variation.
| 10,211.6 | 2016-10-14T00:00:00.000 | ["Physics"] |
Stochastic and deterministic mathematical model of cholera disease dynamics with direct transmission
In this paper we develop a stochastic mathematical model of cholera disease dynamics by considering the direct contact transmission pathway. The model considers four compartments, namely susceptible humans, infectious humans, treated humans, and recovered humans. First, we develop a deterministic mathematical model of cholera. Since the deterministic model does not account for randomness or environmental factors, we convert it to a stochastic model. Then, for both types of models, the qualitative behaviors, such as the invariant region, the existence of a positive invariant solution, the two equilibrium points (disease-free and endemic equilibrium), and their stabilities (local as well as global), are studied. Moreover, the basic reproduction numbers are obtained for both models and compared. From the comparison, we find that the basic reproduction number of the stochastic model is much smaller than that of the deterministic one, which means that the stochastic approach is more realistic. Finally, we perform sensitivity analysis and numerical simulations. The numerical simulation results show that reducing the contact rate, improving the treatment rate, and environmental sanitation are the most crucial activities for eradicating cholera from the community.
Cholera is an acute diarrheal illness caused by the gram-negative bacterium Vibrio cholerae, which lives in aquatic environments (Cui et al. [5]).
The symptoms of cholera include extreme vomiting, dry mouth, irregular heartbeat, painless watery stools, little or no urine output, and low blood pressure (Fatima et al. [7]). It can be transmitted by either direct or indirect transmission pathways (Fakai [6]). The human-to-human (direct) route of cholera transmission is from an infected individual to other individuals (touching, biting, and sexual intercourse), whereas the indirect (environment-to-human) route of transmission is through ingesting Vibrio cholerae bacteria from contaminated food and water (Wang and Modnak [14]). In this paper we concentrate only on the direct transmission pathway, while incorporating the stochastic nature of the disease.
Mathematical modeling has been an important tool in analyzing the spread and control of infectious diseases and also in making decisions regarding intervention mechanisms for disease control (Adewale et al. [1]). Mathematical epidemiology has contributed to the understanding of the behavior of infectious diseases, their impacts, and possible future predictions about their spread. Mathematical models are used in comparing, planning, implementing, evaluating, and optimizing various detection, prevention, therapy, and control programs (Bubniakova [3]). Several mathematical models of cholera have been developed by different authors. The study done by Codeco [4] focused mainly on the endemicity of cholera and suggested two controlling mechanisms. The other study done by Codeco [4] proposed an SIR-B deterministic model by adding an environmental component to the regular SIR model. The study done by Fatima et al. [7] in Nigeria also proposed a deterministic mathematical model for the control of cholera. Additionally, many authors, such as Fakai [6], Adewale et al. [1], Beryl et al. [2], Javidi and Ahmad [8], and others, developed mathematical models of cholera to explore its transmission dynamics and controlling strategies. But none of them consider stochastic environmental factors that can cause cholera outbreaks, such as water conditions, rainfall, air temperature, etc. In this paper we consider environmental factors and develop a stochastic model of cholera dynamics.
The rest of the paper is organized as follows: In Sect. 2, the cholera model is described and formulated in deterministic as well as stochastic approach. In Sect. 3, qualitative analysis and sensitivity analysis of the model are discussed. In Sect. 4, we use MATLAB software to investigate numerical simulation results. Finally, our discussions and conclusions are presented in Sect. 5.
Model formulation and description
The model considers a total human population of size P(t), which is divided into four compartments, namely the susceptible class, represented by S(t), the infected class (I(t)), the treated class (T(t)), and the recovered class (R(t)). Susceptible individuals (S(t)) are those individuals that are not yet infected but can acquire the infection at some time in the future, infected individuals (I(t)) are individuals who have developed the symptoms of cholera and are able to transmit the disease, the treatment class (T(t)) includes those individuals that receive treatment at a time t for t > 0 after they have been infected with cholera, and the recovered compartment (R(t)) includes those individuals that have recovered from cholera disease and acquired temporary immunity.
The population in the susceptible compartment is increased from the recovered compartment at a rate δ through the loss of temporary immunity and also by the recruitment rate π. However, its number decreases by the natural death rate μ and also by moving to the infected compartment at the rate α. The population in the infected compartment is increased through the contact rate α, and its number decreases by the natural death rate μ, the cholera-induced death rate τ, and by moving to the treatment compartment at the treatment rate σ. The population in the treatment compartment increases from the infected compartment at the treatment rate σ and decreases at the recovery rate γ and the natural death rate μ. The population in the recovered compartment also increases at the recovery rate γ, but its number decreases by the natural death rate μ and through the loss of immunity at the rate δ.
The model is guided by the following assumptions: the size of the human population is constant, the birth rate and death rate are not equal, all parameters are nonnegative, all individuals in the community are susceptible, therapeutic treatment is applied to the infectious individuals, the treated individuals (individuals that are on treatment) do not transmit cholera to the susceptible human population, recovery confers temporary immunity, and there is disease-induced death. Based on the above descriptions and assumptions, our model is expressed diagrammatically in Fig. 1.
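Before turning to the stochastic extension, the deterministic dynamics described above can be sketched numerically as follows (the right-hand sides are reconstructed from the verbal compartment description, since the displayed system (1) is not reproduced here, and all parameter values are placeholders):

import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters: recruitment pi_, contact alpha, natural death mu,
# cholera-induced death tau, treatment sigma, recovery gamma, immunity loss delta.
pi_, alpha, mu, tau, sigma, gamma, delta = 10.0, 0.002, 0.02, 0.05, 0.3, 0.1, 0.05

def sitr(t, y):
    S, I, T, R = y
    dS = pi_ + delta * R - alpha * S * I - mu * S
    dI = alpha * S * I - (mu + tau + sigma) * I
    dT = sigma * I - (mu + gamma) * T
    dR = gamma * T - (mu + delta) * R
    return [dS, dI, dT, dR]

sol = solve_ivp(sitr, (0, 200), [400.0, 10.0, 0.0, 0.0])
print(sol.y[:, -1])  # compartment sizes at the final time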
Since the above deterministic model in equation (1) does not consider stochastic environmental factors and lacks realistic conditions, we extended it to a stochastic model.
To extend it, we introduce Brownian motions B_i(t) and the intensities of the stochastic environmental factors β_i into equation (1), multiplied by dt. We then obtain the stochastic model (2), where β_1, β_2, β_3, β_4 ≥ 0 denote the intensities of the Brownian motions and B_1, B_2, B_3, B_4 are independent Brownian motions.
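A minimal Euler-Maruyama sketch of this stochastic extension (assuming, as in the derivation of the stochastic reproduction number below, that the noise on each compartment is of the form β_i X_i dB_i; drift terms and parameter values are the same placeholders as in the deterministic sketch above):

import numpy as np

# Same placeholder parameters as in the deterministic sketch.
pi_, alpha, mu, tau, sigma, gamma, delta = 10.0, 0.002, 0.02, 0.05, 0.3, 0.1, 0.05
beta = np.array([0.5, 0.5, 0.3, 0.3])      # intensities beta_1..beta_4
rng = np.random.default_rng(0)
dt, n_steps = 0.01, 20_000
y = np.array([400.0, 10.0, 0.0, 0.0])      # S, I, T, R

for _ in range(n_steps):
    S, I, T, R = y
    drift = np.array([pi_ + delta * R - alpha * S * I - mu * S,
                      alpha * S * I - (mu + tau + sigma) * I,
                      sigma * I - (mu + gamma) * T,
                      gamma * T - (mu + delta) * R])
    dB = rng.normal(0.0, np.sqrt(dt), size=4)
    y = np.maximum(y + drift * dt + beta * y * dB, 0.0)  # diffusion taken as beta_i * X_i dB_i

print(y)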
Qualitative analysis
In this section, some basic qualitative behaviors of the model, including the invariant region, positivity of the solution, disease-free equilibrium point, basic reproduction numbers, local and global stabilities of disease-free equilibrium, endemic equilibrium point, stability of endemic equilibrium, and sensitivity analysis, are discussed.
Invariant region
In this subsection we obtain invariant regions of model equations (1) and (2).
Invariant region for deterministic model
To get the invariant region, we consider the total population at time t, P(t) = S(t) + I(t) + T(t) + R(t). Differentiating P(t) with respect to t and substituting dS(t)/dt, dI(t)/dt, dT(t)/dt, and dR(t)/dt from equation (1) into equation (3), we obtain equation (4). If there is no infectious individual due to cholera disease, that is, (τ = 0, g = 0), then equation (4) becomes the differential inequality (5). By separation of variables, we obtain equation (6), where C and D are constants. Solving and evaluating equation (6) as t → ∞, we have lim_{t→∞} P(t) = π/μ, which implies that 0 ≤ P(t) ≤ π/μ. Therefore, the model is biologically meaningful and bounded in the domain Ω = {(S, I, T, R) ∈ R^4_+ : 0 ≤ P(t) ≤ π/μ}.
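Explicitly, the comparison argument behind this bound can be written as follows (a standard sketch consistent with the steps referenced above):

dP(t)/dt ≤ π − μP(t)   ⇒   P(t) ≤ π/μ + (P(0) − π/μ) e^(−μt) → π/μ as t → ∞,

so any trajectory with 0 ≤ P(0) ≤ π/μ remains in that range, which is the bound quoted above.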
Invariant region for the stochastic model

Theorem 1 The region Ω is almost surely positively invariant for our stochastic model equation (2).
Proof Let us take k_0 as a large integer such that, if (S_0, I_0, T_0, R_0) ∈ R^4_+, then every component of (S_0, I_0, T_0, R_0) lies in the interval [1/k_0, 1]. For each integer k ≥ k_0, we can define the stopping time τ_k. We want to prove that τ_∞ = ∞ almost surely, i.e. that P(τ_k < M) → 0 for every M > 0, which allows us to show lim_{k→∞} sup P(τ_k < M) = 0. Now consider a Lyapunov function V. Then, by using Ito's formula, we obtain equation (7), which, after rearranging, gives equation (8). Integrating both sides from 0 to τ_k ∧ M, where τ_k ∧ M = min{τ_k, M}, and taking expectations of the resulting inequalities yields the stated bound. Since V(X(τ_k ∧ M)) > 0, and since at the time τ_k some components of X(τ_k) reach the boundary of the region, we obtain, from equation (13) and the above, the bound (14). From (12)-(14) it follows that the estimate (16) holds. Letting k → ∞ and taking the limit superior in equation (16), for all M > 0 we obtain lim_{k→∞} sup P(τ_k < M) = 0, which completes the proof of the theorem.
Positivity of the solutions
In this subsection, we show that the model solutions remain nonnegative for all future time given their respective initial conditions.

Proof To prove Theorem 2, let us take the first equation of model (1). Solving equation (17) by separation of variables and applying the initial conditions, after some steps we obtain the required bound. Next, let us take the second equation of model (1). Solving equation (19) by separation of variables and applying the initial condition, and noting that as t → 0 we have e^{-(μ+d+θ+τ)t} → 1, we obtain the corresponding bound. Similarly, we take the third and fourth equations of model (1) and follow the same steps, which completes the proof of the theorem. Therefore, the solutions of model equations (1) and (2) are positive for future time.
Disease-free equilibrium point
To obtain the disease-free equilibrium point, we set the right-hand sides of model equation (1) to zero and assume there are no infectious individuals in the population, that is, I = 0, T = 0, R = 0.
Basic reproduction number
To determine the basic reproduction number (R 0 ) of equations (1) and (2), we use the next generation matrix method by Beryl et al. [2]. We have two basic reproduction numbers (deterministic and stochastic).
Basic reproduction number for deterministic model
In view of that, we first take the newly infectious class. By the principle of the next-generation matrix approach, we obtain the vectors f and v. The next step is to obtain the Jacobian matrices of f and v with respect to I at E_0 = (π/μ, 0, 0, 0). Let F and V be the Jacobian matrices of f and v, respectively. Evaluating F and V at the disease-free equilibrium point E_0 = (π/μ, 0, 0, 0) and forming FV^{-1}, the eigenvalues of FV^{-1} can be obtained; by the next-generation matrix principle, the largest eigenvalue is the basic reproduction number.
Therefore, our basic reproduction number for the deterministic model is R_0^D = απ/(μ(μ + τ + σ)).
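Since there is a single infected compartment, the next-generation construction sketched above reduces to scalars (a reconstruction consistent with the quoted result; the intermediate matrices themselves are not reproduced in the text):

F = ∂(αSI)/∂I evaluated at E_0 = απ/μ,   V = ∂((μ + τ + σ)I)/∂I = μ + τ + σ,

so that R_0^D = ρ(FV^{-1}) = απ/(μ(μ + τ + σ)).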
Basic reproduction number for stochastic model
By taking the infected class of (2), we obtain the basic reproduction number of the stochastic approach. Using Ito's formula for a twice-differentiable function, we can derive the stochastic basic reproduction number. Let f(t, I(t)) = ln(I(t)); its Taylor-series (Ito) expansion involves ∂f/∂t = 0, ∂f/∂I = 1/I and ∂²f/∂I² = −1/I². Let a = αS(t)I(t) − (μ + τ + σ)I(t) and b = β_2 I(t); then, applying Ito's formula and the chain rule, we obtain the expression for df(t, I(t)). Using the next-generation matrix argument, we identify the stochastic basic reproduction number.
where R_0^D = απ/(μ(μ + τ + σ)). Therefore, our basic reproduction number for the stochastic model is given in equation (28). From equation (28) we see that R_0^S < R_0^D, which means the stochastic approach is more realistic than the deterministic approach.
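For reference, with f = ln I, a = αSI − (μ + τ + σ)I and b = β_2 I as above, Ito's formula gives

d ln I(t) = (αS − (μ + τ + σ) − β_2²/2) dt + β_2 dB_2(t),

so that, evaluating the drift at S = π/μ, one plausible form of the stochastic threshold consistent with R_0^S < R_0^D is R_0^S = R_0^D − β_2²/(2(μ + τ + σ)); this is a hedged reconstruction, since the displayed equation (28) itself is not reproduced in the text.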
Local stability of disease-free equilibrium
In this subsection we show the local stability of the disease-free equilibrium in the case of the deterministic as well as the stochastic model.

Proof To prove Theorem 3, we first construct the Jacobian matrix of model equation (1).
The Jacobian matrix evaluated at E_0 is given in equation (31), from which the eigenvalues are evaluated. The characteristic polynomial of equation (32) is then factorised, giving the eigenvalues. We have λ_1 < 0, λ_2 < 0, and λ_3 < 0, but the sign of λ_4 is not immediately known. For stability, all the eigenvalues have to be negative, so λ_4 has to be less than zero. Since, from equation (24), R_0^D = απ/(μ(μ + τ + σ)), equation (34) becomes the condition R_0^D < 1. Therefore, our disease-free equilibrium point is locally asymptotically stable if and only if R_0^D < 1.
Local stability of disease-free equilibrium in the case of the stochastic model

Theorem 4 If R_0^S < 1, then for any initial values (S_0, I_0, T_0, R_0) the disease-free equilibrium is locally asymptotically stable almost surely.

Proof Let us take F(t, I(t)) = ln I(t). By Ito's formula we obtain equation (35). Integrating equation (35) on both sides and evaluating equation (36) at E_0, we obtain an expression involving the martingale G(t) = ∫_0^t β_2 dB(t). By the strong law of large numbers for martingales, we have lim_{t→∞} sup G(t)/t = 0 almost surely.
Then divide both sides of equation (37) by t and let t → ∞. Taking lim sup as t → ∞ on both sides of equation (38), and noting that (μ + τ + σ) > 0, it follows from R_0^S < 1 that R_0^S − 1 is less than zero. Therefore, our disease-free equilibrium point is locally asymptotically stable if and only if R_0^S < 1.
Global stability of disease-free equilibrium
Theorem 5 If R_0^D < 1, then E_0 is globally asymptotically stable in the invariant region.
Therefore, by LaSalle's invariance principle, every solution of model equation (1) with given initial conditions in the invariant region approaches E_0 as t goes to infinity whenever R_0^D ≤ 1. Hence, the disease-free equilibrium is globally asymptotically stable in the region.
Endemic equilibrium point
In this subsection, we obtain the equilibrium point at which the disease persists in the population.
The endemic equilibrium point of our model is denoted by E_1 = (S*, I*, T*, R*). The endemic equilibrium can be obtained by equating all the expressions of model equation (1) to zero. Taking the second equation of system (44), we get αIS − (μ + τ + σ)I = 0.
Then, solving for S*, we obtain equation (45). From the third equation in (44), we solve for I*, giving equation (46). Now, solving for R* and T* by substituting equations (45) and (46) into the first equation of (44), we obtain equation (47). Then, solving the fourth equation of (44) simultaneously with equation (47), we get equation (48). From equation (48) we obtain equation (49), and substituting equation (49) into the second equation of (48) gives equation (50). Now, substituting equation (50) into equations (46) and (49), we obtain the remaining components. Therefore, the endemic equilibrium point of the model follows.
Stability of endemic equilibrium point
In this subsection, we discuss the stability of the endemic equilibrium E_1 by considering a Lyapunov function of the form L(S*, I*, T*, R*) = (S − S* − S* ln S) + (I − I* − I* ln I) + ⋯. Differentiating both sides of equation (51) with respect to t and simplifying equation (52), we obtain equation (53). By simplifying and rearranging equation (53) (collecting the positive terms on one side and the negative terms on the other), we arrive at equation (54). Writing U for the positive terms and V for the negative terms of equation (54), we obtain U = π + αIS* + μS* + αIS + (μ + τ + σ)I* + (μ + γ + g)T* + (μ + δ)R*, with V collecting the negative terms. Rewriting equation (54) in terms of U and V, we see that if U < V, then dL/dt ≤ 0, and dL/dt = 0 if and only if (S = S*, I = I*, T = T*, R = R*).
From this, we observe that the singleton {(S = S*, I = I*, T = T*, R = R*)} is the largest compact invariant set in {(S, I, T, R) : dL/dt = 0}. Therefore, E_1 (the endemic equilibrium) is globally asymptotically stable in the invariant region if U < V.
Sensitivity analysis
In this subsection, we carry out a sensitivity analysis of the model to determine the effect of each parameter on the basic reproduction number R_0. To perform the sensitivity analysis of R_0, we use the normalized sensitivity index formula, where n_i denotes a parameter of R_0 (Tilahun [13]).
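The indices in Table 1 can be reproduced symbolically. The sketch below assumes the standard normalized forward sensitivity index Υ_{n_i} = (∂R_0/∂n_i)(n_i/R_0), since the formula itself is not reproduced in the text above, and uses the deterministic R_0 derived earlier.

```python
# SymPy sketch of the sensitivity indices, assuming the normalized forward
# sensitivity index Y(n_i) = dR0/dn_i * n_i / R0 (the formula is not shown above,
# so this definition is an assumption).
import sympy as sp

Pi, alpha, mu, tau, sigma = sp.symbols('Pi alpha mu tau sigma', positive=True)
R0 = alpha * Pi / (mu * (mu + tau + sigma))

for name, p in [('pi', Pi), ('alpha', alpha), ('mu', mu), ('tau', tau), ('sigma', sigma)]:
    index = sp.simplify(sp.diff(R0, p) * p / R0)
    print(name, index)
# pi and alpha give +1, while mu, tau and sigma give negative indices,
# in line with the interpretation in the next subsection.
```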
Interpretation of sensitivity analysis
The interpretation of the sensitivity indices given in Table 1 is as follows. The parameters with positive sensitivity indices (π, α) contribute strongly to the expansion of cholera in the human population when their values are increased while the rest of the parameters are kept constant. The parameters with negative sensitivity indices (μ, τ, σ) have a strong effect in bringing down the disease in the population when their values are increased while the rest of the parameters are kept constant. In general, as the value of a parameter with a positive index increases, R_0 (the basic reproduction number) increases and the average number of secondary infections in the population increases; as that value decreases, R_0 and the average number of secondary infections in the human population decrease.
Numerical results and discussion
In this section, some numerical results are presented to demonstrate how changes in the parameters of the model influence various performance measures of the system. Models (1) and (2) were simulated using MATLAB to obtain the following graphs. We used S(0) = 12, I(0) = 8, T(0) = 5, and R(0) = 3 as initial values, where the initial values are estimated. The parameter values listed in Table 2 are used for the numerical simulations.
Comparison of deterministic and stochastic trends of the model
In Fig. 2, the numerical results of the comparison of the deterministic and stochastic trends of the model in the community are displayed, keeping all parameters unchanged. The deterministic figure is obtained by taking all compartments of the model, and the stochastic figure is obtained by adding white noise to the deterministic equations. From this figure, we see that the simulation results for the stochastic model evolve more slowly than those for the deterministic model, which is due to environmental factors. In both the stochastic and the deterministic case, the number of infectious individuals decreases and the number of individuals who get treatment increases after a certain point in time, owing to treating the infected people in the community at the rate σ. Moreover, the stochastic behavior of the curves reflects real-life behavior better than the deterministic approach. We conclude from this figure that the stochastic solution is closer to the real solution of the cholera model than the deterministic one. Thus, using the stochastic model is preferable because it accounts for white noise, that is, stochastic environmental factors.
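As an illustration of how such a comparison can be produced, the sketch below integrates a deterministic system with the Euler method and its noisy counterpart with the Euler-Maruyama method. The right-hand sides of model equations (1) and (2) are not reproduced in the text above, so the SITRS structure, the placement of the white noise on the infected class, and all parameter values below are assumptions made purely to illustrate the technique (the paper itself used MATLAB).

```python
# Illustrative Euler (deterministic) vs Euler-Maruyama (stochastic) simulation.
# The SITRS right-hand sides, the noise placement and the parameter values are
# assumed for illustration; they are not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# hypothetical parameter values
Pi, mu, alpha, tau, sigma, gamma, delta, beta = 0.9, 0.02, 0.01, 0.05, 0.4, 0.3, 0.1, 0.2
dt, steps = 0.01, 5000

def rhs(S, I, T, R):
    dS = Pi + delta * R - alpha * S * I - mu * S
    dI = alpha * S * I - (mu + tau + sigma) * I
    dT = sigma * I - (mu + gamma) * T
    dR = gamma * T - (mu + delta) * R
    return np.array([dS, dI, dT, dR])

x_det = np.array([12.0, 8.0, 5.0, 3.0])   # S(0), I(0), T(0), R(0) as in the text
x_sto = x_det.copy()
for _ in range(steps):
    x_det = x_det + rhs(*x_det) * dt
    dB = rng.normal(0.0, np.sqrt(dt))                   # white-noise increment
    noise = np.array([0.0, beta * x_sto[1], 0.0, 0.0])  # noise on I(t) only (assumed)
    x_sto = x_sto + rhs(*x_sto) * dt + noise * dB

print("deterministic (S,I,T,R):", x_det)
print("stochastic    (S,I,T,R):", x_sto)
```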
Effect of contact rate on cholera infected population
In this subsection, we present the numerical simulation results for the impact of the contact rate α on the number of infectious individuals I(t). The simulation results displayed in Fig. 3 are obtained by varying the contact rate α from α = 0.001 to α = 0.017 while keeping the rest of the parameters constant. From Fig. 3, we see that as the contact rate increases, the number of infectious individuals in the community increases; that is, if susceptible individuals come into contact with infected individuals, for example by shaking hands or by sharing food and eating utensils, the infectious population grows.
In addition, we observe that the stochastic curves show wave-like fluctuations due to environmental factors or the random behavior of the disease, whereas in the deterministic model the curve is a smooth line, reflecting that it does not account for any environmental factors affecting cholera. Hence, the disease persists in the community when the contact rate increases, even though the rest of the parameters are kept constant. For this reason, health workers must control this parameter.
Effect of treatment rate on cholera infected population
We investigated the effect of the treatment rate on the size of the infectious population. In Fig. 4, the experimental results are obtained by taking different values of the treatment rate σ while keeping the rest of the parameters constant. The simulation results in Fig. 4 reveal that, in both the deterministic and the stochastic approach, the infectious population decreases as the treatment rate in the population increases. Therefore, increasing the treatment rate σ plays a vital role in reducing cholera disease dynamics in the community.
Effects of recovery rate on recovered individuals
In this subsection, we display in Fig. 5 the impact of the recovery rate γ on the number of recovered individuals by varying γ from γ = 0.03 to γ = 0.7 and keeping the rest of the parameter values fixed. In the deterministic approach, Fig. 5 shows that the curve rises smoothly as the recovery rate γ increases, which means that if the infectious individuals get treatment and recover at a higher rate γ, the number of recovered individuals in the community increases. In the stochastic case, the number of recovered individuals also increases as the recovery rate increases, with the curve going up and down; these fluctuations reflect the random behavior of the model. From this, we conclude that the recovered population becomes larger as the recovery rate γ increases, and that the stochastic approach is more advisable than the deterministic one because it considers environmental white noise.
Discussions and conclusions
In Sect. 2, we proposed and briefly described an SITRS cholera disease dynamics model with two approaches: deterministic and stochastic. In Sect. 3, we analyzed the qualitative behavior of the model by obtaining the invariant region, the existence of a positive invariant solution set, the equilibrium points, and the basic reproduction numbers for the deterministic and stochastic models; the local stability of the disease-free equilibrium was checked for both models, the global stability was established by means of a Lyapunov function, and a sensitivity analysis and its interpretation were presented. The two reproduction numbers were obtained by the next generation matrix method, with Itô's formula for twice differentiable functions used for the stochastic reproduction number. Of these two reproduction numbers, the stochastic one is much smaller than the deterministic one. This implies that the stochastic approach is more realistic (closer to the accurate solution) than the deterministic approach, because the stochastic model takes stochastic environmental factors, that is, randomness, into account. In Sect. 4, the numerical simulation results obtained with MATLAB were discussed and analyzed by comparing the deterministic approach with the stochastic one. From our simulation results in Sect. 4, we conclude that increasing the cholera treatment rate contributes substantially to eliminating cholera from the community, and increasing the recovery rate contributes to reducing the infection. A further result obtained in this section is that decreasing the contact rate has a strong influence on controlling cholera disease dynamics in the community. Therefore, we recommend to health workers as well as policy makers that treatment be improved: by decreasing the contact rate and by increasing the recovery rate, the disease can be eradicated from the community. | 5,685.4 | 2020-11-30T00:00:00.000 | [
"Mathematics",
"Medicine",
"Environmental Science"
] |
Constraints on composite quark partners from Higgs searches
In composite Higgs models, the generation of quark masses requires the standard model-like quarks to be partially or fully composite states which are accompanied by composite quark partners. The composite quark partners decay into a standard model-like quark and an electroweak gauge boson or Higgs boson, which can be searched for at the LHC. In this article, we study the phenomenological implications of composite quarks in the minimal composite Higgs model based on the coset SO(5)/SO(4). We focus on light quark partners which are embedded in the SO(4) singlet representation. In this case, a dominant decay mode of the partner quark is into a Higgs boson and a jet, for which no experimental bounds have been established so far. The presence of SO(4) singlet partners leads to an enhancement of the di-Higgs production cross section at the LHC. This will be an interesting experimental signature in the near future, but, unfortunately, there are no direct bounds available yet from the experimental analyses. However, we find that the currently available standard model-like Higgs searches can be used in order to obtain the first constraints on partially and fully composite quark models with light quark partners in the SO(4) singlet. We obtain a flavor- and composition-parameter-independent bound on the quark partner mass of M_{U_h} > 310 GeV for partially composite quark models and M_{U_h} > 212 GeV for fully composite quark models.
Introduction
The recent discovery of a Higgs-like boson at the LHC [1,2] represents a remarkable success for the Standard Model (SM) of particle physics. However, within the SM the Higgs mass is subject to additive renormalization, implying that a large hierarchy between the electroweak scale and the Planck scale is technically unnatural [3][4][5][6]. One of the few motivated frameworks beyond the Standard Model (BSM) addressing this so-called fine-tuning problem is that of composite Higgs models [7][8][9][10][11][12][13][14][15], with the Higgs as a pseudo-Nambu-Goldstone boson associated with the spontaneous breakdown of an approximate global symmetry of a sector which becomes strongly coupled at a scale f. The global symmetry is explicitly broken by Yukawa couplings of the Higgs to the quarks and their composite partner states. Integrating out the quark partners yields an effective potential for the Higgs. The effective potential strongly depends on the embedding of the top quark partners into the global symmetry group as well as on the top partner mass, while the lighter quark partners typically play a minor role. Concrete realizations of composite Higgs models [16][17][18][19][20], electroweak precision constraints [21,22], and top-partner phenomenology [23][24][25][26][27] have already been studied, while only a few articles have focussed on bottom partners [22,28] or partners of other light quarks [29].
Although typically not contributing majorly to electroweak symmetry breaking, light quarks need to be accompanied by composite partners in order to generate the (small but nonvanishing) masses of the SM-like light quarks. 1 The quark partner phenomenology depends on the embedding of the quarks and their partners into the global symmetry group of the composite sector, as well as on their masses and couplings. In ref. [29], quark partners of the up and charm quarks were studied within the minimal composite Higgs model based on SO(5)/SO(4). The right-handed quark partners were embedded in the 5 of SO(5), which comprises a 4 and a singlet under the SO(4) which is unbroken by the strong dynamics. While bounds for the partners in the 4 were obtained, the singlet partner remained unconstrained because of its suppressed couplings to electroweak gauge bosons. It was furthermore shown that the presence of a light singlet state can substantially weaken the constraints on the partners in the fourplet, due to a combination of smaller production rates of fourplet states and the opening of cascade decay processes of fourplet states via the singlet state.
In this article, we focus on constraints on quark partners in the SO(4) singlet in composite Higgs models based on the minimal coset SO(5)/SO(4). In section 2 we discuss the two basic phenomenologically viable setups for composite quarks in the SO(4) singlet, where the right-handed quark is either realized as an elementary quark which mixes with a composite partner (partial compositeness) or as a chiral state of the composite sector (full compositeness). We establish effective Lagrangians for both these models and use those in order to discuss their LHC phenomenology in section 3. In particular, we show that both setups can result in a substantial increase of di-Higgs production above the SM background if the quark partners are light. In the current absence of direct bounds on the di-Higgs production channel at the LHC, we use the ATLAS bounds on differential cross sections of the Higgs di-photon decay in order to obtain constraints on the partially and fully composite light quark models in section 4, where we present our results in terms of the effective models discussed in section 2. We conclude in section 5.
Models
As a generic setup, we consider the fermion sector of the minimal composite Higgs model [15] based on the coset structure SO(5)/SO (4). We follow the conventions and notation of ref. [29] and use the setups presented there for an initial discussion. We start the discussion with the embedding of up-type partners. For concreteness, we embed the lefthanded elementary quark doublet q L in an incomplete 5 of SO(5) [29,37] as Often, the composite sector is assumed to be flavor-blind in order to avoid constraints from flavor changing neutral currents (cf. e.g. ref. [30]). Such a choice would imply the light quark partners are degenerate with the top quark partners, up to Yukawa-suppressed corrections. However, as has been pointed out in [31], partners are allowed to be non-degenerate within models of flavor alignment [32][33][34]. It was also shown that while electroweak precision tests put severe constraints on the degree of compositeness of the SM quark doublets [30,35], due to an approximate custodial parity [36], the bounds can be much weaker for the SM quark singlets. In this article we therefore allow for non-degenerate quark partner masses and treat them as free parameters.
For the up-type partners, the q L carries a U(1) X charge of 2/3. The U(1) X is included in order to generate the correct hypercharge of the quarks, which is obtained by gauging Y = T 3 R + X, where T 3 R is the diagonal generator of the SU(2) R in SO(4) ≃ SU(2) L × SU(2) R . The fermionic partners of the up-type quarks are also included in a 5 of SO(5) (with U(1) X charge 2/3), which can be decomposed as a fourplet, Q, with a mass scale M 4 and a singlet, Ũ, with a mass scale M 1 of the SO(4) which is unbroken by the strong dynamics. In this article, we are studying the singlet partner and therefore take the limit M 4 → ∞ in which the fourplet partners decouple, while the singlet partner Ũ remains as the only BSM particle.
The interactions of the Ũ depend on the embedding of the right-handed quarks. They can either be embedded as a chiral composite SO(5) singlet u R (in "fully composite" models) [38] or as an incomplete representation of SO(5) (in "partially composite" models) which is a singlet state in terms of the SO(4). For concreteness, we embed u R into the 5 of SO(5). Such an embedding is termed partial compositeness, because the mass eigenstate is a linear combination of the "elementary" quarks and the composite partner Ũ. In both embeddings, fully and partially composite, the right-handed quarks have a U(1) X charge of 2/3. Down-type partners can be embedded analogously with a U(1) X charge of −1/3 for q 5 L , d R and ψ d . 2 The above embedding is not unique for composite Higgs models based on SO(5)/SO(4). Other embeddings of left-handed and partner quarks discussed in the literature include the symmetric 14 representation of SO(5) [19,20,39,40]. 3 However, we will be focussing on the SO(4) singlet partner Ũ (or the equivalent partner for down-type quarks). Our results are to a large part independent of the SO(5) representation in which the quarks are embedded, as long as it contains the SO(4) singlet. In the following, we derive an effective description for the SO(4) singlet, using partners in the 5 as a guideline, and comment on how this can be applied to other representations of quark partners.
2 For down-type partners, the embeddings of the qL and ψ d read Refs. [37,41] also discuss embeddings into the anti-symmetric 10 representation. However, the 10 decomposes into 6 ⊕ 4 in terms of SO(4) representations and does not contain an SO(4) singlet partner as is considered, here.
Partially composite quark models
We start with the partially composite model outlined in ref. [29]. Using the Callan-Coleman-Wess-Zumino (CCWZ) formalism [42,43], the fermion Lagrangian of the partially composite model is written in terms of the CCWZ connections e µ and d i µ (cf. appendix A of ref. [29] for the explicit expressions) and of functions F and G of the Goldstone matrix with appropriate SO(5) index contractions such that the action is SO(5) invariant. For u L , ψ and u R in the 5 these are simply F = G = U i5 gs . In the limit M 4 → ∞, in which the SO(4) fourplet states Q decouple, one obtains a simpler Lagrangian in which the functions F and G are F = − sin((h + v)/f) and G = cos((h + v)/f) for u L , ψ and u R in the 5. In order to obtain the final expressions, we expanded the Lagrangian in ε = v/f. From eq. (2.8) we read off the effective fermion mass terms. The couplings of the mass eigenstates to the Z boson follow from rewriting in the mass eigenbasis (u l , U h ). Note that the couplings arising from the U(1) X gauge couplings are universal, and a rotation into the mass eigenbasis of these terms does not
induce any "mixed" interactions of the Z to u l and U h . Such an interaction can only arise from the SU(2) L component of the Z in the first term, but as the mixing in the left-handed sector is small (of order m 2 /M 1 ), the "mixing" couplings are negligible. The same holds for mixing couplings of the W gauge boson which follow from the analogous charged current interactions.
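The expansion in ε = v/f quoted above can be checked with a short symbolic computation. The sketch below expands the Goldstone functions F = −sin((h + v)/f) and G = cos((h + v)/f) to first order in the Higgs field h; it is only meant to make the origin of the mass terms and of the effective Higgs couplings explicit, not to reproduce the full Lagrangian.

```python
# SymPy sketch: expansion of the Goldstone functions for u_L, psi and u_R in the 5,
# F = -sin((h+v)/f) and G = cos((h+v)/f), to first order in the Higgs field h.
import sympy as sp

h, v, f = sp.symbols('h v f', positive=True)

F = -sp.sin((h + v) / f)
G = sp.cos((h + v) / f)

for name, expr in [('F', F), ('G', G)]:
    linear = sp.series(expr, h, 0, 2).removeO()   # keep constant and O(h) pieces
    print(name, sp.simplify(linear))
# F ~ -sin(v/f) - (h/f)*cos(v/f),  G ~ cos(v/f) - (h/f)*sin(v/f):
# the h-independent parts enter the fermion mass terms, the O(h) parts the
# effective Higgs couplings such as the mixing coupling discussed in the text.
```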
The Higgs couplings to the quark mass eigenstates follow from the corresponding terms in the Lagrangian. The coupling λ eff does not contribute to the production or decay of the partner quarks, and we neglect it in the effective description of the model. Collecting all other terms, the effective partially composite quark model Lagrangian is given by eq. (2.20). The Lagrangian (2.20) and the definition of the effective coupling (2.18) have been derived for up-type partners. The analogous calculation for down-type partners yields the same Lagrangian with the charge factors 2/3 replaced by −1/3, as directly follows from the U(1) X charge assignments. For illustration we embedded q L , ψ and u R in the 5 of SO(5) in the above, but from the derivation it is apparent that the effective Lagrangian holds more generally. As long as the quark partner multiplet contains one SO(4) singlet, and in the limit in which all other partner states are decoupled, the only dependence on the chosen representation enters through the functions F and G in eq. (2.6), which feed via the functions F and G in eq. (2.8) into the effective coupling λ eff mix in eq. (2.18). As an illustration, table 1 shows the corresponding functions and the effective coupling for an embedding of ψ and u R in the 5 and q L in the 14. The extension to other embeddings which contain an SO(4) singlet partner is straightforward.
We emphasize that for concrete realizations of partially composite quarks, the parameters λ eff mix and M U h are correlated. In particular, the mass of the quark partner is such that a light partner mass M U h implies an upper bound on M 1 and y R which in turn implies an upper bound on λ eff mix (for fixed M U h and f , and depending on the embedding chosen). In section 3.1 and 4.2, we study the phenomenological implications of the more general effective model defined by eq. (2.20) in terms of the parameters λ eff mix and M U h and point out the implications for specific quark embeddings.
Fully composite quark models
We repeat the analysis presented in the last section for the fully composite quark model. The fermion Lagrangian in this case is given in refs. [23,29], with the corresponding functions defined as before (for q L and ψ in the 5).
Table 1. Relations between the effective Lagrangian and the parameters given in the equations above.
The mass terms follow from this Lagrangian. Note that there is no symmetry argument differentiating between the m 2 and m 3 terms, implying that they should be of the same order. The lighter eigenvalue of the mass matrix has to be identified with the SM quark mass. To obtain a small eigenvalue (for first or second family quarks or the bottom), we have to choose m 2 ≪ M 1 , implying that m 3 is also naturally small. Then, the mass eigenvalues are approximately given by
The mass matrix is diagonalized by a rotation T R which equals the identity up to O(m_3^2/M_1^2) corrections, while in the left-handed quark sector the mixing angle is to leading order given by tan(ϕ) = m 3 /M U h . The situation is therefore opposite to the partially composite case: the right-handed mass eigenstates are gauge eigenstates, while the left-handed gauge eigenstates mix.
Analogous to the partially composite setup, the couplings of the quark mass eigenstates follow from eq. (2.16) when transforming into the mass eigenbasis (u l , U h ), which for fully composite quarks yields where L Z,SM contains the SM interaction of the light quark with the Z boson, and The third term in eq. (2.29) is suppressed by O(m 2 q /M 2 U h ) and hence does not lead to relevant corrections of the partial Z width. The "mixing" interaction in the last line of eq. (2.29) is small (of order O(m q /M U h )), but it will play an important role in the determination of the branching ratios of U h decays.
Analogously, the couplings of the quarks to the W boson are described by Finally, the "mixed" Higgs interactions follow from with λ eff mix = λ 3 . To simplify the discussion, let us note that Using this relation, the effective Lagrangian for SO(4) singlet partners of fully composite light quarks reads The "mixed" interactions are suppressed by m q /M 1 and therefore do not play a role for the production of composite quark partners, which is thus dominated by QCD pair JHEP05(2014)123 production. They are, however, important for the branching fractions of the U h . The partial widths for the decays U h → Xj are given by the branching ratios of the U h for decays into W , Z, and h plus jet are 50%, 25% and 25%.
Like for partially composite models, the Lagrangian (2.35) holds also for down-type partners when replacing the charge factors 2/3 with −1/3. Again, the effective Lagrangian can be applied to other embeddings of q L and ψ into SO(5), as long as the quark partner contains an SO(4) singlet and as long as the other partner states are decoupled. As an example, we list the respective functions and the expression for the effective coupling for q L embedded in the 14 and ψ embedded in the 5 of SO(5) in table 2.
U h decays in the presence of higher dimensional operators
The U h decay rates determined in eq. (2.36) are proportional to |λ eff mix | 2 . As opposed to the partially composite setup discussed in section 2.1, the mixing parameters in the fully composite setup are strongly suppressed, |λ eff mix | ∝ O(m q /v), in order to reproduce the quark masses. Therefore, higher dimensional operators in the strongly coupled sector which are induced by integrating out other strongly coupled resonances can play the dominant role for U h decays and strongly modify the branching ratios. 5 In the absence of additional symmetries, the lightest colored vector resonance ρ can couple to U h and u R via which, upon integrating out ρ, yields a four-fermion operator
implying a possible decay U h → uuu. 6 The resulting partial decay width for U h → uuu exceeds the partial decay widths in eq. (2.36) for all light quark flavors. Therefore, in typical fully composite light quark models, U h decays into three jets, making a Higgs final state search obsolete. The three-jet decay, as well as the d-term interactions in eq. (2.22), violates a U(1) symmetry u R → e iφ u R which is present in the leading-order composite Lagrangian. If we make the very special UV-dependent assumption that the strong dynamics preserves this symmetry, enforcing g Ũu = 0, then the Yukawa terms in L mix , eq. (2.24), are the only source of chiral symmetry breaking. In this case, in the limit y L → 0, the U(1) symmetry is restored and forbids the decay of U h into SM particles. As a result, any operator that induces U h decay must be suppressed by y L , on top of additional suppression factors for higher dimensional or loop-induced operators.
Under the special assumption that the strongly coupled sector does not break the U(1) symmetry, the leading operators are therefore those given in eq. (34), and the U h decays into W u, Zu, and hu with branching ratios of ∼ 50%, ∼ 25%, and ∼ 25% while for a generic strongly coupled sector, the main decay channel is U h → uuu.
Phenomenology of composite quarks with SO(4) singlet partners
The phenomenology of quark partners in partially and fully composite quark models is very different, which is due to the differing allowed parameter ranges of λ eff mix , as well as the differing branching ratios for partner states decaying into hj, W j and Zj final states. In the following, we discuss the production, decay, and different search channels for both setups.
Quark partners in partially composite models
Partner states of partially composite light quarks couple to gluons via the strong interaction, and to the Higgs and light quarks via the coupling λ eff mix . This mixing coupling can be sizable whilst still reproducing a small mass for the SM quark.
The main production channels involving the composite light quark partner states are shown in figure 1. 7 We present the graphs involving an up-type partner; the graphs for other light quark partners are analogous. The QCD pair production channels are shown in panel (a) of figure 1, while the t-channel exchange of the partner is shown in figure 1 (d). Again, this process depends both on the partner quark species and on λ eff mix . The only available decay channel for SO(4) singlet partners in partially composite models is a decay into a Higgs and a SM quark. For partners of the u, d, s, and c quark, this implies a Higgs and a jet per produced quark partner in the final state, 8 while for partners of the third family, the final state top or bottom can be tagged.
A very promising signature for the discovery of SO(4) singlet partners is therefore two Higgses with two associated high p T jets or bottoms. Figure 2 shows the cross section for this process as a function of the partner mass M U h . Here, we assume the presence of a partner of only one SM quark. The QCD production channel provides a λ eff mix -independent contribution to the cross section, shown as the black line (the lowest of the three curves). The upper two lines indicate the QCD plus non-QCD cross section, assuming a reference value of λ eff mix = 1 for the respective quark partner. For u (first line from the top, red) and d (second line from the top, orange) quark partners, the non-QCD contribution can substantially increase the cross section, while for other quark partners, the non-QCD contribution is PDF suppressed. Bearing in mind that the non-QCD cross section scales like ∼ (λ eff mix ) 4 , the cross sections for other values of λ eff mix can easily be inferred. Singly produced quark partners typically yield a final state with two Higgses and one high p T jet or b-jet. Figure 3 shows the cross sections for this channel for a u, d, s, c and b partner with λ eff mix = 1. The single production cross section scales like ∼ (λ eff mix ) 2 . Finally, the t-channel exchange of U h shown in figure 1 (d) contributes to di-Higgs production, in this case without associated high p T jets or b-jets. This contribution depends on the partner quark species and scales like ∼ (λ eff mix ) 4 . Figure 4 shows the di-Higgs production cross sections associated with this channel for a reference value λ eff mix = 1.

Figure 2. Production cross section for a pair of quark partners in partially composite quark models as a function of the partners' mass M U h for the LHC at 8 TeV (left) and 14 TeV (right). The first two lines from the top correspond to the pair production cross section with λ eff mix = 1 for partners of the up (red) and down (orange) quarks, while the third line (black) denotes the QCD pair production cross section. The non-QCD pair production cross sections with λ eff mix = 1 for partners of the s, c and b quarks are PDF suppressed. Thus, the pair production cross section for these quark partners is to a good approximation given by the QCD pair production cross section.

From the above discussion it is apparent that the di-Higgs channel (with two, one or zero associated high p T jets or b-quarks) provides a golden search channel for SO(4) singlet partner quarks. However, as of now, no direct constraints on the di-Higgs channel have been published by ATLAS or CMS, such that in this article we attempt to obtain constraints on the model from SM Higgs searches in section 4.1. It is also important to notice that the currently available LHC studies for single Higgs plus jets relevant for our analysis in the next section have been performed in the context of SM Higgs searches. In the near future, dedicated searches for boosted Higgs signals [47][48][49][50][51][52][53][54] in the currently accumulated LHC data set can significantly improve the bound we determine here.
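Since the non-QCD contributions scale like (λ eff mix )⁴ for pair production and (λ eff mix )² for single production, the curves of figures 2-4 can be rescaled to other couplings. The sketch below illustrates this rescaling; the reference cross-section values are placeholders, not numbers read off the figures.

```python
# Rescaling the lambda_mix_eff = 1 reference cross sections to other couplings.
# The reference numbers are placeholders for illustration only.
def pair_production_xsec(lam, sigma_qcd, sigma_nonqcd_ref):
    """QCD pair production plus the non-QCD piece, which scales like lam**4."""
    return sigma_qcd + lam**4 * sigma_nonqcd_ref

def single_production_xsec(lam, sigma_single_ref):
    """Single production, which scales like lam**2."""
    return lam**2 * sigma_single_ref

sigma_qcd, sigma_nonqcd_ref, sigma_single_ref = 0.10, 0.40, 0.80   # pb, hypothetical

for lam in (0.1, 0.5, 1.0):
    print(lam,
          pair_production_xsec(lam, sigma_qcd, sigma_nonqcd_ref),
          single_production_xsec(lam, sigma_single_ref))
# For small lambda the QCD term dominates pair production, which is why the
# resulting mass bound becomes flavor independent in that regime.
```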
Partners of fully composite quarks
As shown in section 2.2, fully composite light quark models naturally require a small coupling λ eff mix of order ∼ m q /v. Therefore, for quark partners, all non-QCD production processes shown in figure 1 are suppressed, and the QCD pair production processes of figure 1 (a) are the only production channels relevant for LHC phenomenology. 9 Partners of fully composite quarks are therefore produced in pairs. The production cross section is independent of the quark species and insensitive to λ eff mix . In generic fully composite models, an SO(4) singlet partner quark dominantly decays into three jets, as outlined in section 2.2.1. In this case, the CMS search for three-jet resonances [57] implies a bound of M U h ≳ 650 GeV. 10 This bound is avoided when assuming that the strongly coupled sector respects a U(1) symmetry u R → e iφ u R which is only broken via the Yukawa terms in eq. (2.24). In this special case, quark partners have dominant decay channels into a SM quark and a W , Z, or Higgs boson, with the branching ratios of these three channels being ∼ 50%, ∼ 25%, and ∼ 25%, as shown in section 2.2. The final states arising from pair production of partners of the first two families (and their approximate branching fractions) are then hhjj (1/16), hW jj (1/4), hZjj (1/8), W W jj (1/4), W Zjj (1/4), and ZZjj (1/16). b partners yield the analogous final states with b-jets instead of jets.
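The quoted pair-production final-state fractions follow from simple combinatorics on the single-partner branching ratios. The sketch below reproduces them; the 50%/25%/25% inputs are the ones stated above.

```python
# Final-state fractions for pair-produced partners of fully composite quarks,
# from the single-partner branching ratios BR(W j) = 0.5, BR(Z j) = BR(h j) = 0.25.
from itertools import combinations_with_replacement

br = {'W': 0.50, 'Z': 0.25, 'h': 0.25}

fractions = {}
for a, b in combinations_with_replacement(sorted(br), 2):
    weight = br[a] * br[b] if a == b else 2 * br[a] * br[b]   # factor 2 for distinct bosons
    fractions[a + b + 'jj'] = weight

print(fractions)
# -> WWjj: 0.25, WZjj: 0.25, Whjj: 0.25, ZZjj: 0.0625, Zhjj: 0.125, hhjj: 0.0625,
# matching the 1/4, 1/4, 1/4, 1/16, 1/8, 1/16 pattern quoted above.
```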
Before focussing in the next section on final states containing at least one Higgs boson, let us summarize the current status of searches for the final states involving W or Z bosons, and their implications for the SO(4) singlet quark partners in fully composite quark models. Searches relevant to partners of the first two families include the ATLAS search for the W W jj final state [55], a recast of the W W jj final state performed in ref. [29] based on the CMS leptoquark search [56], and the ATLAS inclusive search for two 3-jet resonances [57]. A naive comparison of these bounds with the QCD production cross section shown in figure 1 (times the respective branching ratios given above) indicates no bounds on quark partners of the first and second family quarks.

9 For partners of fully composite quarks, additional non-QCD production channels exist because the partners also have effective mixing couplings with electroweak gauge bosons and a light quark. These mixing couplings are, however, proportional to λ eff mix (cf. eq. (2.34)), such that these production channels are also suppressed compared to QCD production. 10 In [57] the bound of M res > 650 GeV was obtained for pair-produced gluinos which decay into three jets in an R-parity violating SUSY scenario, but given the analogous production channels for the fully composite model, a similar bound is to be expected if the three-jet decay channel dominates.
Searches relevant to bottom quark partners include the ATLAS search for two 3-jet resonances with at least one b-jet [57], the ATLAS search for pair-produced b quark partners decaying into Zb [58], and the CMS searches for pair-produced b partners in the lepton+jets final state [59] and in the dilepton+jets final state [60]. The strongest bounds on partners of fully composite b quarks are M U h > 645 GeV [58] and M U h > 700 GeV [59].
Constraining composite light quark partners with Higgs searches
For models with only SO(4) singlet partners of light quarks, the interactions of the light quarks with the Higgs are SM-like. 11 Also, the composite quark partner U h does not yield sizable BSM contributions to the one-loop induced hγγ and hgg vertices. The dominant contribution arises from a triangle diagram with U h in the loop. The coupling of the Higgs to two U h quarks is Yukawa-suppressed, such that this contribution is negligible for light quark partners. "Mixed" triangle diagrams with a U h and a u l or d in the loop do not exist. As apparent from the Lagrangians eq. (2.20) (for partially composite quarks) and eq. (2.35) (for fully composite quarks), "mixing" gluon or photon interactions with one light and one heavy quark are absent. For the same reasons, one-loop induced BSM corrections to the hW W and hZZ vertices are small as well. 12 Therefore, the couplings of all SM states to the Higgs are insensitive to the presence of SO(4) singlet partners of light quarks. The production channels of the Higgs via gluon fusion and vector boson fusion are SM-like, and all decay rates of the Higgs are SM-like as long as the quark partner U h is heavier than the Higgs. Hence, for composite light quark partner models, Higgs events at the LHC can be separated into an SM background and a BSM production of Higgses via the production and decay of heavy quark partners or t-channel exchange of heavy quark partners in the processes discussed in section 3.
The events discussed in section 3 increase the Higgs production cross section. Moreover, the topology and kinematical distributions of Higgs events which result from the decay of a U h quark differ from those of SM Higgs events. In the SM processes, the Higgs boson is typically produced at threshold, implying low Higgs p T , while Higgses from U h decays are boosted when the U h is heavier than the Higgs. Furthermore, U h pair production and subsequent decay into hhjj leads to a higher number of jets, in particular with high p T , as compared to the SM processes for Higgs production. Thus, even in the current absence of dedicated searches for di-Higgs events or searches for BSM signals in the invariant mass distribution of the Higgs and the leading jet, measurements of the differential cross section of Higgs events can be used to constrain Higgs events which result from U h decays.
In ref. [65], the ATLAS collaboration presented results on differential cross sections of the Higgs in the h → γγ channel. In particular, ref. [65] studies the p γγ T , N jets , and the highest p jet T distributions which are in good agreement with the SM predictions. We use the experimental results of ref. [65] in order to derive constraints on the parameter space of partially and fully composite light quark models.
Simulation and data evaluation
To simulate the BSM contribution to the differential cross sections of the Higgs, we implemented the Lagrangians eq. (2.20) (for partially composite quarks) and eq. (2.35) (for fully composite quarks) using FeynRules 2.0 [66,67]. For the SM part of the Lagrangian we used the SM implementation provided by [68], interfaced with the effective Higgs implementation by [69], which we adapted in order to reproduce the total width and the branching ratios of the Higgs for a mass m H = 125 GeV given in [70]. With the implementation of the models, we generated parton level Monte Carlo (MC) event samples for the BSM Higgs production channels discussed in section 3 for proton-proton collisions at a center-of-mass energy of 8 TeV, using MadGraph 5 [71], interfaced with CTEQ6L1 PDFs [72]. After performing the parton showering and the hadronization with Pythia 6.4 [73], the generator-level MC events have been processed with Delphes 3 [74], and the jet clustering is performed via FastJet [75,76].
The MC generated data is selected according to the particle level fiducial definitions in ref. [65]. The selection criteria are as follows: the two highest-E T isolated final state photons within |η| < 2.37 and with 105 GeV < m γγ < 160 GeV are selected. The isolation criterion requires the sum of the p T of all stable particles, excluding muons and neutrinos, within ∆R = √((∆η)² + (∆φ)²) < 0.4 of the photon to be less than 14 GeV. After the pair is selected, a cut of E T /m γγ > 0.35 (0.25) on the leading (subleading) photon is applied. Jets are selected using the anti-k t jet clustering algorithm [77] with a distance parameter of R = 0.4. The resulting jets are required to have a transverse momentum p T > 30 GeV and rapidity |y| < 4.4.
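For orientation, the fiducial selection just described can be mimicked at particle level with a few lines of code. The sketch below is only illustrative: the event-record layout (dictionaries with photon and jet kinematics) is hypothetical, and the anti-k_t clustering itself is assumed to have been done already (by FastJet in the actual chain).

```python
# Illustrative re-implementation of the particle-level fiducial selection.
# The event-record layout is hypothetical; objects would come from Delphes/FastJet.
import math

def diphoton_mass(p1, p2):
    # massless-photon invariant mass from (E_T, eta, phi)
    return math.sqrt(2 * p1['et'] * p2['et'] *
                     (math.cosh(p1['eta'] - p2['eta']) - math.cos(p1['phi'] - p2['phi'])))

def passes_fiducial(photons, jets):
    # photons: dicts with 'et', 'eta', 'phi', 'iso' (sum pT within dR < 0.4, in GeV)
    cands = sorted((p for p in photons if abs(p['eta']) < 2.37 and p['iso'] < 14.0),
                   key=lambda p: p['et'], reverse=True)
    if len(cands) < 2:
        return False, []
    g1, g2 = cands[0], cands[1]
    m_gg = diphoton_mass(g1, g2)
    if not (105.0 < m_gg < 160.0):
        return False, []
    if g1['et'] / m_gg <= 0.35 or g2['et'] / m_gg <= 0.25:
        return False, []
    # anti-kt R=0.4 jets assumed already clustered; apply kinematic cuts only
    sel_jets = [j for j in jets if j['pt'] > 30.0 and abs(j['y']) < 4.4]
    return True, sel_jets
```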
With the events which pass the selection criteria outlined above, we simulate the p γγ T and the N jets distribution as well as the p j 1 T distribution of the most energetic jet of the BSM events. 13 For p γγ T and p j 1 T , the distribution is determined in p T bins chosen according to the bins in ref. [65], while for N jets , we consider bins with N jets = 0, 1, 2 and ≥ 3. The event numbers obtained in each bin at the particle level are divided by the bin-by-bin unfolding correction factors provided in ref. [65] in order to correct the particle level data to a reconstructed data set. For the following analysis, we assume the unfolding correction factors for the experimental data and the MC data of the SM and BSM to be equal. The other observables studied in ref. [65] are the rapidity |y γγ | of the Higgs boson, the helicity angle | cos θ * | in the Collins-Soper frame, the jet veto fractions σ N jets =i /σ N jets ≥i for jet multiplicities i, the azimuthal angle between the leading and the subleading jet ∆φ jj , and the transverse component of the vector sum of the momenta of the Higgs boson and dijet system p γγjj T ; we simulated the corresponding distributions for our BSM channels and found that the effect of composite light quark partners on them is less relevant. As an example, figure 5 shows the p γγ T , the N jets and the p j 1 T distribution resulting from a partially composite down-quark partner model with mass M U h = 300 GeV and effective coupling of λ eff mix = 1.1. As expected, the largest excess occurs in the p γγ T overflow bin. However, ref. [65] used the overflow bin as a control bin and does not provide an unfolding correction factor for it, such that we conservatively ignore the p γγ T overflow bin in the parameter space constraints presented in the remainder of this section. For reference, we also show the constraints resulting from including the p γγ T overflow bin in appendix A. Ignoring overflow bins, the dominant excesses arise from the highest p γγ T bin and the N jets ≥ 3 bin. The highest p j 1 T bin also shows an excess, but it plays less of a role in the determination of constraints on the composite Higgs models, because the error on this bin is large. 14 To obtain a bound on the mass and the coupling of the composite quark models, we perform a simplistic χ 2 test on the p γγ T , p j 1 T , and N jets bins,
where N̄ i is the number of BSM events obtained from the MC plus the number of SM events from ref. [65] in the respective bin, and N i and σ i are the respective measured values and errors. Figure 6 shows the 95% confidence level (CL) exclusion bounds on partners of partially composite light quarks in the λ eff mix vs. M U h parameter space, which result from the p γγ T bins, the p j 1 T bins, the N jets bins, and their combination. For reference, the dashed line in each of the plots shows the coupling above which the width of the quark partner exceeds 1/3 of its mass, so that the narrow-width approximation is not applicable anymore. 15 The dependence of the bound on λ eff mix , which is reflected in all three observables, can be understood from the quark partner production cross sections shown in figures 2 and 3. For small λ eff mix , QCD pair production dominates for partners of all quark flavors. For non-suppressed λ eff mix , non-QCD single and pair production play a role mainly for up- and down-quark partners, for which the initial qq, qq̄ and q̄q̄ states are not substantially reduced by the PDFs. In parallel with the increased production of quark partners, the di-Higgs production due to t-channel exchange of a u or d quark partner is increased for the same reason (cf. figure 4). Comparing the different observables, the strongest bound in our analysis arises from the N jets ≥ 3 bin. The reason is that the distribution of p γγ T (and also of p j 1 T ) for high M U h masses is centered above the upper cutoff of the highest p T bin of 200 GeV (140 GeV), such that the majority of BSM events lies in the p γγ T (respectively p j 1 T ) overflow bin, which we conservatively ignore in our main analysis. For an estimate of the constraints including the p γγ T and p j 1 T overflow bins we refer the interested reader to appendix A. As we have discussed before, the flavor-independent bound arises from the small-λ eff mix region, in which QCD production dominates. Considering the exclusion limit from all bins combined, the quark partners are excluded up to masses of 310 GeV.
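The χ² prescription just described is simple enough to sketch explicitly. In the snippet below the bin contents are placeholders rather than the numbers of ref. [65], and the 95% CL criterion (comparing χ² to the quantile for the number of bins) is one simple choice; the text does not spell out the exact statistical treatment, so this is an assumption.

```python
# Minimal sketch of the chi^2 test on (SM + BSM) predictions vs measured bins.
# All bin contents below are placeholders, not the values of ref. [65].
import numpy as np
from scipy.stats import chi2

N_meas = np.array([42.0, 18.0, 6.0])   # measured events per bin (placeholder)
sigma  = np.array([ 8.0,  5.0, 3.0])   # measurement errors (placeholder)
N_sm   = np.array([40.0, 17.0, 5.0])   # SM prediction per bin (placeholder)
N_bsm  = np.array([ 1.0,  4.0, 9.0])   # BSM MC events for a given (lambda, M) point

chi2_val = np.sum(((N_sm + N_bsm - N_meas) / sigma) ** 2)
threshold = chi2.ppf(0.95, df=len(N_meas))   # one simple 95% CL criterion

excluded = chi2_val > threshold
print(chi2_val, threshold, excluded)
```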
Results for partners of partially composite light quarks
The results in figure 6 are shown in terms of the effective mixing parameter λ eff mix and the physical mass M U h and apply to any model described by the effective Lagrangian eq. (2.20). For concrete realizations of partially composite quark models, like the embedding of partners in the 5 or 14 of an SO(5) multiplet, the parameters λ eff mix and M U h are correlated, as discussed in section 2.1. Obtaining a quark partner mass substantially below the compositeness scale f implies a tight upper bound on |λ eff mix | at this mass. For example, for a mass of M U h = 310 GeV and a compositeness scale f = 750 GeV, using the expressions for the effective coupling in table 1 together with the expression for M U h in eq. (2.14) yields |λ eff mix | ≲ 0.07 for a partner embedding into the 5 and |λ eff mix | ≲ 0.18 for a partner embedding into the 14. For both these embeddings, the current bounds from figure 6 are thus dominated by QCD pair production and given by the flavor-independent bound.
Results for fully composite light quarks
As compared to partially composite quark models, the exclusion limit on fully composite models is weaker. The main reason is that for decays of partners of fully composite quarks,
the branching fraction into a Higgs and a jet is only ∼ 25%. Furthermore, partners of fully composite quarks are only produced via QCD pair production. Quantitatively, we find no significant signal excess over background in any of the p γγ T and p j 1 T bins. A χ 2 fit to the N jets data alone results in a rather weak bound of M U h > 212 GeV at 95% CL. (4.4)
Summary and conclusions
In this article, we discussed the phenomenology of SO(4) singlet partners of the u, d, s, c, and b quark in the minimal composite Higgs model. In section 2, we derived the effective Lagrangian for light quark partners in a setup with partially composite right-handed quarks (cf. eq. (2.20)) and a setup with fully composite right-handed quarks (cf. eq. (2.35)). 16 Based on the effective description, we discussed the dominant quark partner production channels and the single- and pair-production cross sections for the LHC at 8 and 14 TeV in section 3 (cf. figures 2 and 3). Both models predict an excess in di-Higgs production, which represents a very promising discovery channel. In addition, partners of fully composite quarks also yield an excess in channels with two hard jets (or b-jets) and two weak gauge bosons. The implications of existing ATLAS and CMS searches for the non-di-Higgs final states are discussed in section 3.2. By these searches, only the mass of a fully composite bottom partner is constrained, to M U h > 700 GeV [59]. In order to obtain the current LHC bounds on the other partners of fully composite as well as partially composite quarks, we used the ATLAS measurements of differential cross sections of the Higgs boson in the h → γγ channel [65]. Amongst the distributions provided in ref. [65], our simulation shows the largest deviations between signal and data in the p γγ T , the N jets and the highest jet p T distribution. Our main results for partially composite quark models are displayed in figure 6, where we show the 95% CL bound resulting from the aforementioned distributions individually, as well as the combined result. As a λ eff mix - and flavor-independent bound on the singlet partner mass we obtained M U h > 310 GeV. In models which allow for sizable λ eff mix , the mass bound is enhanced for up- and down-quark partners, while strange-, charm-, and bottom-partner bounds only have a weak dependence on λ eff mix . 17 For partners of fully composite quarks, we found a constraint of M U h > 212 GeV at 95% CL which is independent of flavor and λ eff mix . In order to determine the bounds discussed above, we conservatively did not include the signal excess in the p γγ T and p j 1 T overflow bins, on which partial information is provided in ref. [65]. We refer the interested reader to appendix A for the "projected" bounds which include the overflow bin data. Our study shows that one can improve the bounds presented here once a dedicated high-p γγ T Higgs search is available.
We also showed the need for dedicated experimental searches in the di-Higgs channel (with two, one or zero associated high p T jets or b-quarks) at the LHC, which will be the golden channel for exploring the relevant theory space for the SO(4) singlet partners of partially composite quarks. Apart from direct di-Higgs searches, searches for a large number of high p T b-jets in the final state are promising candidates for indirect constraints. Finally, we also expect that dedicated searches for boosted Higgs signals in the currently accumulated LHC data set can significantly improve the bound we derived in this article. | 9,700.6 | 2014-05-01T00:00:00.000 | [
"Physics"
] |
Beyond Reporting Verbs: Exploring Chinese EFL Learners’ Deployment of Projection in Summary Writing
Adopting the framework of projection from Systemic Functional Linguistics, the present study explored the deployment of projection in summary writing by three levels of college EFL learners from a university in mainland China. Data were collected from one summary written by three classes of learners at different levels in an English program at a university in the southern part of mainland China. Quantitative analysis showed that projections increased dramatically from Year 1 to Year 2 and dropped slightly from Year 2 to Year 3. Qualitative analysis revealed that the use of projecting verbs differed greatly among the three levels of learners. Year 1 students used only a very limited range of projecting verbs. Year 2 learners used a wider range of such verbs but tended to use them repetitively and inappropriately. In contrast, Year 3 students used a much wider range of projecting verbs in their summary writing and construed projection at different levels. It is recommended that more attention be paid to the teaching of projection at the phrase and text levels in EAP.
Introduction
It is widely recognized that reporting verbs are one of the essential resources in academic writing, especially in building up support and expressing authorial stance (e.g., De Oliveira and Pagano, 2006;Ignatieva & Rodríguez-Vergara, 2015; Thompson & Yiyun, 1991). In particular, reporting verbs help authors integrate information from different sources to support arguments or personal claims (Liardét & Black, 2019). In the past decades, reporting verbs in academic writing have been studied widely with different foci ranging from types of reporting verbs (Hyland, 1999;Thompson & Yiyun, 1991), stance (Hyland, 1999), native and non-native speakers' differences in reporting verbs (Liardét & Black, 2019;Pickard, 1995), and disciplinary differences in the use of reporting verbs (Thompson & Tribble, 2001). These findings have deepened our understanding of the teaching and learning of reporting verbs in the field of English for academic purposes.
From the perspective of Systemic Functional Linguistics (hereafter SFL), reporting verbs represent only one of the many grammatical resources to construe what is referred to as "projection." Projection is defined as relating "phenomena of one order of experience (the processes of saying and thinking) to phenomena of a higher order (semiotic phenomena -what people say and think)" (Halliday & Matthiessen, 2014, p. 441). Although this definition refers only to projection at the clause level, elsewhere, they make it clear that they consider projection to be a semantic domain that spreads across a wide range of grammatical units, including clause complexes, nominal groups, prepositional phrases, adjuncts, etc. (which will be explained in Section 3). This extended view of projection goes beyond simply looking at the use of reporting verbs at the clausal level to a broader system of resources for representing speech and ideas that contribute significantly to the academic meaning-making process. Therefore, in researching this, it is necessary to go beyond "reporting verbs" and the structures they are part of and use the broader notion of projection to gain a fuller picture of how this critical aspect of students' academic writing skills develops over time. To contribute to this line of research, the present paper explores the deployment of projection in summary writing by different levels of college EFL learners in China. The learners are from a university in mainland China, majoring in business English in their undergraduate studies. Data are from one summary written by three classes of learners from year 1, 2, and 3, respectively. The summary task is a timed writing activity based on a 900-word reading task.
Reporting Verbs and Academic Writing Studies
Over the past decades, investigations drawn upon different linguistic frameworks have enriched our understanding of reporting verbs methodologically and epistemologically (Hyland, 2002;Thomas & Hawes, 1994;Thompson & Yiyun, 1991). For example, the study of reporting verbs used in different disciplines showed how reporting verbs were used in different genres (Monreal & Salom, 2011;Thomas & Hawes, 2001). Reporting verbs were found to manifest disciplinary differences (Swales, 2014;Moore, 2002). For example, the use of reporting verbs was quite different in politics and material science (Charles, 2006). Charles (2006) found that while "argue, note, and suggest" were frequently employed in politics, "find, observe, and show" were the top reporting verbs that are used in material science.
Reporting verbs have also been studied from the perspective of appraisal, looking at how authors express stance and evaluative meanings by adopting different reporting verbs (Hood, 2010). For example, Liardét and Black (2019) noted that different verbs had very different evaluative meanings to capture the stance or authorial voices in the argument. Their findings suggested that teachers of EAP should make the teaching of these verbs explicit and let the learners have a more thorough understanding on how to deploy them. A study by Nguyen (2017) examined the use of reporting verbs in TESOL master dissertations written by Vietnamese students in a Vietnam university. Her findings showed that those Vietnamese TESOL MA students were not competent in using reporting verbs that critique other researchers' work. Therefore, they usually used reporting verbs that indicated a neutral attitude when engaged in an academic argument.
Reporting verbs have also been extensively studied in the academic writing of non-native English speakers. For example, Pickard (1995) compared ESL learners' use of reporting verbs of "saying" with that of expert writers. She found that students enrolled in ESL writing courses had a limited range of "saying" reporting verbs, and they overused the word "say" when they reported or cited other writers in their academic writing. However, expert writers had better control of a range of reporting verbs and used fewer instances of "say" in their academic writing. Liardét and Black (2019) compared ESL and English L1 learners' and experts' use of reporting verbs in their academic writing. In contrast to the findings of Pickard, they found no difference between ESL and English L1 learners in terms of the types of reporting verbs used in academic writing. Both groups of learners in Liardét and Black's (2019) study deployed the same set of reporting verbs with almost exactly the same frequency in their academic writing.
Several studies have also examined the use of reporting verbs in different contexts. Swales (2014) found that the top five reporting verbs were "show, find, suggest, propose, and argue" in biology assignments written by students. In addition, his findings showed slight differences in citation practices between undergraduate and graduate students in biology. Undergraduates preferred to use non-academic materials, such as Wikipedia and online resources, while graduates never used such resources in their writing. In another study, Thomas and Hawes (1994) focused on reporting verbs from medical science and found that three-fourths of the reporting verbs were deployed in the results and findings sections. Cognition verbs were correlated with reaching an agreement in scientific circles, while what they called "discourse verbs" were associated with the conclusion and reporting of findings.
Projection and Academic Writing Studies
The studies reviewed above have all focused rather narrowly on the use of reporting verbs. Only a few studies have been done using the wider framework of projection. The importance of using this wider perspective in studying academic writing has gradually been recognized in recent years (Xuan & Chen, 2020). However, even studies claiming to take this perspective have still often focused almost entirely on the deployment of "verbal process" structures at the clause level, that is, reporting verbs and their structures. Such studies have focused on the projection used in different genres (Ignatieva, 2011; Ignatieva & Rodriguez-Vergara, 2015; Zhao, 2014) and on comparisons between L2 learners of English and native English speakers' use of projection (Zhou & Liu, 2014). For example, Ignatieva (2011) investigated two genres, "question and answer" and "essay," written by Mexican students, and she found that there were registerial differences in terms of the deployment of verbal processes. More verbal processes were used in the question and answer texts than in the essay texts. Findings from a subsequent study by Ignatieva and Rodriguez-Vergara (2015) showed that verbal processes played an essential role in construing emotion and opinions in academic writing in linguistics and the humanities. Moreover, genre, the writing topic and the students' academic writing experiences directly influenced the appropriateness of the deployment of verbal processes in their academic writing. In another study, Zhao (2014) found that while book reviews in finance tended to deploy projection to contract dialogic space (Martin & White, 2005), book reviews in linguistics tended to use projection to expand dialogic space.
In terms of projection from the more comprehensive framework mentioned above, Zeng (2007) studied the deployment of projection in academic writing from the perspective of grammatical metaphor. She concluded that grammatical metaphors of projection helped authors to construe complicated and sophisticated meanings. In a subsequent study, Zeng and Liang (2007) introduced the concept of multisemiotics to broaden studies of projection, and they concluded that projection could be realized by various non-linguistic semiotic resources such as figures, tables, footnotes, hyperlinks, code-switching, etc.
Few of these studies concentrated on learners' texts. However, Zhou and Liu (2014) compared the use of projection in the academic writing of English learners in China and native speakers in the US. Their findings showed that Chinese EFL learners deployed more projections in their writing than the American native speakers did. In addition, the authors found that Chinese learners used projection to build up agreement and confirmation with the readers but failed to build up different stances or opinions.
To summarize, even studies of projection from the SFL perspective have to date primarily focused on reporting verbs per se and have not related them to other functionally relevant resources in English. Also, while academic writing was the most investigated register in the previous studies, few studies focused on one specific academic genre, such as the summary. Moreover, most of the extant studies only focused on projection realized at the clausal level, ignoring projection construed at other levels, such as the phrasal and textual levels. A more comprehensive understanding of the deployment of projection at different levels is lacking. Furthermore, the previous studies focused mainly on one level of learners. Studies concerning different levels of learners' use of projection in students' writing are scant. It is essential to know more about how EFL learners develop their use of projection at different proficiency levels. However, there are few opportunities to observe the intensive use of projection in the ordinary college English writing tasks given to students before they really approach academic writing. That is why we designed the summary writing task in this study: to steer the students in question to use projection intensively for our observations. Hence, a developmental study of reporting verbs used in summary writing by Chinese college EFL learners from the perspective of projection will add new knowledge to projection and EAP writing studies.
Research Questions and Theoretical Framework
To address the research gap, this study aims to answer the following two research questions: (i) How do Chinese EFL learners deploy the range of projection resources in their academic writing at three different years? (ii) Are there any developments across the 3 years? If there are, what are they?
The coding scheme of our study is based on the systematic description of projection in SFL (see in particular Halliday & Matthiessen, 1999; Matthiessen, 1995). The term projection was first introduced by Halliday (1977) as a type of logico-semantic relationship, alongside expansion, between two clauses. A formal explanation of projection is offered in Halliday (1985) and Halliday and Matthiessen (2014) as follows: (1) Expansion: the secondary clause expands the primary clause, by (a) elaborating it, (b) extending it, and (c) enhancing it.
(2) Projection: the secondary clause is projected through the primary clause, which initiates it as (a) a locution or (b) an idea.
Thus, projection is the term used in SFL to refer to quoting and reporting of saying and thinking. It is typically realized as a clause complex consisting of a projecting clause and a projected clause. However, the notion of projection is different from the traditional notion of reporting in that it is a semantic category that is general enough to cover all grammatical items representing the representation of language. Halliday and Matthiessen (2014) conceptualized projection as a semantic domain that spreads across a range of grammatical environments. In this study, we pay particular attention to projection realized as clause complexes, verbal groups, prepositional phrases, and embedded clauses because they are highly relevant to acknowledge the sources of ideas in the summary writing tasks. For instance: (a) He says he will come. (Clause complex) (logical projection) (b) He wants to come. (Verbal group) (experiential projection) (c) According to him, he will come. (Prepositional phrase) (projecting circumstance: angle) (d) He talks about coming. (Prepositional phrase) (projecting circumstance: matter) (e) It is said that he will come. (Embedded clause) (fact projection) Examples (a) and (b) represent projection realized through complexing, one of the logical resources in language. (a) Is more or less equivalent to what in traditional grammar is called "reported speech." Examples (c) and (d) belong to what is called "projecting circumstance," forming a relationship between the main clause and the mini-clause (prepositional phrase). Example (e) demonstrates fact projection in the form of embedding. Based on the observations above, we developed two coding schemes in the UAM corpus tool, one for projection at the clause level and the other for the phrase level, as follows: It should also be noted that Zeng (2016) takes one step further to model projection beyond the clause rank. Based on her observations on English translations of Lunyu (The Analects of Confucius), she posits that "in real texts there exist not only projection clause nexuses but also projection paragraphs or projection texts." Projection text, in her model, consists of hyper-clause complex projection, paragraph projection, cross-paragraph projection, and complex group projection. Here is an example from Zeng (2016, p. 49): Confucius remarked, "A wise man who is not serious will not inspire respect; what he learns will not remain permanent." "Make consciousness and sincerity your first principles." "Have no friends who are not as yourself." "When you have bad habits do not hesitate to change them." In this case, several paragraphs that constitute the projected message share the same source of projection in the first paragraph as in Confucius remarked. In this study, we will employ the notion of projection text in our qualitative analysis to refer to the construal of projection beyond clause complexes.
Participants
Participants of this study were 91 English majors (35 students from Year 1, 28 students from Year 2, and 28 students from Year 3) from a university in South China. The school is a provincial public undergraduate university specializing in six disciplines including economics, management, law, art, science, and engineering, with finance as the leading major. In 2019, 51.37% of the English majors in this university passed TEM 8 (the Test for English Majors Band 8), which is high compared with the national average of 34.96% for all Chinese English majors. These students, in this sense, were slightly higher in their English proficiency than the average English major in mainland China. The curriculum for the English majors in this university includes a range of writing courses, such as Fundamental English Writing, Academic Writing, Business Correspondence Writing, and Thesis Writing. All are compulsory courses, which means that the participants of this study had received basic training in academic writing. In addition, this group of learners showed great enthusiasm for learning academic writing. Therefore, when we started to recruit participants for this study, they all showed great interest and immediately agreed to participate in the research. In return, we promised to provide some feedback on their summary writing and offer consultations on academic writing.
Methods
Aiming at exploring Chinese university EFL students' deployment of projection in summary writing, the present study employs a mixed-method approach that integrates quantitative and qualitative analysis of projection used in Chinese undergraduate students' summary writing. Quantitative analysis refers to our observation of the data generated from the UAM corpus tool, which reveals the distribution of various projection units. Qualitative analysis means we read through all the texts closely to identify features that may be associated with the quantitative results.
Protocol. The study selected an argumentative essay (a review of different voices concerning some agricultural risks) of 912 words from an IELTS reading comprehension test as the reading material. The participants were given 80 minutes to read the text and write a summary in class. The length of the summary was to be about 300 to 500 words. The reason for choosing this argumentative text was that it is information-dense and has numerous arguments and viewpoints. Given this particular feature, inviting the participants to write the summary provided good opportunities to observe how they use projection to report or cite different opinions or information from the source text.
Procedure. The research comprises four stages, namely, summary writing assignment, manual coding, software-assisted analysis, and manual analysis of the texts. At the first stage, an argumentative essay with 912 words was given to the students in class, who were then instructed to summarize the essay in 300 to 500 words within 80 minutes. The instruction of the assignment is as follows: This writing task is designed to assess your ability in writing up a summary based on a piece of reading material. In this task, you need to read a passage and summarize its main ideas.
Your Task:
1. Read the passage provided.
2. Write the summary (300-500 words). Include the following parts in your summary:
Part 1: An introduction of the passage you read: (1) Introduce the passage you have read, such as the title, the author, etc.; (2) Sum up the thesis of the passage in one sentence.
Part 2: A detailed and logical summary of the main ideas from the passage.
Format:
Write in paragraphs.
Once the summary texts were collected, they were checked by the three researchers of this project. At this stage, all instances of projection were identified and annotated manually through the UAM corpus tool following the coding schemes we developed based on the SFL framework (as shown in Figures 1 and 2). The UAM Corpus Tool is a state-of-the-art environment for the annotation of text corpora. The tool provides functions of coding scheme creation, text annotation, corpus searching as well as statistical studies. It can be accessed through the website http://www.corpustool.com/index.html. Each instance of projection was annotated with its projecting process (verbal/mental), the type of projected clause (complex/embedded), and, where relevant, the projecting circumstance (angle/matter) and the choice of projecting verb. The UAM tool would then generate relevant statistics automatically for further observation and analysis.
Coding of the data. Three researchers of this study conducted a pilot annotation of 10 samples for coding accuracy and consistency. The inter-rater reliability was 91%. The major discrepancies occurred in the coding for embedded projection (traditionally referred to as "a noun clause," realized as a noun plus a "that-clause"). More specifically, expressions such as "hold the view that," "harbor the view that," and "put forward its view that" were coded as clause complexes by one researcher and as embedded clauses by another. The coding depends on whether the analyst treats the expression as a phrasal verb (an idiom) or as a nominalized mental process followed by a "that-clause." Finally, we came to agree that these expressions are relatively fixed constructions that are semantically equivalent to mental processes, and they were thus coded as clause complexes. Then we had to decide whether the verbs "hold," "harbor," "express," etc. should be coded as mental processes (actions of thinking) or verbal processes (actions of saying). Our final criterion was that verbs that carry a sense of keeping within, such as "hold" and "harbor," were coded as mental processes, whereas the ones with the meaning of giving out, such as "put forward," "express," and "share," were coded as verbal processes. It turned out that, in our data, only the common "it is . . . that" construction, such as "it is believed that," was coded as embedded projection. Another issue was that one researcher did not recognize the category "matter" that occurs in relational clauses, as in "it is mainly about . . ." In addition, when all the data coding was finished, we double-checked each other's coding to ensure the accuracy of the data analysis. These problems were all solved before the full analysis of the annotations through the UAM corpus tool.
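As an illustration of the agreement figure reported above, the short sketch below computes a simple percent-agreement score between two coders' labels; the label names and toy data are ours rather than the study's, and the study may of course have used a different reliability statistic.

```python
# Illustrative only: simple percent agreement between two coders' labels.
def percent_agreement(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return 100.0 * matches / len(labels_a)

coder1 = ["clause_complex", "embedded", "angle", "matter", "clause_complex"]
coder2 = ["clause_complex", "clause_complex", "angle", "matter", "clause_complex"]
print(f"{percent_agreement(coder1, coder2):.0f}%")  # 80% for this toy sample
```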
Quantitative Results
Statistics for projection at the phrase level. Projecting circumstances, typically realized as prepositional phrases, are important grammatical resources for projection. Figure 3 displays the number of projecting circumstances (/1,000 words), including those of angle and matter, from year 1 to year 3. In particular, the proportion of angle circumstances increased from 1.56 at year 1 to 2.07 at year 2, while that of matter circumstances decreased from 3.61 to 2.48. However, the pattern of angle and matter circumstances used by year 3 students was substantially different. As indicated in Figure 3, the number of angle circumstances was the lowest among the 3 years (0.72/1,000 words). In contrast, the number of matter circumstances grew from 2.48 to 5.55.
Statistics at clause level. Figure 4 shows the number (/1,000 words) of projection units realized as clause complexes and embedded clauses across the 3 years. The frequency of locution and idea both increased from year 1 to year 2 and dropped at year 3. In other words, the year 2 students were inclined to use more projection than those in the other 2 years. The numbers of verbal processes and mental processes in projecting clauses were apparently higher at year 2 than at year 1. Similarly, the numbers of verbal and mental processes at year 2 were higher than those at year 3. Figure 5 shows lexical variation for verbal and mental processes across the 3 years. Contrary to our expectation, no obvious development could be identified except a slight increase in the number of verb types realizing the mental process. In this sense, the quantitative data show no obvious developmental features in the use of projection among the three levels of learners.
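The per-1,000-words rates quoted in these paragraphs follow from a straightforward normalisation of raw annotation counts; the sketch below illustrates the arithmetic with invented counts and corpus sizes (the study's actual totals are not reproduced).

```python
# Illustrative: convert a raw annotation count into a rate per 1,000 words.
def per_thousand(count, total_words):
    return 1000.0 * count / total_words

angle_count = 30        # hypothetical number of angle circumstances in one year group
corpus_words = 14_500   # hypothetical total word count for that year group
print(round(per_thousand(angle_count, corpus_words), 2))  # -> 2.07
```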
Tables 1 and 2 display the top ten words chosen to construe verbal process in projection across the 3 years. The general distributions across the 3 years show no obvious variation.
It is evident that "suggest," "argue," and "say" are the most frequent words used by students of all 3 years (cf. Liardét & Black, 2019; Yang et al., 2019). However, the frequency of "say" and "mention" reveals a salient pattern in the students' academic writing over the years. The frequency of "say" dropped remarkably from 12.5% at year 1 to roughly 5% at years 2 and 3. The frequency of "mention" declined from 7.14% at year 1 to 4.17% at year 2, and further down to 2.83% at year 3. A similar pattern can be observed in the use of mental processes. The 3 years share the top three choices of words, viz. "think," "believe," and "hold." However, looking at the trend of the frequency across the years, we find that the frequency of "think" decreases drastically from 51.02% at year 1 to around 30% at years 2 and 3. The frequency of "hold" (usually expressed as "hold the view that"), on the other hand, increases from 10.20% to 20.75%. Moreover, some sophisticated reporting verbs such as "harbor (the view that)" and "conceive" occur only in the writing of year 3. This distribution of specific words for the projecting process resonates with Liardét and Black's (2019) idea that the use of a wide range of projecting verbs is an important indicator of students' academic writing development.
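For readers who wish to derive this kind of table from their own annotations, the following sketch shows one way to turn a list of annotated reporting verbs into percentage shares; the verb list is a made-up toy sample, not the study's data.

```python
# Illustrative: percentage share of each reporting verb, as in Tables 1 and 2.
from collections import Counter

verbs = ["think", "think", "believe", "hold", "think", "say", "believe", "hold"]
counts = Counter(verbs)
total = sum(counts.values())
for verb, n in counts.most_common():
    print(f"{verb}: {100 * n / total:.2f}%")
```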
Qualitative Findings
Development of projection usage across the 3 years. Qualitative analysis shows that the use of projection developed across the 3 years. For example, we found that there were four students in year 1 who did not use any projection in their summary writing, a marked difference from the other 2 years. Students from years 2 and 3 used projection more evenly, and we found that most of the students from these 2 years used it. Hence, students from years 2 and 3 have developed the concept of using projection in academic writing, while year 1 students have relatively weaker awareness.
Developmental features at phrase level. Some qualitative features could be found, which reflect the development of the students' use of projection at the phrase level.
Double projecting units. Some of the year 1 and year 2 students construe the projecting unit repeatedly through the pattern "angle circumstance ^ projecting clause." Consider the following example: (1) As far as the author is concerned, he believes cultural values are highly entrenched in food and agricultural systems worldwide. (Year 1) In example (1), the projecting units are construed in an unnecessarily repeated manner. For instance, the angle "as far as the author is concerned" could be combined with the mental clause "he believes" into a projecting unit like "the author believes." This type of inappropriate use of projection is not found in the year 3 data. This partly explains the higher frequency of angle circumstances at the early stage of writing.
Use of "say" in circumstances of angle. A close examination reveals that there are a few cases in which the use of circumstances of angle has been affected by their mother tongue (Chinese in our case). For example, The two cases demonstrate that the year 1 students are aware of the meaning potential of construing projection through circumstance (See Halliday & Matthiessen, 1999;Mattheissen, 1995). However, they employ the word "say" in the circumstantial element as in "From what the author said" and "according to what Nwanze said." This type of projecting circumstance, though grammatically correct, seldom occurs in academic citation. For instance, the search of "according to what * [say]" in COCA generates only a total number of 19 samples. However, the "according to . . . say" is a common pattern in Chinese (see Chen, 2016 on circumstances of angle in English and Chinese). This pattern of projecting circumstance vanishes in writings of the second and third years. Therefore, the word "say" may not only indicate the lower delicacy of the word choice for reporting process, but may also reflect the influence of the first language (language transfer) in projection construal at the early stage of the undergraduate's academic writing.
Incremental use of circumstances of matter. In our data, the circumstances of matter occur mainly in three types of grammatical environments, and they are found across all 3 years. First, they occur with verbal or mental processes, serving as the circumstantial equivalent of verbiage or phenomenon (cf. Halliday & Matthiessen, 2014). For example: (4) Not only are they required to think about weather, . . . (Year 1) The circumstance of matter in example (4) goes with the mental process (think about).
Secondly, the unit occurs as the qualifier of a nominal group, functioning similarly to an embedded projection. See the examples below: Thirdly, the unit serves as the attribute in a relational clause. For instance: (7) Next part is concerning solutions to lower the risks. The use of circumstances of matter in the grammatical environments above reflects the students' ability to compress information into a nominal group (Xuan & Chen, 2020). In example 7, the information of the article has been "packed" into a nominal group "different views online," and then the messages of the views have been summarized with another embedded clause "how to reduce the risks farmers face." The corresponding lines in the original text are as follows: In his essay, Kanayo F. Nwanze, President of the International Fund for Agricultural Development, argued that governments can significantly reduce risks for farmers by providing basic services like roads to get produce more efficiently to markets, or water and food storage facilities to reduce losses. Sophia Murphy, senior advisor to the Institute for Agriculture and Trade Policy, suggested that the procurement and holding of stocks by governments can also help mitigate wild swings in food prices by alleviating uncertainties about market supply. This is a rather intricate paraphrasing/summarizing process for a second language writer. Thus, the increasing number of matter circumstances, as shown in Figure 3, mirrors the development of the students' writing skills.
Developmental features at clause level. The deployment of projecting verbs shows development in the students' writing. Starting from the first year, we find that several students did not use projection in their summaries at all. The statistics show that year 1 students tend to use simple projecting verbs such as "say," "think," etc. but do not make more delicate choices among projecting verbs (cf. Liardét & Black, 2019; see example 9). This sometimes makes the style of writing seem a little colloquial. For example, (9) In terms of the external power, people think the governments, public welfare, and social safety should provide greater help. . . .. . . They say farmers must alter their productive mode from individuals to collective action groups. . . .. . . In addition, they also think that farmers are able to take full advantage of tools, like private insurance, commodity futures markets, rural finance, and policies. (Year 1) The colloquial tendency is also reflected in the use of "come up with" by the first-year students. The colloquialism of this phrasal verb is evidenced in COCA, which shows that the phrase occurs most frequently in the spoken section and least frequently in the academic section (as illustrated in Figure 6). Consider the two examples: (10) Seeing the title at the first sight, I came up with some information that it may intend to talk about the risks the agriculture faces in developing countries but I wonder what the risks are and what measures will the governments take to protect their agriculture from destruction. (Year 1) (11) thus Marcel Vernooij and some others came up with a proposal that we could use what we have commanded to create more value with all stakeholders like business, government, etc. (Year 1) The two cases reveal that year 1 students are capable of employing the grammatical resource of embedded clauses to construe projection, as in "information that . . ." and "a proposal that . . ." This ability is further developed in the following years, as shown in the use of "hold the view that" in the year 2 and 3 data. However, the first-year students in the cases above choose the idiom "come up with" as the reporting verb, which shows their lack of proficiency in choosing the most appropriate words for the academic context. No such instance has been found in the writing of students in higher years. Furthermore, students at year 2 use a limited range of projecting verbs in quoting, without much lexical or grammatical variation (see example 12).
(12) First of all, on account of the long-term climate change and extreme weather, the output of food product is probably decreasing sharply, which may lead to the widespread hunger in the developing countries. Aside from the natural factors, for human, the infrastructure, financial systems, markets, knowledge, and technology are supposed to be supported. Secondly, some participants argued that whether sufficient food are able to be ensured depended on fossil fuels and the improved government policies. When the food production is sufficient, basic services should be provided by the government, such as food storage facilities and transportation. Kanayo F. Nwanze, President of the International Fund for Agriculture Development thought that these indispensable basic services play an essential role on reducing the losses. (Year 2) In contrast, students at year 3 use a wider range of verbal/ mental clauses to realize projection (see example 13).
(13) In terms of state intervention, some argued that it was the government that should be responsible for providing convenient infrastructure, . . .. . .As for the establishment of welfare programed in the poor countries, people held their views that this implementation simply benefit this traders and capitalists . . .. . . Regarding the so-called private risk management tools, financial scheme, by employing high-input agricultural practices, expert argued that from a long-term perspective, it was not able to cope with the issue but triggered food insecurity. Some accentuated the transparency of market, while others insisted that it attributed to those agribusiness companies. . . .. . ., author viewed that the development of various crop is vital which contradicted the scaling down crop field tactic. However, confronting climate change, a parallel perspective showed that the diversity of plants and animals species was absolutely essential. In addition, . . .. . . Others emphasized the importance of integration among government, citizens, and involved organizations. (Year 3)
Developmental features above clause level. The functions of projection also show differences among the three levels beyond the clause level. At years 1 and 2, students merely restate or report what they have read from the original text.
In contrast, students from year 3 begin to use projection as examples or illustration to support their viewpoints or ideas summarized from their reading.
(14) A large number of essayists stated that governments are supposed to play a dominate part in mitigating the risks farmers face. Kanayo F. Nwanze insisted that providing basic services is a good way for governments to help reduce losses. Sophia Murphy suggested that the procurement and holding of stocks by governments also work efficiently. Shenggen Fan held up social safety nets and public welfare programs. However, some commentators argued that this plan would not increase food security. In fact, it is shown that the main beneficiaries of subsidies are not for the poor themselves. (Year 2) (15) In light of these statements, some measures are mentioned to diminish the risks. According to online debate, some individuals conceive that it is most significant and challenging to tackle why agricultural system fails to protect the food security. Besides, many essayists suggest that state government should interfere and control more to remit the problems confronted by farmers. Kanayo F. Nwanze proposes that infrastructure should be advanced and Sophia Murphy advises that government should take part in the market. Additionally, Shenggen Fan supports the destitute people in some areas of southern hemisphere, however, many critics harbor that it is dispensable because most of those who receive subsidies are not poor in reality. (Year 3) As exemplified in extracts (14) and (15), students in year 1 seldom use paraphrase in projection but only restate what they have read. In contrast, the writing of year 3 students shows that they use projection to contribute to the cohesion and coherence of their writing at the level of paragraphs. For example, in excerpt (15), the student first summarizes the idea as a topic sentence at the beginning of the paragraph (using "A large number of essayists" as the projector). He then continues to illustrate the main ideas by citing examples from his reading, synthesizing the information he obtained from the original text. Another noticeable feature is that students at year 3 begin to use projection at the level of paragraphs, formulating what are called projection paragraphs (Zeng, 2016). Such a phenomenon is found in two texts in the year 3 data. Consider the following example: Excerpt (16) shows that some students can use projection in their writing to contribute to the development of the text. In this excerpt, the student uses projection to indicate the information that they are going to elaborate on. The author summarizes the suggestions provided by the authors in the original text. By using such a projection, the author makes the logic of the summary clear, which directly contributes to the coherence of the writing. This shows that advanced students can deploy projection in their writing beyond the clause level. It also echoes Zeng's (2016) notion of the projecting paragraph.
Discussion
The model of projection as a semantic domain offered in SFL has important pedagogical implications. The function of projection can be realized at the lexicogrammatical level (SFL treats lexis, or vocabulary, and grammar, or syntax, as lying at the same level, each interfacing with the other, hence the term lexicogrammar or lexico-grammar) through various grammatical units at different levels. This diverse manifestation pertains to grammatical variation in students' writing, which is often considered an indicator of "advancedness" in many academic writing tests (Ryshina-Pankova, 2011). The variation in the students' use of projection at different levels, as shown in our data, also displays their increasing "advancedness." However, this progress is quite limited except in very few students (see section 5.2, example 19). Therefore, it is recommended that we pay more heed to the teaching of projection at the phrase and text levels, making the projecting resources explicit to the learners. As mentioned earlier (Pickard, 1995; Swales, 2014), teachers of academic English typically focus almost exclusively on projection at the clause level, that is, so-called "reported speech" and "reported ideas," often introducing a relatively restricted range of reporting verbs presented as a list with insufficient context for students to grasp the often subtle differences among them. Although structures at other levels, such as prepositional phrases using, for example, "according to," may be introduced, they are seldom presented within a systematic framework for realizing projection, including exploring the contexts in which the use of structures at one level might be more appropriate than structures at a different level.
At the text level, our qualitative analysis reveals that even at the most advanced level (year 3), most of the students did not use projection with adequate awareness of its textual or rhetorical functions. That is, they simply gave an inventory of projected or quoted ideas and presented them without sorting out the relations between those ideas. For instance, as mentioned earlier, only a few students at year 3 used projection as supporting arguments for a general point of view, and there are only two instances in which projected paragraphs are used. A similar finding has been reported by Kwon et al. (2018). Since summarizing and grouping others' propositions is a vital skill in academic writing, there is a need for activities in our writing lessons to guide the students to explore how different sources of ideas are related to each other (Jones & Lock, 2011;Lock, 1996).
Conclusion
The present study aimed to investigate the deployment of projection in summary writing by college EFL learners in China. The findings showed that development in the use of projection in the students' writing across the 3 years of university study is obvious qualitatively but not quantitatively. Quantitatively, there was no obvious development observed across the three levels of learners, since students from year 2 deployed more instances of projection than those from year 3. Qualitatively, however, the higher the level of the learners, the more accurate and appropriate was their use of projection. In addition, the rhetorical and textual functions of projection used by the learners also show development. Based on these findings, we have proposed some pedagogical suggestions on improving the teaching of projection in academic writing, particularly at the phrasal and textual levels.
However, before generalizing the findings, we should be cautious about the following limitations of the present study. First, we only looked at three classes of students' writing from one university in mainland China, and the sample was quite small. The findings might differ if aspects of the context changed, such as the students' background, the school culture, and the school's location. Those factors affect the reliability and representativeness of the findings. Second, we only investigated summary writing of one text type. Third, we utilized cross-sectional data rather than longitudinal data, which may not have fully captured the nature of development in this area.
For future studies, we suggest that more participants from different schools should be included to be able to generalize across different contexts. Secondly, a more comprehensive range of writing tasks should be adopted as test materials, motivating learners to use projection across different contexts. Finally, a longitudinal approach could complement the current design.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. | 9,410.8 | 2022-04-01T00:00:00.000 | [
"Linguistics",
"Education"
] |
Structural transition in the collective behavior of cognitive agents
Living organisms process information to interact and adapt to their surroundings with the goal of finding food, mating, or averting hazards. The structure of their environment has profound repercussions, both by selecting their internal architecture and by inducing adaptive responses to environmental cues and stimuli. Adaptive collective behavior underpinned by specialized optimization strategies is ubiquitous in the natural world. We develop a minimal model of agents that explore their environment by means of sampling trajectories. The spatial information stored in the sampling trajectories is our minimal definition of a cognitive map. We find that, as cognitive agents build and update their internal, cognitive representation of the causal structure of their environment, complex patterns emerge in the system, where the onset of pattern formation relates to the spatial overlap of cognitive maps. Exchange of information among the agents leads to an order-disorder transition. As a result of the spontaneous breaking of translational symmetry, a Goldstone mode emerges, which points at a collective mechanism of information transfer among cognitive organisms. These findings may be generally applicable to the design of decentralized, artificial-intelligence swarm systems.
additional moves in response. The player, in other words, will sample hypothetical sequences of steps (i.e. trajectories) in the abstract space of moves within the chessboard.
The player's strategy can be cast into a simple, general form: the maximization of future options to move without losing the king. The number of moves the player is able to contemplate ahead is a direct numerical measure of her cognitive competence. In the present study, we adopt a straightforward generalization of this measure by defining cognitive competence as the ability of an agent to determine the number of possible moves within a given environment. This ability depends upon the agent's cognitive map, and we may assume that, similarly to the chess player, the agent will seek to maximize its number of future options to move.
We will now describe a mechanistic implementation of the above ideas, which has the twofold advantage of (i) making our ideas concrete and (ii) providing a connection to the field of active matter 10,11 . We consider a set of freely mobile, identical, spherical particles with diameter σ. Their only 'cognitive' activity is to explore the surrounding space. This exploration is performed via hypothetical random walks of a certain length, starting from the agent's current position; by evaluating the shape of such walks the agents gain knowledge about the location of other particles or confining boundaries. More precisely, each agent performs a fixed number N ω of such walks and evaluates how elongated the hypothetical trajectory of each walk is, thus allowing the inference of the location of confining objects, where the trajectory is likely to be more compact, and empty areas, where the trajectory can be more elongated and spacious. As a measure for quantifying the configuration of each hypothetical trajectory we use the radius of gyration (see Methods). Clearly, the larger the cognitive competence of the agent (i.e. the longer these hypothetical walks can be made), the larger the cognitive map of the environment will be.
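As a concrete illustration of this evaluation step, the sketch below computes the radius of gyration of one sampled walk; it assumes a plain unconstrained Gaussian random walk and is not the authors' code.

```python
# Illustrative: radius of gyration of a sampled 2D random walk.
import numpy as np

rng = np.random.default_rng(0)

def sample_walk(r0, n_steps, step=0.1):
    steps = rng.normal(scale=step, size=(n_steps, 2))
    return r0 + np.cumsum(steps, axis=0)          # positions along the walk

def radius_of_gyration(traj):
    center = traj.mean(axis=0)
    return np.sqrt(((traj - center) ** 2).sum(axis=1).mean())

walk = sample_walk(np.zeros(2), n_steps=500)
print(radius_of_gyration(walk))   # larger values indicate a more elongated, spacious walk
```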
A formal analogy between random walks and ideal polymers 12,13 can help us understand what to expect. If the random walks were really executed, with no crossings allowed, the resulting interaction would be analogous to the repulsion experienced between star-copolymer molecules with n ω chains. The mutual interaction of polymer chains is known to be characterized by isotropically repulsive entropic forces 14 . We will demonstrate below that by replacing the real random walks with hypothetical, sampling walks, hence introducing cognitive maps into the system, the collective behavior changes dramatically.
For each agent, we consider a set of random, hypothetical sampling trajectories, {Γ τ (t)}, each of total duration τ, that the agent may traverse to explore its environment. Following an earlier work where these hypothetical walks have been introduced 15 , we explicitly indicate the dependence on time t because the cognitive map is dynamically updated as information is acquired. Starting from its initial position, r 0 , the region probed by the agent, and thus the size of its cognitive map, is directly proportional to τ.
We can build a probabilistic description of this cognitive map by considering the probability density function P(Γ τ (t)|r 0 ) associated to an ensemble of trajectories all starting from r 0 . Our probabilistic description should however represent mathematically the information acquired in building a cognitive map. According to Shannon 16,17 , −PlnP is the most general functional form that obeys the constraints of continuity, nonnegativity, and additivity of information. It is then natural to express the information content stored in the cognitive map as
S(r, τ) = −k_B ∫ DΓ_τ(t) P(Γ_τ(t)|r) ln P(Γ_τ(t)|r),    (1)
which is a path integral over the hypothetical trajectories of an agent at position r, building up the cognitive map {Γ τ (t)} (see Methods and refs. 15,18), and where k B is Boltzmann's constant, giving S dimensions of entropy. The central assumption of the present work is that intelligent agents tend to maximize the information content stored in their cognitive maps. This assumption, together with Eq. (1), immediately implies that a cognitive agent tends to maximize the diversity of possible future trajectories. As a cognitive agent acquires information about the external environment, which is by the nature of the process limited and partial, the most unbiased decision the agent can make is the one corresponding to the maximum of entropy, because it uses all the available information without any additional assumptions. Mathematically, it assigns a positive weight to every situation which is not excluded by the given information 19,20 .
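The path entropy in Eq. (1) is defined over a continuum of trajectories; in practice it can be approximated by a discretised plug-in estimate over sampled walks. The sketch below illustrates one such estimate obtained by binning walk end points; the binning choice and the toy data are our assumptions, not the estimator used in the paper.

```python
# Illustrative: crude plug-in estimate of -sum(p * ln p) over a discretised set
# of sampled hypothetical walks (here characterised only by their end points).
import numpy as np

def path_entropy_estimate(endpoints, bins=10, extent=2.0):
    hist, _, _ = np.histogram2d(endpoints[:, 0], endpoints[:, 1],
                                bins=bins, range=[[-extent, extent]] * 2)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()          # in units of k_B

rng = np.random.default_rng(1)
ends = rng.normal(scale=0.5, size=(2000, 2))   # stand-in for sampled walk end points
print(path_entropy_estimate(ends))
```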
Maximization of S can then be represented by a force acting on the agent of the form
F(r; τ) = θ ∇_r S(r, τ),    (2)
where the coupling parameter θ (with dimensions of temperature) quantifies the cognitive competence of the agent, that is, how strongly the agent responds to the environment (see Methods). In order to maximize the information content of the cognitive map, an agent will move by following the gradient of S.
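A crude way to picture Eq. (2) numerically is to differentiate a sampled entropy estimate by finite differences, as sketched below; the helper sample_entropy_at and the toy one-dimensional "environment" are hypothetical placeholders for the full walk-sampling procedure.

```python
# Illustrative: F = theta * grad S, with the gradient taken by central differences
# over a toy entropy landscape; sample_entropy_at is a hypothetical stand-in.
import numpy as np

def sample_entropy_at(r, environment):
    # placeholder: in the model this would launch N_omega hypothetical walks
    # from r (reflecting off other agents and walls) and return an entropy estimate
    return environment(r)

def cognitive_force(r, environment, theta=1.0, eps=1e-3):
    grad = np.zeros(2)
    for k in range(2):
        dr = np.zeros(2); dr[k] = eps
        grad[k] = (sample_entropy_at(r + dr, environment)
                   - sample_entropy_at(r - dr, environment)) / (2 * eps)
    return theta * grad

# toy "environment": entropy is lower near a wall at x = 0
toy_env = lambda r: np.log(1.0 + max(r[0], 1e-6))
print(cognitive_force(np.array([0.5, 0.0]), toy_env))   # pushes away from the wall
```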
An intuitive understanding of the principle stated above can be gained by considering again the predator-prey system discussed above. The prey will choose a path which maximizes the future available options (and hence its survival probability). More formally, the approach presented here is indebted to a number of contributions in complex-systems theory. The well-informed reader will recognize echoes of Kauffman's hypothesized 'fourth law of thermodynamics' 21 , stating that autonomous agents maximize the average secular construction of diversity 21 , e.g. organisms tend to increase the diversity of their organization. In the context of information-processing systems, related approaches have been expressed by Linsker 22 with his "Infomax" principle, which was used to demonstrate the emergence of structure in models of neural architecture [23][24][25] , and by Ay et al. 26 with the maximization of predictive information (a relation between future states and past ones); related ideas appear in biological infotaxis 2 , sensorimotor systems 27 , and control theory 15 . Our approach is based on the idea that cognitive systems entail some mechanism of prediction 28,29 . We therefore consider a finite duration τ of the hypothetical trajectories.
The motivation for our approach is that an optimal information-processing dynamics should indicate the level of competence of the agent to respond to complex stresses and stimuli. To name only a few examples where maximization of information or entropy has been found empirically and might constitute a fundamental mechanism: maximization of information has been measured as a characteristic of human cognition 30,31 ; a pair-wise maximum entropy model accurately describes resting-state human brain activity 32 ; patients with ADHD exhibit reduced signal entropy as compared with healthy individuals 33 . Figure 1 shows a schematic of a few agents moving in a two-dimensional space, with the vertical dimension representing time. Agents interact with each other and with the environment. Each agent explores the available configuration space and acquires information about its structure, and in so doing builds its cognitive map and optimizes its behavior through responding to the surroundings. The hypothetical trajectories, exploring the available space, have an envelope characterized by a spatial extension λ and a temporal extension τ. In the overlap regions of the forward cones the agents have a probability to collide. This possibility gives rise to the effective force F(r; τ). The overlap regions and the corresponding effective forces appear when the distance between any two agents becomes shorter than the average linear length of the agents' hypothetical trajectories, which relates to the size of the cognitive map.
Our definition of the information entropy S satisfies the following criteria. First, S is based on the information content of the system, because agents retrieve and process information about the presence of other agents. Second, it does not require any specific goal or strategy, such as rules for taxis of bacteria in chemo-attractant concentration fields. Third, it obeys the laws of information theory and information processing, essential to build cognitive maps. Fourth, it obeys causality because the current state of the cognitive map influences the agent's future dynamics.
Results
We carried out simulations of N identical agents in a two-dimensional, continuous system of size L×L, where agents interact with each other via the cognitive force F and via hard-core repulsion when their distance is less than the agent's diameter σ. The agents' configurations evolve continuously from a random initial distribution towards steady-state configurations for different sizes λ of the cognitive map. We define the size of a cognitive map as the average distance between the start and end points of the hypothetical sampling trajectories, λ = (1/N Ω) Σ_n |r_n(τ) − r_n(0)|, where N Ω is the total number of hypothetical sampling trajectories. Figure 2 shows the steady-state configurations of the system as the size λ of the map increases. At low values of the cognitive map size λ with respect to the inter-agent separation, most agents are isolated and randomly distributed throughout the system (Fig. 2a). The agents try to stay as far apart as possible from each other in an attempt to maximize their available space. For a horizon of linear size λ, the available space scales as λ^2. As λ increases, we observe the spontaneous formation of short linear chains of agents (Fig. 2b). At λ = 5.6σ, the chains grow longer and outline a labyrinthine pattern in the system (Fig. 2c). The emergence of this spatial organization can be understood as a way to increase the available configuration space in the direction normal to the chain-like structures. Consider chains of typical length ℓ. Once chains are formed, the space available to their horizons scales approximately as ℓλ. Thus, the ratio of available space between chains and the disordered configuration scales approximately as ℓ/λ. The agents can therefore increase their available space by increasing ℓ, and forming long chains.
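The sketch below illustrates this definition of λ for an array of sampled trajectories; the array shapes and parameter values are illustrative only.

```python
# Illustrative: cognitive-map size lambda as the mean start-to-end distance
# over the N_Omega hypothetical sampling trajectories of one agent.
import numpy as np

def map_size(trajectories):
    # trajectories: array of shape (N_Omega, n_steps, 2)
    displacements = trajectories[:, -1, :] - trajectories[:, 0, :]
    return np.linalg.norm(displacements, axis=1).mean()

rng = np.random.default_rng(2)
trajs = np.cumsum(rng.normal(scale=0.1, size=(50, 200, 2)), axis=1)
print(map_size(trajs))
```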
Upon further increase of λ, the pattern continuously turns into a cellular structure (Fig. 2d,e), which we find well developed at λ = 10.8σ (Fig. 2f). Consider now the (entropically) advantageous strategy of agents to form chains (e.g. for λ = 5.6σ). With increasing λ agents tend to arrange such that they keep larger distances from other chains of agents within proximity. Due to the fixed filling fraction and size of the system this leads to chains connecting to joints and ultimately cellular structures, of roughly hexagonal symmetry, which provides the optimal tiling of the plane.
A similar sequence of patterns can be observed when we vary the filling fraction, φ ≡ Nπσ^2/(4L^2). The phase diagram of the system is shown in Fig. 3a. The transition line from short chains to more complex patterns is well fitted by a relation φ ∝ λ^−2, which suggests that the transition is triggered as the mean inter-agent distance becomes comparable to the cognitive map size λ.
In order to analyze the complex morphology of the patterns, we employ the anisotropy parameter α, defined in terms of the eigenvalues β 1 and β 2 of the Minkowski tensor 34,35 , which is a measure of the anisotropy of particle configurations (see Methods for more details). Figure 3a shows as a heat map the association of the phase diagram with the anisotropy α of the configurational patterns. At fixed filling fraction φ, the system exhibits the largest anisotropy α when linear chains start to connect with each other, for intermediate values of the size λ of the cognitive map. In contrast, at low λ, where agents are isolated, the system is trivially isotropic. At large values of λ, where cognitive maps significantly overlap and cellular patterns emerge (Fig. 2f), the associated anisotropy decreases to values that are however larger than in the case of low λ. This indicates that the system regains isotropy on the larger scale of the cells. Figure 3b shows the anisotropy α of the pattern. It exhibits a sharp maximum at λ ≈ 5.5σ, where the linear chains are most pronounced and the system is at the threshold of forming the labyrinthine patterns. The transition from isolated particles to cellular structures appears to be continuous, as no structural or dynamical observable shows discontinuous behavior. The transition occurs when α is considerably larger than zero, that is, for λ ≈ 2σ. Our results so far illustrate that as the cognitive maps of the agents overlap, interesting collective behavior emerges. A moment's reflection will show that agents perceive each other's presence via their respective cognitive maps, and thus information about each other's presence must be exchanged; this information flow, in turn, dynamically modifies the cognitive maps of the agents. Quantifying the information flow among agents will instruct us on the origin of the collective behavior.
This information flow can be quantified via the notion of mutual information [36][37][38][39][40][41][42] . The mutual information for two random variables a and b is given by I(a, b) = Σ_{a,b} P(a, b) ln[P(a, b)/(P(a)P(b))], where P(a) represents the probability distribution of a, and P(a, b) is the joint probability distribution. Because we are interested in isolating the causal interaction between agents that underpins the update of the cognitive maps, we consider the positions (x i (t), y i (t)) of the i-th agent at time t and compute the pairwise mutual information I_ij between the positions of agents i and j. The total mutual information is then
I = (1/N_p) Σ_{〈i,j〉} I_ij,    (3)
where 〈i, j〉 represents all pairs of agents which are within a local neighborhood of distance 4σ (larger values of this cutoff do not qualitatively change the results), and N p is the total number of neighbor pairs (i, j) included in the sum in Eq. (3). Figure 4(a) shows the dependence of the mutual information I on the size of the cognitive map λ. At very small λ, the cognitive agent system exhibits a mutual information which is nearly vanishing on account of the nearly independent motion of each agent. As the size of the cognitive map increases beyond λ ≈ 2.0σ, I increases steadily, while the system develops labyrinthine and cellular patterns. At λ ≈ 9.5σ, I reaches a plateau value, corresponding to well-developed cellular patterns. These results show that upon increasing the size of the cognitive maps, an indirect exchange of information takes place among the agents, which in turn leads to the formation of complex structures.
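A simple histogram-based (plug-in) estimator of the mutual information between two position time series is sketched below; the bin number and the toy correlated data are our choices, not the estimator settings of the paper.

```python
# Illustrative: plug-in mutual information (in nats) between two scalar
# time series (e.g. x_i(t) and x_j(t)), using simple histogram binning.
import numpy as np

def mutual_information(a, b, bins=16):
    p_ab, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = p_ab / p_ab.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return (p_ab[mask] * np.log(p_ab[mask] / (p_a @ p_b)[mask])).sum()

rng = np.random.default_rng(3)
x_i = rng.normal(size=5000)
x_j = 0.8 * x_i + 0.6 * rng.normal(size=5000)   # correlated neighbor
print(mutual_information(x_i, x_j))
```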
The information exchange among the agents is correlated with their structural transition. To quantify the degree of correlated motion, we turn to a standard tool for the analysis of the dynamic response of many-body systems 43 : the displacement covariance matrix 44 C ij ≡ 〈δr i (t)⋅δr j (t)〉 t , where δr i (t) ≡ r i (t) − 〈r i 〉, and the angle brackets with subscript t indicate an average over time; the eigenvalues and eigenvectors of C ij provide information about the coherent motion in the system. Provided that the particles have well-defined average positions 〈r i 〉 -and this is in fact the case once complex patterns emerge-the spectral properties of C ij can describe effective excitation modes 45 . Figure 5a shows the eigenvalues of C ij for δx and δy, and the comparison with the uncorrelated motion generated with a random matrix model of Gaussian-distributed displacements. The first mode of the cognitive system lies considerably above the random Gaussian model and corresponds to a large-wavelength mode propagating through the system. This collective mode is shown in Fig. 5b. This is the Goldstone mode associated with the structural transition, and it corresponds to excitations of the ordered state. The situation is reminiscent of equilibrium systems, where the spontaneous breaking of translational symmetry (Galilean invariance) is associated with the emergence of a massless Goldstone mode, propagating through the system with a scale-free correlation length. Goldstone modes have also been identified in models and observations of active collective behavior [46][47][48] . In practice, this means that in certain configurations of the system some fluctuations propagate very quickly throughout the system, and they do not depend on the system size. Figure 6 shows the spatial correlation of displacements between agents defined via the correlation function G(d) = 〈δr i (t)⋅δr j (t)〉 t, (i,j) , where d ≡ |r i (t) − r j (t)|, and the angle brackets with subscript t, (i, j) indicate an average over time and pairs of agents (i, j) separated by distance d. The correlation in the agents' motion decays with distance as a power law (the oscillations with decaying amplitudes are due to the cellular pattern of the system).
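The covariance-matrix analysis can be sketched as follows for a trajectory array; here the x and y displacements are stacked into a single matrix, a minor variant of the per-component analysis described above, and all names and array sizes are illustrative.

```python
# Illustrative: displacement covariance matrix C_ij and its eigenmodes from a
# trajectory array of shape (n_frames, n_agents, 2); variable names are ours.
import numpy as np

def covariance_modes(positions):
    mean_pos = positions.mean(axis=0)                # <r_i>
    dr = positions - mean_pos                        # delta r_i(t)
    n_frames, n_agents, _ = dr.shape
    flat = dr.reshape(n_frames, 2 * n_agents)        # stack x and y displacements
    cov = flat.T @ flat / n_frames
    evals, evecs = np.linalg.eigh(cov)
    return evals[::-1], evecs[:, ::-1]               # largest modes first

rng = np.random.default_rng(4)
traj = rng.normal(scale=0.05, size=(1000, 64, 2))    # stand-in steady-state data
evals, _ = covariance_modes(traj)
print(evals[:3])                                      # leading collective modes
```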
In summary, our study furnishes a first step towards the understanding of nonequilibrium transitions in a system of cognitive agents that dynamically interact with their environment and respond to it with cognitive competence by maximizing the information content of their cognitive maps. The transition from isolated particles to complex patterns is characterized by a different degree of overlap of the cognitive maps. The continuous change of the mutual information I as the system develops complex patterns, together with the change of the anisotropy parameter α, points at a transition in cognitive-agent systems. We have identified a Goldstone mode propagating through the system that is generated by the spontaneous symmetry breaking of the structural transition as complex patterns emerge.
Apart from its significance for investigations of complex organisms whose active response to environmental stimuli is based on various levels of cognition, from eusocial insects to mammals, our results are relevant to artificial systems like autonomous micro-robots and swarm robotics 49,50 , which are explicitly designed to autonomously mimic the collective behavior of living organisms.
Simulations. Every agent obeys the following equation of motion
m dv/dt = −γ v + F(r; τ) + h(r),    (4)
where v is the velocity of the agent, m its mass, γ the viscous drag, F(r;τ) the cognitive force, and h(r) the short-range repulsion among agents, modeled via a repulsive linear spring when |r i − r j | < σ, where σ is the hard-core diameter of the agents.
As described above, the calculation of the cognitive force F(r;τ) [Eq. (2)] requires calculating a set of hypothetical sampling trajectories {Γ τ (t)}. The simulation algorithm is based on the following two steps: (i) generation of the hypothetical trajectories, resulting in the construction of the cognitive map, and computation of the cognitive force F(r;τ) (see below for details); (ii) update of the agent's position according to Eq. (4). During the generation of the hypothetical trajectories, all agents in the system remain fixed in their current positions. The dynamics of the system evolve by repeating the two steps above.
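A compressed, illustrative version of this two-step loop for a single agent is sketched below; all function names are our placeholders, hard-core repulsion and reflections of the sampled walks are omitted, and a production implementation would use many more walks (or variance reduction) to tame the sampling noise in the finite-difference gradient.

```python
# Illustrative two-step loop: (i) build the cognitive map and force,
# (ii) advance the agent; names and parameters are placeholders, and
# hard-core repulsion/reflection of the sampled walks is omitted.
import numpy as np
rng = np.random.default_rng(5)

def sample_hypothetical_walks(r0, n_walks=50, n_steps=200, step=0.1):
    steps = rng.normal(scale=step, size=(n_walks, n_steps, 2))
    return r0 + np.cumsum(steps, axis=1)

def entropy_estimate(r0):
    ends = sample_hypothetical_walks(r0)[:, -1, :]
    hist, _, _ = np.histogram2d(ends[:, 0], ends[:, 1], bins=8)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def step_agent(r, v, theta=1.0, gamma=1.0, m=1.0, dt=0.01, eps=0.05):
    # noisy finite-difference gradient of the sampled entropy (Eq. (2))
    grad = np.array([(entropy_estimate(r + d) - entropy_estimate(r - d)) / (2 * eps)
                     for d in (np.array([eps, 0.0]), np.array([0.0, eps]))])
    force = theta * grad
    v = v + dt * (-gamma * v + force) / m     # Euler update of Eq. (4)
    return r + dt * v, v

r, v = np.zeros(2), np.zeros(2)
r, v = step_agent(r, v)
print(r, v)
```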
The agents are initially placed randomly with a uniform distribution within the system and without any overlap of the agents' hard cores. The system size is, unless otherwise specified, fixed at L = 80σ.
Construction of the cognitive map. The calculation of P(Γ_τ(t)|r_0) is performed by generating hypothetical sampling trajectories, each of which represents a virtual evolution of the agent during the time interval [0, τ], with the constraints fixed at the present configuration and independent of time. The hypothetical trajectories are generated using Langevin dynamics,

m dv/dt = −γ v + h(r) + ξ(t),

where v, m, γ, and h(r) have the same meaning as in Eq. (4) and ξ(t) is a random noise with zero mean and 〈ξ_i(t)ξ_j(t′)〉 = 2γ k_B T δ_ij δ(t − t′).
Any interaction of a hypothetical sampling trajectory with another agent is hard-core repulsive, that is, the trajectory is reflected elastically by the other agent.
Derivation of the cognitive force.
Here we show the derivation of an expression for the entropic force used to calculate the system's dynamics; the derivation is adapted and simplified from [15]. We start from Eqs. (1) and (2) and consider the gradient in (2) with respect to the position-space coordinates at the present time, r(t = 0) = r(0). We can assume deterministic behavior within one small sub-interval [t, t + ε]. Therefore a conditional path probability can be decomposed into the probabilities of its sub-intervals,

Pr(Γ_τ | r(0)) = Pr(Γ_ε | r(τ − ε)) ⋯ Pr(Γ_ε | r(0)),

where Γ_ε denotes a path of length ε and τ = Nε. Accordingly, we can express the gradient of the path probability in terms of the gradients of these factors. Since Γ_ε can be seen as the path from r(0) to r(ε) in one step, the gradient of the probability of jumping from r(0) to r(ε) with respect to r(0) is equal to the negative gradient of the same probability with respect to r(ε):

∇_{r(0)} Pr(r(ε) | r(0)) = −∇_{r(ε)} Pr(r(ε) | r(0)).

Taylor expanding the position r(ε) then yields the desired expression for the force. To estimate the probabilities Pr(Γ_τ | r(0)) we use N_Ω Brownian trajectories exploring the available space for a finite time (horizon) τ. Every sampling trajectory starts from the current system state r(0), and a uniform probability is assigned to all paths within a neighborhood of a sampled path, based on the volume Ω_n it explores. A second-rank symmetric tensor provides a measure of anisotropic morphologies [34,35]: it is built from the local curvature G_2 = (κ_1 + κ_2)/2, the position vector r, and the normal vector n to the surface ∂C of a body, via the symmetric tensor product of vectors; the ratio of its eigenvalues gives the anisotropy parameter used to quantify the anisotropy of the pattern. | 5,184.2 | 2019-08-28T00:00:00.000 | [
"Computer Science"
] |
Stochastic approach for radionuclides quantification
Gamma spectrometry is a passive non-destructive assay used to quantify radionuclides present in more or less complex objects. Basic methods using an empirical calibration against a standard in order to quantify the activity of nuclear materials, by determining the calibration coefficient, are useless for non-reproducible, complex, and unique nuclear objects such as waste packages. Package specifications such as composition or geometry change from one package to another and lead to a high variability of objects. The current quantification process uses numerical modelling of the measured scene with the few available data, such as geometry or composition. These data are density, material, screens, geometric shape, matrix composition, and matrix and source distribution. Some of them depend strongly on the knowledge of the package data and on the operator's background. The French Commissariat à l’Energie Atomique (CEA) is developing a new methodology to quantify nuclear materials in waste packages and waste drums without operator adjustment and without knowledge of the internal package configuration. This method combines a global stochastic approach, which uses, among other tools, surrogate models to simulate the gamma attenuation behaviour; a Bayesian approach, which considers conditional probability densities of the problem inputs; and Markov Chain Monte Carlo (MCMC) algorithms, which solve the inverse problem, together with the gamma-ray emission spectrum of the radionuclide and the outside dimensions of the objects of interest. The methodology is tested by quantifying actinide activity in standard objects with different matrices, compositions, and source configurations, i.e., different actinide masses, locations, and distributions. Activity uncertainties are taken into account by this adjustment methodology.
I. INTRODUCTION
The quantification of nuclear materials is crucial in many branches of the nuclear industry, such as nuclear power plants, nuclear criticality safety, and nuclear decommissioning. Many nuclear detection techniques, such as neutron detection methods, gamma spectrometry, and alpha-particle detection methods, are used to accurately quantify the radionuclide masses included in a large number of more or less complex objects. Well-known detection systems using gamma spectrometry principles are germanium gamma-ray detectors, and more precisely the High-Purity Germanium (HPGe) detector [1], [2]. Compared to NaI detectors, the dominant feature of HPGe detectors is their excellent energy resolution of around 1 keV, very useful for the identification and quantification of radionuclides such as 239Pu or 241Am. A numerical method [3] has been developed to propose a reliable and accurate characterization of HPGe detectors by combining 3D Monte Carlo particle transport simulation codes such as MCNP [4] with applied mathematical tools. This detector numerical model is built in order to reduce the global uncertainties of the final quantification results by increasing the knowledge of the detection capability of HPGe detectors.
To identify and quantify the gamma-emitting nuclides included in different kinds of objects, such as nuclear waste packages or glove boxes, one needs to calculate the activity A appearing in (1) [5], which leads to the mass m of the radionuclide of interest by (2):

A = S(E) / (t · I_γ(E) · ε(E)),    (1)

m = A / A_m,    (2)

where:
• E: energy (MeV)
• A: activity of the measured object (Bq)
• S(E): net counting of the full-energy peak at energy E (counts)
• I_γ(E): branching ratio of the radionuclide at energy E
• t: acquisition duration (s)
• ε(E): absolute efficiency calibration coefficient at energy E, also called the attenuation law
• m: mass of interest included in the measured object (g)
• A_m: mass (specific) activity of the radionuclide of interest (Bq/g)

The ε(E) coefficient is related to the capability of a measured object to attenuate the gamma signal coming from the nuclear material of interest. It depends on object features such as internal layout, screens, density, position of the gamma source relative to the HPGe detector, internal materials, and energy. Although the extracted S(E) peak areas can be determined more or less easily and accurately [6], the calculation of ε(E) is a difficult process for objects that are complex in terms of internal layout and composition. Current and common calculation methods use Monte Carlo simulation codes such as MCNP to model the measured scene and approximate the real values of the ε(E) coefficients as closely as possible. In recent years, new nuclear quantification numerical methods relying on applied mathematics and stochastic approaches [3] have been proposed to emulate outcomes of interest such as ε(E), in order to solve the inverse quantification problem and estimate the mass distribution of the radionuclide of interest.
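As a small numerical illustration of Eqs. (1)-(2) as reconstructed above, the snippet below converts a net peak area into an activity and then into a mass; the peak area, efficiency, and the 414 keV line parameters of 239Pu are placeholder values, not data from the paper.

```python
def activity_bq(S, I_gamma, t_s, eps):
    """Eq. (1): A = S(E) / (t * I_gamma(E) * eps(E))."""
    return S / (t_s * I_gamma * eps)

def mass_g(A_bq, A_m_bq_per_g):
    """Eq. (2): m = A / A_m."""
    return A_bq / A_m_bq_per_g

# Illustrative numbers (not from the paper) for a 239Pu gamma line near 414 keV:
A = activity_bq(S=1.2e4, I_gamma=1.47e-3, t_s=3600, eps=2.5e-4)
m = mass_g(A, A_m_bq_per_g=2.3e9)   # specific activity of 239Pu is roughly 2.3e9 Bq/g
```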
Section II presents the new approach to mass quantification. The quantification process considered here aims to estimate the probability density function (PDF) of the mass of the radionuclide of interest, denoted m. Let us suppose the radionuclide of interest is a multi-gamma emitter. Hence, via specific software applied to gamma spectrum analysis, it is possible to obtain extracted peak areas from the measured gamma spectrum. Let us suppose there are N extracted areas gathered in the vector S = {S(E_n), n ∈ ⟦1, N⟧}, corresponding to the energies (E_n)_{n∈⟦1,N⟧} of the multi-gamma-emitter radionuclide, and that X ∈ Ξ^D represents a D-dimensional vector of inputs for the estimation of the absolute efficiency calibration coefficient vector ε(X). Hence, (1) and (2) give

S(E_n) = m · A_m · t · I_γ(E_n) · ε_n(X), n ∈ ⟦1, N⟧.

Let us suppose that Y_obs is the observation vector defined in (3):

Y_obs = S + ξ.    (3)

In (3), ξ represents the observation uncertainty vector of Y_obs. All of its components follow normal distributions centered at zero, with σ_obs = (σ_obs,n)_{n∈⟦1,N⟧} as the standard deviation vector. Considering Y_obs, m, ε(X), and X as random variables, or vectors of random variables, the marginal PDF of the mass m given Y_obs, written f(m|Y_obs), is given by Bayes' theorem written in terms of PDFs [1]. The probability density f(m|Y_obs) depends on the joint PDF f(m, Y_obs) and on f_{Y_obs}(Y_obs), the marginal PDF of Y_obs [7]:

f(m|Y_obs) = f(m, Y_obs) / f_{Y_obs}(Y_obs) = f(Y_obs|m) π(m) / f_{Y_obs}(Y_obs).    (4)

The distribution π(m) is called the prior distribution [7] and is based on hypotheses, experience, or subjective opinion about the mass m. Adding the X dependencies leads to (5):

f(m|Y_obs) = ∫ f(Y_obs|m, X) π(m) π(X) dX / f_{Y_obs}(Y_obs).    (5)
III. HYPOTHESIS
The objective is to obtain an estimate of f(m|Y_obs), the probability density of the mass m. Let us state some hypotheses about the PDFs needed to provide the mass distribution:
• Y_obs is a vector composed of N independent random variables Y_obs,n following normal distributions, with μ_n(m, X) and σ_n(m, X) respectively the mean and standard deviation obtained after the propagation of all uncertainty sources.
• Let us suppose there is no correlation between m and X, nor between the components of X. Hence, the efficiency ε(X) does not depend on the radionuclide mass m. In this case, with X of dimension D composed of (X_d)_{d∈⟦1,D⟧}, the prior distributions factorize as π(m, X) = π(m) ∏_d π(X_d).
• Each prior distribution provides more or less information about its own random variable [7]. A well-known variable is associated with a tight normal distribution, which conveys information about its behaviour. A poorly known variable, about which no real information is available, such as the mass m or the density ρ of the object, is associated with a uniform distribution defined on its own domain. This approach allows us to control the real knowledge about the measured object and to test different hypotheses on the unknown variables.
IV. MCMC SAMPLING AND SURROGATE MODELS
The calculation of the conditional PDF f(m|Y_obs) of the mass m requires evaluating the integral over X appearing in (5). Nevertheless, X is composed of unknown variables, such as the object density ρ, and obtaining the integral, and hence f(m|Y_obs), directly becomes very hard or even impossible. A solution to work around this problem is to sample the conditional PDF of the mass m using Markov Chain Monte Carlo (MCMC) methods. The Metropolis-Hastings algorithm allows us to sample the PDFs of the mass m and of the components of X using Markov chain theory [7]. This method requires several tens of thousands of calls to the simulation code which provides the absolute efficiency calibration coefficient vector ε(X). Because ε(X) cannot be evaluated rapidly with the particle transport code MCNP, a Kriging surrogate model [8], [9] is proposed to emulate it. A surrogate model [9] requires a limited number of calls to the code, gathered in a design of experiments (DoE), in order first to build an outcome-of-interest function M accurately and quickly, and finally to predict new values of interest. A well-known and well-suited DoE building technique is Latin Hypercube Sampling (LHS) [10]. This kind of DoE gives very interesting space-filling properties by maximizing different criteria such as the minimax one.
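The sketch below illustrates how such a Metropolis-Hastings sampler might look for a single unknown input (the density ρ) alongside the mass m, with a stand-in function playing the role of the Kriging emulator of ε(X); the priors and the Gaussian likelihood follow the hypotheses of Section III, and every numerical value, name, and proposal width is illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate_eps(rho, energies):
    """Placeholder emulator: efficiency decreasing with density (stand-in for Kriging)."""
    return 1e-3 * np.exp(-0.5 * rho) * (energies / energies.max())

def log_posterior(m, rho, y_obs, energies, I_gamma, t_s, A_m, sigma_obs):
    if m <= 0 or not (0.1 < rho < 5.0):          # uniform, non-informative priors
        return -np.inf
    mu = m * A_m * I_gamma * t_s * surrogate_eps(rho, energies)   # predicted peak areas
    return -0.5 * np.sum(((y_obs - mu) / sigma_obs) ** 2)          # Gaussian likelihood

def metropolis_hastings(y_obs, energies, I_gamma, t_s, A_m, sigma_obs, n_steps=20000):
    m, rho = 1.0, 1.0                             # arbitrary starting point
    lp = log_posterior(m, rho, y_obs, energies, I_gamma, t_s, A_m, sigma_obs)
    samples = []
    for _ in range(n_steps):
        m_new = m + rng.normal(0, 0.05)           # random-walk proposals
        rho_new = rho + rng.normal(0, 0.05)
        lp_new = log_posterior(m_new, rho_new, y_obs, energies, I_gamma, t_s, A_m, sigma_obs)
        if np.log(rng.uniform()) < lp_new - lp:   # accept/reject step
            m, rho, lp = m_new, rho_new, lp_new
        samples.append((m, rho))
    return np.array(samples)                      # burn-in to be discarded by the caller
```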
Due to the Kriging emulation method used in its calculation process, the random variable ε(X) can be written as

ε(X) = μ_M(X) + e_model(X),

with μ_M(X) = (μ_M,n(X))_{n∈⟦1,N⟧} the predictive mean vector of ε(X) and σ_M(X) = (σ_M,n(X))_{n∈⟦1,N⟧} its standard deviation vector, both coming from the Kriging regression surrogate model M of (8). The term e_model(X) represents the model error. This uncertainty is assumed to be insignificant but is kept in the calculation process. Note that the uncertainties on the MCNP values needed for the construction of the Kriging model are taken into account during its regression process; consequently, this error does not appear in (11).
Hence, (10)-(11) allow us to express the PDF of Y_obs from the PDFs of the Y_obs,n (12)-(13). Finally, considering (5), (7), and (11)-(13), the conditional PDF of the mass m can be expressed as in (14).

VI. ABOUT THE ξ RANDOM VARIABLE

Henceforth, the aim is to consider the PDF of the mass m and to sample it with MCMC methods such as the Metropolis-Hastings algorithm. Nevertheless, a remaining variable has to be dealt with. Because of the difficulty of accurately quantifying the standard deviation vector σ_obs appearing in (3), which depends on the spectrum area extraction method, two hypotheses (H_1 and H_2) about it are tested and compared. The first hypothesis, H_1, sets the σ_obs vector proportional to the observed values (15). In the second one, H_2, σ_obs is constant and equal to an arbitrary value σ for all N components (16).

In order to explore the capabilities of the nuclear material quantification method presented in Sections II to VI, a design of experiments related to the measurement of different more or less complex objects has been built. It concerns four gamma spectrometry experiments, called E_1, E_2, E_3, and CO. All of them consider the measurement of standard cylindrical objects used in usual processes of nuclear waste package management, containing various quantities of gamma-emitting 239Pu. The four objects are filled with different attenuation materials and densities. The experiment features are gathered in Table 1.
A. Experimental protocol
The vector X used for the construction of the detection efficiency surrogate model ε(X) (Section IV) consists of the gamma-ray energy E, the radius r and the height h of the cylinder, the distance L from the detector to the object, the density ρ, and the mixture of three materials: iron, vinyl, and plutonium (Fig. 1). Each of these is defined over its own range in the LHS design of experiments used, for instance h ∈ [10, 110] cm. Experiments E1 and E3 represent low-attenuation cases because of their respective low apparent densities and their vinyl composition. Their difference lies in the definition of the source term. The plutonium source of E1 is a point source placed in front of the detector at the center of the object. On the other hand, the plutonium mass of E3 is divided into 16 parts which are distributed homogeneously throughout the cylindrical object. For these two experiments, the plutonium is in the form of a liquid solution. The experiments E2 and CO are more exotic. They represent higher-attenuation cases because of their apparent bulk densities and their partially metallic composition. Although their respective source terms are located at a single point and therefore considered as point sources, the plutonium here is in metallic form and present in larger quantities. The effect of self-absorption is theoretically more pronounced for these two experimental cases than for the two previous ones.
A single characterized HPGe detector is considered for all experiments. Its characterization has been built with a numerical method developed by N. Guillot and N. Saurel [3]. The intrinsic calibration efficiency coefficient of the HPGe detector of interest and its uncertainty are taken into account in the proposed quantification method. As explained in Section IV, the conditional PDF of the mass of interest, f(m|Y_obs) (14), is estimated by Monte Carlo sampling using the Metropolis-Hastings algorithm.
As explained in Section III, each parameter of the problem has its a priori probability density. In the present case, since the radius of the cylinder, its height, and the distance from detector to object are available and known, they possess a priori Gaussian probability densities centered on their real values with low variances. However, the mass of plutonium to be estimated and the bulk density possess uniform, noninformative probability densities. This means that one does not have a priori information on these values.
In order to obtain ergodic chains [7], thirty-five Markov chains are generated in parallel. For each of them, the initial values of the mass m and of the X input components are randomly drawn from their prior distributions described in Section III. Chain burn-ins [7] are not taken into account in the analysis of the estimated mass distribution. Both the H_1 and H_2 hypotheses presented in Section VI have been tested with different values of their parameters.
B. Quantification results
For each of the four experiments, the thirty-five Markov chains are gathered to form the estimated conditional mass PDF. These distributions appear in Fig. 2 together with their empirical densities. For these figures, the hypothesis H_1 is used with its parameter equal to 0.05, i.e., α = 0.05 (15). First, it is noted that the expected actinide masses of Table 1 are included in each of the distributions. For the E_2 and CO experiments, the mass PDFs are formed by a single distribution whose modal value is close to the magnitude of the expected mass. Gaussian, gamma, and lognormal fits were attempted, but without success. Moreover, the variance of the distribution of experiment E_2 is greater than that of CO. The experiments E_1 and E_3, for their part, yield a bimodal mass distribution. In both cases, one of the two modal values is close to the expected mass value and has the greater probability amplitude. Several mixture fits were tested, but none showed interesting results.
Second, the influence of the hypotheses H_1 and H_2 on the shape of the mass distributions has been tested by varying their parameters. Both hypotheses show similar results for the E_2 and CO experiments: their variances increase as the uncertainty associated with H_1 and H_2 increases. Nevertheless, for the experiments E_1 and E_3, it is noted that the ratio of the probability amplitudes of their respective modes decreases as the uncertainty of the observations increases. In other words, the masses estimated from the two modes are more or less probable depending on the value of the observation uncertainty, as shown in Fig. 3.
C. Discussion
The expected mass values of the four experiments are included in the estimated probability densities and are close to the modal values. This shows the overall consistency of the method. However, there is a significant systematic bias of about 15% between expected and estimated values. This can be explained by the limited predictive ability of the surrogate model M of the ε(X) efficiency. An optimization of this model is necessary and in progress. This quantification method makes it possible to take into account all sources of uncertainty and to propagate them easily. Other sources of uncertainty may also be added. The optimization of the efficiency model proposed above may also reduce the prediction variance resulting from the Kriging emulation technique.
Considering the experiment E_1 under the hypothesis H_2 with σ equal to 10, shown in Fig. 3a, the variance of the estimated probability density of the mass is almost entirely due to the variance of the efficiency model. Indeed, given the values of S_obs, equation (17) indicates that in the case where σ is equal to 10, the observation uncertainty can be considered negligible with respect to the observation. The increase in the variance visible in Fig. 3 is therefore due to the increase in the observation uncertainty under H_2, i.e., σ. In case E_1, a high confidence in the observed values, i.e., with σ equal to 10, makes it possible to obtain a mode approximately 6 times more likely than the second one. On the other hand, in the absence of precision on the observations, i.e., with σ equal to 750, the modes estimated by the method are almost equiprobable. This case reflects the importance of accurately estimating the uncertainty of the observations, and therefore of the areas extracted from the gamma spectrum.
VIII. CONCLUSION
The quantification of nuclear material is essential in the nuclear industry. It allows, among other things, compliance with the legal regulations relating to the management of nuclear waste. The quantification method developed in this study uses a stochastic approach to the problem in a Bayesian framework. This makes it possible to consider all sources of uncertainty and to propagate them easily. It also proposes a resolution of the inverse quantification problem using an MCMC algorithm. The method was tested on four experimental cases of 239Pu measurement. These first results are coherent and point toward a robust quantification method. The method relies on the emulation of the detection efficiency of the measurement scene. Despite good prediction capabilities, optimization of this model in terms of accuracy and variance reduction is still needed.
V. CALCULATION OF THE f(m|Y_obs) PROBABILITY DENSITY

The objective is to obtain the conditional PDF f(m|Y_obs) of the mass m. Considering (5), (7), and (8) and the hypotheses stated above, the calculation of f(Y_obs|m) is required and is developed as

f(Y_obs|m) = ∏_{n=1}^{N} (1 / (σ_n √(2π))) exp(−(μ_obs,n − μ_n)² / (2σ_n²)),

where (μ_n)_{n∈⟦1,N⟧} and (σ_n)_{n∈⟦1,N⟧} are determined by considering and propagating all uncertainty sources from (3). Here, the vector (μ_obs,n)_{n∈⟦1,N⟧} represents the observation inputs coming from the extracted gamma spectrum. Let us begin by taking (3).
Fig. 1. Model used to build the efficiency emulator.
Fig. 2. Predicted 239Pu masses (g) related to the four experiments under the H_1 hypothesis with α equal to 0.05. Empirical densities appear in red and expected masses in blue.
Fig. 3. Predicted 239Pu masses (g) related to the E_1 experiment under the H_2 hypothesis. Empirical densities appear in red and expected masses in blue. | 4,299.4 | 2018-01-01T00:00:00.000 | [
"Physics",
"Environmental Science",
"Engineering"
] |
Colossal magnetoresistance and anomalous Hall effect in nonmagnetic semiconductors
Colossal magnetoresistance (CMR) in nonmagnetic semiconductors and magnetic materials has been investigated as a function of magnetic field, charge carrier concentration, and temperature. Both types of materials demonstrate a qualitative and quantitative coincidence of the CMR dependence on magnetic field, charge carrier concentration, and temperature. The findings support the interpretation of CMR in the framework of the Excitonic Insulator (EI) model and of the transition of the insulating EI phase to a conducting state under magnetic field for all types of materials under study. It is suggested that the Jahn-Teller distortion caused by magnetic ions and by external uniaxial stress could initiate the formation of the EI phase.
Introduction
The effect of a specific resistance decrease in magnetic field, known as negative magnetoresistance (NM), was first observed by Lord Kelvin in ferromagnetic metals in 1856 [1]. Still, the numerous attempts to reveal the nature of the NM effect in most types of solid-state materials, such as semiconductors, superconductors, perovskites, and graphite, remain debatable. Nowadays, the exploration of the NM phenomenon in these materials has been boosted by practical interest, as their characteristics meet the demands of hard disk drive technology, spintronics, and magnetoelectronics [2].
Special attention has been attracted by the NM effect in magnetic semiconductors (MS) such as n-Gd_{3−x}v_xS_4 [3], diluted magnetic semiconductors (DMS) such as Hg_{1−x}Mn_xTe [4], and manganite perovskites such as Pr_{0.7}Ca_{0.3}MnO_3 [5], all of which demonstrate a colossal resistivity decrease in an external magnetic field, an effect known as colossal magnetoresistance (CMR).
The physical models describing CMR, i.e., the magnetic polaron and phase separation models [6,7], are based on the assumption that MS and manganite perovskites are magnetic materials, in which charge carrier transport depends on the interaction with the magnetic moments of the host magnetic ions.
However, this approach completely ignores the fact that CMR has also been experimentally observed in nonmagnetic semiconductors (NMS), i.e., p-Ge [8,9] and p-InSb [8,10], where a resistivity decrease of 10²-10⁵ times was demonstrated in a magnetic field B ≈ 3 T at a temperature T ≈ 1.5 K, that is, under experimental conditions similar to those applied to MS, DMS, and manganite perovskites in CMR studies.
In this paper we conduct a comparative study based on our magnetotransport experimental findings for nonmagnetic semiconductors and on the results of studies of magnetic CMR materials described in the literature for similar experimental conditions. This approach can help to expand the understanding of CMR and establish links between these two groups of solid-state materials.
CMR comparison in nonmagnetic and magnetic materials
In this section we compare our experimental results on the CMR effect in NMS as a function of magnetic field, charge carrier concentration, and temperature with the data adopted from the literature for DMS, MS, and manganite perovskites [3-5,12].
Experimental
p-InSb(Ge), p-InSb(Mn), and p-Ge(Ga) crystals were grown by the Czochralski method. Magnetoresistance, the Hall effect, and dc conductivity were measured on Hall-bar samples of 7 × 1.5 × 1 mm³. The materials were studied under low and high magnetic fields (B = 0-15 T), temperatures T = 40 mK-4 K, and charge carrier concentrations n = 10¹⁶-10¹⁸ cm⁻³. Measurements at ultralow temperatures were carried out in a ³He-⁴He dilution refrigerator with a 15 T Oxford Instruments superconducting magnet at NHMFL, Florida. Measurements in the temperature range T = 300-1.5 K were carried out on the 15 T Bitter magnet of the HMFL at the Technical University of Braunschweig, Germany.
CMR dependence on magnetic field
The dependence of the CMR value on magnetic field strength in NMS, DMS, MS, and manganite perovskite is demonstrated in Figure 1. The curves represent data for the following CMR materials: (a) NMS: uniaxially stressed InSb crystal doped with Ge at 1.84 K, InSb crystal doped with Mn at 40 mK, and Ge crystal doped with mercury at T = 35 K, adopted from [9]; (b) DMS: p-Hg_{1−x}Mn_xTe at 1.7 K [4]; (c) MS: n-Gd_{3−x}v_xS_4 at 4.2 K [3]; (d) manganite perovskite Pr_{0.7}Ca_{0.3}MnO_3 at 100 K [5]. It can be seen from Figure 1 that the ρ_B/ρ_{B=0} values in all observed nonmagnetic and magnetic CMR materials are comparable and demonstrate a similar tendency to decrease in a low magnetic field (B < 1 T).
It is important to note that all observed nonmagnetic semiconductors demonstrate CMR at relatively low temperatures, below 4 K, with the exception of p-Ge(Hg), which demonstrates the same phenomenon at T = 35 K. It should also be stressed that all CMR nonmagnetic semiconductors are of p-type, whereas magnetic semiconductors are of both p- and n-type.
In a high magnetic field (B > 4 T), p-Hg_{1−x}Mn_xTe and p-InSb(Mn) demonstrate an increase of resistivity with increasing magnetic field, i.e., positive magnetoresistance.
Charge carrier concentration influence on CMR
Figure 2 demonstrates the dependence of CMR on charge carrier concentration (n) in the following materials: NMS (Ge(Ga), InSb(Ge), and InSb(Mn)); DMS (p-Hg_{1−x}Mn_xTe [4]); and MS (n-Gd_{3−x}v_xS_4 [3] and EuSe [11]). It can be seen from Figure 2 that all studied CMR materials demonstrate similar behavior as a function of charge carrier concentration. At charge carrier concentrations below the critical concentration of the metal-insulator transition (n < n_cr), the ratio ρ_B/ρ_{B=0} first declines with decreasing charge carrier concentration, reaches a minimum, and finally increases with further decrease of the charge carrier concentration.
CMR temperature dependence
The CMR temperature dependence in nonmagnetic and magnetic materials is demonstrated in Figure 3. It can be seen that ρ_B/ρ_{B=0} declines exponentially on cooling in all studied types of semiconductors. The difference is that in the nonmagnetic semiconductor InSb(Mn), at a manganese concentration N_Mn = 1.7 × 10¹⁷ cm⁻³, the minimal value ρ_B/ρ_{B=0} = 10⁻⁴ is reached in the millikelvin temperature range, whereas in the diluted magnetic semiconductor p-Hg_{1−x}Mn_xTe [4] the same CMR effect is observed at a higher temperature level, above 2 K. Fig. 3. CMR temperature dependence for nonmagnetic p-InSb and p-Ge crystal semiconductors (dots on solid lines: our experimental results) and magnetic materials (dashed lines: results adopted from Refs. [3-5,9]).
The Hall effect
To obtain information concerning the charge carrier contribution to the CMR effect, we compare the Hall effect in NMS and DMS. Figure 4 depicts the dependence of the Hall constant on magnetic field in a nonmagnetic InSb crystal doped with Mn at N_Mn = 1.5 × 10¹⁷ cm⁻³ and in the diluted magnetic semiconductor p-Hg_{1−x}Mn_xTe [12], where the concentration of Mn is 10⁴ times higher. It is clearly seen that both semiconductors demonstrate similar features in magnetic field, i.e., a negative Hall constant in a low magnetic field and a positive Hall constant in the high magnetic field range. The difference between the observed NMS and DMS is that in the InSb(Mn) crystal the Hall constant sign inversion occurs at B ≈ 1 T, whereas in p-Hg_{1−x}Mn_xTe the inversion is revealed at B ≈ 5 T. Such Hall constant behavior demonstrates the same tendency of sign inversion in both types of semiconductors. Here, the inversion at different magnetic field levels could be explained by the large difference in the effective masses of electrons and holes, which is supported by the fact that InSb crystals have a large hole-to-electron mass ratio (m_h/m_e = 0.5/0.01 ≈ 50) [14]. Fig. 4. The Hall constant vs magnetic field in a nonmagnetic p-InSb(Mn) crystal (solid line: our results) and in DMS p-Hg_{1−x}Mn_xTe (dashed line: results adopted from [12]).
Discussion
Having summarized the qualitative and quantitative coincidences of CMR and of the Hall constant revealed above in nonmagnetic semiconductors and magnetic materials, we have come to the following conclusions:
1. It can be suggested that the content of magnetic ions is not the only factor influencing CMR, as can be seen from the comparison of CMR in nonmagnetic InSb doped with manganese at N_Mn ≈ 10¹⁷ cm⁻³ and in p-Hg_{1−x}Mn_xTe or the magnetic perovskite Pr_{0.7}Ca_{0.3}MnO_3, where the manganese concentration is 10⁵-10⁶ times higher.
2. It is also questionable whether the magnetic type of the CMR material, i.e., ferromagnetic or antiferromagnetic, can be considered a factor influencing the NM value.
3. The fact that the NM value increases on the insulator side of the MIT supports the key role of the charge carrier concentration and of the interaction of these charge carriers in producing the NM effect.
In this view, the concept of the excitonic insulator model (EI model) seems universal for the description of the CMR phenomenon in both magnetic and nonmagnetic materials. It has been suggested [15] that if the electron-hole binding energy E_B in semiconductors exceeds the gap energy E_g, a new phase, called the excitonic insulator, is formed. The EI model was first introduced for the interpretation of resistivity anomalies in TmSe_{1−x}Te_x [16]. The peak of resistivity with increasing pressure in this material was explained as a sequence of pressure-induced transitions from a normal semiconductor to an excitonic insulator, and then to a normal semimetal. We suggest that under an external magnetic field the excitons in CMR materials dissociate, and we observe a transition from the EI phase to a normal semiconductor, i.e., the CMR effect.
We propose that, in the framework of the EI model, the EI phase is formed in CMR materials as the result of a Jahn-Teller distortion. In DMS and MS the Jahn-Teller (JT) distortion is caused by the host magnetic ions, whereas in NMS we see several possibilities for introducing a JT distortion. It can be supposed that in p-InSb(Mn) crystals the JT distortion can be caused by manganese impurity ions, while in p-Ge(Hg) the JT distortion could be induced by the deep impurity center of mercury. As for uniaxially stressed p-InSb(Ge) and p-Ge(Ga), i.e., semiconductors doped with shallow impurities, the JT distortion could appear when uniaxial stress is applied to the crystals [17].
In spite of the conflicting theories concerning the nature of the excitonic insulator, its application to the description of the CMR phenomenon seems to be an important point in the explanation of this phenomenon in both magnetic and nonmagnetic materials.
Colossal magnetoresistance in nonmagnetic semiconductors and magnetic materials observed under magnetic field depends on charge carrier concentration and temperature. The revealed qualitative and quantitative coincidence of the CMR behaviour in magnetic and nonmagnetic materials highlights the common nature of the CMR phenomenon in these types of materials. The interpretation of the insulator-metal transition under magnetic field in the framework of the Excitonic Insulator phase model provides the possibility of integrating the description of the conductivity transition from the EI phase to the conducting state for both magnetic and nonmagnetic materials into one model. | 2,396.8 | 2018-07-01T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Real-World Spatial Synchronization of Event-CMOS Cameras through Deep Learning: A Novel CNN-DGCNN Approach
This paper presents a new deep-learning architecture designed to enhance the spatial synchronization between CMOS and event cameras by harnessing their complementary characteristics. While CMOS cameras produce high-quality imagery, they struggle in rapidly changing environments—a limitation that event cameras overcome due to their superior temporal resolution and motion clarity. However, effective integration of these two technologies relies on achieving precise spatial alignment, a challenge unaddressed by current algorithms. Our architecture leverages a dynamic graph convolutional neural network (DGCNN) to process event data directly, improving synchronization accuracy. We found that synchronization precision strongly correlates with the spatial concentration and density of events, with denser distributions yielding better alignment results. Our empirical results demonstrate that areas with denser event clusters enhance calibration accuracy, with calibration errors increasing in more uniformly distributed event scenarios. This research pioneers scene-based synchronization between CMOS and event cameras, paving the way for advancements in mixed-modality visual systems. The implications are significant for applications requiring detailed visual and temporal information, setting new directions for the future of visual perception technologies.
Introduction
In the rapidly evolving field of imaging technology, integrating event-based cameras with traditional CMOS (complementary metal-oxide-semiconductor) sensors represents a significant research frontier with vast application potential. Traditional CMOS sensors have been the backbone of digital imaging for decades, providing high-quality, frame-based captures. However, they face challenges such as substantial power requirements, limited dynamic range, and susceptibility to motion blur in rapid-moving scenarios [1,2]. Conversely, event cameras, a relatively novel innovation, detect changes in light intensity at each pixel independently and asynchronously, only recording changes. This operation leads to lower power consumption, a much higher dynamic range, and the capability to capture fast-moving objects without blur [2]. Despite these advantages, previous studies on event cameras have encountered significant challenges. Traditional methods often aggregate events into grid-based representations for processing with standard vision pipelines, which can lead to inefficiencies and loss of information [3]. Additionally, there is a notable lack of a general framework to convert event streams into grid-based representations that can be learned end-to-end with task-specific networks [4]. Moreover, while event cameras have shown effectiveness for specific applications such as machine fault diagnosis and driving behavior characterization, these studies have highlighted difficulties in handling environmental noise and achieving high precision in real-world conditions [5].
The output from event cameras comprises a stream of "events" instead of conventional images. Each event represents a change in intensity at a pixel level, occurring independently of a frame structure. This output format can be challenging to integrate into many applications that traditionally rely on continuous frame-based video data, making it difficult to apply standard video processing techniques [3,6]. The potential of combining these two technologies lies in leveraging the continuous, high-resolution imagery of CMOS cameras with the high temporal resolution and motion sensitivity of event cameras. This hybrid approach could significantly enhance fields that require detailed, dynamic visual information, such as robotics [7-9]. While CMOS cameras provide essential high-resolution images for navigation and spotting obstacles, they can falter under rapidly changing light conditions or during high-speed situations. Event cameras, known for their quick response times and minimal delay, excel at capturing sudden changes in a scene, thus reducing motion blur.
Recent advancements in sensor fusion and visual-inertial odometry (VIO) have significantly enhanced the capabilities of autonomous vehicles. A notable approach integrates data from traditional CMOS cameras (frames) and event cameras (events) along with inertial measurements, creating a robust adaptive system. This system excels in complex lighting and high-speed conditions, significantly improving object detection and tracking accuracy. Central to this system is an 8-degrees-of-freedom (DOF) warping model that aligns the different data types by creating brightness increment patches, which help to minimize the differences between these outputs. This process allows for robust feature tracking by adaptively updating measurements based on the quality of tracked features, thus ensuring superior pose estimation accuracy compared to existing event-based VIO algorithms [10].
Nonetheless, achieving precise spatial synchronization between CMOS and event camera outputs remains a significant challenge. This difficulty primarily arises from the fundamental differences in their data acquisition modes. CMOS cameras capture uniform, time-synchronized frames, while event cameras record asynchronous, pixel-level changes triggered by variations in scene lighting. The asynchronous nature of event data does not align neatly with the synchronous frame rate of CMOS cameras, complicating real-time data fusion. Even with sophisticated models like the 8-DOF warping model, temporal alignment requires matching these sporadic, event-driven data with the uniform timestamps of frame data, necessitating complex interpolation and prediction methods that are yet to be perfected. These challenges necessitate advanced synchronization techniques beyond traditional calibration methods, which often fall short in dynamic, real-world conditions where both types of cameras must operate in unison [11]. Previous attempts to transform event camera data into pseudo-images for applying conventional calibration techniques have proven complex and often imprecise, particularly in handling real-time, dynamically changing environments [11,12].
Our work aims to bridge this gap by directly synchronizing the output of event cameras with that of CMOS cameras, bypassing the intermediate transformation steps. We propose a direct method for aligning these disparate data streams, enhancing the accuracy and efficiency of the process. Our approach utilizes deep learning models to interpret and synchronize the raw outputs from both camera types, focusing on real-time, real-world applicability [4,13]. We introduce innovative neural network architectures that are specifically designed to handle the unique characteristics of each camera's output, facilitating a seamless integration of these technologies. Additionally, the integration of dynamic vision sensors into unmanned aerial vehicles optimizes the estimation of dynamic interactions in real time, demonstrating the effectiveness of event cameras in scenarios that require rapid and precise adjustments [3,14].
Our findings show that this direct synchronization method significantly improves the accuracy and efficiency of data integration from both camera types, overcoming the limitations of previous approaches. The proposed deep learning architectures enable end-to-end learning and adaptive processing, leading to enhanced feature extraction and event interpretation. Experimental results have demonstrated high fault diagnosis accuracies in machinery monitoring, comparable to traditional accelerometer data, while also being effective in driving behavior characterization and autonomous navigation tasks. These advancements suggest that the method can be applied in diverse real-world scenarios, potentially offering reliable performance in dynamic environments [5,13].
Workflow Overview
The flowchart (see Figure 1) outlines the key steps involved in the spatial synchronization of CMOS and event cameras using the CNN-DGCNN model. This workflow covers the entire process from data acquisition to model evaluation.
The process begins with data acquisition, where we collect high-resolution video data from CMOS cameras (480p and 1080p) and corresponding event data from event cameras. Next, during the data preprocessing step, we segment CMOS video frames into 100 × 100 windows and label these segments with precise shifts. Following this, in the dataset preparation phase, we create training and testing datasets from the segmented and labeled data, with the training set containing 3 million observations and the testing set comprising 600,000 observations.
In the model architecture stage, we define a convolutional neural network (CNN) to extract features from CMOS data and a dynamic graph convolutional neural network (DGCNN) to process event data. These outputs are then integrated using a multi-layer perceptron (MLP). During the model training phase, we initialize the model parameters and train the model using the Adam optimizer, applying early stopping and a learning rate scheduler to enhance performance and prevent overfitting. Finally, in the model evaluation step, we assess the model's performance on the test set by computing the calibration error and analyzing the results to ensure accuracy and reliability.
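A minimal PyTorch-style training loop matching this description (Adam, early stopping, and a learning-rate scheduler) might look as follows; `model`, `train_loader`, and `val_loader` are assumed to exist, and the patience values, loss choice, and checkpoint filename are illustrative.

```python
import torch
from torch import nn

def train(model, train_loader, val_loader, epochs=100, patience=5):
    criterion = nn.MSELoss()                       # regression on (dx, dy)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=2)
    best_val, stale_epochs = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for cmos, events, target in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(cmos, events), target)
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(c, e), t).item()
                           for c, e, t in val_loader) / len(val_loader)
        scheduler.step(val_loss)                   # reduce LR when validation stalls
        if val_loss < best_val:
            best_val, stale_epochs = val_loss, 0
            torch.save(model.state_dict(), "best_model.pt")
        else:
            stale_epochs += 1
            if stale_epochs >= patience:           # early stopping
                break
```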
Event Camera Data Format
Event cameras are specialized sensors that capture local variations in luminance over time. Differing from conventional cameras, which use shutters to take pictures, event cameras consist of pixels that function autonomously and asynchronously. These pixels activate only in response to changes in light intensity, otherwise remaining passive. This mechanism allows for a highly efficient and sparse representation of video sequences which is particularly suited to natural scenes. In essence, when any pixel detects a variation in brightness, it generates an event. This information is aggregated as a sequence of events

e_i = (x_i, y_i, t_i, p_i), i = 1, ..., n.    (1)

In Equation (1), (x_i, y_i) represents the spatial coordinates of the event, t_i denotes the event's timing (with microsecond precision), and p_i signifies the intensity change at the pixel. Consequently, the output from an event camera is a streamlined series of events structured as above, with the objective being to apply vision-based downstream processing to these data, specifically for 2D regression tasks.
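For concreteness, the snippet below shows one plausible way to hold such an event stream and to build the per-pixel event-count maps (full, negative, positive) like those shown in Figures 2 and 3; the structured-array field names and the 240 × 180 DAVIS 240C resolution are assumptions.

```python
import numpy as np

# A few illustrative events in the (x, y, t, p) format of Eq. (1).
events = np.array(
    [(120, 64, 0.000135, 1), (121, 64, 0.000162, 0), (33, 90, 0.000170, 1)],
    dtype=[("x", np.int16), ("y", np.int16), ("t", np.float64), ("p", np.int8)],
)

def event_histograms(ev, width=240, height=180):
    """Return (full, negative, positive) per-pixel event-count maps."""
    full = np.zeros((height, width), dtype=np.int32)
    np.add.at(full, (ev["y"], ev["x"]), 1)
    neg = np.zeros_like(full)
    neg_mask = ev["p"] == 0
    np.add.at(neg, (ev["y"][neg_mask], ev["x"][neg_mask]), 1)
    pos = full - neg
    return full, neg, pos

full_map, neg_map, pos_map = event_histograms(events)
```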
Dataset Creation
To facilitate the training of our proposed models, a comprehensive and accurately labeled dataset was essential.This dataset needed to encapsulate both images from the CMOS camera and the corresponding events captured by the event camera, alongside the precise movements on each axis to ensure effective spatial synchronization.
Data Sources
For our study, we selected three datasets from the DAVIS 240C Dataset [12,13], valued for its unique integration of a conventional CMOS camera with an event-based sensor. This integration offers dual perspectives on visual data capture, as demonstrated in Figures 2 and 3.
"Boxes rotation" [15,16], as seen in Figure 3, is set within a complex, texture-rich environment to provide a multifaceted pattern of movements, challenging our models under diverse textural conditions.In Figure 2, the first frame (t = 0 s) shows the initial static scene.The middle frame (t = 0.5 s) captures slight movement, primarily due to the camera's motion.The last frame (t = 1 s) includes more noticeable movement, again primarily from the camera's perspective.This series of frames illustrates the scene as observed by the camera over a one-second interval.The corresponding 2D event histogram maps below each frame represent the full events, negative events (P = 0), and positive events (P = 1), highlighting the pixel-wise changes in intensity due to the movement.To produce a tagged dataset with precisely labeled shifts, we reduced and cropped the CMOS segments to smaller windows of 100 × 100 pixels.This reduction was necessary to align the CMOS data accurately with the event camera data and to manage computational resources effectively.Figures 2 and 3 reflect these 100 × 100 segments from higher-resolution movies, providing a proof of concept within the study's scope.Thus, this context also applies to Figure 3.
The "outdoors walking" dataset records the conditions of a sunny urban environment, including changes in natural light and the active nature of outdoor scenes.Although this dataset is not shown here (refer to Figures 2 and 3 for impressions of similar datasets), it adds value to the previous two by offering a real-world setting with uncontrolled lighting and movement.This helps us to test the effectiveness of our synchronization methods under less predictable conditions.These datasets encompass a broad spectrum of visual scenarios, from the simplicity and control of rotating shapes to the complexity of textured motion and the unpredictability of outdoor settings.This strategic curation of datasets was pivotal in developing our synchronization approaches, ensuring that they are effective and resilient in various real-life applications.The datasets provide a solid foundation for training and evaluating our models, confirming their effectiveness for practical use.
Data Segmentation
We segmented the video data into one-second intervals, each comprising 22 frames, adhering to the dataset's frame rate of 22 FPS. Given the original frame dimensions of 240 × 180 pixels, we further processed these frames into sub-frames or windows of 100 × 100 pixels. This segmentation was achieved by sliding the window across the frame in 20-pixel increments, ensuring thorough coverage and variety in the captured data.
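A compact sketch of this segmentation step is given below; the dummy clip and the yielded tuple layout are illustrative, while the 22-frame clips, 100 × 100 windows, and 20-pixel stride follow the description above.

```python
import numpy as np

def sliding_windows(clip, win=100, stride=20):
    """clip: array of shape (22, H, W); yields (y0, x0, window) tuples."""
    _, h, w = clip.shape
    for y0 in range(0, h - win + 1, stride):
        for x0 in range(0, w - win + 1, stride):
            yield y0, x0, clip[:, y0:y0 + win, x0:x0 + win]

clip = np.zeros((22, 180, 240), dtype=np.uint8)   # one dummy 1 s CMOS segment
windows = list(sliding_windows(clip))             # 5 x 8 = 40 window positions per clip
```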
Labeling and Variations
In our dataset, each window extracted from the video data underwent a systematic frame shifting process relative to the event image. This process, as elucidated in Figure 4, involved shifting the CMOS camera frame with respect to the event camera frame across a defined range along both the X and Y axes. Figure 4 provides a visual representation of the four primary shift scenarios implemented to produce our labeled dataset. Each illustration demonstrates a CMOS camera frame (dotted line) and an event camera frame (solid line) with specific shifts along the X and Y axes:
1. Top left illustration: Here, the CMOS camera frame is shifted to the left (∆x < 0) and upwards (∆y < 0) relative to the event camera frame, depicting a negative shift along both axes.
2. Top right illustration: The CMOS camera frame is shifted to the right (∆x > 0) and upwards (∆y < 0) with respect to the event camera frame, illustrating a positive shift along the X axis and a negative shift along the Y axis.
3. Bottom left illustration: This scenario shows the CMOS camera frame shifted to the left (∆x < 0) and downwards (∆y > 0) in comparison to the event camera frame, indicating a negative shift along the X axis and a positive shift along the Y axis.
4. Bottom right illustration: The CMOS camera frame is shifted to the right (∆x > 0) and downwards (∆y > 0) relative to the event camera frame, representing a positive shift along both the X and Y axes.
For each 100 × 100 window, the frame shifts range from 0 to 70 pixels in any direction, resulting in an extensive range of 140 × 140 unique shift variations, spanning from −70 to +70 pixels for each axis. This systematic shifting creates a precisely labeled dataset that documents each window with known positional shifts. These labeled samples are instrumental for training spatial synchronization models that accurately align CMOS and event camera data.
Each depicted frame shift represents a specific labeled instance in our dataset, capturing a diverse array of relative positions between the CMOS and event camera frames. Collectively, these labeled instances encompass a full spectrum of possible frame alignments within the stipulated range. This structured labeling approach ensures that our dataset supports the training of a generalized model capable of handling a wide range of real-world synchronization scenarios.
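The labeling procedure can be sketched as follows: for each 100 × 100 CMOS window, a random shift (dx, dy) in [−70, 70] is drawn, events are gathered from the correspondingly shifted window, and the shift becomes the regression target. The event-filtering details and boundary handling are assumptions, and events are expected in the structured format sketched earlier.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_labeled_sample(cmos_frame, events, y0, x0, win=100, max_shift=70):
    """Pair a CMOS window at (y0, x0) with events from a window shifted by (dx, dy);
    boundary clipping is omitted for brevity."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    cmos_win = cmos_frame[y0:y0 + win, x0:x0 + win]
    ys, xs = y0 + dy, x0 + dx
    mask = ((events["x"] >= xs) & (events["x"] < xs + win) &
            (events["y"] >= ys) & (events["y"] < ys + win))
    ev_win = events[mask].copy()
    ev_win["x"] -= xs                 # express event coordinates in the window frame
    ev_win["y"] -= ys
    return cmos_win, ev_win, (int(dx), int(dy))   # (dx, dy) is the label to predict
```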
By leveraging the entire collection of films available in the DAVIS240C dataset, we assembled a dataset intended to support the training of a model that generalizes across different visual contexts. This approach aims to enhance the performance and applicability of our synchronization methodology by incorporating a diverse range of visual scenarios.
To evaluate the proposed method, we created a large dataset for both training and testing. Approximately 3 million observations with various movements were selected for the training set, ensuring an equal distribution from all three movies ("shapes rotation," "boxes rotation," and "outdoors walking"). This balanced selection helps the model to generalize across different visual scenarios. For the testing phase, we used 600,000 observations, also evenly distributed among the three movies. This dataset allowed us to evaluate the performance of our CNN-DGCNN method across different and dynamic environments.
Network Architecture for CMOS and Event Camera Calibration
The network architecture detailed in this manuscript features a dual-input system designed specifically for the task of calibrating data from two distinct camera technologies: the conventional CMOS (complementary metal-oxide-semiconductor) camera and the event camera. This setup is essential for effectively processing and integrating the disparate types of data that these cameras capture. Figure 4 displays the DGCNN-CNN calibration network architecture, which employs a dual-input strategy to process and synchronize inputs from both CMOS and event cameras efficiently. The diagram details the data flow within the network, showing separate pathways for CMOS video and event camera data that converge to a unified calibration output. This visual representation elucidates the system's architecture, highlighting how distinct data streams are integrated and processed.
The dynamic graph convolutional neural network (DGCNN) processes event camera data by updating the graph structure as new data are received.This method identifies spatial relationships within the event data, which is necessary for accurate calibration.The DGCNN builds a graph G = (V, E), where V represents the feature vectors of the vertices (events) at layer l and E represents the edges (connections between events).Each event detected by the camera is represented as a vertex in V, and the connections between these events are the edges in E.
The initial feature vector h_i^0 for each vertex i is based on the raw event data, including spatial coordinates (x_i, y_i), the timestamp (t_i), and the intensity change (p_i). The EdgeConv operation, which updates the feature vectors of the vertices, can be written in the standard EdgeConv form as h_i^(l+1) = max_{j∈N(i)} ReLU(φ(h_i^l, h_j^l − h_i^l)). Here, h_i^l is the feature vector of vertex i at layer l; h_j^l is the feature vector of a neighboring vertex j; and φ is a neural network function, often a multi-layer perceptron, that computes the edge features based on the difference between the feature vectors of neighboring vertices. ReLU (Rectified Linear Unit) is an activation function that introduces non-linearity into the model, allowing it to learn more complex patterns. The max operation aggregates features from the neighboring vertices in N(i), helping the network understand spatial relationships effectively. Within each DGCNN layer, the feature vectors of the vertices are updated by considering the features of their neighboring vertices. For each vertex i, a new feature embedding is computed as e_ij^(l+1) = φ(h_i^l, h_j^l − h_i^l); this step involves computing the new feature embedding by looking at the differences in the feature vectors of neighboring vertices and applying the function φ. After processing through all the DGCNN layers, the final feature vector for each vertex is obtained as h_i^L = AGGREGATE(h_i^1, …, h_i^L), where the AGGREGATE function combines the embeddings from all layers l to form the final feature vector h_i^L for each vertex i.
On the left side of the illustration presented in Figure 4, the input from a CMOS camera is depicted as a 1 s video clip segmented into 22 frames, each with a resolution of 100 × 100 pixels. These video data undergo processing through multiple CNN blocks designed to progressively refine the data. Initially starting with 64 channels, the architecture expands to 256 channels, all using a 3 × 3 kernel size. Each block is engineered to enhance the network's understanding of the video by extracting features at progressively higher levels of abstraction. As the network advances, the depth of the feature maps increases, enabling the capture of increasingly complex patterns and details. This refinement process, which includes convolution, batch normalization, and max-pooling, effectively reduces dimensionality and accentuates key features. On the right side, the input from the event camera is represented as a point cloud with n-by-3 dimensions. This sparse and asynchronous event data undergo processing through a series of EdgeConv layers, which are structured to progressively increase in complexity from 64 to 512 output channels. The EdgeConv layers are tasked with constructing and updating a dynamic graph that captures spatial relationships within the event data, enabling the network to adaptively learn and integrate event information over time. This is vital for accurate temporal tracking of events captured by the event camera.
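The EdgeConv update described above can be sketched in PyTorch as follows. This is a minimal illustration consistent with the description, not the implementation used in this work; the k-nearest-neighbour graph construction, the single-linear-layer MLP, and the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    """EdgeConv-style update: for each event (vertex), aggregate an MLP of
    (h_i, h_j - h_i) over its k nearest neighbours with a max operation."""
    def __init__(self, in_dim, out_dim, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, h):                      # h: (N, in_dim) event features
        d = torch.cdist(h, h)                  # (N, N) pairwise distances
        idx = d.topk(self.k + 1, largest=False).indices[:, 1:]   # k neighbours, skip self
        neigh = h[idx]                         # (N, k, in_dim)
        center = h.unsqueeze(1).expand_as(neigh)
        edge_feat = self.mlp(torch.cat([center, neigh - center], dim=-1))  # (N, k, out_dim)
        return edge_feat.max(dim=1).values     # (N, out_dim) updated vertex features
```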
The final layer of the DGCNN outputs features that are fused with the CNN-processed CMOS data through a multi-layer perceptron (MLP) to predict the spatial calibration parameters (∆x, ∆y): (∆x, ∆y) = MLP(h_CMOS, h_DGCNN). Here, h_CMOS and h_DGCNN represent the feature vectors from the CNN and DGCNN, respectively. This fusion (as illustrated in Figure 5) enables precise alignment of CMOS and event camera data, ensuring accurate spatial synchronization. The CMOS camera data are processed through a series of convolutional neural network (CNN) blocks, each designed to handle 1 s video segments consisting of 22 frames at a resolution of 100 × 100 pixels. These blocks include convolutional layers with a 3 × 3 kernel, followed by batch normalization, which ensures consistent learning by centering and scaling the layer inputs [17,18]. Batch normalization helps to maintain a steady learning process across the network. After normalization, a max-pooling operation with a 2 × 2 kernel reduces the spatial dimensions of the feature maps, decreasing computational demands and reducing the risk of overfitting.
Each CNN block incorporates a skip connection, as shown in Figure 6.The skip connection provides a direct gradient path, bypassing one or more layers, which prevents vanishing gradient issues in deeper networks.To ensure compatibility with the main CNN output, an additional 2D convolutional layer with a 1 × 1 kernel adjusts the feature map dimensions within the skip pathway.This element-wise addition merges the bypassed information with the main pathway's output, maintaining data integrity while benefiting from the learned features.
Figure 6 illustrates a single CNN block with a skip connection, detailing its structure. The input initially passes through a 2D convolutional layer, capturing basic spatial features. Batch normalization stabilizes the learning process, and an activation function introduces non-linearity. A max-pooling operation follows to reduce the feature map dimensions and data volume for subsequent layers. The skip pathway contains a 2D convolutional layer to adjust feature map dimensions, aligning them with the main pathway output. This ensures that both immediate and deeper features contribute to the final representation of the video data, encapsulating low-level details and high-level abstractions. By combining these components, the network architecture is optimized for processing CMOS camera data, facilitating accurate calibration when integrated with event camera data.
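A single CMOS-branch block of this kind can be sketched as below. This is a hedged illustration of the structure just described; whether the skip addition happens before or after pooling, and the exact channel counts, are assumptions rather than details taken from the released model.

```python
import torch
import torch.nn as nn

class CMOSBlock(nn.Module):
    """One CNN block for the CMOS branch: 3x3 conv + batch norm + ReLU, a 1x1-conv
    skip path added element-wise, followed by 2x2 max-pooling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1)   # matches channel count
        self.pool = nn.MaxPool2d(kernel_size=2)

    def forward(self, x):
        main = self.act(self.bn(self.conv(x)))
        out = main + self.skip(x)      # element-wise addition of the skip pathway
        return self.pool(out)
```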
Dynamic Graph CNN (DGCNN) for Event Data Processing
Event camera data differ significantly from traditional camera data due to their unique characteristics.Represented as a point cloud (where each point has dimensions n-by-3), the data are processed using a specialized neural network called a dynamic graph CNN (DGCNN).Unlike standard CNNs, the DGCNN is well-suited for handling the unstructured nature of point cloud data.Inspired by successful architectures used for gesture recognition with event cameras [19], the DGCNN is adept at capturing spatial relationships and patterns within irregular and sparse data distributions [20].It constructs a graph that dynamically adjusts as new data are received, allowing the network to remain current with the most recent spatial relationships between points.The network architecture gradually increases in complexity, starting with layers configured for simpler feature detection (3 × 64) and then progressing to layers capable of identifying more intricate features (512 × 1024).This step-by-step increase in layer complexity enables the DGCNN to analyze and understand the full depth of event data comprehensively.
Network Fusion and Calibration Output
After extracting features from the standard video using the CNN blocks and from the event data using the DGCNN, these features are combined for calibration in a multi-layer perceptron (MLP).The MLP in our system consists of layers with 2048, 1024, 512, 128, and 2 neurons.This structure processes data from both sources and outputs spatial calibration parameters (∆x, ∆y).This step aligns spatial information from the CMOS camera with temporal data from the event camera, creating a detailed representation of motion and space.By merging these data streams, our network supports real-time spatial calibration, potentially enhancing systems that rely on mixed-modality visual information, such as autonomous navigation, robotics, and augmented reality.
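The fusion head can be sketched as follows, using the 2048-1024-512-128-2 layer widths quoted above. The sizes of the incoming CMOS and DGCNN feature vectors are not stated in the text, so the values used here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Concatenate the CMOS and DGCNN feature vectors and regress (dx, dy)
    through an MLP with 2048, 1024, 512, 128 and 2 neurons."""
    def __init__(self, cmos_dim=1024, event_dim=1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(cmos_dim + event_dim, 2048), nn.ReLU(),
            nn.Linear(2048, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, 2),            # (dx, dy) calibration output
        )

    def forward(self, h_cmos, h_dgcnn):
        return self.mlp(torch.cat([h_cmos, h_dgcnn], dim=-1))
```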
Computational Setup and Training Parameters
To evaluate our CNN-DGCNN model, we used a local machine equipped with a 13th Gen Intel Core i9-13900K processor, 128 GB of RAM, and an Nvidia GeForce RTX 4080 GPU with 16 GB of RAM.These specifications ensured efficient handling of large datasets and complex computations.
Training was conducted using PyTorch version 1.10.We set the number of epochs to 100 and used early stopping based on validation loss to prevent overfitting.The Adam optimizer was chosen for its efficiency in managing large datasets and its adaptive learning rate capabilities, set at an initial rate of 0.001.We used a batch size of 64 to balance memory usage and computational efficiency.Early stopping was implemented with a patience of 10 epochs, meaning training would halt if there was no improvement in validation loss over 10 consecutive epochs.Additionally, a learning rate scheduler, ReduceLROnPlateau, was applied with a factor of 0.1 and patience of 5 epochs to dynamically adjust the learning rate and refine the training process.
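For reference, the training settings listed above (Adam at an initial rate of 0.001, ReduceLROnPlateau with factor 0.1 and patience 5, early stopping with patience 10 over at most 100 epochs) correspond to a loop of the following shape. This is a generic sketch, not the training script used in this study; the train_step and validate callables, and the loss choice, are placeholders supplied by the user.

```python
import torch

def train(model, train_step, validate, epochs=100, patience=10):
    """Training loop with Adam (lr=1e-3), ReduceLROnPlateau(factor=0.1, patience=5),
    and early stopping after 10 epochs without validation improvement."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1, patience=5)
    best_val, bad_epochs = float("inf"), 0
    for epoch in range(epochs):
        train_step(model, optimizer)          # one pass over the training set
        val_loss = validate(model)            # mean loss on the validation set
        scheduler.step(val_loss)              # lower the learning rate on a plateau
        if val_loss < best_val:
            best_val, bad_epochs = val_loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:        # early stopping
                break
    return best_val
```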
During training, we closely monitored the model's performance using validation loss.Specifically, we tracked the difference between the predicted and actual calibration parameters (∆x, ∆y).This allowed us to apply early stopping when there was no significant reduction in validation loss, ensuring that the model did not overfit to the training data.By dynamically adjusting the learning rate based on validation performance, we were able to fine-tune the model for optimal accuracy and efficiency.
Results
Our study investigates how the spatial distribution of events within an image, described through entropy, impacts the precision of video calibration algorithms. Notably, areas with denser event clusters, which correspond to lower entropy values, improve the calibration's accuracy. This empirical observation is supported by the entropy equation detailed in Equation (2), which calculates the entropy based on the distribution of events, E = −Σ_{x=1..m} Σ_{y=1..n} p(x, y) log2 p(x, y), where: m is the number of rows in the input video and the corresponding two-dimensional histogram p(x, y). In this case, m is defined as 100.
n is the number of columns in the input video and the corresponding two-dimensional histogram p(x, y). In this case, n is defined as 100. p(x, y) is the ratio of the number of events at point (x, y) in the two-dimensional histogram to the total of all events in the histogram.
To evaluate the calibration process precisely, we measure the calibration error using the following distance formula: D_cal = √[(∆x − ∆x̃)² + (∆y − ∆ỹ)²], where D_cal represents the calibration error distance; ∆x, ∆y are the labeled calibration shift values per each window; and ∆x̃, ∆ỹ are the estimated calibration shift values per each window. The effect of entropy on calibration precision is captured in Figure 7, which portrays a significant finding from our study: the calibration error distance increases with entropy, yet this relationship exhibits variability depending on the dataset. Thus, this graph serves as an indicator of how different environmental conditions can uniquely impact calibration performance.
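Both quantities are straightforward to compute; the sketch below shows one possible NumPy formulation assuming the events are given as (x, y) coordinate pairs binned onto the 100 × 100 window. It is an illustration of the two formulas only, not the evaluation code used in this work.

```python
import numpy as np

def event_entropy(events_xy, shape=(100, 100)):
    """Shannon entropy (base 2) of the 2-D histogram of event positions."""
    hist, _, _ = np.histogram2d(events_xy[:, 0], events_xy[:, 1],
                                bins=shape, range=[[0, shape[0]], [0, shape[1]]])
    p = hist / hist.sum()
    p = p[p > 0]                       # empty bins contribute 0 (0*log0 -> 0)
    return float(-(p * np.log2(p)).sum())

def calibration_error(dx, dy, dx_est, dy_est):
    """Euclidean distance between labeled and estimated shifts."""
    return float(np.hypot(dx - dx_est, dy - dy_est))
```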
Most notably, the "boxes rotation" dataset (marked by the green dash-dot shows a distinct pattern.The calibration error starts off minimal and increases at a sl rate than the other datasets, despite having the highest entropy.However, once reac a certain threshold, there is a noticeable jump in the error distance, after which it stabi In "shapes rotation" (represented by the blue dash-dot line), the calibration error remains relatively stable and then shows a gradual increase as the entropy grows.The "outdoors walking" dataset (indicated by the red dash-dot line) starts with a lower error distance at low entropy levels, but as entropy increases, the error distance rises more sharply than in "shapes rotation".This sharp increase suggests that the model has more difficulty achieving precise calibration when events are uniformly distributed, which is common in outdoor environments.
Most notably, the "boxes rotation" dataset (marked by the green dash-dot line) shows a distinct pattern.The calibration error starts off minimal and increases at a slower rate than the other datasets, despite having the highest entropy.However, once reaching a certain threshold, there is a noticeable jump in the error distance, after which it stabilizes.This unique trend may be attributed to the specific characteristics of the dataset, such as the presence of high-contrast textures, which could initially assist in the calibration process but become less beneficial as entropy increases beyond a certain point.
These variations in error response to entropy across the datasets emphasize the complexity of calibration tasks.Different scene characteristics, such as the presence of distinct textures in "boxes rotation" or the more dynamic elements in "outdoors walking", are likely influencing factors in how well the calibration algorithm can perform.
Table 1 complements the graphical data, summarizing the average and variability of entropy and event counts for the datasets."Shapes rotation" records a mean entropy of 12.69 (SD: 0.51), with an event count average of 95K (SD: 37K)."Outdoors walking" shows a lower average entropy of 12.11 (SD: 0.92) and a higher average event count of 145K (SD: 67K)."Boxes rotation" stands out, with the highest average entropy of 13.23 (SD: 0.05) and the largest average number of events at 712K (SD: 334K).When comparing the data from Table 1 with the calibration error trends depicted in Figure 7, we can see a clear connection between entropy and calibration accuracy.This observation reinforces the importance of considering the distribution of events when designing and refining calibration methods for video analytics.
Discussion
Our study reveals that the complex interaction between entropy, event density, and calibration accuracy significantly impacts the outcomes of video data processing.Regions within a video frame with dense event distributions provide strong reference points, enhancing calibration accuracy.This is particularly evident when event cameras capture dense clusters of changes, which serve as precise anchor points for the calibration process.These "high information" zones not only increase the overall entropy, but also directly enhance the calibration effectiveness.
The effectiveness of dynamic graph CNNs (DGCNNs) in handling event data has improved real-time spatial synchronization and fostered the development of algorithms that adaptively respond to variations in event density and distribution, enhancing synchronization accuracy across diverse environments [3,21].Our empirical tests confirm that calibration quality is heavily dependent on both the volume and spatial concentration of events, with denser distributions yielding better outcomes.These insights underscore the value of integrating CMOS and event camera technologies for applications demanding high precision and responsiveness, and they pave the way for future research into optimizing mixed-modality visual systems [3,21].
These calibration advancements extend into environmental monitoring, where accurate, real-time calibration of sensor data across various modalities is critical.For instance, edge-detection algorithms calibrated for specific underwater conditions have enabled more precise monitoring of coral ecosystems, contributing to conservation efforts [22].Similarly, precise calibration in atmospheric monitoring using techniques like Fabry-Perot interferometers, which measure light wavelengths to detect CO 2 levels, plays a vital role in efforts to mitigate the greenhouse effect [23].
Furthermore, the scope of these advancements has broad implications for industries such as autonomous vehicle navigation, robotic surgery, and augmented reality, where enhanced calibration techniques enable more precise and real-time interactions between event and CMOS cameras [24,25].The integration of these technologies with emerging IoT devices and smart city infrastructure has also opened up transformative opportunities for urban development.Enhanced calibration techniques are essential for effective traffic management and interactive public safety solutions, ensuring accurate, real-time data processing.Techniques like the attention-based encoder-decoder network using atrous convolution are being utilized to effectively calibrate data from urban environments, aiding in urban water management [26].
In the Internet of Vehicles (IoV), innovative traffic management solutions, such as the dynamic ant colony optimization algorithm with a look-ahead mechanism, are revolutionizing urban traffic management by leveraging real-time data from wireless sensor networks.This enhances traffic signal control efficiency and reduces carbon emissions, highlighting the essential role of advanced calibration in maintaining data accuracy and reliability in these systems [27].
However, our study has limitations due to its reliance on specific datasets and conditions.Future research should extend these findings across a wider range of scenarios, including those with challenging lighting and fast-moving subjects.Additionally, improving the computational efficiency of our proposed architecture will be important to ensure its utility in real-world, time-sensitive applications [28].
Looking ahead, we suggest exploring different deep learning architectures to improve the calibration speed and efficiency [28].Addressing non-linear distortions from camera lenses is also crucial [29,30].Testing these improvements in practical settings will help to determine their real-world effectiveness [31,32].Integrating this technology into wearable devices could expand its uses.Wearables with advanced calibration capabilities could enhance user experiences in fitness tracking, health monitoring for prosthetic adjustments, and interactive environments, blending digital and physical interactions [33].
In conclusion, our research on improving calibration methods is key to advancing 3-D reconstruction capabilities, as demonstrated by our study and supported by related literature on photometric stereo [34].These improvements demonstrate the reliability of our methods and provide greater insight into how light interacts with surfaces, thereby enhancing 3-D imaging techniques.In future research, it will be important to refine visual systems in order to better adapt to the complexities of modern environments.
Figure 4. Illustration of frame shifting to produce labeled data, showing the positions of the CMOS camera frame (dotted line) relative to the event camera frame (solid line) for a diverse set of shifts along the X and Y axes.
Figure 6. Visualization of a single CNN block with skip connection.
Figure 7. Calibration results as a function of entropy value and dataset.
Table 1. Mean and SD of entropy and events count by dataset. | 9,798.6 | 2024-06-21T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
The Single-molecule long-read sequencing of Scylla paramamosain
Scylla paramamosain is an important aquaculture crab, which has great economic and nutritional value. To the best of our knowledge, few full-length crab transcriptomes are available. In this study, a library composed of 12 different tissues including gill, hepatopancreas, muscle, cerebral ganglion, eyestalk, thoracic ganglia, intestine, heart, testis, ovary, sperm reservoir, and hemocyte was constructed and sequenced using Pacific Biosciences single-molecule real-time (SMRT) long-read sequencing technology. A total of 284803 full-length non-chimeric reads were obtained, from which 79005 high-quality unique transcripts were obtained after error correction, sequence clustering, and removal of redundant sequences. Additionally, a total of 52544 transcripts were annotated against protein databases (NCBI non-redundant, Swiss-Prot, KOG, and KEGG). A total of 23644 long non-coding RNAs (lncRNAs) and 131561 simple sequence repeats (SSRs) were identified. Meanwhile, the isoforms of many genes were also identified in this study. Our study provides a rich set of full-length cDNA sequences for S. paramamosain, which will greatly facilitate S. paramamosain research.
Results
The quality examination of pooled RNA used for library construction and the evaluation of the sequencing results. The quality of pooled total RNA extracted from twelve tissues was examined before library construction. The result indicated that the RNA was of high quality and appropriate for the following experiments. The evaluation of the sequencing results was carried out using three methods, with the following results: (1) BUSCO analysis revealed 876 (82.2%) complete single-copy and duplicated BUSCOs (Benchmarking Universal Single-Copy Orthologs), 34 (3.2%) fragmented BUSCOs, and 156 (14.6%) missing BUSCOs (Fig. 1); (2) more than 77% of the published transcriptome data sequenced with second-generation technology aligned to the data sequenced with PacBio technology in this study; (3) the sequences of published genes (relish, dorsal, TGF-beta type I receptor, and amine oxidase) were consistent with the sequencing results obtained with PacBio technology.
Functional annotation of transcripts. The identified transcripts were blasted against protein databases (Nr, Swiss-Prot, KOG, and KEGG), and the results indicated that a total of 52,544 transcripts (66.5%) were annotated. Of these, 52,262 transcripts were annotated in the Nr database, 41,054 transcripts in the Swiss-Prot database, 37,117 transcripts in the KOG database, and 27,777 transcripts in the KEGG database. The Venn diagram is shown in Fig. 2. GO analysis indicated that 13,441 transcripts were annotated in biological process, 7,288 transcripts in molecular function, and 8,055 transcripts in cellular component. The detailed GO annotation information is shown in Fig. 3.
According to the annotation results, the species distribution of transcript BLASTx matches against the Nr protein database was examined, and the results indicated that the top 10 species all belong to invertebrates, including Hyalella azteca, S. paramamosain, Zootermopsis nevadensis, Thermobia domestica, Daphnia magna, Limulus polyphemus, Diaphorina citri, Lingula anatina, E. sinensis, and L. vannamei. The detailed species distribution information is shown in Fig. 4.
Identification of long non-coding RNAs (lncRNAs).
In this study, the coding potential of the unannotated transcripts was analyzed with three different bioinformatics tools: Coding Potential Calculator (CPC), Coding-Non-Coding Index (CNCI), and Protein family (Pfam). The prediction results revealed that 24,201 lncRNAs were identified with CPC, 23,644 lncRNAs with CNCI, and 26,147 lncRNAs with Pfam, among which 23,154 common lncRNAs were predicted by all three tools (Fig. 5).
The analysis of alternative splicing in the transcriptome. The alternative splicing analysis indicated that seven different types exist in the transcriptome, including 247 skipping exon (SE), 580 alternative 5′ splice site (A5), 600 alternative 3′ splice site (A3), 160 mutually exclusive exon (MX), 1780 retained intron (RI), 38 alternative first exon (AF), and 40 alternative last exon (AL) events, among which retained intron was the main type of alternative splicing, accounting for more than 50% (Fig. 7). The isoform analysis indicated that some genes have more than ten isoforms (Fig. 8). For example, a total of 22 different isoforms of LIM domain-binding protein 3 were identified in this study, and the sequence analysis is shown in Fig. 9 (an example of RI). Additionally, 7 different isoforms of ferritin were identified, and the sequence analysis is shown in Fig. 10 (an example of A5).
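As a schematic illustration of how the consensus lncRNA set reported above can be derived from the three predictors, the snippet below simply intersects the identifier lists. The transcript names are hypothetical and the example is not the analysis code used in this study.

```python
def consensus_lncrnas(cpc_ids, cnci_ids, pfam_ids):
    """Return transcripts flagged as non-coding by all three predictors."""
    return set(cpc_ids) & set(cnci_ids) & set(pfam_ids)

# Example with hypothetical transcript identifiers:
cpc = {"transcript_1", "transcript_2", "transcript_3"}
cnci = {"transcript_2", "transcript_3", "transcript_4"}
pfam = {"transcript_2", "transcript_3", "transcript_5"}
print(sorted(consensus_lncrnas(cpc, cnci, pfam)))   # ['transcript_2', 'transcript_3']
```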
Discussion
Obtaining full-length genes is the first step in studying gene function, but full-length genes generally cannot be obtained on a large scale through rapid amplification of cDNA ends (RACE), which is time-consuming, labor-intensive, and expensive. With the development of technology, second-generation sequencing technologies such as Illumina, Roche 454, Solexa, and SOLiD were developed, but their read lengths are usually short. Although some full-length transcripts can be obtained on a large scale from transcriptome data produced by second-generation sequencing technologies, the majority of assembled transcripts are short and not full-length. Third-generation sequencing is the most advanced technology and can obtain full-length transcripts on a large scale. In this study, a total of 79005 high-quality unique transcripts were obtained, of which 50% are full-length, which is more efficient than RACE and second-generation sequencing technology 30,32,33,48 . These full-length transcripts identified in this study will facilitate further study of S. paramamosain.
It is well known that the read length of third-generation sequencing technology can reach as long as 2 Mb, avoiding the influence of complex repeat motifs. In this study, the longest transcript is 14701 bp and the N50 (an important parameter used for evaluating assembly quality) is 3160 bp, which is longer than that in S. paramamosain studies that used second-generation sequencing technologies. For instance, in the gonad, gill, and hemocyte transcriptomes of S. paramamosain, the N50 of assembled unigenes is only 477 bp, 1601 bp, and 1488 bp, respectively [30][31][32], which is far shorter than that in this study and indicates that the results of third-generation sequencing are better than those of second-generation sequencing.
Alternative splicing is an important way of regulating gene expression and plays vital roles in a variety of biological processes, including sex differentiation and immunological resistance. In a study of E. sinensis, two splice isoforms of the gene fruitless were obtained and may play important roles in sex-specific character development 66 . In a study of L. vannamei, a total of 6 sex-lethal splice isoforms were cloned using RACE technology, and the different isoforms may play different roles during embryo development 67 . In a study of S. paramamosain, the gene of Down syndrome cell adhesion molecule (Dscam) was cloned, and the bioinformatics results revealed that it could encode as many as 36,736 unique isoforms that bind different pathogens to protect the crab from infection 68 . However, in crustaceans, large-scale identification of alternative splicing is scarce because of the absence of genome information, which makes the study of alternative splicing in crustaceans difficult. Because of its longer read length, third-generation sequencing technology can obtain full-length transcripts, which provides the basis for research on alternative splicing in S. paramamosain. In this study, the sequencing library was constructed from 12 different tissues; therefore, more isoforms were identified compared with results obtained from a library constructed from a single tissue, which also indicates that different isoforms may play different roles in different tissues and that the function of these isoforms needs further research. For example, a total of 6 different feminization-1 transcripts were identified in this study, and their predicted protein sequences were completely identical to the protein sequences obtained from gonad transcriptome data in our laboratory (unpublished data). However, only 3 different feminization-1 transcripts (fem-1a, fem-1b, fem-1c) were identified in E. sinensis transcriptome data sequenced using second-generation technology, which indicates that third-generation sequencing is more efficient than second-generation sequencing in identifying isoforms.
It has been reported that transcripts sequenced using third-generation sequencing technology have a higher annotation rate than those from second-generation sequencing in L. vannamei 48 . In published articles on the S. paramamosain transcriptome, the annotation rates of transcripts were 59%, 15.7%, and 48.38%, respectively [30][31][32]. In this study, the annotation rate of the obtained transcripts was 66.5%, which is higher than that previously obtained using second-generation sequencing and consistent with the result in L. vannamei 48 .
Previous studies have shown that the raw data error rate of third-generation sequencing technology is relatively high, but it can be corrected using second-generation sequencing data 69 . In this study, the raw data were corrected with transcriptome data sequenced on the Illumina platform in our laboratory (unpublished results), which ensures the reliability of the sequencing results. The consistency of several published genes (relish, dorsal, TGF-beta type I receptor, and amine oxidase) with the sequencing results also indicates the reliability of the sequencing results in this study.
LncRNAs are non-coding RNAs longer than 200 nucleotides and play vital roles in many physiological processes 70 . However, the identification of lncRNAs in S. paramamosain using third-generation sequencing technology has never been reported. In this study, a total of 23154 common lncRNAs predicted by three software tools were obtained, which will facilitate the functional study of these lncRNAs in S. paramamosain. In spite of the identification of lncRNAs through third-generation sequencing in this study, the
Materials and Methods
Samples. Healthy, sexually mature adult male (n = 4) and female (n = 4) S. paramamosain (weight = 250 ± 10 g) were purchased from a local agricultural market in Xiamen, China. A total of 12 different tissues (gill, hepatopancreas, muscle, cerebral ganglion, eyestalk, thoracic ganglia, intestine, heart, testis, ovary, sperm reservoir, and hemocyte) were collected. Total RNA was extracted using the E.Z.N.A.® Total RNA Kit II (Omega, Norcross, GA, USA) following the manufacturer's protocol. The integrity of the RNA was determined with the Agilent 2100 Bioanalyzer and agarose gel electrophoresis. The purity and concentration of the RNA were determined with a Nanodrop micro-spectrophotometer (Thermo Fisher, USA).
SMRT library construction, sequencing, and quality control. mRNA was enriched by Oligo (dT) magnetic beads. The enriched mRNA was then reverse transcribed into cDNA using the Clontech SMARTer PCR cDNA Synthesis Kit (Takara, Shiga, Japan). PCR cycle optimization was used to determine the optimal amplification cycle number for the downstream large-scale PCR reactions. The optimized cycle number was then used to generate double-stranded cDNA, followed by size selection using the BluePippin™ Size-Selection System to generate three libraries (1-2 kb, 2-3 kb, 3-6 kb). Large-scale PCR was then performed for the different size libraries for the subsequent SMRTbell library construction. Different input amounts of cDNA from the size-selected samples were used for DNA damage repair, end repair, and ligation to sequencing adapters. The SMRTbell template was annealed to the sequencing primer, bound to polymerase, and sequenced on the PacBio Sequel platform by Gene Denovo Biotechnology Company (Guangzhou, China).
Data processing. The raw sequencing reads of the cDNA libraries were classified and clustered into transcript consensus sequences using the SMRT Link v5.0.1 pipeline 71 supported by Pacific Biosciences. Briefly, CCS (circular consensus sequence) reads were extracted from the subreads BAM file. CCS reads were then classified into full-length non-chimeric (FL), non-full-length (nFL), chimeric, and short reads based on the cDNA primers and the polyA tail signal. Short reads were discarded. Subsequently, the full-length non-chimeric (FLNC) reads were clustered by the Iterative Clustering for Error Correction (ICE) software to generate cluster consensus isoforms. Non-full-length reads were then used to polish the cluster consensus isoforms with Quiver software to finally obtain the FL polished high-quality consensus sequences (accuracy ≥ 99%). The final transcriptome isoform sequences were filtered by removing redundant sequences with CD-HIT v4.6.7 using an identity threshold of 0.99. To annotate the transcripts, they were blasted against the NCBI non-redundant protein (Nr) database (http://www.ncbi.nlm.nih.gov), the Swiss-Prot protein database (http://www.expasy.ch/sprot), the Kyoto Encyclopedia of Genes and Genomes (KEGG) database (http://www.genome.jp/kegg), and the COG/KOG database (http://www.ncbi.nlm.nih.gov/COG) with the BLASTx program (http://www.ncbi.nlm.nih.gov/BLAST/) at an E-value threshold of 1e-5 to evaluate sequence similarity with genes of other species. GO annotation was analyzed by Blast2GO software 72 with the Nr annotation results of the transcripts. Transcripts ranking among the first 20 highest scores and with HSP (High-scoring Segment Pair) hits no shorter than 33 were selected to conduct the Blast2GO analysis. Then, functional classification of transcripts was performed using WEGO software 73 .
Characterization of long non-coding RNAs. CNCI v2.0 74 , Pfam 75 , and CPC v1.0 76 were used to assess the protein-coding potential of transcripts without annotations, using default parameters, to identify potential long non-coding RNAs. To better annotate lncRNAs at the evolutionary level, the software Infernal (http://eddylab.org/infernal/) was used in sequence alignment. The lncRNAs were classified by secondary structures and sequence conservation. | 2,975 | 2019-08-27T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Multiwavelength Study of Dark Globule DC 314.8–5.1: Point-source Identification and Diffuse Emission Characterization
We present an analysis of multiwavelength observations of the dark globule DC 314.8–5.1, using data from the Gaia optical, Two Micron All Star Survey near-infrared, and Wide-field Infrared Survey Explorer mid-infrared surveys, dedicated imaging with the Spitzer Space Telescope, and X-ray data obtained with the Swift X-Ray Telescope (XRT). The main goal was to identify possible pre-main-sequence stars (PMSs) and young stellar objects (YSOs) associated with the globule. For this, we studied the infrared colors of all point sources within the boundaries of the cloud. After removing sources with nonstellar spectra, we investigated the Gaia parallaxes for the YSO candidates and found that none are physically related to DC 314.8–5.1. In addition, we searched for X-ray emission from PMSs with Swift-XRT, and found no 0.5–10 keV emission down to a luminosity level ≲1031 erg s−1, typical of a PMS with mass ≥2 M ⊙. Our detailed inspection therefore supports a very young, “prestellar core” evolutionary stage for the cloud. Based on archival Planck and IRAS data, we moreover identify the presence of hot dust, with temperatures ≳100 K, in addition to the dominant dust component at 14 K, originating with the associated reflection nebula.
INTRODUCTION
The physical state of molecular clouds at a given evolutionary stage is strongly dependent on the development of star formation within such systems (for a review, see, e.g., Heyer & Dame 2015; Klessen & Glover 2016; Jørgensen, Belloche, & Garrod 2020). The interactions of young stellar objects (YSOs) with their host clouds are substantial at the early stages of stellar formation. Stars form when the dense cores of these clouds collapse, with the infall of material resulting in the gravitational potential energy heating the material and increasing its density up to ∼ 10 8 − 10 9 cm −3 (Jørgensen, Belloche, & Garrod 2020). The main effects of stellar formation are the processing of the dust within the cloud, the disruption of the cloud structure, and heating of the cloud material (Strom, Strom, & Grasdalen 1975). These processes continue as the system is altered and disrupted by the evolving young star.
Consequently, there is much interest in studying clouds prior to the onset of star formation, in particular pre-stellar cores (see Bergin & Tafalla 2007, for a review). Much study has been done on the already known pre-stellar cores TMC-1 and L134N; however, much of this was restricted to the sub-mm and radio end of the electromagnetic spectrum (Bergin & Tafalla 2007). Kirk, Ward-Thompson, & André (2005) did a survey for pre-stellar cores, detecting 29 cores with sub-mm observations, and established some basic characteristics expected of such pre-stellar cores, such as an average temperature of 10 K, volume densities of bright cores of 10 5 − 10 6 cm −3 and intermediate cores of 10 4 − 10 5 cm −3 , and additionally constrained radial density profiles and lifetimes of such cores. The filamentary structures of molecular clouds, down to the internal structures of dense cores, were further studied by André et al. (2014).
It is with this context in mind that we examined the pre-stellar nature of the dark globule DC 314.8-5.1. Originally classified as a compact dark globule (Hartley et al. 1986), it has a serendipitous association with a field star which illuminates reflection from the cloud. Whittet (2007) concluded, from Infrared Astronomical Satellite (IRAS; Beichman et al. 1988) and Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006) data, that the cloud is at the onset of low-mass star formation and further discussed the basic properties of the system. Whittet performed a 2MASS survey of an elliptical region with radii 7 ′ × 5 ′ covering the extent of the cloud to identify potential YSOs, and found only two candidates out of the sample of 387 sources. One was excluded as an old star with significant dust reddening and the other remained a viable YSO candidate, hereafter referred to as "C1." In Kosmaczewski et al. (2022), we showed a presence of divergent conditions for DC 314.8-5.1 as compared to molecular clouds with ongoing star formation. In particular, our study of the Spitzer Space Telescope InfraRed Spectrograph (IRS; Houck et al. 2004) mid-infrared spectra revealed a surprisingly large cation-to-neutral PAH ratio, which could be explained by lower-energy cosmic rays (CRs) ionizing the cloud's interior, in addition to photo-ionization by the star. However, to confirm this, one must perform a deeper search, to rule out pre/young-stellar objects.
In this paper, we expand on YSO identification methods for this system, utilizing data from dedicated observations with Spitzer and the Neil Gehrels Swift Observatory (Swift; Burrows et al. 2000), as well as from archival Wide-field Infrared Survey Explorer (WISE; Wright et al. 2010), 2MASS, and Gaia (Gaia Collaboration et al. 2021) surveys. The main goals of the multi-wavelength data analysis presented here are: (i) to identify YSO candidates utilizing infrared and optical imaging, (ii) to test for the presence of pre-main sequence stars (PMSs) that exhibit no optical/infrared counterparts, and (iii) to characterize the diffuse emission seen at microwave and infrared frequencies.
MULTI-WAVELENGTH OVERVIEW
The DC 314.8-5.1 dark globule is located approximately 5 deg below the Galactic plane in the southern constellation Circinus (see Table 1). The B9 V field star HD 130079, located near the cloud's eastern boundary, illuminates a reflection nebula (Whittet 2007). The association of HD 130079 with DC 314.8-5.1 was established by van den Bergh & Herbst (1975), who identified the reflection nebula; the Gaia parallax of HD 130079 corresponds to a distance of ∼432 pc. Using this value as the distance to the cloud, the cloud's ∼ 7 ′ × 5 ′ radial angular dimensions translate to projected linear sizes of 0.9 pc × 0.6 pc, while the mean atomic hydrogen core number density and the total cloud mass inferred from the extinction characteristics (see Whittet 2007) become ∼ 7 × 10 3 cm −3 and ∼ 160M ⊙ , respectively. (In the accompanying maps, "C2" and "C3" mark the potential YSO candidates identified in this work, and the X-ray source detected with Swift-XRT is indicated by "S" with a cross.)
Planck
The top panel of Figure 1 presents the Planck map of the DC 314.8-5.1 region at 353 GHz; the Planck emission peak is offset by 1 ′ .4 to the east from the nominal center of DC 314.8-5.1, per Table 1. The distance to DC 314.8-5.1 estimated from near-infrared extinction (2016a) is 400 ± 370 pc. When combined with the Planck photometry, the resulting mass estimate and mean density for the cloud are 10 ± 14 M ⊙ and 892 ± 544 cm −3 , respectively. For comparison, using the Gaia measured distance, ∼ 432 pc, and considering only the error in the Planck flux measurement, we derive a mass of 12.0 ± 0.8 M ⊙ . Though somewhat unconstrained, these are both below the corresponding estimates by Hetem et al. (1988) and Whittet (2007), of 25 and 50 M ⊙ , respectively.
Three point sources were identified in the IRAS Point Source Catalog coincident within a 10 ′ radius around the center of DC 314.8-5.1, namely IRAS 14451-6502, IRAS 14437-6503, and IRAS 14433-6506; no sources were identified in the Faint Source Catalog (Beichman et al. 1988). Source IRAS 14451-6502 is associated with HD 130079, with high-quality detections in the first three bands and a moderate-quality detection in the 100 µm band, see Table 1. IRAS 14437-6503 was identified in Bourke et al. (1995a), along with HD 130079, to be associated with DC 314.8-5.1. However, IRAS 14437-6503 corresponds to the Gaia DR3 source 5849039334515066624 with a Gaia measured parallax of 0.073 ± 0.0967 mas, corresponding to a Bailer-Jones et al. (2021) measured distance in the range 4.4 − 10.4 kpc, and as such is not physically related to the cloud. IRAS 14433-6506 is located at the outskirts of the cloud and is associated with the Gaia DR3 source 5849037788326819072 with a Gaia measured parallax of 0.866 ± 0.027 mas, corresponding to a Bailer-Jones et al. (2021) measured distance of ≃ 1.1 kpc, and as such is determined to not be associated with DC 314.8-5.1.
The bottom panel of Figure 1 presents the 60 µm IRAS map of DC 314.8-5.1, with an angular resolution of ∼ 1 ′ (Wheelock et al. 1994).The far-infrared intensity is shifted to the east of center by ∼ 3 ′ .5 likely due to heating by HD 130079.
ROSAT
A weak X-ray point source is present in the Second ROSAT all-sky survey (2RXS), with the survey having an effective angular resolution of 1 ′ .8 (Boller et al. 2016), suggesting that DC 314.8-5.1 could be an X-ray emitter. The source position for ROSAT J144833.7-651738 is 1 ′ .8 to the south of the cloud center (see Figure 6 in Appendix B), but still contained within the cloud's boundary, per Table 1, denoted as "2RXS". The low photon count of 8 ± 4 cts in the ROSAT Position Sensitive Proportional Counters (PSPC) 0.1 − 2.4 keV band is insufficient for any meaningful spectral modeling.
Spitzer
The Spitzer Space Telescope observational data for this work, obtained from the NASA/IPAC Infrared Science Archive (IRSA), were originally acquired with the Infrared Array Camera (IRAC; Fazio et al. 2004) and the Multiband Imaging Photometer (MIPS; Rieke et al. 2004) (Proposal ID 50039; P.I.: D. Whittet). DC 314.8-5.1 was observed for a total of 6 hours in 2008 October in five infrared bands: 3.6, 4.5, 5.8, and 8.0 µm with IRAC, and 24 µm with MIPS, with angular resolutions of ∼ 2 ′′ and ∼ 6 ′′ , respectively.
Due to the presence of many bright sources within the field, we performed artifact correction utilizing the IRAC artifact mitigation tool, by following a procedure similar to that in the Spitzer Data Cookbook (https://irsa.ipac.caltech.edu/data/SPITZER/docs/), and additional tools listed within, to produce mosaic maps in each band. The resulting 5.8 µm IRAC image is shown in the upper right panel of Figure 2, while all four IRAC images are shown in Appendix A as Figure 5. Reduction of the MIPS data similarly followed recipe 22 in the Spitzer Data Cookbook using MOPEX (Makovoz et al. 2005). The resulting MIPS 24 µm map of the region is shown in the top left panel of Figure 2.
DSS
The DC 314.8-5.1 spatial extent was delineated by Whittet (2007) based on the opacity spatial distribution seen in the Digitized Sky Survey (DSS) image, shown here as Figure 2, bottom left panel.We note that the maximum cloud core visual extinction, A v , according to Whittet is ≳ 8.5 mag through the center of the core and decreasing towards the outer regions.
Swift XRT & UVOT
Swift Target of Opportunity (ToO) observations of DC 314.8-5.1 were obtained with the Swift X-Ray Telescope (XRT) instrument (Burrows et al. 2000), as well as the Ultra-violet Optical Telescope (UVOT; Roming et al. 2005) filter of the day, in this instance the UVM2-2250 Å band (Proposal ID: 16282, Requester: E. Kosmaczewski). The target was observed for 3 ks on 2021 September 26. Swift XRT observations were taken in photon counting mode. Three Swift XRT images were produced utilizing the Swift XRT data products generator: the entire spectral range from 0.3-10 keV; the soft band from 0.3-2.0 keV; and the hard band from 2-10 keV. The procedure for image creation followed Evans et al. (2020).
The resulting UVOT M2-2250 Å band image of the DC 314.8-5.1 region is shown in the bottom right panel of Figure 2. The XRT maps of the cloud are shown as Figure 3, including the full-band image from 0.3-10 keV (top panel), the soft-band image from 0.3-2.0 keV (bottom-left), and the hard-band image from 2-10 keV (bottom-right). Images have been smoothed to aid in visualization (Joye & Mandel 2003), utilizing a Gaussian profile with a radius of 6 px, and σ = 3 px.
Source detection was performed for each of the three Swift XRT Point Source Catalogue (2SXPS) energy bands (i.e., 0.3-1.0 keV, 1-2 keV, 2-10 keV), as well as for the total energy range (0.3-10 keV), following Evans et al. (2020). No sources were found within the individual narrow bands, and only one low-significance source, denoted hereafter as "S", was detected within the total energy range. The location of the source S, see Table 1, places it at, or just beyond, the periphery of the cloud. The error in the position measurement is 6 ′′ .9, and the XRT source off-axis angle is 5 ′ .8. The source was detected with C = 8 cts including background counts, with average background B ≃ 0.5 cts. The corresponding errors, calculated according to Gehrels (1986), as appropriate in the regime of very low photon statistics (see Evans et al. 2014), are σ_C ≃ 1 + √(C + 0.75) ≃ 4.0 and analogously σ_B ≃ 2.1, leading to a signal-to-noise ratio for the detection of SNR = (C − B)/√(σ_C² + σ_B²) ≈ 1.7. The source "S" can be seen at a low level in the hard-band image but it is not distinguishable in the soft-band image (see bottom panels in Figure 3), which may indicate a hard X-ray spectrum for the source. However, the low photon numbers prevent further characterization of its spectrum. An additional peak can be seen in the lower left just outside the cloud core in the total (top panel) and soft band (bottom left panel) in Figure 3. However, this peak fails to meet σ ≥ 1 and as such is not discussed further here.
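For reference, the Gehrels (1986) approximation and the quoted counts can be reproduced with the short calculation below. The way the two uncertainties are combined into a single signal-to-noise ratio is one common convention and is an assumption here, not an exact transcription of the 2SXPS pipeline.

```python
from math import sqrt

def gehrels_sigma(n):
    """Approximate 1-sigma upper uncertainty for n counts (Gehrels 1986)."""
    return 1.0 + sqrt(n + 0.75)

C, B = 8.0, 0.5                       # total and background counts
sigma_C, sigma_B = gehrels_sigma(C), gehrels_sigma(B)
snr = (C - B) / sqrt(sigma_C**2 + sigma_B**2)
print(round(sigma_C, 1), round(sigma_B, 1), round(snr, 1))   # 4.0 2.1 1.7
```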
The Swift UVOT source extraction was performed utilizing the standard "uvotdetect" routine (Roming et al. 2005). We detected a total of 38 sources to a detection threshold of 5σ. Only two sources (see the bottom left panel of Figure 2) are detected within the extent of the cloud, the brightest being HD 130079. The second source is a star, TYC 9015-926-1, in the northern region of the cloud, see Table 1. It has a Gaia measured parallax of 2.167 ± 0.015 mas, corresponding to a Bailer-Jones et al. (2021) distance range of 453.4 − 459.7 pc. Therefore, this object is a (somewhat) nearby star located behind the cloud.
IDENTIFICATION OF YSOS
The evolutionary progression of the infrared emission of YSOs is such that the wavelength of the blackbody peak emission migrates towards the near-infrared as the YSO ages, while the far-infrared excess (due to the surrounding dusty disks and/or envelopes) decreases (see, e.g., Andre et al. 2000; Greene et al. 1994). This results in the population of the youngest YSOs, i.e., Class 0 sources, emitting almost exclusively in the sub-mm/far-infrared range. On the other hand, Class I-III sources, which emit efficiently at shorter wavelengths, if present, should manifest in the analyzed mid-infrared surveys (see in this context Gutermuth et al. 2009; Evans et al. 2009). Class I sources are deeply-embedded protostars with infalling, dense envelopes, characterized by a rising or flat mid-infrared spectrum. Class II sources denote YSOs that are pre-main sequence stars with gas-rich optically-thick disks and on-going accretion onto the central star, and a decreasing mid-infrared spectrum. Finally, Class III YSOs have gas-poor disks and very little infrared excess due to dust, and are notoriously difficult to separate from young main-sequence stars. We also discuss here the so-called "transition disk" objects, which are YSOs without an inner disk but containing an optically thick outer disk (Andre et al. 2000).
Class 0 Source Limits
The Point Source Catalog for IRAS identified no candidate sources within our system down to luminosity levels 6.7 × 10 32 erg s −1 (∼ 0.18 L ⊙ ) at 60 and 100 µm, see Section 2.2. This luminosity level indicates that our study is sensitive to YSO Class 0 sources down to core masses ∼ 0.1 M ⊙ (Dunham & Vorobyov 2012). The lack of any source detections associated with the cloud, with the exception of HD 130079, strongly suggests the absence of YSO Class 0 sources within the system.
Spitzer IRAC & 2MASS Source Examinations
The Spitzer IRAC mapping data, for the observed frame time of 12 seconds, effectively probes down to flux levels of 52 µJy at 8 µm, and 6.1 µJy at 3.6 µm with a spatial resolution of ∼ 2 ′′ (Fazio et al. 2004).At the distance of the cloud (432 pc), these limits correspond to monochromatic luminosities of ≃ 4.4 × 10 29 erg s −1 and ≃ 1.2 × 10 29 erg s −1 , respectively.The observed 3.6 µm range (3.1-3.9 µm), in particular, is rather close to the peak of the blackbody emission component in Class I-III sources, and as such the latter value should serve as a good proxy for the limiting luminosity of YSOs candidates, with the bolometric correction of the order of a few at most (see, e.g., Lada 1987).In other words, in the Spitzer IRAC mapping data, we are sensitive to YSO Class I-III luminosities as low as ∼ 10 −4 L ⊙ , so that any young star with a core mass down to 0.01M ⊙ (see Dunham & Vorobyov 2012), should easily be detected.
We performed a search with a radius of 5 ′ around the central position of the cloud, see Table 1, with the Spitzer Enhanced Imaging Products (SEIP) source list in order to identify potential YSOs.We restricted our sample by a signal-to-noise SNR > 5 in all four IRAC bands, excluding unresolved extended sources and excluding sources with only upper limits in any band (sources detected in only some bands are considered in the follow-up selection).This returned a total of 1,319 sources within the sampled region.
First, we applied the color criteria from Gutermuth et al. (2009), Appendix A.1 therein, to the sample of 1,319 sources.This removed 132 star-forming galaxies (SFGs) and 256 active galactic nuclei (AGN) resulting in a sample of 924 potential YSOs.None of these sources met the criteria to be identified as a Class I or Class II YSO as defined in Gutermuth et al. (2009).
Second, we investigated sources with lower significant detections, following the cuts presented in Winston et al. (2019), Appendix A.2 (Equations 17-20).Specifically, those sources that are lacking robust (SNR < 5) detections in IRAC 5.8 µm or IRAC 8.0 µm, but still show SNR > 5 detections in IRAC 3.6 µm and IRAC 4.5 µm, with the requirement that they also have significant (σ < 0.1 mag) detections in 2MASS bands H and K s .However, we find no sources within this sample that meet the color criteria needed for a YSO detection as defined in Winston et al. (2019).
Further, we searched for deeply-embedded protostars using the so-called "Phase 3" cuts adopted by Gutermuth et al. (2009), Appendix A.3 therein. We included sources from SEIP that lack detections in the IRAC 5.8 µm or IRAC 8.0 µm bands, are bright in the MIPS 24 µm band, and have strong (SNR > 5) detections in IRAC 3.6 µm and IRAC 4.5 µm. This selection returned 164 sources, including some previously flagged as AGN based on the IRAC color cuts. However, only three sources met the MIPS 24 µm brightness criterion of [24] < 7 mag, and none of these three satisfied the remaining criteria to be identified as a YSO; as such, this selection returned no candidate sources.
Finally, we comment here on the remaining sources not classified in the first Gutermuth et al. (2009) cuts adopted here; see the right panel in the Appendix C Figure 7. Sources that fall in this range are often consistent with Class III sources (see Dunham et al. 2015; Anderson et al. 2022 for further discussion). However, these regions of infrared color space are heavily contaminated by AGB-type background stars: Dunham et al. (2015) estimated that contamination of Class III-type sources by background stars ranges from 25-90% in their sample. In order to disentangle background stars from true Class III sources, we further inspected these remaining 923 sources with Gaia, below.
Gaia Parallax Measurements
We inspected the remaining 923 IRAC sources unidentified by the Gutermuth et al. (2009) cuts, which are likely background stars or Class III candidates (see Section 3.2), against the Gaia source catalog (Gaia Collaboration et al. 2021). Gaia parallaxes provide precise measurements with a spatial resolution of 0″.4, and so are capable of separating individual objects even when clustered on the sky.
The source "C1", not identified above as a bona fide YSO, but previously identified as a potential candidate by Whittet (2007), has a Gaia measured parallax of 0.073±0.096mas.The Bailer-Jones et al. ( 2021) catalog marks the distance to this star as 6.67 +3.75 −2.25 kpc, which is far beyond DC 314.8-5.1.
We looked at the sample of Gaia sources within the same region inspected by IRAC, out to a radius of 5′ around the central position of the cloud (see Table 1). We additionally constrained the list of Gaia sources to those having a parallax measurement consistent (within the error bounds) with the parallax of HD 130079, i.e., 2.2981 ± 0.0194 mas. To cross-check with the 923 IRAC sources, we searched around each IRAC source for any "good" Gaia source within the IRAC spatial resolution of ∼ 2″ (Fazio et al. 2004). This resulted in a sample of 27 potential Class III/field star sources. We further cross-checked this sample against the Bailer-Jones et al. (2021) catalog for measured distances consistent with that of HD 130079 (∼ 427-435 pc). We identified two sources, SSTSL2 J144829.39-651448.5 and SSTSL2 J144907.95-651756.4, corresponding to the Gaia DR3 sources with ids 5849036757534689536 and 5849041288680373504, with appropriate distance measurements; these are denoted hereafter as "C2" and "C3", per Table 1.
Pre-Main Sequence Stars in Swift XRT Data
PMSs are established X-ray emitters, with X-ray luminosities 10-10,000 times above the levels characterizing the old Galactic disk population (e.g., Preibisch & Feigelson 2005; Tsuboi et al. 2014). They are routinely detected with the Chandra X-ray Observatory in molecular clouds because their keV photons penetrate heavy extinction (e.g., Wang et al. 2009; Kuhn et al. 2010). The bright members of PMS populations revealed by such studies are typically well modeled assuming a plasma in collisional ionization equilibrium, using the Astrophysical Plasma Emission Code (APEC; Smith et al. 2001), with temperatures of the order of a few-to-several keV, low metal abundances, and 0.5-10 keV luminosities of the order of 10^30 erg s^-1.
Here, we compare the expected X-ray levels of PMSs in DC 314.8-5.1 with the X-ray luminosity of the source "S" detected in the Swift XRT pointing. We calculate the Galactic hydrogen column density in the direction of DC 314.8-5.1 utilizing the NHtot tool provided through HEASARC (HI4PI Collaboration et al. 2016). Given that the source is only at a distance of 432 pc, the resulting value of N_H,Gal ≃ 3.2 × 10^21 cm^-2 is likely an overestimate of the real column density along the line of sight. Nonetheless, in all the flux estimates below we conservatively adopt the value N_H,Gal ≃ 3 × 10^21 cm^-2.
In addition to the Galactic diffuse ISM contribution, we take into account the intrinsic absorption within the cloud. Assuming the cloud's mean gas density of 10^4 cm^-3 and a spatial scale of 0.3 pc, the corresponding column density is estimated at the level of N_H,int ≃ 10^22 cm^-2. Again, this should be considered an upper limit on the intrinsic absorption.
During the first few Myr, the X-ray luminosity of a PMS remains approximately constant; it declines with time at later evolutionary stages, and more rapidly for higher stellar masses (Getman et al. 2022). Only a small fraction of the < 2 M_⊙ systems appear brighter than 3 × 10^30 erg s^-1 in X-rays, and those cases are believed to represent super- and mega-flaring states (e.g., Getman & Feigelson 2021). More than 75% of the more massive (2−100 M_⊙) systems, on the other hand, exceed 3 × 10^30 erg s^-1. As such, at the Swift-XRT luminosity level of L_0.3-10 keV ≃ 4.9 +2.1 −1.7 × 10^30 erg s^-1, we are unlikely to detect single PMSs other than the brightest super/mega-flaring sources.
The WISE colors of J144818.35-652144.7 (W1−W2 = −0.31 and W2−W3 ≥ 2.37), on the other hand, are consistent with a regular star-forming galaxy (see Wright et al. 2010). One of these two sources is the likely counterpart of the Swift XRT source "S", and neither represents a viable PMS candidate.
Furthermore, ROSAT J144833.7-651738, detected in a 448.83 s ROSAT exposure and discussed above in Section 2.3, is not detected in the 3 ks Swift-XRT observation (see Figure 3). A comparison of the ROSAT and Swift-XRT maps is shown in Figure 6 of Appendix B. This non-detection may indicate that ROSAT J144833.7-651738 is an artifact of the 2RXS analysis, or potentially a transient/variable source.
DISCUSSION
YSOs can be separated from background stars by the presence of infrared excesses, primarily in the 1-30 µm range (Evans et al. 2009). As such, if present and related to the cloud, YSOs may appear as optical/infrared point sources with parallax distances similar to the distance of DC 314.8-5.1. In this context, we investigated the point sources located within the spatial extent of DC 314.8-5.1 that survived the selection cuts applied following Winston et al. (2019) and Gutermuth et al. (2009) and that have appropriate Gaia parallaxes and Bailer-Jones et al. (2021) distances. Two sources were identified in this way as potential Class III candidates: "C2" and "C3." Dunham et al. (2015) proposed the [3.6] − [24] ≤ 1.5 color cut for separating likely Class III sources from AGB field stars. Utilizing this cut, and the 3σ upper limits for the 24 µm fluxes, we found color values of −0.66 and 2.35 for C2 and C3, respectively. We can, however, rule out the likelihood of "C3" being a Class III YSO based on its location near the outer edge of the core region. On the outskirts of the cloud we expect a lower level of extinction, ∼ 2-3 mag (see Kosmaczewski et al. 2022; Whittet 2007), and so for a source at the Class III evolutionary stage we would expect to see some evidence of an optical/near-IR reflection nebula (Connelley, Reipurth, & Tokunaga 2007; van den Bergh & Herbst 1975). The lack of a detectable reflection nebula for "C2," on the other hand, is unsurprising, as "C2" is located near the central region of the core, where extinction levels exceed 8.5 mag (Whittet 2007). Yet this is also the coldest region of the cloud (see the top panel of Figure 1), with a Planck-measured temperature of ∼ 15 K. A Class III source would be expected to produce significant heating of the surrounding dust, and no such heating is seen in DC 314.8-5.1 in the available observations (Strom, Strom, & Grasdalen 1975). For these reasons, we consider the identification of "C2" as a Class III YSO to be unlikely. However, detailed spectral modeling combined with deeper X-ray measurements would be necessary to substantiate this claim (see Dunham et al. 2015).
The lack of any robust YSO detections further supports the pre-stellar state of DC 314.8-5.1, as discussed in Whittet (2007) and Kosmaczewski et al. (2022). However, younger (Class 0) YSOs and sources that are still heavily embedded within their cores may not be detectable through mid-infrared excesses (Evans et al. 2009; Karska et al. 2018). In order to exclude the presence of such objects, deep far-infrared, CO, and/or X-ray observations are needed (Grosso et al. 2000).
The short Swift-XRT exposure we have obtained is sensitive to sources within the cloud down to an unabsorbed 0.5-10 keV luminosity of ≲ 10^31 erg s^-1. At this level, only the brightest PMSs could be detected and, among the low-mass (< 2 M_⊙) systems, only young flaring objects would be seen. A much deeper X-ray imaging observation would be needed to constrain the potential PMS population in DC 314.8-5.1 (Kuhn et al. 2010; Getman et al. 2022).
The spectral energy distribution (SED) of DC 314.8-5.1, over the spectral range from microwaves up to the UV, is composed of three main components: the thermal emission of the dominant cold dust, the emission of warm dust photo-ionized and heated by the field star HD 130079, and the photospheric emission of HD 130079 itself. These are presented in Figure 4.
We consider the cloud components and the HD 130079 starlight separately (top and bottom panels of Figure 4, respectively). For the microwave segment of the cloud SED, dominated by cold dust, we take aperture fluxes from the Second Planck Catalogue of Compact Sources (PCCS2E; Planck Collaboration et al. 2016b) at 143, 217, 353, 545, and 857 GHz. In the infrared range, dominated by the radiative output of the warm dust in the cloud's regions adjacent to the star, we use fluxes from the IRAS Point Source Catalog v.2.1 at 12, 25, 60, and 100 µm, and the WISE fluxes at 3.4, 4.6, 12, and 22 µm, all corresponding to the infrared source IRAS 14451-6502 associated with HD 130079 (see Section 2.2). Indeed, as seen in Figure 1, the IRAS and (to a lesser extent) the Planck images display a maximum shifted away from the center of DC 314.8-5.1. The different apertures and resolutions of the Planck (5′), IRAS (1′), and WISE (6-12″) instruments could be important, though. These differences are particularly relevant in the case of the IRAS vs. WISE comparison, and could explain the lower WISE fluxes when compared to the IRAS photometry within the overlapping 12-25 µm wavelength range.
We have calculated several model curves for the cold thermal component using a modified blackbody, B_ν(T) × (ν/ν_0)^β. The best-matching model corresponds to a temperature of T = 14 K, consistent with the PGCC model fit, and a spectral index of β = 1.5 (see the dark red solid curve in Figure 4; cf. Section 2.1). The far/mid-infrared (roughly 7-70 µm) emission of the system is a complex superposition of continua from multi-temperature dust, molecular lines, and PAH features, all generally decreasing in intensity with distance from the photoionizing star HD 130079 (see Kosmaczewski et al. 2022). As a basic representation of the entire spectral component, we adopt the simplest model, a single modified blackbody, this time with a temperature of 160 K, a spectral index of β = 2.0, and a normalization adjusted to match the 12-25 µm IRAS fluxes (see the dark red dashed curve in Figure 4).
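For orientation, a minimal numerical sketch of this two-component model is given below (in Python). The normalizations and the reference frequency ν_0 = 1 THz are placeholders rather than fitted values; the sketch only illustrates how the cold and warm modified blackbodies are evaluated and superposed.

```python
import numpy as np

# Physical constants in cgs units
h = 6.626e-27    # Planck constant [erg s]
k_B = 1.381e-16  # Boltzmann constant [erg K^-1]
c = 2.998e10     # speed of light [cm s^-1]

def modified_blackbody(nu, T, beta, norm, nu0=1.0e12):
    """norm * B_nu(T) * (nu/nu0)^beta, with nu in Hz and T in K.
    nu0 = 1 THz is an assumed reference frequency; norm is a free
    normalization to be matched to the photometry."""
    B_nu = (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (k_B * T))
    return norm * B_nu * (nu / nu0) ** beta

# Cold (14 K, beta = 1.5) and warm (160 K, beta = 2.0) components;
# the normalizations below are illustrative placeholders.
nu = np.logspace(11, 14, 300)                        # 0.1-100 THz
cold = modified_blackbody(nu, T=14.0, beta=1.5, norm=1.0)
warm = modified_blackbody(nu, T=160.0, beta=2.0, norm=1.0e-4)
total = cold + warm                                  # the superposition shown in Figure 4
```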
While the cold component's temperature is precisely constrained by the multiwavelength (143-857 GHz) Planck data in conjunction with the IRAS 100 µm photometry, the warm component's temperature lacks such precision. As previously emphasized, using a single modified blackbody to approximate the hot dust emission in the system is a basic, zeroth-order approximation. It is worth noting that the IRAS 60 µm flux, which exceeds both the cold (14 K) and warm (160 K) blackbody components, indicates the presence of dust with intermediate temperatures in the system. As such, this model is meant to be primarily illustrative.
For the SED representing the HD 130079 starlight, the near-infrared (JHKL) and optical (UBV) fluxes are taken directly from the compilation by Whittet (2007, see Table 1 therein), with the addition of the G-band flux from Gaia EDR3 (Gaia Collaboration et al. 2021) and the UV 2250 Å flux measured from the newly obtained Swift UVOT observations (see Section 2.7). The photospheric emission of HD 130079 is modeled here as a simple optically-thick blackbody with temperature T_⋆ = 10,500 K, such that the bolometric stellar luminosity is L_⋆ = 4π R_⋆^2 σ_SB T_⋆^4 ≃ 3 × 10^35 erg s^-1, for a stellar radius R_⋆ = 2.7 R_⊙ and a distance of 432 pc. This intrinsic emission (denoted in Figure 4 by the dark blue dot-dashed curve) is then reduced by interstellar reddening, using the Cardelli et al. (1989) empirical extinction law with the coefficients given in their equations 2-5, and values of E_B−V = 0.395 and R_V = 4.5 (in excess of the average ISM value of 3.1) adopted from Whittet (2007). The reddened starlight (the solid dark blue curve in Figure 4) matches the near-infrared-to-UV fluxes of the star, including the 3.4 and 4.6 µm WISE fluxes and the Swift UVOT 2250 Å flux, even though no stellar photospheric reddening was included in this simple model.
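A short sketch of the corresponding photospheric luminosity and of the reddening scaling is given below. Only the overall dimming relation is shown; the per-band A_λ/A_V ratios of the Cardelli et al. (1989) law are not reproduced here and would need to be supplied.

```python
import numpy as np

sigma_SB = 5.670e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
R_sun = 6.957e10      # solar radius [cm]
pc = 3.086e18         # parsec [cm]

T_star = 10_500.0                 # K
R_star = 2.7 * R_sun              # cm
D = 432.0 * pc                    # cm

L_star = 4.0 * np.pi * R_star**2 * sigma_SB * T_star**4
print(f"L_star = {L_star:.2e} erg/s")   # ~3e35 erg/s, as quoted in the text

# Interstellar reddening: A_V = R_V * E(B-V); the wavelength dependence
# A_lambda / A_V follows Cardelli et al. (1989) and is supplied externally.
E_BV, R_V = 0.395, 4.5
A_V = R_V * E_BV                         # ~1.8 mag

def redden(flux, A_lambda_over_A_V):
    """Dim an intrinsic flux by the extinction in a given band."""
    return flux * 10.0 ** (-0.4 * A_lambda_over_A_V * A_V)
```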
The mass of the Planck source PGCC G314.77-5.14 has been estimated in Planck Collaboration et al. (2016a) as ≃ 12 M_⊙, based on the measured 857 GHz flux density F_ν integrated over the solid angle Ω = πθ²/4 (where θ is the geometric mean of the major and minor FWHM), which is effectively half the provided PGCC flux, with the dust opacity κ_ν = 0.1 (ν/1 THz)² cm² g^-1 adopted from Beckwith et al. (1990). Meanwhile, Whittet (2007) estimated the mass of the core of the globule to be ≳ 50 M_⊙ when updated for the 432 pc distance. This discrepancy might not be significant, keeping in mind that the Planck estimate provides upper 2σ (95%) and 3σ (99%) confidence limits of 68 M_⊙ and 115 M_⊙, respectively. These limits arise solely from uncertainties in the flux and distance estimates, and do not account for the uncertainty in the dust opacity function κ_ν (see in this context the discussion in Beckwith et al. 1990, specifically Section IIIe, and also D'Alessio et al. 2001).
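The single-temperature estimate underlying this number can be sketched as follows, assuming the standard relation M = F_ν D² / (κ_ν B_ν(T_dust)). The 857 GHz flux density entered below is a placeholder rather than the PGCC catalog value, so the printed mass is illustrative only.

```python
import numpy as np

h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10     # cgs
M_sun, pc = 1.989e33, 3.086e18                  # g, cm

def planck_nu(nu, T):
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (k_B * T))

def cloud_mass(F_nu_Jy, nu_GHz, T_dust, D_pc):
    """M = F_nu * D^2 / (kappa_nu * B_nu(T_dust)) in solar masses,
    with kappa_nu = 0.1 (nu / 1 THz)^2 cm^2 g^-1 (Beckwith et al. 1990)."""
    nu = nu_GHz * 1.0e9
    F_nu = F_nu_Jy * 1.0e-23                    # Jy -> erg s^-1 cm^-2 Hz^-1
    kappa_nu = 0.1 * (nu / 1.0e12) ** 2
    D = D_pc * pc
    return F_nu * D**2 / (kappa_nu * planck_nu(nu, T_dust)) / M_sun

# Placeholder flux of 50 Jy at 857 GHz, T_dust = 14 K, D = 432 pc
print(cloud_mass(F_nu_Jy=50.0, nu_GHz=857.0, T_dust=14.0, D_pc=432.0))
```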
The mass of the cloud, as an isolated dark cloud at high Galactic latitude, can also be estimated from the excess absorption seen in X-rays toward the cloud (see in this context Sofue & Kataoka 2016), and from the high-energy γ-ray emission as measured by Fermi's Large Area Telescope (LAT; see Mizuno et al. 2022, and references therein). In the former case, much deeper X-ray observations would be needed to estimate the absorbing hydrogen column density across the cloud. Concerning the latter, we note that in the Fermi High-Latitude Extended Sources Catalog (FHES) by Ackermann et al. (2018), the integrated 1 GeV-1 TeV fluxes of resolved high-confidence sources in the LAT data extend down to a few/several × 10^-10 cm^-2 s^-1. Further, those which appear point-like lie about an order of magnitude lower, with a median of 2.5 × 10^-10 cm^-2 s^-1. An estimate of the flux expected from DC 314.8-5.1 due to interactions with high-energy CRs (assuming no CR overdensity with respect to the CR background) follows from the cloud mass and distance scaling of Gabici (2013). For this estimate, we use M = 160 M_⊙, corresponding to the total mass of the cloud (Whittet 2007, updated to the distance D = 432 pc), and E_γ = 1 GeV. This level of emission may be detected in dedicated Fermi-LAT studies, leading to a robust estimate of the mass of this pre-stellar, condensed dark cloud.
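As a rough numerical check, the following sketch evaluates the passive-cloud scaling for the quoted mass and distance. The functional form follows Gabici (2013), but the numerical coefficient used here is an assumption quoted from memory and should be verified against that reference before any quantitative use.

```python
def cr_gamma_flux(M_solar, D_kpc, E_gamma_GeV=1.0):
    """Integral gamma-ray flux from a passive cloud immersed in the local
    cosmic-ray background, F(>E) ~ 2.5e-13 (E/GeV)^-1.7 (M/Msun) (D/kpc)^-2
    in cm^-2 s^-1; the prefactor is an assumed value."""
    return 2.5e-13 * E_gamma_GeV ** (-1.7) * M_solar / D_kpc**2

# M = 160 Msun, D = 0.432 kpc, E_gamma = 1 GeV
print(cr_gamma_flux(160.0, 0.432))   # ~2e-10 cm^-2 s^-1, near the FHES point-source median
```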
CONCLUSIONS
In this paper we have discussed the multi-wavelength properties of the dark globule DC 314.8-5.1, based on dedicated observations with the Spitzer Space Telescope and the Swift-XRT and UVOT instruments, supplemented by archival Planck, IRAS, WISE, 2MASS, and Gaia data. This investigation of the characteristics of the system over a wide range of the electromagnetic spectrum has led to the following conclusions:
1. We have further supported the conclusion that DC 314.8-5.1 is a pre-stellar core, with no conclusive Class I-III YSO candidates present within the extent of the system down to luminosities as low as ∼ 10^-4 L_⊙, translating to a stellar core mass down to 0.01 M_⊙ (see Dunham & Vorobyov 2012). We do, however, retain one possible Class III YSO candidate ("C2," see Table 1), albeit an unlikely one given the lack of heating seen in that region of DC 314.8-5.1. Furthermore, we exclude any younger, Class 0 YSO candidates down to luminosities of ∼ 0.18 L_⊙, translating to a core mass of ∼ 0.1 M_⊙ (see in this context Barsony 1997).
2. With the Swift-XRT observations, we probed for any potential PMS population down to a luminosity level of ≲ 10^31 erg s^-1. At this level we would detect a typical PMS of mass ≥ 2 M_⊙, but only the brightest (flaring) low-mass (< 2 M_⊙) PMSs. Deeper observations would be needed to rule out the presence of lower-mass objects. Furthermore, CO observations of this system could also test for the presence of the youngest, Class 0, protostars (Kirk, Ward-Thompson, & André 2005, and references therein).
3. We investigated the SED of the DC 314.8-5.1 system as well as of the nearby illuminator HD 130079. Our analysis confirmed the presence of warm dust, with temperatures ≳ 100 K, in addition to the dominant 14 K dust component. This warm component manifests itself in the IRAS photometry, particularly within the 12-25 µm range.
4. We comment on the range of mass estimates for DC 314.8-5.1, from ≃ 12 M_⊙ based on the Planck photometry up to ≳ 50 M_⊙ for the core of the globule, following from its visual extinction characteristics. We point out that the discrepancy may be due to flux-measurement errors, differences in methodology, and the uncertainties of the opacity model for this particular system. We also note in this context that the cloud should be detectable in high-energy γ-rays with Fermi-LAT, given the estimate of ∼ 160 M_⊙ for the total mass of the globule.
Hence, DC 314.8-5.1 remains a pre-stellar cloud core. This makes it an ideal candidate for deeper observations, particularly at high-energy X-ray and γ-ray energies.

This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. We are grateful to Timo Prusti for advice on Gaia data.
C. IRAC-2MASS COLOR CUTS
We utilize the criteria presented in Gutermuth et al. (2009) and Winston et al. (2019) to select YSO candidates in DC 314.8-5.1 from the Spitzer IRAC mapping data, as outlined in Section 3.2. The top left panel of Figure 7 displays the full extent of the Spitzer sample in the region, comprising 1,319 sources within 5′ of the center of DC 314.8-5.1, together with the first star-forming galaxy selection cut. Figure 7 further displays the subsequent selections applied to the sample to remove contaminating sources, beginning with the removal of star-forming galaxies (top panels), followed by AGN, and finally unresolved PAH and shock emission (bottom left and right panels, respectively). The final selection, identifying Class I and Class II YSOs, is shown in Figure 8. Sources that fall outside the YSO selection regions (923 sources) remain unidentified in this selection and are assumed to fall under the Class III or field star category. Since such remaining sources are heavily contaminated by AGB stars in similar samples (see Dunham et al. 2015), we further cross-correlated them with Gaia-measured distances, as discussed in Sections 3.3 and 4.
Figure 1 .
Figure 1. DC 314.8-5.1 as seen by Planck at 353 GHz (top panel) and IRAS at 60 µm (bottom panel). The white dashed ellipse (with radii of ∼ 7′ × 5′) denotes the globule, with the central position marked by a black "x". The star HD 130079 is marked with a black open circle to the east of the cloud center.

presence of the reflection nebula through a survey of southern globules with the Cerro Tololo Observatory. van den Bergh & Herbst (1975) further characterized the host cloud through absorption around the reflection nebulae, determined from the density-of-field-stars method. Later, Bourke et al. (1995b) used NH3 observations to determine the physical characteristics (density, temperature, mass) of isolated dark clouds, including DC 314.8-5.1. The parallax value for HD 130079 in Gaia Early Data Release 3 (EDR3; Gaia Collaboration et al. 2021) is 2.2981 ± 0.0194 mas. Bailer-Jones et al. (2021) used the Gaia data and additional analyses to estimate the distance to the star as 431.7 +3.2 −4.3 pc. Using this value as the distance to the cloud, the cloud's ∼ 7′ × 5′ radial angular dimensions translate to projected linear sizes of 0.9 pc × 0.6 pc, while the mean atomic hydrogen core
Figure 2 .
Figure 2. DC 314.8-5.1 region as seen at different wavelengths: (top left) Spitzer MIPS 24 µm log-scaled intensity mosaic map; (top right) Spitzer IRAC 5.8 µm log-scaled intensity mosaic map; (bottom left) DSS red linear-scaled image (700 nm); (bottom right) Swift UVOT M2 2250 Å band log-scaled map. In each panel, the white dashed ellipse denotes the globule, with the central position marked by a white "x". The green ellipses mark UVOT-detected sources, with HD 130079 marked to the left and TYC 9015-926-1 marked near the northern boundary of the globule. "C1" marks the YSO candidate identified by Whittet (2007). "C2" and "C3" mark the potential YSO candidates identified in this work. The X-ray source detected with Swift-XRT is indicated by "S" with a cross.
DC 314.8-5.1 is listed in the Planck Catalogue of Galactic Cold Clumps (PGCC; Planck Collaboration et al. 2016a) as PGCC G314.77-5.14. The Planck team created cold residual maps by subtracting a warm component from individual maps (at given frequencies), as described in Planck Collaboration et al. (2011). As such, cold sources appear as positive departures, signifying lower temperatures than the surrounding background. The modeling of the cloud on the Planck 857 GHz cold residual map with an elliptical Gaussian returns FWHMs along the major and minor axes of
Figure 3 .
Figure 3. (top) Full-band 0.3-10 keV Swift XRT image of the DC 314.8-5.1 region, smoothed with a Gaussian of radius 6 pixels. The field star HD 130079 is marked to the left in each image. "C1" marks the YSO candidate identified by Whittet (2007). "C2" and "C3" mark the potential YSO candidates identified in this work. The colorbar indicates the linear intensity scale of the smoothed (averaged) counts. (bottom left) Soft-band 0.3-2.0 keV and (bottom right) hard-band 2-10 keV images.
Figure 4 .
Figure 4. (top) SED of the DC 314.8-5.1 system, based on observations with Planck (filled black circles), IRAS (red crosses), and WISE (open red circles). Dark red solid and dashed curves represent modified blackbody models for the emission of cold (14 K) and warm (160 K) dust within or on the surface of the cloud, respectively; the black solid curve denotes the superposition of the two. (bottom) SED of HD 130079 from ground-based telescopes and the Gaia survey (small blue stars), WISE (open red circles), and the Swift UVOT (big blue star). The dark blue dot-dashed curve corresponds to the intrinsic emission of the field star HD 130079, modeled as a blackbody with a temperature of 10,500 K and a total luminosity of 3 × 10^35 erg s^-1; the dark blue solid curve illustrates this intrinsic emission subjected to interstellar reddening. See § 4 for description.
based on observations made with the Spitzer Space Telescope, obtained from the NASA/ IPAC Infrared Science Archive, both of which are operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with the National Aeronautics and Space Administration.
data products from the Two Micron All Sky Survey, a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center, funded by the National Aeronautics and Space Administration and the National Science Foundation.The Digitized Sky Survey was produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166.The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope.The plates were processed into the present compressed digital form with the permission of these institutions.
resemble the resolution of the ROSAT observation using a boxcar with a width of 2r + 1 pixels and radius r = 3 pixels (Joye & Mandel 2003).
Figure 6 .
Figure 6. (top left) Spitzer MIPS 24 µm image of the DC 314.8-5.1 region. The color bar spans from a minimum of 23.5 to 100 MJy/sr on a log scale. White contours in the following panels represent MIPS emission, with levels set at 24.1, 25.05, and 26 MJy/sr. (top right) ROSAT full-band 0.1-2.4 keV image of the same region, with the Spitzer MIPS contours superimposed. The color bar shows a range of 0.6-2.5 smoothed counts on a square-root scale with Gaussian smoothing; see Section B. (bottom) Swift XRT hard-band 2-10 keV and soft-band 0.3-2.0 keV images of the same region (left and right, respectively), smoothed with a boxcar kernel; see Section B. The Spitzer MIPS contours are superimposed in white for reference. The color bar shows a range of 0-0.05 smoothed counts on a linear scale.
Figure 7 .Figure 8 .
Figure 7. IRAC color-color diagram with selection cuts from Gutermuth et al. (2009). (top left) Total of 1,319 sources from the SEIP query within a 5′ radius of the optical center of DC 314.8-5.1, overlaid with the first selection cut; red dashed lines show the removal of star-forming galaxies (SFG). (top right) Second SFG selection cut; removed sources are marked as red diamonds. (bottom left) AGN selection cut, with AGN-like sources marked in blue. (bottom right) Selection cut to remove Galactic-scale unresolved shocks and unresolved PAH emission sources; PAH emission sources are marked with green pentagons, and no unresolved shock sources were selected.
Table 1 .
Source Associations
| 10,416 | 2022-09-06T00:00:00.000 | [ "Physics" ] |
Levenberg-Marquardt Algorithm for Mackey-Glass Chaotic Time Series Prediction
For decades, Mackey-Glass chaotic time series prediction has attracted increasing attention. When a multilayer perceptron is used to predict the Mackey-Glass chaotic time series, the task reduces to minimizing a loss function. As is well known, the convergence of the loss function is rapid at the beginning of the learning process, but becomes very slow when the parameters approach a minimum. To overcome these problems, we introduce the Levenberg-Marquardt algorithm (LMA). First, a brief introduction is given to the multilayer perceptron, including its structure and the function-approximation setting. Second, we introduce the LMA and discuss how to implement it. Last, an illustrative example is carried out to show the prediction efficiency of the LMA. Simulations show that the LMA gives more accurate predictions than the gradient descent method.
Introduction
The Mackey-Glass chaotic time series is generated by the following nonlinear time-delay differential equation:

dx(t)/dt = β x(t − τ) / (1 + x(t − τ)^n) − γ x(t),

where β, γ, n, and τ are real numbers. Depending on the values of the parameters, this equation displays a range of periodic and chaotic dynamics. Such a series has some short-range time coherence, but long-term prediction is very difficult.
Originally, Mackey and Glass proposed an equation of this type to illustrate the appearance of complex dynamics in physiological control systems by way of bifurcations in the dynamics. They suggested that many physiological disorders, called dynamical diseases, were characterized by changes in qualitative features of the dynamics. These qualitative changes of physiological dynamics corresponded mathematically to bifurcations in the dynamics of the system. The bifurcations could be induced by changes in the parameters of the system, as might arise from disease or environmental factors such as drugs, or by changes in the structure of the system [1,2]. The Mackey-Glass equation has also had an impact on more rigorous mathematical studies of delay-differential equations. Methods for analyzing some properties of delay differential equations, such as the existence of solutions and the stability of equilibria and periodic solutions, had already been developed [3]. However, the existence of chaotic dynamics in delay-differential equations was unknown. Subsequent studies of delay differential equations with monotonic feedback have provided significant insight into the conditions needed for oscillation and the properties of oscillations [4-6]. For delay differential equations with nonmonotonic feedback, mathematical analysis has proven much more difficult. However, rigorous proofs of chaotic dynamics have been obtained for differential delay equations of the form dx(t)/dt = f(x(t − 1)) for special classes of the feedback function f [7]. Further, although a proof of chaotic dynamics in the Mackey-Glass equation has still not been found, progress continues on understanding the properties of delay differential equations, such as (2), that contain both exponential decay and nonmonotonic delayed feedback [8]. The study of this equation remains a topic of vigorous research.
The Mackey-Glass chaotic time series prediction is a very difficult task. The aim is to predict the future state x(t + Δ) using the current and past values x(t), x(t − 1), ..., x(t − k) (Figure 2). There is by now a substantial literature on Mackey-Glass chaotic time series prediction [9-14]. However, as far as prediction accuracy is concerned, most of the reported results are not ideal.
In this paper, we predict the Mackey-Glass chaotic time series with an MLP. While minimizing the loss function, we use the LMA, which can adjust the convergence speed and achieve good convergence efficiency.
The rest of the paper is organized as follows. In Section 2, we describe the multilayer perceptron. Section 3 introduces the LMA and discusses how to implement it. In Section 4, we give a numerical example to demonstrate the prediction efficiency. Section 5 contains the conclusions and discussion.
Multilayer Perceptrons.
A multilayer perceptron (MLP) is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs. An MLP consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next one. Except for the input nodes, each node is a neuron (or processing element) with a nonlinear activation function. The multilayer perceptron with only one hidden layer is depicted in Figure 1 [15].
The output of the multilayer perceptron described in Figure 1 is

f(x) = Σ_{i=1}^{m} v_i φ(w_i^T x),

and the outputs of the hidden units are

h_i = φ(w_i^T x), i = 1, ..., m,

respectively, where φ(⋅) is the activation function. We adopt the sigmoid function as the activation function,

φ(u) = 1 / (1 + e^{−u}),

and the derivative of the activation function with respect to u is

φ′(u) = e^{−u} / (1 + e^{−u})^2, or equivalently φ′(u) = φ(u)(1 − φ(u)).

The MLP provides a universal method for function approximation and classification [16,17]. In the case of function approximation, we have a number of observed data points (x_1, y_1), (x_2, y_2), ..., (x_N, y_N), which are assumed to be generated by

y_i = f_0(x_i) + ε_i,

where ε_i is noise, usually subject to a Gaussian distribution with zero mean, and f_0(x) is the unknown true generating function.
Given a set of observed data, sometimes called training examples, we search for the parameter vector θ = (v_1, ..., v_m, w_11, ..., w_md)^T, collecting the output weights and the hidden-layer weights, that best approximates the teacher function f_0(x), where ^T denotes matrix transposition. A satisfactory model is often obtained by minimizing the mean square error. One serious problem in minimizing the mean square error is that the convergence of the loss function is rapid at the beginning of the learning process, while the convergence becomes very slow in the region of the minimum [18]. In order to overcome this problem, we introduce the Levenberg-Marquardt algorithm (LMA) in the next section.
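A minimal sketch of this setup (forward pass and mean square error) is given below. The bias terms are omitted and the exact parameterization used in the paper is an assumption, since only the general architecture is specified above.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def mlp_forward(X, W, v):
    """One-hidden-layer perceptron: f(x) = sum_i v_i * sigmoid(w_i . x).
    X has shape (N, d), W has shape (m, d), v has shape (m,)."""
    return sigmoid(X @ W.T) @ v

def mean_square_error(predictions, targets):
    return np.mean((targets - predictions) ** 2)

# Tiny usage example with random data and m = 20 hidden units
rng = np.random.default_rng(0)
X, y = rng.uniform(-1, 1, size=(100, 4)), rng.uniform(0, 1, size=100)
W, v = rng.uniform(-1, 1, size=(20, 4)), rng.uniform(-1, 1, size=20)
print(mean_square_error(mlp_forward(X, W, v), y))
```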
The Levenberg-Marquardt Algorithm.
In mathematics and computing, the Levenberg-Marquardt algorithm (LMA) [18-20], also known as the damped least-squares (DLS) method, is used to solve nonlinear least squares problems. These minimization problems arise especially in least squares curve fitting.
The LMA interpolates between the Gauss-Newton algorithm (GNA) and the gradient descent algorithm (GDA). As far as robustness is concerned, the LMA performs better than the GNA, in the sense that in many cases it finds a solution even when started very far from the minimum. However, for well-behaved functions and reasonable starting parameters, the LMA tends to be somewhat slower than the GNA.
In many real applications involving model fitting, the LMA is a common choice. However, like many other fitting algorithms, the LMA finds only a local minimum, which is not necessarily the global minimum.
The least squares curve fitting problem is described as follows. Instead of the unknown true model, a set of pairs of independent and dependent variables (x_1, y_1), (x_2, y_2), ..., (x_N, y_N) is given. Suppose that f(x, θ) is the approximation model and E(θ) is the loss function, defined as the sum of the squares of the deviations:

E(θ) = Σ_{i=1}^{N} [y_i − f(x_i, θ)]^2.

The task of the curve fitting problem is to minimize the loss function E(θ) [21].
The LMA is an iterative algorithm, and the parameter vector θ is adjusted at each iteration step. Generally speaking, we choose the initial parameters randomly, for example, θ_j ∼ U(−1, 1), j = 1, 2, ..., P, where P is the dimension of the parameter vector θ.
The Taylor expansion of the function f(x_i, θ + Δθ) is

f(x_i, θ + Δθ) ≈ f(x_i, θ) + J_i Δθ,

where J_i = ∂f(x_i, θ)/∂θ. As we know, at the minimum θ* of the loss function E(θ), the gradient of E(θ) with respect to θ is zero. Substituting this expansion into the loss function, we obtain

E(θ + Δθ) ≈ Σ_{i=1}^{N} [y_i − f(x_i, θ) − J_i Δθ]^2.

Taking the derivative with respect to Δθ and setting the result to zero gives

(J^T J) Δθ = J^T (y − f),

where J is the Jacobian matrix whose i-th row equals J_i, and y and f are vectors with i-th components y_i and f(x_i, θ), respectively. This is a set of linear equations which can be solved for Δθ.
Levenberg's contribution is to replace this equation by a "damped version,"

(J^T J + λ I) Δθ = J^T (y − f),

where I is the identity matrix and λ is a damping factor; solving this system gives the increment Δθ to the estimated parameter vector θ.
The damping factor λ is adjusted at each iteration step. If the loss function E(θ) decreases rapidly, a smaller value of λ is used, and the LMA becomes similar to the Gauss-Newton algorithm. If the loss function decreases too slowly, λ can be increased, giving a step closer to the gradient descent direction, since in that regime

Δθ ≈ (1/λ) J^T (y − f).

Therefore, for large values of λ, the step is taken approximately in the direction of the gradient.
During the iteration, if either the length of the calculated step Δθ or the reduction of E(θ) achieved by the latest parameter vector θ + Δθ falls below predefined limits, the iteration stops, and the last parameter vector θ is taken as the final solution.
Levenberg's algorithm has the disadvantage that when the damping factor λ is large, the curvature information contained in J^T J is effectively not used at all. Marquardt provided the insight that each component of the gradient can be scaled according to the curvature, so that there is larger movement along the directions in which the gradient is smaller. This avoids slow convergence in the directions of small gradient. Therefore, Marquardt replaced the identity matrix with the diagonal matrix consisting of the diagonal elements of J^T J, resulting in the Levenberg-Marquardt algorithm [22]:

(J^T J + λ diag(J^T J)) Δθ = J^T (y − f),

and the full LMA iterates this damped update, adjusting λ at each step as described above.
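A single damped update step can be sketched as follows; the logic for accepting or rejecting a step and adapting λ, as well as the model function and its Jacobian, are supplied by the caller.

```python
import numpy as np

def lm_step(theta, x, y, f, jac, lam):
    """One Levenberg-Marquardt update solving
       (J^T J + lam * diag(J^T J)) dtheta = J^T (y - f(x, theta)).
    `f(x, theta)` returns model predictions with shape (N,), and
    `jac(x, theta)` the Jacobian of the predictions w.r.t. theta, shape (N, P)."""
    r = y - f(x, theta)                       # residuals
    J = jac(x, theta)
    A = J.T @ J
    A_damped = A + lam * np.diag(np.diag(A))  # Marquardt's diagonal damping
    dtheta = np.linalg.solve(A_damped, J.T @ r)
    return theta + dtheta
```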
Numerical Simulations
Example 1. We conduct an experiment to show the efficiency of the Levenberg-Marquardt algorithm. We choose a chaotic time series created by the Mackey-Glass delay equation introduced above, with τ = 17. Such a series has some short-range time coherence, but long-term prediction is very difficult. The need to predict such a time series arises, for example, in detecting arrhythmias in heartbeats.
The network is given no information about the generator of the time series and is asked to predict the future of the time series from a few samples of its history. In our example, we train the network to predict the value at time t + Δ from the inputs at times t, t − 6, t − 12, and t − 18, and we adopt Δ = 50.
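A sketch of how such a data set can be generated and arranged is given below. The parameter values β = 0.2, γ = 0.1, n = 10, the initial condition, and the simple Euler integration are common choices assumed here, not necessarily those of the original experiment.

```python
import numpy as np

def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    """Euler integration of dx/dt = beta*x(t-tau)/(1 + x(t-tau)^n) - gamma*x(t)."""
    hist = int(round(tau / dt))
    x = np.empty(n_steps + hist)
    x[:hist + 1] = x0                          # constant history before t = 0
    for t in range(hist, n_steps + hist - 1):
        x_tau = x[t - hist]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau**n) - gamma * x[t])
    return x[hist:]

def make_dataset(x, delta=50, lags=(0, 6, 12, 18)):
    """Inputs x(t), x(t-6), x(t-12), x(t-18); target x(t + delta)."""
    start = max(lags)
    inputs, targets = [], []
    for t in range(start, len(x) - delta):
        inputs.append([x[t - lag] for lag in lags])
        targets.append(x[t + delta])
    return np.asarray(inputs), np.asarray(targets)

series = mackey_glass(4000)
X, y = make_dataset(series)   # training and test pairs can then be sliced from X, y
```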
In the simulation, 3000 training examples and 500 test examples are generated from the Mackey-Glass series. We use a multilayer perceptron of the form described in Section 2, with 20 hidden units, to fit the generated training examples. The initial values of the parameters are selected randomly. The learning curves of the error function and the fitting results of the GDA and the LMA are shown in Figures 3, 4, 5, and 6, respectively.
The learning curves of the GDA and the LMA are shown in Figures 3 and 4, respectively. The training error Σ_{i=1}^{3000} (y_i − f(x_i, θ))^2 of the LMA reaches 0.1, while the final training error of the GDA is more than 90. Furthermore, the final mean test error (1/500) Σ_{i=5001}^{5500} (y_i − f(x_i, θ))^2 = 0.0118 of the LMA is much smaller than 0.2296, the final test error of the GDA. As far as the fitting quality is concerned, the performance of the LMA is much better than that of the GDA, as is evident from Figures 5 and 6.
All of this suggests that, when predicting the Mackey-Glass chaotic time series, the performance of the LMA is very good; it effectively overcomes the difficulties that may arise with the GDA.
Conclusions and Discussions
In this paper, we discussed the application of the Levenberg-Marquardt algorithm to Mackey-Glass chaotic time series prediction. We used a multilayer perceptron with 20 hidden units to approximate and predict the Mackey-Glass chaotic time series. In the process of minimizing the error function, we adopted the Levenberg-Marquardt algorithm. If the reduction of E(θ) is rapid, a smaller value of the damping factor λ can be used, bringing the algorithm closer to the Gauss-Newton algorithm, whereas if an iteration gives insufficient reduction in the residual, λ can be increased, giving a step closer to the gradient descent direction. In this paper, the learning mode is batch. Finally, we demonstrated the performance of the LMA: simulations show that the LMA achieves much better prediction efficiency than the gradient descent method.
Figure 3 :
Figure 3: The GDA learning curve of the error function.
Figure 4 :
Figure 4: The LMA learning curve of the error function.
| 2,780.4 | 2014-11-11T00:00:00.000 | [ "Computer Science", "Physics" ] |
Leveraging coreference to identify arms in medical abstracts: An experimental study
Performing systematic reviews is a critical yet manual, labor-intensive step in evidence-based medicine. Automating systematic reviews is an active area of research, requiring innovations in machine learning and computational linguistics. We examine how coreference resolution can aid in identifying the arms of a study, an often overlooked piece of information needed to synthesize the results in a systematic review. A classification model 1 that performs better with the coreference features supports the intuition that coreference is able to capture the discourse salience of arms. We note that control arms do not benefit as much from these features.
Introduction
Evidence-based medicine (EBM) is a paradigm that seeks to inform medical practitioners of the optimal treatment, based on the totality of the available evidence (i.e., the results of all relevant clinical trials). To this end, teams of medical experts often conduct systematic reviews, which synthesize all published medical literature pertaining to a specific clinical question. The first step in a systematic review is to formulate the research question to be investigated, and then find all of the relevant citations. Abstracts and then full texts are screened to exclude irrelevant trials. Once a set of trials pertinent to the research question is identified (typically 10-20 trials), key pieces of information are extracted from each trial. This information generally consists of the patient Population under study, the Intervention(s) being tested, the Comparison and the Outcomes (abbreviated as PICO). Results from all identified trials are typically statistically combined via meta-analysis to produce an aggregated result.
1 https://github.com/elisaF/extractGroups
Producing systematic reviews is a timeconsuming, largely manual process. This is exacerbated by the rapidly growing evidence base: PubMed 2 contains 800,000+ publications on clinical trials in humans (Wallace et al., 2013), and on average reports of 75 new trials are published daily. A single systematic review can take over a year to produce -at which point it risks becoming outdated. Therefore, automating evidence synthesis poses an enormous yet enticing challenge for automation.
A crucial step towards automating synthesis is identifying the arms, or groups, in trials. A clinical trial consists of one control arm, and one or more intervention arms. For example, a study comparing the efficacy of aspirin versus a placebo would consist of two arms: those taking aspirin (the intervention group), and those taking the placebo (the control group). Previous work has mostly focused on identifying the PICO elements. However, the PICO elements alone are insufficient to convey the design of the study, a key piece of evidence necessary in the downstream task of data synthesis and analysis. Thus, the present study focuses on improving the automated identification of arms. We observed that arms are often salient in the discourse of the abstract, in that they corefer more often than other to-Randomised controlled trial with 12 month intervention. Change in body mass index (BMI) standard deviation score (SDS) over 12 months with assessment 18 months after the start of the intervention. Using the last available data on all participants (n=106), those in the Mandometer group arm1 had significantly lower mean BMI SDS at 12 months compared with standard care arm2 . The mean meal size in the Mandometer group arm1 fell by 45 g. Those in the Mandometer group also had greater improvement in concentration of high density lipoprotein cholesterol. Those in chain3 the Mandometer group also had greater improvement in concentration of high density lipoprotein cholesterol. Table 2: Medical abstract with annotated arms and coreference chains. The chains were automatically determined as described in section 4.3. All phrases with the same chain label are judged to co-refer.
kens. This study is exploratory work that focuses on investigating the effectiveness of using coreference features for identifying arms.
The remainder of this paper is organized as follows. We motivate the choice of coreference features for arm identification. We then examine prior work in identifying the arms in medical texts, and how coreference resolution has been applied to the medical field. Next, we present an experiment to classify whether tokens in annotated medical abstracts are part of an arm. We propose features that take advantage of the discourse salience of arms, and we discuss the results with and without the coreference features.
Motivation
Identifying the arms is not a simple information extraction task. The arms in a study consist of one control group, and one or more intervention groups. Often, the control group is never explicitly mentioned in the abstract. In the following excerpt, only the intervention arm is mentioned: To determine whether modifying eating behaviour with use of a feedback device facilitates weight loss in obese adolescents.
An arm in a study is typically a noun phrase (NP), where this NP is repeated, either verbatim or anaphorically, throughout the abstract. An example of the discourse salience of arms in a medical abstract is in Table 1. The intervention arm, Mandometer group, is repeated several times verbatim throughout the abstract.
Given this recurring linguistic pattern in medical abstracts, we investigated the use of coreference resolution to help identify arms. The goal of coreference resolution is to determine which mentions in a text refer to the same entity. A referring expression, or mention, is the natural language expression used by discourse participants to refer to entities. Two or more mentions that refer to the same entity are coreferent, and together form a coreference chain. An anaphor and its antecedent (or cataphor and its postcedent) will form a coreference chain. Mentions can be indefinite noun phrases, definite noun phrases, proper names and pronouns, where clinical trial abstracts contain mostly NPs. Using an off-the-shelf coreference tool (to be discussed in more detail in section 4.3) yields the mentions and coreference chains illustrated in Table 2.
Note that the token intervention, which is not part of an arm, appears at most 2 times within a single coreference chain, whereas Mandometer, part of the experimental arm, appears 3 times. Further, intervention is found only in 1 chain, whereas Mandometer appears in 2 chains. More generally, we hypothesize that a token forming part of an arm is more salient in two ways: (i) an arm token appears more often within a single coreference chain, and (ii) an arm token appears more frequently across different chains (within the same abstract). These observations motivate the coreference features presented in section 4.3. In Table 2, standard care is not a member of any chain. More generally, we can expect salience to help more with intervention arms than control.
Related work
Automated Identification of Arms
Previous work has identified PICO elements either at the word or sentence level. Most research has extracted information from medical abstracts, although some studies have used the full text of the articles (De Bruijn et al., 2008;Zhao et al., 2012;Wallace et al., 2016). One of the seminal studies in PICO extraction (Demner-Fushman and Lin, 2007) collapsed intervention and comparator, where interventions were short noun phrases based largely on recognition of semantic types (mapped to UMLS concepts) and a few manually constructed rules. The intervention/comparator extractor returned a list of all the interventions under study, and the extractor was evaluated at the sentence level. However, it is important to distinguish between experimental and control treatments as the bias for the experimental group must be accounted for in the data synthesis step (Lumley, 2002).
Beyond PICO, De Bruijn et al. (2008) extracted data from full-text articles based on the CONSORT Plus Guideline, 4 a list of required, recommended and optional items to include in a systematic review compiled by medical experts. The study found that one of the most difficult items to identify was the experimental treatment, which varied widely beyond just drug names. Elsewhere, Chung (2009) identified interventions as a coordinating structure in a single sentence, and found the major weakness in this approach was parsing errors when identifying the boundaries of the conjuncts. And Summerscales et al. (2011) focused on the downstream task of calculating the absolute risk reduction (ARR), identifying the number of bad outcomes for the control and experimental treatment groups, along with the sizes of both treatment groups. This study found outcomes hardest to detect because of their variability, but also had an overall poor recall partly because coreference was not taken into account.
Most recently, Trenta et al. (2015) proposed a novel approach for identifying the arms and PICO elements that does not rely on a first stage of sentence classification, but instead classifies each token directly, followed by an inference process to constrain the labels to more accurate results. As with previous studies, outcome results were the hardest because they are more variable. A significant limitation of this study is that the abstracts were limited to two-arm trials, and in a specific domain.
Automated Coreference Resolution
Coreference resolution is a long-studied task that remains a challenging problem. Most recent work on coreference resolution builds mainly on one of four models.
• The first and most widely-used approach is the mention-pair model (Soon et al., 2001;Ng and Cardie, 2002b). A classifier first identifies all the pairs of mentions which are coreferent. These pairs are then grouped into coreferent chains by clustering techniques such as closest-first (Soon et al., 2001) or best-first (Ng and Cardie, 2002b;Ng and Cardie, 2002a).
In closest-first, you link to the closest preceding mention, whereas in best-first, you choose the likeliest one. Common features in these models include distance between the two mentions, syntactic features (e.g., POS tags), semantic features (e.g., named entity type), lexical features (e.g., head word of the mention), and string matching.
• The mention-ranking model (Denis and Baldridge, 2008), reframes the task as a ranking function rather than a classification function, ranking all the candidate antecedents of a mention to determine which candidate antecedent is the most probable.
• The entity-centric model makes use of entity-level information, focusing on features of mention clusters, and not just pairs (Raghunathan et al., 2010). The coreference clusters are built up incrementally, using information from partially-completed coreference chains to guide later decisions. Features include whether a mention head word matches any of the head words in the antecedent cluster.
• The antecedent tree model (Yu and Joachims, 2009) builds a graph from a document, where the nodes are the mentions and arcs are the links between mention pairs that are coreferent candidates. The coreference chains are then modeled as latent trees in the graph.
Constraints are imposed on these models for improved results, such as enforcing a transitive closure to guarantee you end up with legal assignments (Finkel and Manning, 2008). For example, if John Smith is coreferent with Smith, and Smith with Jane Smith, then it should not follow that John Smith and Jane Smith are coreferent. Other work has shown that joint models improve performance. Denis et al. (2007) recognized that anaphoricity (whether an entity is the first mention) and coreference should be treated as a joint task since one informs the other. Durrett and Klein (2014) models coreference together with named entity recognition and linking named entities to Wikipedia entities. Combinations of these models have also yielded improved results, such as Clark and Manning (2015) stacking mention-pair and entity-centric systems (which the current paper uses as its off-the-shelf coreference resolver). Many coreference resolvers exploit deeper linguistic knowledge, beyond the features mentioned above. Chowdhury and Zweigenbaum (2013) eliminated less-informative training instances prior to model training by creating a list of criteria based on semantic and syntactic intuitions such as a mismatch in semantic types. Peng et al. (2015) created predicate schemas to constrain inference, such as two predicates with a semantically shared argument. Yang et al. (2015) used semantic role labeling to link the time and locations for event mentions, and for verbal mentions they linked their participants. More recently, Kilicoglu et al. (2016) focused on sortal anaphoras which they found to commonly occur in biomedical literature, resolving anaphors that carry a specific semantic type, or sort, such as these drugs. Many of these studies take advantage of linguistic resources such as WordNet 5 and FrameNet 6 .
In the medical area, coreference resolution has been most closely studied for analyzing clinical narrative text such as that found in Electronic Health Records (EHRs), and biomolecular studies. In fact, there have been corpora (i2b2/VA Corpus (Uzuner et al., 2012), GENIA Event Corpus (Kim et al., 2008)) and shared tasks (SemEval-2015 shared task on Analysis of Clinical Text (Task 14) (Elhadad et al., 2015), BioNLP09 shared task (Kim et al., 2009), ShARe/CLEF eHealth 2013 Evaluation Lab Task 1 (Pradhan et al., 2013)) created specifically to advance this area. Given that resources such as FrameNet and WordNet are based mostly on news (e.g. British National Corpus, U.S. newswire), a large number of resources have been created to aid in natural language processing of medical texts. By far the largest and most complex is the Unified Medical Language System (UMLS) 7, consisting of three main components: the Metathesaurus with terms and codes from many vocabularies (including CPT, ICD-10-CM, MeSH, RxNorm, and SNOMED CT), the Semantic Network with semantic types and semantic relations, and the SPECIALIST Lexicon, which contains syntactic, morphological and orthographic information on terms, along with NLP tools such as a POS tagger and word sense disambiguator. Other tools include MetaMap 8, a tool for recognizing UMLS concepts, DrugBank 9, a database of drug names, BANNER 10, a named entity recognizer for biomedical texts, BioText for identifying entities and relations in bioscience texts, and BioFrameNet 11, an extension of FrameNet for molecular biology (and BioWordNet (Poprat et al., 2008) was a failed attempt at extending WordNet also to the biomolecular field). However, when applied to clinical trial texts, these tools prove useful mainly for identifying only medical terms and drug names, and thus more linguistically-motivated resources are still lacking for clinical trial texts.
In the area of clinical narratives, Raghavan et al. (2012) took advantage of the temporal features present in these texts to help determine whether two medical concepts corefer with each other. Their 2014 paper (Raghavan et al., 2014) expanded on this idea to identify medical events spanning across narratives, such as admission notes, medical reports, and discharge notes. Yoshikawa et al. (2011) exploited coreference information for extracting eventargument relations from biomedical texts in the Genia Event Corpus. Jindal and Roth (2013) used very specific domain knowledge to resolve coreference in clinical narratives, such as creating a specific discourse model (i.e. a single patient, several doctors and a few family members) to resolve entities of type "person". Despite the active interest in coreference resolution, there has been much less research investigating its application to clinical trial texts. Most of the literature that does exist is applied to the bio-medical field, focusing more on full-text articles (Gasperin and Briscoe, 2008;Huang et al., 2010;Kilicoglu et al., 2016) than on abstracts (Castano et al., 2002;Yang et al., 2004). To the best of the authors' knowledge, there have been no papers using coreference features to identify arms in clinical trial abstracts.
Experiment
The goal of this experiment is to explore empirically whether incorporating coreference features improves the performance of a classifier for arm identification, as compared to a baseline model without coref features (note that we do not aim to necessarily achieve state-of-the-art results on this task). The task of the classifier is to label a token as either part of an arm or not.
The corpus
The corpus 12 consists of 263 abstracts from the British Medical Journal (BMJ) annotated with the experimental and control groups (and other PICO elements) by Summerscales (2013). The BMJ requires structured input, and the number of sections varies, with some abstracts containing only a few sections such as BACKGROUND, METHODS, FINDINGS and INTERPRETATION. These structured abstracts usually consist of short phrases and incomplete sentences.
Experimental setup
Sentences were tokenized, lower-cased, and stop words were removed. Each token was paired with its abstract to form an [abstract, token] pair, uniquely associating the token with the medical abstract in which it appeared (e.g. [abstract 3, "intervention"], [abstract 129, "intervention"]). A binary classifier was implemented to label each token as belonging to an arm or not (scikit-learn implementation of a Support Vector Machine, Pedregosa et al. (2011)). Due to the imbalance of classes (9% positive), the class weights in the model were adjusted to be inversely proportional to the class frequencies in the corpus. We performed five-fold cross validation.
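As an illustration of this setup, a minimal scikit-learn sketch is shown below. The feature matrix, the SVM kernel, and the other hyperparameters are placeholders, since they are not specified above, and the span-level evaluation used later is not reproduced here; the sketch only shows the class-weighted SVM with five-fold cross validation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per (abstract, token) pair holding the features of Section 4.3;
# y: 1 if the token belongs to an arm, 0 otherwise. Random placeholders here.
rng = np.random.default_rng(0)
X = rng.random((500, 5))
y = (rng.random(500) < 0.09).astype(int)       # ~9% positive, as in the corpus

# class_weight="balanced" makes the weights inversely proportional to the
# class frequencies, mirroring the adjustment described above.
clf = make_pipeline(StandardScaler(), SVC(class_weight="balanced"))
print(cross_val_score(clf, X, y, cv=5, scoring="f1").mean())
```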
Features
The following features, summarized in Table 5, were used in the machine learning algorithm. bag-of-words: the number of times the token occurs within its medical abstract (i.e., the count of [abstract, token] pairs for the given token and abstract). As evident in Table 5, abstracts can be quite repetitive in their vocabulary, but on average a token appears only a couple of times within the same abstract.
drugbank: whether the token exists in the DrugBank database (version 4.3). Clinical trials often compare the efficacy of different drugs, so intervention arms would contain drug names. However, note from Table 5 that most words are not drugs, keeping in mind that interventions also consist of therapies, behavior changes and other non-drug-related treatments.

tf-idf: term frequency-inverse document frequency for term t in document d in corpus D, computed as tf-idf(t, d, D) = tf(t, d) · idf(t, D) with idf(t, D) = log(|D| / df(t)) + 1, where tf(t, d) is the frequency of t in d and df(t) is the number of documents containing t. One is added in the idf so that terms with zero idf (those that occur in all documents of a training set) are not entirely ignored. The goal of this metric is to capture how informative a word is.

For coreference, the Coreference Resolution annotator packaged in Stanford CoreNLP 3.0 (a model that stacks mention-pair and entity-centric systems) is used to calculate the maximum number of times the token occurs in a single coreference chain within the same medical abstract (max counts) and the number of chains in which the token appears in the same medical abstract (num chains). This tool was chosen because it is publicly available and yields state-of-the-art results on the 2012 CoNLL data set. The coreference features aim to capture the discourse salience of arms in medical abstracts. As mentioned before, the (max counts, num chains) values for mandometer are (3, 2), but for intervention are (2, 1). Note from Table 5 that although a token can occur very frequently in a single chain (max counts) and across many chains (num chains), a token on average is not part of a chain at all. This observed statistic lends weight to the use of coreference features as a measure of salience. Previous work has employed other features, such as dependency trees and other predicate-argument structures, to capture this discourse salience. Summerscales (2013) implemented a form of post-hoc coreference resolution as a way to cluster labeled words into groups, for example into a control group versus an intervention group. However, the present study uses the coreference features at the front end to detect the mentions, and is presently not concerned with differentiating among the different arms.

Table 4 summarizes the evaluation scores. The results of the classifier are evaluated against the spans of text that were annotated as arms, following Summerscales (2013). Because an arm consists of several contiguous words (e.g. mandometer group), we want to ensure the classifier is able to correctly label the more informative words in that span (mandometer vs. group). A labeled group of words is considered a match for an annotated group if they consist of the same set of words, ignoring had, group(s), and arm. For example, a labeled span of mandometer for the annotated span mandometer group is a true positive. On the other hand, a labeled span of only group is a false positive. Although the scores are relatively low for both models, we emphasize that the goal of this experiment is not to achieve state-of-the-art results but to investigate the viability of salience for arm identification. Further, we are being strict in our evaluation compared to prior work (e.g., Summerscales (2013)).
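The sketch below shows one way the five per-token features could be assembled for a single abstract. The data structures (token lists, coreference chains represented as lists of mention strings, a set of drug names standing in for DrugBank) and the smooth "+1" idf convention are assumptions made for illustration, not the authors' implementation.

```python
import math
from collections import Counter

def token_features(token, abstract_tokens, corpus_token_lists, coref_chains, drug_names):
    """abstract_tokens: tokens of the current abstract; corpus_token_lists: one token list
    per abstract in the corpus; coref_chains: chains found in this abstract, each a list of
    mention strings; drug_names: set of known drug names (stand-in for a DrugBank lookup)."""
    counts = Counter(abstract_tokens)
    bow = counts[token]                                        # b-o-w: count within this abstract
    drugbank = int(token in drug_names)                        # 1 if the token is a known drug name
    df = sum(token in tokens for tokens in corpus_token_lists) # document frequency over the corpus
    idf = math.log(len(corpus_token_lists) / df) + 1 if df else 0.0
    tf_idf = bow * idf                                         # "+1" keeps zero-idf terms from vanishing
    per_chain = [sum(token in mention.split() for mention in chain) for chain in coref_chains]
    max_counts = max(per_chain, default=0)                     # most occurrences in a single chain
    num_chains = sum(c > 0 for c in per_chain)                 # number of chains containing the token
    return [bow, drugbank, tf_idf, max_counts, num_chains]
```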
Baseline
The baseline model includes the features for how many times a token appears in a single abstract (b-o-w), whether the token exists in DrugBank (drugbank), and the term frequency-inverse document frequency measure for the token (tf-idf).
With Coreference
The coref model additionally includes the maximum number of times the token appears in a single coreference chain for a given abstract (max counts) and the number of coreference chains the token appears in for a given abstract (num chains).
Error Analysis
The coref model performed better than the baseline model on almost all of the metrics: precision improved by 6.8 points and F1 by 9.3. Additionally, these improvements are consistent across all the cross-validation runs, as illustrated in Figure 1. Adding the coreference features, however, lowers recall by 5.9 points. To understand the results in more detail, we compare the confusion matrices of the two models. The raw counts in Figure 2 illustrate the class imbalance of the data, giving the impression that a false positive is more likely than a false negative. The normalized confusion matrices in Figure 3 show that false negatives make up a higher percentage of the errors than false positives, so the positive class is the harder one to label.
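A toy illustration (with made-up counts) of why the raw and row-normalized views disagree under class imbalance:

```python
import numpy as np

cm = np.array([[9000, 400],    # rows: true class (0 = not arm, 1 = arm)
               [300,  600]])   # columns: predicted class
row_normalized = cm / cm.sum(axis=1, keepdims=True)
# Raw counts: 400 false positives vs. 300 false negatives.
# Normalized: false-negative rate ~0.33 vs. false-positive rate ~0.04,
# so proportionally the positive class is the harder one to label.
print(row_normalized)
```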
Given that false negatives are the most common errors across both models, we analyze their occurrences first. The control arm is the most susceptible to this type of error, as it is not as salient in the discourse as the experimental arms. The control words are typically drawn from a small, finite vocabulary (e.g. control, placebo, sham, standard), so their tf-idf scores are usually low. The false negative rate worsens in the coref model partly because it places more weight on discourse salience, and control arms are often not part of a coreference chain, compared with experimental arms. We refer back to the abstract presented in Table 1. A small ablation study showed that the b-o-w feature is what enables the baseline model to correctly label standard (count = 4) as part of an arm. With the coreference features, the word is no longer labeled as an arm, as it does not appear in any coreference chain.
Next, we analyze the false positives across both models. Given that all the features (except drugbank) in both models are aimed at extracting salient words, they also pick out other relevant PICO information. For example, both models incorrectly label knee as part of an arm in the following abstract, where each of these mentions is, in fact, annotated as part of an outcome: ...reduce the incidence of knee and ankle injuries in young people participating in sports. The rate of acute injuries to the knee or ankle. A structured programme of warm-up exercises can prevent knee and ankle injuries...
Another issue with false positives is that the gold data is not comprehensively annotated. Note that in Table 2, the annotator failed to label the third occurrence of mandometer as an arm, although both models attempt to classify it as such. However, striving for a thoroughly annotated data set is not realistic, and so the models should be more robust to these gaps and inconsistencies. The false positive rate improves in the coref model partly because the coreference features prove to be a better measure of discourse salience for the intervention arms. As noted earlier, repetition in medical abstracts is not limited to the words describing the arm. For example, in the abstract from Table 1, the baseline model incorrectly labels the high-frequency tokens eating, months and mean as parts of an arm. The coref model instead correctly labels these as negative, given that they do not occur in a coreference chain.
Finally, we note that the coreference features help in grouping together words with conflicting tf-idf measures. In the abstract from Table 1, the baseline model correctly labels mandometer (tf-idf=26.3), but misses group (tf-idf=4.2). However, the coref model correctly labels the entire span mandometer group as an arm, because both of these tokens appear together in a mention and have the same coreference features.
Conclusion
We introduced a new approach to identify the arms in a clinical trial abstract by creating coreference features aimed at capturing the discourse salience of arms. The coreference features were shown to help in classifying a word as part of an arm, confirming the intuition that mentions of arms throughout an abstract often corefer. However, we note this pattern holds more for the experimental arms than for the control arms. The error analysis also revealed that arms are not the only concepts that are coreferent: other PICO elements, such as the outcome, often have the same features. This observation could motivate a model that jointly labels these PICO elements along with the arms, since one would inform the other. There are several other recurring linguistic patterns yet to be explored that could further aid in arm identification, such as apposition ("A computerised device, Mandometer, providing real time feedback...") and paraphrasing ("...half were produced automatically with a larger volume of material... The larger booklets produced automatically were...").
Another avenue of research is to investigate how these linguistic features pattern across abstracts in the same review. For example, finding the paraphrases across all abstracts that study the same treatment (as defined in a systematic review) could yield finer-grained information on the language used to describe that intervention. To compensate for the inconsistent and small number of annotations, label propagation might be used to retrieve clusters of relations and find the structure in the data.
As noted earlier, the present study focused on the effect of salience on arm identification. In a future study, we plan to implement Summerscales (2013) as a strong baseline (which achieved an F-score of 0.69) to understand whether coreference can still yield improved results when compared to a model that nears state-of-the-art performance. | 5,979.4 | 2016-11-01T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
The random integral representation hypothesis revisited : new classes of s-selfdecomposable laws
For $\,0<\alpha\le \infty$, new subclasses $\,\mathcal{U}^{<\alpha>}$ of the class $\,\mathcal{U}$, of s-selfdecomposable probability measures, are studied. They are described by random integrals, by their characteristic functions and their L\'evy spectral measures. Also their relations with the classical L\'evy class $L$ of selfdecomposable distributions are investigated.
Limit distribution theory belongs to the core of probability and mathematical statistics. Often limit laws are described by analytical tools such as Fourier or Laplace transforms, but a more stochastic approach (e.g., stochastic integration, stopping times, random functionals) seems more natural for probability questions. Some illustrations of this paradigm are given in the last paragraph of this note. In a similar spirit, in Jurek (1985) on page 607 (and later repeated in Jurek (1988) on page 474), the following hypothesis was formulated: each class of limit distributions, derived from sequences of independent random variables, is the image of some subset of ID (the infinitely divisible probability measures) by some mapping defined as a random integral.
Random integral representations, when they can be established, would provide descriptions of limiting laws via stochastic methods, i.e., as the probability distributions of random integrals of the form $\int_{(a,b]} h(t)\, dY(r(t)) = \int h(r^{-1}(s))\, dY(s)$, (1) where h and r are deterministic functions, h : (a, b] → R, r : (a, b] → (0, ∞), and Y(s), 0 ≤ s < ∞, is a stochastic process with independent and stationary increments and cadlag (right continuous with left-hand limits) paths; in short, we refer to Y as a Lévy process. In this note we provide new examples of classes of limit distributions for which the above hypothesis holds true. The main results here are Propositions 3, 4 and 5, and Corollaries 5, 6 and 7.
Introduction and notation.
Let E denote a real separable Banach space, E′ its conjugate space, ⟨·, ·⟩ the usual pairing between E and E′, and ||·|| the norm on E. The σ-field of all Borel subsets of E is denoted by B, while B_0 denotes the Borel subsets of E \ {0}. By P(E) we denote the (topological) semigroup of all Borel probability measures on E, with convolution "*" and the weak topology, in which convergence is denoted by "⇒". Similarly, by ID(E) we denote the topological convolution semigroup of all infinitely divisible probability measures, i.e., µ ∈ ID(E) iff for each natural k ≥ 2 there exists µ_k ∈ P(E) such that µ = µ_k^{*k}.
Recall also here that ID(E) is a closed topological subsemigroup of P(E).
Finally, on a Banach space E we define the transforms T_r, for r > 0, as follows: T_r x := rx, x ∈ E, and we define L(ξ) as the probability distribution of an E-valued random variable ξ. A probability measure µ ∈ P(E) is said to be s-selfdecomposable on E, and we will write µ ∈ U(E), if there exists a sequence ρ_n ∈ ID(E) such that ν_n := (T_{1/n}(ρ_1 * ρ_2 * ... * ρ_n))^{*1/n} ⇒ µ, as n → ∞. (2) Since we begin with infinitely divisible measures ρ_n, we do not include the shifts δ_{x_n} in (2), and we do not assume that the triangular system {T_{1/n} ρ_j^{*1/n} : 1 ≤ j ≤ n; n ≥ 1} is uniformly infinitesimal, as is usually done in the general limit distribution theory. Also let us note that our definition (2) is, in fact, the result of Theorem 2.5 in Jurek (1985), where s-selfdecomposability was defined in many different but equivalent forms. Finally, s-selfdecomposable distributions appeared in the context of an approximation of processes by their discretization; cf. Jacod, Jakubowski and Mémin (2001).
Originally the s-selfdecomposable distributions were introduced as limit distributions for sums of shrunken random variables in Jurek (1981); the 's' stands here for the shrinking operation defined there. See also the announcement in Jurek (1977). On the real line, similar distributions, but not related to the s-operation, were studied in O'Connor (1979).
In the present paper we will repeat the scheme (2) successively and will assume that the ρ_k are chosen from a previously obtained class of limit laws. Such an approach, for another limiting scheme, was introduced by K. Urbanik (1973) and then continued by K. Sato, A. Kumar and B. M. Schreiber, and N. Thu, with the most general setting up to now described in Jurek (1983), where a list of related references can also be found.
For easy reference we collect below some of the known characterizations of the class U(E) of s-selfdecomposable probability measures and indicate only the main steps in the corresponding proofs. PROPOSITION 1. The following statements are equivalent: (iii) there exists a unique Lévy process Y such that µ = L(∫_{(0,1)} t dY(t)).
Sketch of proofs. Characterizations (i) and (ii) are equivalent by Theorem 2.5 and Corollary 2.3 in Jurek (1985). Equivalence of (ii) and (iii) follows from Theorem 1.1 and Theorem 1.2(a) in Jurek (1988), where one needs to take the constant β = 1 and the linear operator Q = I.
For our purposes we define random integrals by the formal formula of integration by parts, with the latter (Riemann-Stieltjes) integral defined as a limit of the appropriate partial sums. This "limited" approach to integration is sufficient for our purposes; cf. Jurek and Vervaat (1983) or Jurek and Mason (1993), Section 3.6. On the other hand, since Lévy processes are semimartingales, the integrals (1), or the above, can also be defined as stochastic integrals.
COROLLARY 1. The class U of s-selfdecomposable probability measures is a closed topological convolution subsemigroup of ID. Moreover, it is also closed under convolution powers (i.e., for t > 0 and µ we have µ ∈ U if and only if µ^{*t} ∈ U) and under the dilations T_d, for d ∈ R (i.e., µ ∈ U if and only if T_d µ ∈ U).
Proof. Both algebraic properties follow from (ii) in Proposition 1 and the identities (T_d(ν * ρ))^{*t} = (T_d ν)^{*t} * (T_d ρ)^{*t}, for t > 0, d ∈ R, and ν, ρ ∈ ID. To show that U is closed in the weak convergence topology we use again the factorization (ii) together with Theorem 1.
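As a small worked check of the identity invoked in this proof (our own verification, using only the standard facts $\widehat{T_d\mu}(y)=\hat\mu(dy)$, $\widehat{\nu*\rho}=\hat\nu\,\hat\rho$ and $\widehat{\mu^{*t}}=\hat\mu^{\,t}$):
\[
\widehat{\big(T_d(\nu*\rho)\big)^{*t}}(y)
  =\big(\hat\nu(dy)\,\hat\rho(dy)\big)^{t}
  =\hat\nu(dy)^{t}\,\hat\rho(dy)^{t}
  =\widehat{(T_d\nu)^{*t}}(y)\;\widehat{(T_d\rho)^{*t}}(y),
\qquad y\in E',
\]
so both sides of the identity have the same characteristic function and hence coincide.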
In view of Proposition 1(iii), we consider the mapping J(ρ) := L(∫_{(0,1)} t dY_ρ(t)), where Y_ρ(·) is a Lévy process (i.e., a process with independent and stationary increments, starting from zero and with cadlag paths) such that L(Y_ρ(1)) = ρ. We refer to Y_ρ(·) as the background driving Lévy process (in short, the BDLP) for the s-selfdecomposable measure J(ρ).
REMARK 1. The random integral mapping J is an isomorphism between the closed topological semigroups ID(E) and U(E); cf. Jurek (1985), Theorem 2.6.
Finally, let µ̂ denote the characteristic function (the Fourier transform) of a measure µ. Then for the random integrals (1) one obtains an explicit formula for the characteristic function of their law when h is a deterministic function, r is an increasing (or monotone) time change on (0, ∞), and Y_ρ(·) is a Lévy process; cf. Lemma 1.2 in Jurek and Vervaat (1983) or Lemma 1.1 in Jurek (1985), or simply approximate the right-hand integral by Riemann-Stieltjes partial sums.
Our results are stated in the generality of a Banach space E; however, in many formulas below we will suppress the dependence on E.
2. m-times s-selfdecomposable probability measures. Let us put U^{<1>} := U(E) and, for m ≥ 2, let U^{<m>} denote the class of limiting measures in (2) when ρ_k ∈ U^{<m-1>}, for k = 1, 2, ... . As a convention we assume U^{<0>} := ID. Our first characterization is proved along the lines of the proofs of Theorems 1.1 and 1.2 in Jurek (1988); however, one should not confuse the classes U_β introduced there with the classes U^{<m>} investigated here. The needed changes in the arguments are explained where they arise.
PROPOSITION 2. For m = 1, 2, ..., the following are equivalent descriptions of m-times s-selfdecomposable probability measures: (iii) there exists a unique (in distribution) Lévy process Y_ρ such that the corresponding random integral representation holds. Proof. For m = 1, the above is just Proposition 1. Now suppose that the proposition is proved for m. If µ ∈ U^{<m+1>} then, by the definition (formula (2)), ρ_k ∈ U^{<m>} for k = 1, 2, ... . For given 0 < c < 1, let us choose natural numbers m_n such that 1 ≤ m_n ≤ n and m_n/n → c as n → ∞. From (2) we obtain a factorization (5); by Theorems 1.2 and 2.1 in Parthasarathy (1967), the second convolution factor in (5) converges, say to µ_c, which must be in U^{<m>} by Corollary 1. Thus we get the factorization (ii) for m + 1, i.e., (i) implies (ii).
If (ii) holds we have a family C := {µ_c : 0 ≤ c ≤ 1} ⊂ U^{<m>}, where µ_1 = δ_0 and µ_0 = µ, from which we construct a sequence (ρ_k) as follows: using the factorization (ii) for c = (k − 1)/k, then applying to both sides the dilation T_k and then raising to the (convolution) power k, gives the required equality, and hence the claim, which completes the proof that (ii) implies (i).
From the above we infer that (iii) implies (ii). To prove the converse, that (ii) implies (iii), we proceed as in Jurek (1988), page 482 (formula (3.1)) to page 484, taking β = 1 and Q = I (the identity operator). Thus we construct a process Z(t) with independent increments and cadlag paths such that L(Z(t)) = µ_{e^{-t}} ∈ U^{<m>}. Because of Corollary 1 we conclude that it has increments with probability distributions in U^{<m>}. All in all, we have proved (iii).
Proof. Part (a) follows from the characterization (ii) in Proposition 2. To prove that the classes U^{<m>} are closed we use Theorem 1.7.1 in Jurek and Mason (1993), or cf. Chapter 2 in Parthasarathy (1967).
Part (b). Since U ⊂ ID, applying the random integral mapping J successively to both sides gives the inclusion U^{<m+1>} ⊂ U^{<m>}. For the second inclusion, L_k ⊂ U^{<k>}, note that it is true for k = 1; cf. Corollary 4.1 in Jurek (1985). Assume it is true for n, i.e., L_n ⊂ U^{<n>}, and let µ ∈ L_{n+1} ⊂ L_n. Then for any 0 < c < 1 there exists ν_c ∈ L_n such that the required factorization holds, because, by the induction assumption, ν_c and µ are in L_n. Consequently, by (ii) in Proposition 2, µ ∈ U^{<n+1>}, and this completes the proof.
Our next aim is to describe m-times s-selfdecomposability in terms of the parameters of infinitely divisible laws. Recall that each ID distribution ρ is uniquely determined by a triple: a shift vector a ∈ E, a Gaussian covariance operator R, and a Lévy spectral measure M; we will write ρ = [a, R, M]. These are the parameters in the Lévy-Khintchine representation of the characteristic function ρ̂, namely ρ̂ = exp Φ, where Φ is called the Lévy exponent of ρ (cf. Araujo and Giné (1980), Section 3.6). Furthermore, by the Lévy spectral function of ρ we mean the function L_M(D, r), where D is a Borel subset of the unit sphere S := {x : ||x|| = 1} and r > 0. Note that L_M uniquely determines M.
Since Lévy processes have infinitely divisible increments (from the class ID), and ID is a topologically closed convolution semigroup which is also closed under the dilations T_a (multiplication of a random variable by a scalar a), the random integrals ∫_{(a,b]} h(t) dY(r(t)) have probability distributions in ID as well. If [a_{h,r}, R_{h,r}, M_{h,r}] denotes the triple corresponding to the probability distribution of the integral in question, and [a, R, M] denotes the one corresponding to the law of Y(1), then (4) and (7) give the corresponding equations for the transformed triple, and finally for the shift vector. In order to get the second equality in (13) one needs to observe that 1_{t||x|| ≤ 1}(x) = 1_{0 < t ≤ ||x||^{-1}}(t), or to change the order of integration. Now we may characterize the m-times s-selfdecomposable distributions in terms of the triples in their Lévy-Khintchine formula.
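Since the displayed equations did not survive extraction here, the following is our own sketch (not the restored display) of how the triple transforms under the particular mapping J(ρ) = L(∫_{(0,1)} t dY_ρ(t)) for ρ = [a, R, M], using only the Lévy exponent formula for such integrals:
\[
\log\widehat{J(\rho)}(y)=\int_0^1 \Phi_\rho(t y)\,dt ,
\qquad\text{so the Gaussian part gives}\qquad
-\tfrac12\int_0^1 t^2\langle Ry,y\rangle\,dt=-\tfrac12\big\langle \tfrac13 R\,y,\,y\big\rangle ,
\]
\[
\text{and, for } A\in\mathcal{B}_0,\qquad
M_{J(\rho)}(A)=\int_0^1 (T_tM)(A)\,dt=\int_0^1 M\big(t^{-1}A\big)\,dt ;
\]
the mismatch between the truncation indicators 1_{||tx|| ≤ 1} and 1_{||x|| ≤ 1} contributes only to the shift vector, which is consistent with the remark about equality (13) above.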
Similarly, from (8) and (11) and a change of the order of integration, we obtain (15). In order to prove the formula for the shift, first note that, by (11) and a change of the order of integration, we have the corresponding identity for m = 1, 2, ... . Note that for m = 0 the above formula gives the second summand in (13). In terms of w_m, (19) gives a recurrence relation, where a^{<0>} := a. Thus, if the formula for the shifts (16) holds for m, then the above gives that it also holds for m + 1, which completes the proof of the proposition.
Let us recall the incomplete gamma functions. Simple calculations show that, consequently, the formula (16) may be rewritten accordingly. Let us introduce rescalings of time in the interval (0, 1) as follows. Note that τ_α is the cumulative distribution function of the random variable g_α := e^{-G_α}, where G_α is the gamma random variable with probability density (Γ(α))^{-1} x^{α-1} e^{-x} for x > 0, and zero elsewhere. Hence, for s > 0 and 0 < c < 1, (24) and (15) can be rewritten accordingly. Now we can establish the random integral representation for the subclasses U^{<m>} of s-selfdecomposable probability measures.
where g_m = e^{-G_m} and G_m is the gamma random variable.
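For concreteness (the displayed definitions above were lost in extraction), here is our reconstruction of the standard incomplete gamma functions together with the resulting explicit form of τ_α; this is a routine computation from the description of g_α above, not the original display:
\[
\gamma(\alpha,x):=\int_0^x t^{\alpha-1}e^{-t}\,dt,\qquad
\Gamma(\alpha,x):=\int_x^\infty t^{\alpha-1}e^{-t}\,dt,\qquad
\gamma(\alpha,x)+\Gamma(\alpha,x)=\Gamma(\alpha),
\]
and, since $g_\alpha=e^{-G_\alpha}$ with $G_\alpha$ gamma distributed,
\[
\tau_\alpha(c)=P(g_\alpha\le c)=P\big(G_\alpha\ge\log(1/c)\big)
=\frac{\Gamma\big(\alpha,\log(1/c)\big)}{\Gamma(\alpha)},\qquad 0<c<1 .
\]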
Proof. Use Proposition 4 together with formula (4). Note that there are no restrictions on the shift vector a or on the Gaussian covariance operator R. Finally, for m = 1 this is Theorem 2.9 in Jurek (1985). REMARK 2. Using the series representation of the exponential function and (24), we obtain a corresponding series expansion. Since the previous characterization of m-times s-selfdecomposability has a restriction only on the Lévy spectral measure, we also have a characterization of U^{<m>} in terms of Lévy spectral functions.
for all sets D and all r > 0, or equivalently for all sets D and all r > 0.
Proof. In view of Proposition 3 we have that M = G^{<m>} for a unique Lévy spectral measure G, and (15) gives the first part of the corollary. Since the relation (18), in terms of Lévy spectral functions, reads accordingly for j = 1, 2, ..., an inductive argument proves the second part of the corollary.
COROLLARY 5. In order that a Lévy spectral measure G be the Lévy spectral measure of an m-times s-selfdecomposable probability measure, it is necessary and sufficient that its Lévy spectral functions r → L_G(D, r) are m-times differentiable, except at countably many points r, and that the function L(D, r) = (A^m(L_G(D, ·)))(r) is a Lévy spectral function.
The operator A^m is the m-fold composition of the linear differential operator A, defined as follows for once-differentiable real-valued functions h on (0, ∞).
Proof. If the measures M′ and M are related as in (12), then their corresponding spectral functions (tails) L_{M′} and L_M satisfy the corresponding equality. Hence L_{M′} is at least once differentiable (except on a countable set). If one assumes that the formula (12) defines the mapping J on the measure M, or on its spectral function L_M, then A may be viewed as its inverse mapping. Before the next characterization of the class U^{<m>} distributions, let us recall that a Lévy exponent is just the logarithm of an infinitely divisible Fourier transform; cf. formula (7). Let us note that, if Ψ is the Lévy exponent of ρ and Φ is that of J(ρ), then (3) and (4) relate them explicitly. With these equalities and the recursive relation between the classes U^{<m>} we have the following. The operator D^m is the m-fold composition of the linear differential operator (Dg)(y) := g(y) + d(g(ty))/dt|_{t=1}, where g : E′ → C is once differentiable in each direction y ∈ E′ and t ∈ R.
Note that in the particular case E′ = R one has (Dg)(y) := g(y) + y dg(y)/dy, which differs from A in Corollary 5 only by a sign. REMARK 4. If one defines J on Lévy exponents by (4), then the operator D can be viewed as its inverse, i.e., D = J^{-1}, on Lévy exponents on a Banach space.
PROPOSITION 5.
A probability measure µ = [a, R, M] is completely s-selfdecomposable, i.e., µ ∈ U^{<∞>} := ⋂_{m=1}^{∞} U^{<m>}, if and only if there exists a unique bi-measure σ(·, ·) on S × (0, 2) with the stated properties, where A · D := {x ∈ E : x/||x|| ∈ D, ||x|| ∈ A}, for each Borel D ⊂ S the measure σ(D, ·) is a finite Borel measure on the interval (0, 2), and for each Borel subset A ⊂ (ε, ∞), for some ε > 0, σ(·, A) is a finite Borel measure on the unit sphere S. Moreover, we have the stated formula. Proof. If µ = [a, R, M] is completely s-selfdecomposable then, by Proposition 3 or Corollary 4, for each m there exists a unique Lévy measure G such that the stated relations hold for all D and all r > 0. On the other hand, using (28), the resulting integral description, i.e., Σ, is the more explicit description of U^{<∞>}. Further, let us recall that
By the formula (27), the last integral is equal
Similar integrability formulas hold for the functions g_k(x) := log^k(1 + ||x||) and Lévy measures M. Recall that the integrability condition for g_k appears in the random integral representation for the class L_k.
Concluding remarks and two examples.
A). The classes U^{<m>} were introduced by an inductive procedure, and thus we have the natural index m. For a positive non-integer α one may proceed as in Thu (1986) using the fractional calculus. However, we may utilize our random integral approach and define the corresponding classes via a random integral, where Y_ρ(·) is a Lévy process with L(Y_ρ(1)) = ρ; equivalently, one may use (14), (15) and, for the shift vector, (16) with (21), (22) and (24). Furthermore, for any continuous and bounded f on (0, ∞) and gamma random variables G_α and G_β we have the corresponding identity. B). In this subsection we consider only R-valued random variables and Borel measures on the real line. Because of the inclusion L ⊂ U, each selfdecomposable distribution is an example of an s-selfdecomposable one. On the other hand, by Proposition 3 in Iksanow, Jurek and Schreiber (2002), selfdecomposable distributions of random variables of the form X := Σ_{k=1}^{∞} a_k η_k, where the η_k are independent identically distributed Laplace (double exponential) random variables and Σ_k a_k^2 < ∞, have background driving probability measures ν ∈ U. Furthermore, by Proposition 3 in Jurek (2001) we have the corresponding statement. In Jurek (1996) it was noticed that φ_S(t) = t/(sinh t) ("S" stands for the hyperbolic sine) and φ_C(t) := 1/(cosh t) ("C" stands for the hyperbolic cosine) are the characteristic functions of random variables of the above series form X. Using (31) we conclude the corresponding result. It might be worth mentioning here that φ_S(t) · ψ_S(t) is the characteristic function of a conditional Lévy random area integral; cf. Lévy (1951) or Yor (1992) and Jurek (2001). Similarly, (φ_C(t) · ψ_C(t))^{1/2} is the characteristic function of an integral functional of Brownian motion; cf. Wenocur (1986) and Jurek (2001), p. 248. Recently, in Jurek and Yor (2002), the probability distributions corresponding to both ψ_S and ψ_C were expressed in terms of squared Bessel bridges. Also, both functions, viewed as Laplace transforms in t²/2, can be interpreted as the hitting time of 1 by the Bessel process starting from zero; cf. Yor (1997), p. 132. At present we are not aware of any stochastic representation for the analytic expressions in (32). Finally, it seems that the operators A^m may be related to some Markov processes. | 4,832.8 | 2004-06-01T00:00:00.000 | [
"Mathematics"
] |
Genexpi: a toolset for identifying regulons and validating gene regulatory networks using time-course expression data
Background Identifying regulons of sigma factors is a vital subtask of gene network inference. Integrating multiple sources of data is essential for correct identification of regulons and complete gene regulatory networks. Time series of expression data measured with microarrays or RNA-seq combined with static binding experiments (e.g., ChIP-seq) or literature mining may be used for inference of sigma factor regulatory networks. Results We introduce Genexpi: a tool to identify sigma factors by combining candidates obtained from ChIP experiments or literature mining with time-course gene expression data. While Genexpi can be used to infer other types of regulatory interactions, it was designed and validated on real biological data from bacterial regulons. In this paper, we put primary focus on CyGenexpi: a plugin integrating Genexpi with the Cytoscape software for ease of use. As a part of this effort, a plugin for handling time series data in Cytoscape called CyDataseries has been developed and made available. Genexpi is also available as a standalone command line tool and an R package. Conclusions Genexpi is a useful part of gene network inference toolbox. It provides meaningful information about the composition of regulons and delivers biologically interpretable results. Electronic supplementary material The online version of this article (10.1186/s12859-018-2138-x) contains supplementary material, which is available to authorized users.
Background
Uncovering the nature of gene regulatory networks is one of the core tasks of systems biology. Identifying direct regulons of sigma factors/transcription factors can be considered the basic element of this task. In fact, a large portion of software for network inference is limited to such direct interactions (e.g., [1][2][3]). It has however been shown that using only one source of data for network inference (e.g., only a ChIP-seq experiment) can be misleading and that combining multiple sources is necessary [4].
The primary focus of this paper is on CyGenexpi, a plugin for the Cytoscape platform [5] that uses time-course gene expression data to discover regulons among candidate genes obtained from other sources (literature, database mining, or ChIP experiments). CyGenexpi can also be used for de-novo network inference, although this is less reliable. CyGenexpi is built on top of the Genexpi software package, which provides the core functionality also as a command-line tool and an interface to the R language.
Genexpi is based on an ordinary differential equation model of gene expression introduced in [6]. In the model, the synthesis of new mRNA for a gene is determined by a non-linear (sigmoidal) transformation of the expression of its regulators. The model also includes a per-gene decay rate of the mRNA, which is assumed to be constant.
While there are multiple tools for gene network inference from the command line or from programming languages (see [7] for a recent review), there are currently only two Cytoscape plugins for gene network inference: ARACNE [8] and Network BMA [9]. ARACNE is intended for steady-state expression data, while Network BMA handles time series but assumes a simple linear model of regulation without regard to mRNA decay. CyGenexpi thus provides an alternative to Network BMA in that it builds on a non-linear model including decay.
A preliminary version of the method presented in this paper was applied in our previous work [10]. The additional contributions of this paper are a) a polished, documented, and publicly available implementation of the method with a well-defined API, b) an improved workflow and software support for that workflow, c) interfacing the method with Cytoscape and R, and d) evaluation of the method on additional datasets. As Cytoscape does not natively support working with time series data, we also developed CyDataseries, a plugin for importing and handling time series and other forms of repeated-measurements data in Cytoscape.
Both Genexpi and CyDataseries are implemented in Java and are platform independent. Binaries, source code and documentation are available at http://github.com/cas-bioinf/genexpi/wiki/. The software is open source and licensed under LGPL version 3.
Implementation
The core of Genexpi - the algorithm for fitting model parameters - is implemented in OpenCL, with a Java wrapper. Thanks to the high portability of both Java and OpenCL, Genexpi can be executed on both GPUs and CPUs in any major operating system and has very good performance. There are currently three interfaces to the Genexpi core: CyGenexpi (a Cytoscape plugin), a command-line interface and an R interface. In this section we describe the model and fitting method of Genexpi; the implementation of the interfaces is straightforward. The initial part of this section is taken from [10] and its supplementary material, where we describe the first use of Genexpi in practice. In addition, we provide details of regularization and parameter fitting as well as further developments made to make the method usable by non-expert users, especially the semi-automatic evaluation of good fits and the "no change" and "constant synthesis" models.
The model
Genexpi is based on an ordinary differential equation (ODE) model for gene regulation, inspired by the neural network formalism [6]. In this model, the synthesis of new mRNA for a gene z controlled by a set of m regulators y_1, ..., y_m (genes or any other regulatory influence) is determined by an activation function f(ρ(t)) of the regulatory input ρ(t) = Σ_{j=1..m} w_j y_j(t) + b. Here w_j is the relative weight of regulator y_j and b is a bias (inversely related to the regulatory influence that saturates the synthesis of the mRNA). In our case, f is the logistic soft-threshold function f(x) = 1/(1 + e^{-x}). The transcript level of z is then governed by the ODE dz(t)/dt = k_1 f(ρ(t)) − k_2 z(t), (1) where k_1 is related to the maximal level of mRNA synthesis and k_2 represents the decay rate of the mRNA. Both k_1 and k_2 must be positive. The complete set of parameters for this model is thus β = {k_1, k_2, b, w_1, ..., w_m}. Given N samples from a time series of gene expression taken at time points t_1, ..., t_N, the inference task can be formalized as finding β that minimizes the squared error with regularization: argmin_β Σ_{i=1..N} (z(t_i) − ẑ_β(t_i))² + r(β). (2) Here z is the observed expression profile, ẑ_β the solution to (1) given the parameter values β and the observed expression of y_1, ..., y_m, and r(β) is the regularization term. The regularization term represents a prior probability distribution over β that gives preference to biologically interpretable values of β and is discussed in more detail below. Assuming Gaussian noise in the expression data, (2) is the maximum a posteriori estimate of β.
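To make the model concrete, here is a minimal numerical sketch of (1) and (2): Euler integration of the ODE on a densely resampled grid and the squared-error objective. Variable names are ours, the initial condition taken from the first observation is an assumption, and, for simplicity, the error is computed over the whole resampled grid rather than only at the N measured time points.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_target(params, regulators, dt, z0):
    """params = (k1, k2, b, w_1, ..., w_m); regulators has shape (m, n_steps)."""
    k1, k2, b, *w = params
    rho = np.asarray(w) @ regulators + b           # regulatory input rho(t)
    z = np.empty(regulators.shape[1])
    z[0] = z0                                      # assumed: start from the first observed value
    for i in range(1, z.size):                     # explicit Euler step of equation (1)
        dz = k1 * sigmoid(rho[i - 1]) - k2 * z[i - 1]
        z[i] = z[i - 1] + dt * dz
    return z

def objective(params, regulators, observed, dt, penalty=lambda p: 0.0):
    """Squared error plus an optional regularization term r(beta), cf. equation (2)."""
    z_hat = simulate_target(params, regulators, dt, observed[0])
    return np.sum((observed - z_hat) ** 2) + penalty(params)
```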
Our model is similar to that used by the Inferelator algorithm [1], although there are important differences: the Inferelator does not model decay (k_2) - it assumes the decay rate is always one. Further, the Inferelator minimizes the error of the predicted derivative of the expression profile, while we minimize the prediction error of the actual integrated expression profile and introduce the regularization term.
Smoothing the expression profiles
Since the expression data is noisy, Genexpi encourages smoothing the data prior to computation. We have had good results with linear regression on a B-spline basis with degrees of freedom equal to approximately half the number of measurement points. By smoothing we get results that are more robust with respect to low-frequency phenomena, but we sacrifice the ability to discover high-frequency changes and regulations (oscillations with frequency comparable to the measurement interval are mostly suppressed). Further, our experiments with fitting raw data or tight interpolations of the data (e.g. a cubic spline with knots at all measurement points) had little success in fitting even profiles that were highly correlated, due to the amplified noise in the data.
Smoothing of time series profiles has been used previously for network inference [11].
A further advantage of smoothing is that it lets us subsample the fitted curve at arbitrary resolution. The subsampling then allows us to integrate (1) accurately with the computationally cheap Euler method, making evaluation of the error function fast and easy to implement in OpenCL.
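An illustrative way to perform this kind of smoothing and dense resampling is least-squares regression on a cubic B-spline basis; the knot placement at quantiles and the synthetic profile below are our assumptions, not part of Genexpi.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def smooth_profile(t_obs, y_obs, df=6, degree=3):
    """Least-squares fit of y_obs on a B-spline basis with ~df basis functions."""
    n_internal = max(df - degree - 1, 0)
    internal = np.quantile(t_obs, np.linspace(0, 1, n_internal + 2)[1:-1])
    knots = np.concatenate([[t_obs[0]] * (degree + 1), internal, [t_obs[-1]] * (degree + 1)])
    return make_lsq_spline(t_obs, y_obs, knots, k=degree)   # returns a callable spline

# Example: smooth a 13-point profile and resample it on a dense grid before Euler integration.
t = np.linspace(0, 60, 13)
y = np.exp(-t / 20) + 0.05 * np.random.default_rng(1).normal(size=t.size)
spline = smooth_profile(t, y, df=6)
dense_t = np.linspace(t[0], t[-1], 200)
dense_y = spline(dense_t)
```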
Parameter fitting and regularization
Genexpi minimizes eq. 2 by simulated annealing. For each gene and candidate regulator set we execute 128 annealing runs with different initial parameter values. Using 128 runs was enough to achieve high replicability of the results. Annealing runs for the same target and regulator are executed on the same OpenCL compute unit, letting us move all necessary data to local memory and thus increase efficiency. We use the XorShift1024* random generator [12] as a fast and high-quality parallel source of randomness.
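For readers unfamiliar with the fitting strategy, a generic single-run simulated-annealing loop is sketched below. It is a plain CPU illustration of minimizing the objective (2), not Genexpi's OpenCL kernel, and the cooling schedule, proposal distribution and step size are assumptions.

```python
import numpy as np

def anneal(objective, init_params, n_iter=20000, t0=1.0, t_end=1e-3, step=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    current = np.asarray(init_params, dtype=float)
    f_cur = objective(current)
    best, f_best = current.copy(), f_cur
    for i in range(n_iter):
        temp = t0 * (t_end / t0) ** (i / n_iter)               # geometric cooling schedule (assumed)
        proposal = current + rng.normal(scale=step, size=current.size)
        f_prop = objective(proposal)
        # Accept improvements always; accept worse proposals with Boltzmann probability.
        if f_prop < f_cur or rng.random() < np.exp((f_cur - f_prop) / temp):
            current, f_cur = proposal, f_prop
            if f_cur < f_best:
                best, f_best = current.copy(), f_cur
    return best, f_best

# Genexpi executes 128 such runs with different initial parameter values per
# gene-regulator pair (in OpenCL) and keeps the best result.
```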
Note that in some cases, multiple vastly different combinations of parameters may yield almost identical regulatory profiles. For example, if the interval of attained regulatory input (min_{i=1..N} ρ(t_i), max_{i=1..N} ρ(t_i)) lies completely on one of the tails of f, the activation function becomes approximately linear over the whole interval, so increasing the weights and decreasing the bias while decreasing k_1 yields a very similar ẑ_β. To discriminate between those models and to force the parameters into biologically interpretable ranges, we introduce the regularization term r(β). In particular, we expect k_1 to be smaller than the maximal expression level of the target gene (i.e., the maximal transcript level cannot be achieved in less than a unit time starting from zero), we put a bound on the maximal steepness of the regulatory response, max_t |w_j y_j(t)| < 10 for all regulators j, and we expect the regulatory input to come close to zero (the steepest point of the sigmoid function) for at least one time point: min_t |ρ(t)| < 0.5.
For a suitable penalty function γ(x, ω) the regularization term becomes the sum of the penalties, where c is a constant governing the amount of regularization. In our work, the penalty for a value x > 0 and bound ω is defined so that minimizing γ(x, ω) is the same as maximizing the log-likelihood, assuming that x is distributed uniformly over (0; ωx) with some probability p and as ωx + α|e| with probability (1 − p), where e ∼ N(0, 1). In this interpretation, the probability p is uniquely determined by c in the regularization term and by choosing α such that the resulting density function is continuous.
We have empirically determined the best value of c to be approximately one tenth of the number of time points after smoothing. While without regularization many of the inferred models contained implausible parameter values, regularization forced almost all of those parameters into the given bounds - r(β) was zero for most models. At the same time, the mean residual error of the models inferred with regularization differed by less than one part in a hundred from models inferred without regularization.
Evaluating good fits
To evaluate whether a fit is good, we have chosen a simple but easily interpretable approach. The primary reason is that we intend to keep the human in the loop throughout the inference process, and thus the human has to be able to understand the criteria intuitively. Since most published time series expression data is reported only as averages without any quantification of uncertainty, we let the user set the expected error margin based on their knowledge of the data. The error margin is determined by three parameters: absolute, relative and minimal error. These combine in a straightforward way to give an error margin for each time point, depending on the expression level z(t). Fit quality is then the proportion of time points where the fitted profile is within the error margin of the measured profile. A fit is considered good if the fit quality is above a given threshold (the default value is 0.8).
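A small sketch of this check follows; note that the exact way the absolute, relative and minimal errors combine into a per-point margin is not spelled out above, so the particular combination below (and the default values) is an assumption.

```python
import numpy as np

def fit_quality(observed, fitted, absolute=0.1, relative=0.2, minimal=0.05):
    """Proportion of time points where the fitted profile lies within the error margin."""
    margin = np.maximum(minimal, absolute + relative * observed)   # assumed combination rule
    within = np.abs(fitted - observed) <= margin
    return within.mean()

def is_good_fit(observed, fitted, threshold=0.8, **margin_kwargs):
    return fit_quality(observed, fitted, **margin_kwargs) >= threshold
```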
No change and constant synthesis model
Prior to analyzing a gene as being regulated, we need to test for two baseline cases that would make any prediction useless. The first obvious case is genes that do not change significantly over the whole time range. Genes that do not change are excluded from further analysis, both as regulators and as targets, as the Genexpi model contains no information in that case.
A slightly more complicated case is the constant synthesis model, where we expect the mRNA synthesis to be constant over the whole time range, i.e., dz(t)/dt = k_1 − k_2 z(t). Note that this is the same as assuming there are 0 regulators. Since genes with constant synthesis could be fitted by any regulator by simply putting w = 0 and using a large b, those genes are excluded as targets. However, regulators that could be explained by constant synthesis are still analyzed, as there is meaningful information. Fitting the constant synthesis model is also done via simulated annealing in OpenCL.
For the putative regulations excluded this way, the correct interpretation is that the underlying dataset provides no evidence for or against such regulations. If there are biological justifications that the regulations should be visible in the data (e.g. that the regulatory effect should be larger than the measurement noise), it is possible to cautiously consider this as evidence against the regulations taking place.
Results and discussion
In this section we describe the intended workflow for analysis with Genexpi and its user interface and then we discuss results of evaluation on real biological data.
The primary user interface for Genexpi is the CyGenexpi plugin for the Cytoscape software, but Genexpi can also be run directly from R and via a command-line interface. For CyGenexpi, an important improvement over the ARACNE or Network BMA Cytoscape plugins is the direct involvement of the user in the process.
Genexpi workflow
The workflow for analysis with Genexpi is as follows:
1. Start with a network of putative regulations either obtained from database mining or experiments.
2. Import the time-course expression data and smooth them to provide a continuous curve.
3. Remove genes whose expression does not change significantly throughout the whole time-course.
4. Remove genes that could be modelled by the constant synthesis model.
5. Optional: human inspection of the results of steps 3 and 4, possibly overriding the algorithm's decisions.
6. Finding the best parameters of the Genexpi model for each gene-regulator pair. The fitted models are then classified into good and bad fits. Good fits indicate that the regulation is plausible, while bad fits show that the regulation either does not take place or involves additional regulators.
7. Optional: human inspection of the fits, possibly overriding the algorithm's classification (shown in Fig. 1).
This workflow is completely covered by CyGenexpi, with the help of CyDataseries, in a simple wizard-style interface. Alternatively, the same workflow, but without human intervention, can be run by a single function call in R. All interfaces also provide the user with the ability to run the individual steps separately.
While Genexpi can include multiple regulators for a gene, we found this not very useful in practice, as even for relatively long expression time series (13 time points), an arbitrary pair of regulators is able to model the expression of a large fraction of all genes, increasing the false positive rate. CyGenexpi therefore currently does not expose a GUI for using more than one regulator in the model. Using more regulators is however available for more advanced users via the command-line or R interfaces.
For CyGenexpi, the time series data is imported with CyDataseries from either a delimited text file or the SOFT format used in the Gene Expression Omnibus.

Fig. 1 Human inspection of the model fits in CyGenexpi. The user is shown the profile of the regulator (blue) and target (red) as well as the best profile found by Genexpi (green). The red ribbon is the error margin of the measured profile. The algorithm classified the first profile as a good fit, while the second was considered implausible to be regulated. The user may however modify the classification based on their knowledge of the data and organism.

While Genexpi can be used for de-novo regulon identification from time-series expression data only, a high rate of false positives should be expected. The main reason is that in real biological data, multiple sigma factors may have similar expression profiles, and Genexpi thus considers all genes regulated by one of the sigma factors as possibly regulated by all of the similar sigma factors. The evaluation in this paper therefore focuses on identifying the regulated genes among a set of plausible candidates. Nevertheless, the workflow for de-novo inference is almost the same as described above, only the initial network should contain a link from each investigated regulator to all other genes.
We evaluated Genexpi in three ways: 1) direct biological testing of the suggested regulatory relationships, 2) comparing the ability of Genexpi and other tools to reconstruct two literature-derived regulons and 3) measuring computing time required to process the data. The first part of the evaluation is taken from our previous work [10], while the latter two are new contributions.
In-vitro biological evaluation
This section recapitulates the relevant results obtained with Genexpi, originally reported as a part of [10]. We performed a basic analysis of the predictive performance of Genexpi with the SigA regulon of Bacillus subtilis combined with the expression time series from GSE6865 [13]. We followed the Genexpi workflow outlined in the previous section, including evaluation of fits by a human. Genexpi predicted 215 genes that were not known to be regulated by SigA as potential SigA targets. We selected 10 of those genes for in-vitro transcription assays. We found that 5 of them were SigA-dependent (for the remaining five, the regulation could not be excluded). More details of the SigA analysis can be found in the aforementioned paper. We have however excluded the SigA regulon from the purely computational evaluation, as the method was developed and tweaked on the SigA data and any comparison would thus likely be biased.
Reconstructing bacterial regulons
To extend the biological evaluation from [10] and to better determine Genexpi's performance in identifying regulons, we took two bacterial regulons from the literature: a) the SigB regulon of B. subtilis from Subtiwiki [14] as of January 2017 combined with the GSE6865 expression time series [13] and b) two versions of the SigR regulon of Streptomyces coelicolor: one derived with ChIP-chip [15] and the one determined via knockouts [16]. Both versions of the SigR regulon were combined with the GSE44415 expression time series [17].
For each of the literature regulons we first excluded targets that were constant or had constant synthesis (steps 3 and 4 of the workflow) and determined how many of the remaining members were considered by Genexpi to be regulated by the respective sigma factor - these correspond to true positives. Then we generated a set of random expression profiles with similar magnitude and rate of change as the sigma factor. Inspired by [18], we draw random profiles from a Gaussian process with a squared exponential kernel and zero mean function, transformed to have positive values. See Fig. 2 for an example of the random profiles. We then tested how many targets were predicted to be regulated by this nonsensical profile - these correspond to false positives.
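A minimal sketch of how such random profiles can be drawn is shown below; the length scale, variance, and the particular positivity transform (exponentiating the sample) are our assumptions for illustration.

```python
import numpy as np

def random_profile(time_points, length_scale=10.0, variance=1.0, rng=None):
    """Sample a positive-valued profile from a zero-mean GP with a squared-exponential kernel."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.asarray(time_points, dtype=float)
    d = t[:, None] - t[None, :]
    cov = variance * np.exp(-0.5 * (d / length_scale) ** 2)       # squared-exponential kernel
    sample = rng.multivariate_normal(np.zeros(t.size), cov + 1e-9 * np.eye(t.size))
    return np.exp(sample)                                          # assumed positivity transform
```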
We consider testing a random regulator profile to be a more reliable assessment than testing the complement of the literature-based regulon, for two reasons. First, it is a better match for the intended Genexpi workflow, which starts with a set of candidate genes. Here, using a random profile for the regulator models the situation where the candidate list is wrong and we expect Genexpi to reject a regulatory influence for most genes. Second, the complement is usually composed of less characterized genes and there is little guarantee that the complement contains genes that are not regulated by the sigma factor. The complement may include genes that are regulated by the sigma factor but have not been annotated yet, and also genes that have expression profiles similar to the profiles of the regulon of the analyzed sigma factor due to chance or non-regulatory interactions. Such profiles would be classified as false positives, while they in fact have nothing to do with the analyzed regulon and its sigma factor. Comparing the performance on the regulon complement actually depends more on the uniqueness of the sigma factor profile than on the inference algorithm.
For this evaluation we ran Genexpi with default settings and without any human input. Complete code to reproduce all of the results for this and the following section is attached as an R notebook in Additional file 1. For comparison, we performed the same analysis with TD-Aracne [19], an extension of the frequently used Aracne algorithm designed for time series data. TD-Aracne was run both on the whole dataset at once and on each regulator-target pair separately. Running on regulator-target pairs, however, had much worse performance than using the whole dataset, so those results are omitted here but can be inspected in Additional file 1. We also compared the results for the whole regulon and for the subset of the regulon that was predicted by Genexpi, i.e. without the genes removed in steps 3 and 4 of the workflow.
For all analyses, we smoothed the raw data by linear regression over a B-spline basis of order 3 with 3-10 degrees of freedom. TD-Aracne was tested with the raw data as well as with the smoothed data subsampled to give a lower number of equally spaced time points, as expected by TD-Aracne. For TD-Aracne we tested three methods of recovering the regulon from the inferred network over the full gene set: a) take only the genes that were marked as directly regulated by the sigma factor, b) take all genes connected by a directed path from the regulator, and c) take all genes connected to the regulator. Variant a) had very low performance overall; among b) and c) we report the result more favorable to TD-Aracne. For the SigR regulon of Kim et al., the results were very similar when only the targets marked as having "strong" evidence were used. All results not shown here can be found in Additional file 1. See Table 1 for the main results.
In the SigB regulon, Genexpi performs slightly better than TD-Aracne. While TD-Aracne (in multiple settings) confirms almost all of the literature regulon while rejecting over half of the regulations by a random profile, Genexpi using a spline with 4 degrees of freedom rejects two thirds of random regulations while also recovering 90% of the literature regulon. Moreover, Genexpi has the advantage of allowing for a sensitivity/specificity tradeoff by choosing the degrees of freedom for the spline - with high degrees of freedom, almost all random regulations are rejected while the majority of the literature regulon is still recovered. The performance of TD-Aracne varied unexpectedly with the chosen degrees of freedom. We also see that running TD-Aracne with smoothed data and removing no-change and constant-synthesis genes, as in the Genexpi workflow, allows only slight improvements in the performance of TD-Aracne over running it directly on the raw data (as TD-Aracne is designed to work).

Table 1 Main Evaluation Results
Results of Genexpi and TD-Aracne on the regulon reconstruction task. The "Regulator" column reports the proportion of predicted regulations by the true regulator, "Random" reports the proportion of predicted regulations by a random profile (averaged over 50 runs). The best results for each algorithm are highlighted in bold. TD-Aracne (tested) are the results of TD-Aracne only on those genes not removed by Genexpi in steps 3 and 4 of the workflow. The "tested" variant is not reported for the SigR regulon as the results are very similar to those on all genes. The DFs column contains the degrees of freedom for the spline, "#T" stands for "Number of genes tested by Genexpi", "Reg." for "Regulator" and "Rand." for "Random".
For both variants of the SigR regulon, TD-Aracne mostly found little difference between the literature-based and random regulons. The few cases of better performance by TD-Aracne occurred unpredictably with certain smoothings of the data. At the same time, Genexpi was rarely misled by the random regulations and recovered large fractions of the literature regulon while behaving consistently: the proportion of both true and random regulations grows with more aggressive smoothing (fewer degrees of freedom).
Computing time required
For the analysis of computing time, Genexpi was run on a mid-tier GPU (Asus Radeon RX 550) and TD-Aracne on an upper-level CPU (Intel i7 6700K). Both algorithms were run on a Windows 10 workstation with only basic precautions to prevent other processes from perturbing the system load. The numbers reported should therefore not be considered benchmarks but rather an informative estimate of the computing time during a normal analysis workflow. The results are shown in Table 2 and indicate that Genexpi was fast enough to be run repeatedly on commodity hardware, with TD-Aracne being slower but still fast enough for most practical use cases.
Reconstructing eukaryotic regulons
While Genexpi was designed for bacterial regulons, we also tested its performance on eukaryotic data, in particular the time series of gene expression throughout the cell cycle of Saccharomyces cerevisiae [20], deposited as GDS38. We chose the same 8 transcription factors regulating the cell cycle as in our previous work [21] and downloaded their regulons from the YEASTRACT database (as of 2018-02-09) [22]. We used a spline with 6 degrees of freedom to smooth the data and interpolate missing values. After excluding constant and constant-synthesis targets (steps 3 and 4 of the workflow), we selected 30 targets for each gene at random to reduce the computational burden. We then proceeded as in the bacterial regulon evaluation, generating random profiles and comparing the regulations recovered by both Genexpi and TD-Aracne across the measured regulator profiles and 20 random profiles. The results are shown in Table 3.
In this case, the signal is weaker than in the prokaryotes, which is not unexpected given the increased complexity of eukaryotic regulation. Genexpi gives the worst (indistinguishable from random) results for MBP1, SWI4 and SWI6, which are known to regulate in complexes and thus break the model expected by Genexpi. Interestingly, TD-Aracne is able to determine some of those regulations. For the other genes, Genexpi provides consistent but weak information, while TD-Aracne provides a strong signal for some genes while performing very poorly on the others.
The full code to reproduce the analysis can be found in Additional file 1.
Future work
The Genexpi workflow was kept deliberately simple, but this involves some inaccuracies. Most notably, Genexpi masks uncertainty in the data and uses multiple hard thresholds. Following [18], who use a similar model of gene regulation in a fully Bayesian setting, we want to extend Genexpi to handle uncertainty explicitly and provide full posterior probability distributions for the quantities of interest.

Table 2: Time taken to compute the possible regulations for a single regulon. All of the results were averaged across both the runs with the actual regulator profile and the runs with a randomly generated profile. All times in seconds.

Table 3: Results of Genexpi and TD-Aracne on the eukaryotic regulon reconstruction task. The "Regulator" column reports the proportion of predicted regulations by the true regulator, "Random" reports the proportion of predicted regulations by a random profile (averaged over 20 runs).
Conclusions
Our evaluation has shown that Genexpi is a useful part of a bioinformatician's toolbox for uncovering and/or validating regulons in biological systems. Genexpi was designed for bacterial regulons, but can, with caution, be employed also for eukaryotic data. It also provides transparent results and, unlike other similar programs, lets the human stay in the loop and apply expert knowledge when necessary. The parameters of the fitted models are biologically interpretable and thus can guide the design of future experiments. Time-series expression data cannot in principle provide complete information about the regulatory interactions taking place, and Genexpi is therefore best used as one of multiple sources of insight about a biological system. Genexpi is equipped with both a simple point-and-click interface for the Cytoscape application and with R and command-line interfaces for advanced users. | 6,430.2 | 2018-04-13T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Control of white mold (Sclerotinia sclerotiorum) through plant-mediated RNA interference
The causative agent of white mold, Sclerotinia sclerotiorum, is capable of infecting over 600 plant species and is responsible for significant crop losses across the globe. Control is currently dependent on broad-spectrum chemical agents that can negatively impact the agroecological environment, presenting a need to develop alternative control measures. In this study, we developed transgenic Arabidopsis thaliana (AT1703) expressing hairpin (hp)RNA to silence S. sclerotiorum ABHYDROLASE-3 and slow infection through host induced gene silencing (HIGS). Leaf infection assays show reduced S. sclerotiorum lesion size, fungal load, and ABHYDROLASE-3 transcript abundance in AT1703 compared to wild-type Col-0. To better understand how HIGS influences host–pathogen interactions, we performed global RNA sequencing on AT1703 and wild-type Col-0 directly at the site of S. sclerotiorum infection. RNA sequencing data reveals enrichment of the salicylic acid (SA)-mediated systemic acquired resistance (SAR) pathway, as well as transcription factors predicted to regulate plant immunity. Using RT-qPCR, we identified predicted interacting partners of ABHYDROLASE-3 in the polyamine synthesis pathway of S. sclerotiorum that demonstrate co-reduction with ABHYDROLASE-3 transcript levels during infection. Together, these results demonstrate the utility of HIGS technology in slowing S. sclerotiorum infection and provide insight into the role of ABHYDROLASE-3 in the A. thaliana–S. sclerotiorum pathosystem.
insight into the control of S. sclerotiorum prior to systemic infection. Further, while ABHYDROLASE-3 silencing has demonstrated reduced disease severity against S. sclerotiorum using SIGS and HIGS technology, its role in pathogenesis is still largely unknown, and identification of interacting partners and pathways may provide new candidates to target through RNAi.
Plants activate innate immunity pathways in response to pathogen attack, and these pathways depend on the recognition of specific pathogen-derived molecules 19. These pathways are commonly separated into necrotrophic and biotrophic responses and demonstrate distinct modes of pathogen detection. For example, necrotrophic pathogens are detected through the presence of pathogen- and damage-associated molecular patterns (PAMPs/DAMPs) via pattern recognition receptors (PRRs) in pattern-triggered immunity (PTI), while in contrast, biotrophic pathogens are detected via intracellular NBS-LRR proteins detecting secreted effector molecules in effector-triggered immunity (ETI) 19. Activation of these recognition pathways induces specific defense hormones which can further activate down-stream defense processes to appropriately respond to pathogen attack. These hormones include jasmonic acid (JA) and ethylene (ET), which can directly activate the induced systemic resistance (ISR) pathway and are associated with promoting physical barriers against necrotrophic pathogens 20. Other phytohormones like salicylic acid (SA) activate the systemic acquired resistance (SAR) pathway, leading to a local hypersensitive response as a defense against biotrophic infection 21,22. While S. sclerotiorum has been previously categorized as a necrotrophic fungal pathogen, recent reports provide evidence of an early biotrophic phase 23,24. S. sclerotiorum secretes pathogenicity factors including oxalic acid (OA) and a suite of cell wall degrading enzymes to weaken and break down host cells upon infection 25,26. During the initial stages of infection, a brief biotrophic phase may occur, where OA and other secreted pathogenicity factors act to suppress host defenses, specifically the SA-mediated SAR pathway 23,24. The inability of S. sclerotiorum to successfully suppress SAR may lead to increased resistance against S. sclerotiorum infection. Further support for this initial biotrophic phase comes from the identification of secreted effector molecules essential for pathogenicity 27 and intracellular NBS-LRRs necessary for successful defense against S. sclerotiorum 28. Lastly, synergistic effects between SA-mediated SAR and JA/ET-mediated ISR have been described against necrotrophic infection 29, and removal of either of these defense hormones results in an unsuccessful defense response 30.
In the current study, we engineered A. thaliana expressing hpRNA complementary to S. sclerotiorum ABHYDROLASE-3 under the activity of a CaMV 35S promoter (AT35S::SS1G_01703RNAi herein referred to as AT1703). AT1703 A. thaliana was more tolerant to S. sclerotiorum infection with reduced lesion size, fungal load and ABHYDROLASE-3 transcript abundance compared to wild-type Col-0. Light microscopy of the infection process further identified reduced epidermal and mesophyll tissue degradation and vascular tissue colonization in AT1703 lines. Global RNA sequencing revealed enrichment of the SA-mediated SAR pathway and predictive gene regulatory network analysis predicted heat shock factor (HSFA4A, HSFA8) and TGA (TGA10) transcription factors as putative regulators of resistance in AT1703. Analysis of ABHYDROLASE-3 predicted interacting partners suggests this pathogenicity factor plays a role in the methionine salvage pathway which is necessary for successful polyamine biosynthesis. Taken together, these data demonstrate the utility of HIGS in slowing S. sclerotiorum infection and highlight the importance of the SA-mediated SAR pathway in contributing to successful defense against S. sclerotiorum.
HIGS of S. sclerotiorum ABHYDROLASE-3 slows infection in transgenic AT1703 Arabidopsis
hpRNA complementary to the ABHYDROLASE-3 gene coding sequence of S. sclerotiorum (SS1G_01703) was transformed into wild-type Col-0 A. thaliana under the control of the CaMV 35S promoter and propagated to the T3 generation. Three independent insertion lines were propagated to the T2 generation (Supplementary Fig. S1a), and our top-performing line (AT1703.1, herein referred to as AT1703) was selected for RNA sequencing and further analysis at the T3 generation due to its significantly reduced lesion size and ABHYDROLASE-3 transcript reduction compared to wild-type Col-0 (Supplementary Fig. S1b). Light microscopy of leaf cross sections using the chitin-targeting stain lactophenol blue showed no apparent differences between uninfected AT1703 and wild-type Col-0 leaves (Fig. 1a). However, at 2 dpi, wild-type Col-0 leaves show an increased abundance of S. sclerotiorum in both mesophyll and vascular tissues (Fig. 1b). At 3 dpi, S. sclerotiorum shows further colonization of vascular tissue in wild-type Col-0 compared to AT1703 leaves, coupled with decreased levels of epidermal and mesophyll tissue degradation in AT1703 lines (Fig. 1c). At 1 dpi, no differences in lesion size or fungal abundance were observed (Fig. 1d,e), while at 2 dpi, S. sclerotiorum ABHYDROLASE-3 transcript levels showed a 73% reduction in AT1703 with no significant differences in lesion size or fungal load compared to wild-type Col-0 (Fig. 1f). However, by 3 dpi AT1703 showed a 48% reduction in lesion size, an 84% reduction in fungal load, and a 93% reduction in ABHYDROLASE-3 transcript accumulation compared to wild-type Col-0 plants (Fig. 1d,f). While this significant transcript reduction is present as early as 2 dpi, these data suggest a ~24-h lag phase where transcript abundance is reduced but insufficient to immediately slow infection (Fig. 1d,f).
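One common way percent transcript reductions such as those above are derived from RT-qPCR data is the 2^-ddCt method. The sketch below illustrates that calculation only; the Ct values, the reference-gene pairing, and the assumption that a comparable normalization was used in this study are ours, not the paper's.

```python
# Hypothetical 2^-ddCt calculation for relative transcript abundance (illustrative only).
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Expression of the target gene relative to the control condition."""
    d_ct_sample = ct_target - ct_reference            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Invented Ct values: target transcript vs. a reference gene, in treated vs. control samples.
rel = relative_expression(ct_target=27.0, ct_reference=20.0,
                          ct_target_ctrl=25.0, ct_reference_ctrl=20.0)
print(f"relative expression: {rel:.2f}, reduction: {(1 - rel) * 100:.0f}%")
```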
Global RNA-sequencing reveals enriched gene activity of the salicylic acid-mediated SAR pathway in AT1703
To uncover the underlying gene expression patterns between AT1703 and wild-type Col-0, we performed global RNA sequencing on uninfected (0 dpi) leaves as well as directly at the site of infection at mid (2 dpi) and late (3 dpi) stages of S. sclerotiorum leaf infection. Hierarchical clustering of raw count expression values revealed distinct clusters forming between uninfected (0 dpi) and S. sclerotiorum infected tissues (Fig. 2). Genotypes also clustered in response to S. sclerotiorum infection compared to infection time points. Global gene activity showed no significant differences in the proportion of low, moderate or highly accumulated transcripts, with on average 20%, 21% and 58% of detected transcripts showing low, moderate and high count levels respectively (Fig. 2). Further, total transcript detection between genotypes is similar, with an increase of 1813 and 1348 detected transcripts in AT1703 compared to Col-0 at 2 and 3 dpi respectively due to an increased abundance of S. sclerotiorum transcripts in wild-type Col-0. Differential expression analysis identified shared and specifically up-regulated gene sets in response to S. sclerotiorum in AT1703 and wild-type Col-0 (Fig. 3a). For example, 335 and 235 genes were specifically upregulated at 2 and 3 dpi respectively, while 444 genes were up-regulated at both timepoints in AT1703. Further, 449 and 235 genes were down-regulated in AT1703 relative to Col-0 at 2 and 3 dpi respectively, and 351 genes were down-regulated at both timepoints (Supplementary Dataset S4). Next, to identify putative biological processes associated with these differentially expressed gene sets, we performed Gene Ontology (GO) term enrichment (Fig. 3b). Data show the SA-mediated SAR pathway (p = 2.05 × 10^-27) and SA biosynthesis (p = 1.16 × 10^-31) as well as enrichment of the hypersensitive response (p = 3.07 × 10^-7) in gene sets up-regulated at 2 and 3 dpi in AT1703. The enrichment of SA biosynthesis, SAR and the hypersensitive response indicates a strong role of the SAR defense pathway in the successful defense response and incompatible interaction (p = 1.28 × 10^-13) found in AT1703 lines. While SA biosynthesis and SAR demonstrate shared enrichment in AT1703 at 2 and 3 dpi, it is worth noting that statistically significant enrichment is also found specifically at 2 dpi, while no such enrichment is found for down-stream activated defense processes like the hypersensitive response and incompatible interaction (Fig. 3b). Further, 750 genes were up-regulated in response to S. sclerotiorum across both genotypes and timepoints (Fig. 3a). This shared gene set showed significant enrichment of PTI-associated JA (p = 7.77 × 10^-34) and ET biosynthesis (p = 1.32 × 10^-49), induced systemic resistance (p = 6.63 × 10^-8) and PAMP induction of immune response (p = 1.71 × 10^-4) (Supplementary Dataset S1), suggesting that activity of these JA/ET-mediated defense pathways alone in wild-type Col-0 is insufficient to slow S. sclerotiorum infection without enrichment of SA-mediated SAR, providing additional evidence of the synergism of SA-mediated SAR and JA/ET-mediated ISR/PTI in mounting a successful defense against S. sclerotiorum.
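The low, moderate and high abundance classes mentioned above follow the count thresholds given in the Methods (detection cutoff of at least 1, low below 5, moderate below 25, high at or above 25). The sketch below only illustrates that binning and the stated DEG p-value cutoff; the table layout and column names are assumptions, not the study's actual pipeline.

```python
# Illustrative binning of detected transcripts and filtering of DESeq2-style results.
import pandas as pd

counts = pd.DataFrame({"gene": ["g1", "g2", "g3", "g4"],
                       "mean_count": [2.0, 12.0, 140.0, 0.4]})

def abundance_bin(x):
    if x < 1:
        return "not detected"   # below the detection cutoff
    if x < 5:
        return "low"
    if x < 25:
        return "moderate"
    return "high"

counts["bin"] = counts["mean_count"].map(abundance_bin)

# Keep differentially expressed genes at the cutoff used in the text (p < 0.0001).
deseq_results = pd.DataFrame({"gene": ["g1", "g2", "g3"],
                              "log2FoldChange": [2.1, -0.3, 4.0],
                              "pvalue": [3e-6, 0.2, 5e-9]})
degs = deseq_results[deseq_results["pvalue"] < 1e-4]
print(counts, degs, sep="\n\n")
```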
In contrast, enrichment of up-regulated genes specifically in wild-type Col-0 shows association with programmed cell death and cell wall reinforcement, including leaf senescence (p = 1.97 × 10^-6), autophagy (p = 8.88 × 10^-6), cell wall modification (p = 2.61 × 10^-6) and organization (p = 2.75 × 10^-4), and lignin biosynthesis (p = 7.22 × 10^-7).
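For context, enrichment p-values of the kind reported above are commonly obtained with a one-sided hypergeometric test over a GO-annotated background. The sketch below shows that generic test only; it does not reproduce SeqEnrich's actual statistics, and all set sizes are invented.

```python
# Generic GO-term enrichment test (hypergeometric) with placeholder set sizes.
from scipy.stats import hypergeom

population = 20000      # annotated genes in the background (assumed)
annotated = 300         # background genes carrying the GO term of interest
de_genes = 750          # differentially expressed genes tested
overlap = 40            # DE genes carrying the term

# P(X >= overlap) under the hypergeometric null.
p_value = hypergeom.sf(overlap - 1, population, annotated, de_genes)
print(f"enrichment p-value: {p_value:.2e}")
```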
To explore potential regulators of immunity in AT1703 expressed throughout infection, we performed a transcription factor network analysis using SeqEnrich on our AT1703 up-regulated gene set in response to S. sclerotiorum. Here, we identified the TGA transcription factor TGA10, as well as the HSF transcription factors HSFA4A and HSFA8, which are both predicted to regulate SA signalling and SAR activity (Fig. 4a). Further, we explored differential expression between AT1703 and wild-type Col-0 in response to S. sclerotiorum in genes that have been previously described to interact with these transcription factors. Here, we found up-regulation of the TGA-interacting ENHANCED DISEASE SUSCEPTIBILITY 1 (EDS1), NONEXPRESSER OF PR GENES 1 (NPR1) and ISOCHORISMATE SYNTHASE 1 (ICS1) in AT1703 A. thaliana, as well as up-regulation of the HSFA4A-interacting kinase MITOGEN-ACTIVATED PROTEIN KINASE 3 (MPK3) (Fig. 4b).
ABHYDROLASE-3 silencing influences expression of polyamine synthase genes in S. sclerotiorum.
While AT1703 demonstrates significant ABHYDROLASE-3 transcript reduction during S. sclerotiorum infection, we had yet to explore the influence of ABHYDROLASE-3 silencing on gene expression within the fungus itself. Three predicted interacting partners of ABHYDROLASE-3 (SS1G_02233, SS1G_14434, SS1G_03597) were identified in S. sclerotiorum using stringdb (https://string-db.org/) (Fig. 5a). While SS1G_14434 and SS1G_03597 have yet to be significantly characterized, SS1G_02233 encodes a 5′-methylthioadenosine phosphorylase (MTAP) that prevents accumulation of the polyamine biosynthesis by-product S-methyl-5′-thioadenosine (MTA), which would otherwise inhibit polyamine biosynthesis and ultimately fungal growth 31,32. Here, all three predicted interacting partners showed significant transcript reduction in S. sclerotiorum infecting AT1703 compared to wild-type Col-0 at 2 and 3 dpi (Fig. 5b). Further, reduction in these predicted partners was consistent with our target gene ABHYDROLASE-3, showing at least a 50% increase in reduction from 2 to 3 dpi. To further support the role of ABHYDROLASE-3 in polyamine biosynthesis, we quantified expression of genes encoding the rate-limiting enzyme in polyamine biosynthesis, ORNITHINE DECARBOXYLASE (ODC), as well as the down-stream enzymes SPERMINE SYNTHASE and SPERMIDINE SYNTHASE. Transcript reduction was observed for all three genes in S. sclerotiorum challenging AT1703, suggesting a role of ABHYDROLASE-3 in polyamine biosynthesis during S. sclerotiorum infection.
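For readers who want to retrieve predicted interaction partners of this kind programmatically, a sketch of a STRING query is given below. It is not part of the study's methods; the endpoint, parameter names, and the taxon identifier are our assumptions and should be verified against STRING's current API documentation before use.

```python
# Sketch of querying STRING for predicted partners of SS1G_01703 (assumed API usage).
import requests

url = "https://string-db.org/api/tsv/interaction_partners"   # assumed endpoint
params = {
    "identifiers": "SS1G_01703",   # S. sclerotiorum gene of interest
    "species": 665079,             # assumed NCBI taxon id for S. sclerotiorum 1980 UF-70
    "limit": 10,
}
response = requests.get(url, params=params, timeout=30)
response.raise_for_status()
print(response.text.splitlines()[:5])   # header plus first few predicted partners
```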
AT1703 is not tolerant to the phytopathogen Botrytis cinerea
ABHYDROLASE-3-targeting hpRNA expressed by AT1703 was carefully designed not to share 21-nt siRNAs with transcripts of related species, including the closely related necrotrophic pathogen B. cinerea. To study whether AT1703 was also tolerant to B. cinerea, we next inoculated both wild-type Col-0 and AT1703 with B. cinerea spores (Fig. 6a). Here, no significant differences in lesion size (Fig. 6b), fungal load (Fig. 6c) or target transcript abundance (Fig. 6d) were found using the B. cinerea ABHYDROLASE-3 homolog BC1G_08022, suggesting no off-target effects in B. cinerea challenging AT1703.
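The design criterion above implies a 21-mer overlap scan between the hpRNA target region and non-target transcripts. The sketch below illustrates that check only; the sequences are placeholders (not the actual SS1G_01703 hpRNA region or BC1G_08022), and in practice the reverse complement would also be scanned.

```python
# Scan whether a dsRNA target region shares any 21-mer with an off-target transcript.
def kmers(seq, k=21):
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def shared_21mers(dsrna_region, off_target_transcript):
    """Return 21-mers present in both the dsRNA region and the off-target transcript."""
    return kmers(dsrna_region) & kmers(off_target_transcript)

dsrna = "ATGGCTTCAAGGTCATCGGATCCATTGCGTACGATCAGGTTACCGA"       # placeholder sequence
off_target = "ATGGCATCAAGGTCTTCGGACCCATTGCGAACGATCAGGTAACCGA"  # placeholder sequence

print(len(shared_21mers(dsrna, off_target)), "shared 21-mers")
```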
Discussion
RNAi technology has the demonstrated ability to control pathogenic fungi in host plants through constitutively expressed dsRNA molecules in transgenic plants via host-induced gene silencing (HIGS). The constitutive expression of fungal-targeting dsRNAs allows for a durable means of protection against pathogen attack throughout the host plant lifecycle 16,33,34. Protection of the plant relies on successful uptake and processing of dsRNA and processed siRNAs, which can be generated in the host prior to uptake by the challenging pathogen 3. Cross-kingdom trafficking of both dsRNAs and processed siRNAs has been demonstrated across the host-pathogen interface in necrotrophic pathogens including S. sclerotiorum and the closely related B. cinerea, making these species ideal candidates for HIGS-based control.
Successful S. sclerotiorum infection involves the penetration and destruction of host epidermis and mesophyll tissue followed by colonization of the vasculature 28,37. Once the vascular tissue has been colonized, infection is considered systemic, as the fungus can cut off the host nutrient supply while extending mycelia throughout host tissues. Susceptible genotypes have demonstrated preferential growth throughout the vascular tissue in Brassica species compared to respective tolerant lines 38. While microscopy of S. sclerotiorum infected wild-type Col-0 and AT1703 demonstrates reduced S. sclerotiorum abundance in AT1703, we also find decreased levels of epidermal and mesophyll tissue degradation and reduced abundance of vascular tissue fungal colonization, thereby preventing systemic infection throughout the plant. Limiting systemic infection has significant implications for reducing yield losses from S. sclerotiorum infection. This was demonstrated by Wytinck et al. 16, where transgenic B. napus lines expressing the same ABHYDROLASE-3 hpRNA construct against S. sclerotiorum show reduced vascular tissue colonization coupled with an increase in seed mass compared to untransformed B. napus post whole-plant S. sclerotiorum infection. As infection typically initiates on mature leaves before progressing into stem tissue through the vascular system 39,40, it is critical to reduce vascular colonization directly at the site of infection, which can ultimately slow S. sclerotiorum progression throughout the host.
Target gene silencing in the pathogen is essential to slow the infection through HIGS. AT1703 plants demonstrate significant ABHYDROLASE-3 transcript reductions in S. sclerotiorum at 1 and 2 dpi, while reduced lesion size and fungal load are only identified at 3 dpi relative to wild-type Col-0 plants. Wytinck et al. 16 showed similar results in transgenic B. napus, with significant reductions in ABHYDROLASE-3 transcript abundance and fungal load observed at 1 and 2 dpi respectively. In contrast, prior HIGS studies targeting S. sclerotiorum essential genes (TRX1, OA) detected both target transcript reduction and reduced lesion size at 2 dpi in host plants 33,41, suggesting this delay to be specific to HIGS plants targeting ABHYDROLASE-3. These findings may be attributed to down-stream interactions of ABHYDROLASE-3 with other interacting partners. Reduced activity of ABHYDROLASE-3, in addition to regulators of polyamine biosynthesis, is likely responsible for, or at least contributes to, the inhibition of fungal growth. However, further characterization of ABHYDROLASE-3 interacting partners and the polyamine biosynthesis pathway during ABHYDROLASE-3 silencing is required to better understand this delay. While the utility of HIGS in controlling S. sclerotiorum has been demonstrated in multiple host plants 16,33,34, the underlying host response of these transgenic plants, as well as host-pathogen interactions in this system, are less understood and depend upon the specific gene being silenced. Here, global RNA sequencing and differential expression analyses provide insight into the response of AT1703 leaves to S. sclerotiorum directly at the site of infection. Moreover, the mode of pathogenicity of S. sclerotiorum has been recently debated, as emerging studies have suggested the presence of a brief biotrophic stage during early infection highlighted by the suppression of the SA-mediated SAR pathway 23,24. SA biosynthesis in response to pathogen attack occurs through the isochorismate synthase (ICS) pathway, where ICS1 is essential for SA biosynthesis and successful SAR 42. Downstream of SA accumulation, NPR1 activity is required for expression of pathogenesis-related (PR) genes involved in the SAR response through SA signal transduction 42,43. AT1703 shows significant enrichment of the SA-mediated SAR pathway compared to wild-type Col-0, with enrichment of both ICS1 and NPR1 in response to S. sclerotiorum infection, suggesting an inability of the fungus to suppress SAR while infecting AT1703 A. thaliana. Further, while NPR1 does not contain a DNA binding domain, it acts as a transcriptional cofactor, enhancing TGA transcription factor binding affinity to PR genes upon translocation into the nucleus in response to SA 44. While multiple TGA transcription factors have been shown to interact with NPR1 in response to SA in A. thaliana 45, TGA10 interaction has yet to be explored. Here, TGA10 was identified as the lone member of the TGA transcription factor family significantly enriched in AT1703 A. thaliana in response to S. sclerotiorum and was identified in our predictive transcription factor network to regulate SAR activity.
Further, TGA10 has been demonstrated to be necessary for activation of the reactive oxygen species (ROS) response in PTI triggered by the bacterial PAMP flg22, where tga10 mutants were unable to activate flg22-response genes and showed reduced ROS production 46, highlighting its importance in the activation of defense genes in response to pathogen attack. Our network analysis further identified two HSF transcription factors (HSFA4A, HSFA8) associated with the SA signalling pathway that are up-regulated in AT1703 in response to S. sclerotiorum. While HSFs have been shown to interact directly with EDS1, resulting in SA accumulation in response to pathogen attack 47,48, similarly to TGA10, HSFA4A and HSFA8 have specifically been implicated in the host ROS response as well as activation of the host antioxidant system 49,50. Activation of these HSFs can occur through phosphorylation by MPK3 in response to stress, which also demonstrates significant up-regulation in AT1703 A. thaliana in response to S. sclerotiorum 49,51,52. While these data suggest SAR activity is an essential component of a successful defense response against S. sclerotiorum infection, SAR alone is insufficient to protect the host, as quantitative disease resistance is complex and requires the coordination of multiple defense pathways and processes 53. For example, SA-mediated SAR and JA/ET-regulated ISR pathways exhibit synergistic effects in defense against necrotrophic pathogen attack 29. This was demonstrated using npr1 knockout Arabidopsis mutants deficient in SA signalling, coi1 mutants deficient in JA signalling, and ein2 mutants deficient in ET signalling, which all show hyper-susceptibility phenotypes due to the inactivity of the SAR and ISR pathways 30. Differential expression analysis not only identified enrichment of SA-mediated SAR in AT1703, but also JA/ET biosynthesis and the ISR pathway, which were significantly enriched in both AT1703 and wild-type Col-0 lines relative to their respective uninfected leaves, suggesting both defense pathways are essential in mounting a successful defense response against S. sclerotiorum.
To better understand the different defense responses to infection by wild-type Col-0 and AT1703 plants, we further explored defense processes enriched specifically in wild-type Col-0 and not AT1703. S. sclerotiorum demonstrates necrotrophic growth, especially at later stages of infection. Thus, the by-products of gene activity belonging to autophagy and leaf senescence found in wild-type Col-0 can serve as nutrients for the progressing pathogen to continue growth as it feeds on necrotic host tissue 54,55. With increased necrotic host tissue available to S. sclerotiorum in wild-type Col-0, host plants would require mechanical defenses through cell wall reinforcement and lignification in response to cell wall degrading enzymes being consistently secreted during actively progressing infection 56. Lignification in response to S. sclerotiorum has previously been described by Höch et al. 56, where genes associated with lignin biosynthesis showed significantly increased expression levels in susceptible B. napus lines compared to moderately resistant lines, specifically during mid to late stages of infection. These findings agree with our data, where we find enrichment of lignin biosynthesis and cell wall reinforcement in wild-type Col-0 lines compared to AT1703 throughout S. sclerotiorum infection.
While we have examined the influence ABHYDROLASE-3 silencing has on the host response, there is also a need to explore differences within the pathogen and the effect that ABHYDROLASE-3 silencing has at the molecular level. ABHYDROLASE-3 is annotated as a tRNA synthetase but is also predicted to be involved in aflatoxin biosynthesis. However, this gene has yet to be further characterized beyond its ability to slow S. sclerotiorum infection through SIGS and HIGS RNAi treatments 15,16. Using the stringdb (https://string-db.org/) protein-protein interaction network database, we identified three predicted interacting partners of ABHYDROLASE-3 (SS1G_14434, SS1G_03597 and SS1G_02233). Expression levels of ABHYDROLASE-3 and these predicted interacting partners in S. sclerotiorum were coordinately reduced in transgenic AT1703 compared to wild-type Col-0. While SS1G_14434 and SS1G_03597 have yet to be functionally characterized in S. sclerotiorum, SS1G_02233 encodes MTAP, which plays a critical role in cleaving and processing accumulated MTA, a by-product of polyamine biosynthesis 31,32. Buildup of MTA through inactivity of MTAP ultimately results in the inhibition of polyamine biosynthesis 31,32. This in turn can have significant impacts on fungal growth, as the three most widely distributed polyamines, putrescine, spermidine and spermine, have been shown to be necessary for successful growth and cell differentiation 32,57. Ornithine decarboxylase (SS1G_05468) is an essential enzyme of polyamine biosynthesis, converting ornithine into the polyamine putrescine 32,57. Once ornithine is converted into putrescine, further down-stream synthase genes are necessary for conversion into both spermidine and spermine. These enzymes include spermidine synthase (SS1G_10282), which converts putrescine into spermidine, and spermine synthase (SS1G_12906), which converts spermidine into spermine. Our RT-qPCR analysis of these essential polyamine synthesis genes shows transcript reduction during S. sclerotiorum infection of AT1703 plants compared to their wild-type susceptible counterparts, suggesting a potential interaction between ABHYDROLASE-3 and the identified predicted partners of the polyamine biosynthesis pathway. Further, as putrescine can also be produced through spermine and spermidine catabolism, expression of the rate-limiting enzymes SSAT and PAO can also be explored to improve our understanding of the influence of ABHYDROLASE-3 on polyamine metabolism 32. While the potential direct or indirect interaction of ABHYDROLASE-3 and these predicted interacting partners is still unclear, these data provide some preliminary insights into the role ABHYDROLASE-3 plays in S. sclerotiorum pathogenicity and further identify a group of promising RNAi candidate genes in the polyamine synthesis pathway.
One benefit of RNAi and HIGS technology in crop protection is the ability to design pathogen-specific molecules that are ineffective against potentially beneficial species in the ecosystem 4,58. This requires the careful design of target sequences within genes, as dsRNA molecules containing as few as three overlapping 21-mer nucleic acid sequences have demonstrated the ability to significantly reduce off-target transcripts 59. B. cinerea serves as an ideal candidate to study these potential off-target effects in controlling S. sclerotiorum given both pathogens share similar hosts and disease cycles in addition to having a high level of sequence homology 60. A previous study 15 showed that dsRNA targeting the ABHYDROLASE-3 B. cinerea homolog BC1G_08022 was able to reduce B. cinerea lesion size, fungal load and BC1G_08022 transcript abundance when applied to B. napus leaves prior to B. cinerea inoculation. While BC1G_08022 is capable of slowing B. cinerea infection when targeted, our dsRNA region targeting S. sclerotiorum ABHYDROLASE-3 shows no 21-mer overlapping regions with off-target species (Supplementary Dataset S2), and AT1703 A. thaliana challenged with the closely related necrotrophic fungal pathogen B. cinerea shows no significant differences in lesion size, fungal load or BC1G_08022 transcript reduction. Taken together, these data demonstrate the efficacy of HIGS as an alternative control measure in slowing S. sclerotiorum infection through targeted silencing of S. sclerotiorum ABHYDROLASE-3. Reduction of ABHYDROLASE-3 in S. sclerotiorum provides A. thaliana the ability to increase innate defense responses through SA-mediated SAR at the global mRNA level. The identification of co-silenced ABHYDROLASE-3 interacting partners further uncovers a promising group of candidates for RNAi control against fungal pathogens. Finally, with ongoing concerns of potential off-target effects in RNA-based applications, we demonstrate the specificity of HIGS targeting S. sclerotiorum in AT1703 against a closely related fungus. Taken together, we provide evidence that HIGS applications can be used as an effective and sustainable strategy in crop improvement against necrotrophic pathogens.
Initially, gene fragments were ligated into the pENTR4 vector before insertion into the pHellsgate8 destination vector. To confirm insert identity, entry vectors were sequenced at the Centre for Applied Genomics in Toronto, Ontario. Inserts were then recombined into the destination vector using a Gateway LR clonase reaction (ThermoFisher Scientific) following the manufacturer's instructions, with the modification of a 4:1 entry vector to destination vector ratio according to Wytinck et al. 16. A cold shock treatment was used to transform Agrobacterium. Successful transformants were selected using colony PCR with separate XhoI and XbaI restriction enzyme digestions. Cultures of Agrobacterium were grown to OD 1.6-2.0, pelleted using centrifugation, and re-suspended in MS media and 0.001% Silwet L-77. To transform A. thaliana, seeds were initially sterilized using alternating washes of 70% and 95% ethanol and then placed on Murashige and Skoog (MS) media (Sigma Aldrich). Seeds were vernalized for three days at 4 °C prior to being grown in a constant light incubator. Upon formation of first leaves, A. thaliana seedlings were transplanted into Sunshine Mix #1 and grown to maturity at 22 °C. Mature flowering A. thaliana plants were dipped into the Agrobacterium culture and kept in high humidity conditions for two days.
Floral dips were repeated five times before seeds were dried and harvested. Transformants were selected on MS media containing 150 μg/mL kanamycin 62. PCR with the same primers used for cloning was used to confirm the presence of the S. sclerotiorum gene fragment within the plant (Supplementary Dataset S3). Three confirmed independently transformed A. thaliana lines were grown to the T2 generation and tested for their ability to silence ABHYDROLASE-3 transcripts and slow S. sclerotiorum infection. The top-performing line (AT1703.1) was subsequently selected for further RNA sequencing analysis.
Methods
Arabidopsis thaliana infection assays. Sclerotinia sclerotiorum ascospores were collected at the Morden Research and Development Centre, Agriculture and Agri-Food Canada, Morden, MB, Canada and stored at 4 °C in desiccant in the dark according to Ref. 15. S. sclerotiorum ascospore inoculum was made by suspending spores at a concentration of 1 × 10^6 spores/mL in a potato dextrose broth (PDB) and peptone solution (24 g PDB, 10 g peptone, 1 L water). Using a pipette, 10 µL of the solution was transferred onto mature transgenic and wild-type Col-0 A. thaliana leaves at n = 15 leaves per treatment. Plants were stored in growth chambers at room temperature (21.0 °C), with plant trays being sealed with lids to maintain high levels of humidity for three days, allowing for infection progression in planta. At 2 and 3 days post inoculation (dpi), lesion area was quantified using ImageJ software and leaves were collected for RNA sequencing, fungal load, and transcript abundance experiments. Fifteen infection sites were quantified per treatment, while three leaves were collected per bio-replicate with three biological replicates collected per treatment. Botrytis cinerea infections were performed using ascospores from the Saint-Jean-sur-Richelieu Research and Development Centre, Agriculture and Agri-Food Canada, QC, Canada following the same protocol as for S. sclerotiorum ascospore inoculations. Raw count data were generated using featureCounts 67 and inputted into pvclust (https://cran.r-project.org/web/packages/pvclust/index.html) for hierarchical clustering analysis with a detection cutoff ≥ 1. Detected transcripts were subsequently sorted into low (≥ 1 and < 5), moderate (≥ 5 and < 25), or high (≥ 25) abundance levels. Further, raw count data were processed in DESeq2 for differential expression analysis. Here, differentially expressed genes (DEGs) were identified between pairwise comparisons at a p-value cutoff of p < 0.0001. GO enrichment of identified DEGs was performed using SeqEnrich 68 with terms being considered significantly enriched at p ≤ 0.001 according to software specifications. GO heatmap visualization was performed using the conditional formatting function in Excel. GO summary tables can be found in Supplementary Dataset S1. Predictive transcription factor networks were generated using SeqEnrich (Supplementary Dataset S5) with DEG gene lists being used as input. | 6,629 | 2023-04-20T00:00:00.000 | [
"Biology"
] |
Effect of Carboxyl Group Position on Assembly Behavior and Structure of Hydrocarbon Oil–Carboxylic Acid Compound Collector on Low-Rank Coal Surface: Sum-Frequency Vibration Spectroscopy and Coarse-Grained Molecular Dynamics Simulation Study
In this work, the assembly behavior and structure of a compound collector with different carboxyl group positions at the low-rank coal (LRC)–water interface were investigated through coarse-grained molecular dynamics simulation (CGMD) combined with sum-frequency vibration spectroscopy (SFG). The choice of compound collector was dodecane + decanoic acid (D-DA) and dodecane + 2-butyl octanoic acid (D-BA). CGMD results showed that the carboxyl group at the carbon chain’s middle can better control the assembly process between carboxylic acid and D molecules. SFG research found that the carboxyl group at the carbon chain’s termination had a greater impact on the displacement of the methyl/methylene symmetric stretching vibration peak, while the carboxyl group at the carbon chain’s middle had a greater impact on the displacement of the methyl/methylene asymmetric stretching vibration peak. The spatial angle calculation results revealed that the methyl group’s orientation angle in the D-BA molecule was smaller and the carboxyl group’s orientation angle in the BA molecule was bigger, indicating that D-BA spread more flatly on the LRC surface than D-DA. This meant that the assembled structure had a larger effective adsorption area on the LRC surface. The flotation studies also verified that the assembly behavior and structure of D-BA with the carboxyl group at the carbon chain’s middle at the LRC–water interface were more conducive to the improvement of flotation efficiency. The study of interface assembly behavior and structure by CGMD combined with SFG is crucial for the creation of effective compound collectors.
Introduction
A compound with a carboxyl group (-COOH) in the molecule is called a carboxylic acid. The carboxylic carbon atom makes three σ bonds in sp2 hybrid orbitals, two of which are with the two oxygen atoms and one with the hydrocarbon group. The remaining p electron joins the oxygen atom in the carboxyl group to form the π bond of C=O, but the oxygen in the -OH portion of the carboxyl group contains a pair of unshared electrons that can join the π bond to form a p-π conjugated system. The oxygen atom in the -OH group shifts its electron cloud towards the carbonyl group due to p-π conjugation, and in turn draws the electron cloud of the O-H bond closer to itself. This strengthens the polarity of the O-H bond and makes it easier for the H atom to dissociate [1]. Based on the properties of the carboxyl group, many scholars in the field of flotation use chemicals containing the carboxyl group as collector components to efficiently separate useful minerals. Cao et al. [2] used a mixture of oleic, linoleic, and linolenic acids as collectors to increase the flotation recovery of phosphate. Keith Quast [3] demonstrated that 18-carbon unsaturated fatty acid mixtures were an ideal flotation agent for recovering oxidized minerals with high mineral recovery rates. Similarly, for the flotation of low-rank coal (LRC), carboxylic acid agents are added to traditional oil collectors to form strong hydrogen bonds between carboxyl groups and the LRC surface, so as to achieve the efficient separation of LRC. Tian et al. [4] discovered that the flotation effect of LRC in the presence of carboxylic acid was superior to that of alkane. Liu et al. [5,6] systematically studied the mechanism by which the mass ratio and addition sequence of the carboxylic acid composite collector affect LRC flotation. For the choice of carboxylic acid, most carboxylic acid reagents place the carboxyl group near the end of the carbon chain, the rationale being that this improves the collecting performance of the carbon chain. However, when the carboxyl group is in the middle of the carbon chain, how is the collection of low-rank coal affected? In other words, the structure-activity relationship of the carboxyl group position in LRC flotation is worthy of further study, which has significant ramifications for the development and use of compound collectors for LRC.
In the field of interface chemistry, experts and scholars have proposed interface assembly mechanisms for mixed surfactants in solution. In the mineral processing of metal ores, studies have shown that composite collectors undergo an assembly process on mineral surfaces [7][8][9]. In recent years, Han et al. [10,11] proposed assembly mechanisms for different types of collector molecules at mineral surface interfaces, including the concepts of the assembly of collectors of different charge and interfacial hydrophobic assembly. Cui et al. [12,13] also used density functional theory to study the arrangement structure and arrangement mode of mineral collector molecules with different structures at the solid-liquid interface. However, there are few studies on the interfacial assembly mechanism of LRC compound collector molecules, and the existing ones do not go deep enough. Regarding how the compound collector and LRC interact, according to the prevailing theory now in use, the LRC surface is positively impacted by the synergistic effects of the different reagent components in the compound collector, and the resulting co-adsorption structure encourages the adsorption of the entire reagent [14,15]. When the collector molecules in the solid-liquid interface system adsorb onto the LRC surface, the different structures of the reagent molecules themselves lead to different assembly structures, which is related to the interface synergy effect and ultimately affects the LRC flotation efficiency. Therefore, the study of the interface assembly structure and behavior of reagent molecules is the key to realizing the controllable adsorption and efficient development of compound collectors on the LRC surface.
A breakthrough has been made in the simulation and characterization of interface assembly, but most of the research focuses on polymer material structure, nanomaterial structure, biofilm structure, and mineral crystal surfaces. However, the interfacial assembly mechanism of LRC collector molecules has not been studied, and the assembly structure and behavior involved have not been deeply explored. It is encouraging that research on interface assembly structures and systems has matured in other fields, and its simulation and detection methods are expected to provide a reference for research on the surface assembly structure and mechanism of LRC. Coarse-grained molecular dynamics simulation (CGMD) has been applied to the study of self-assembly systems [16][17][18], enabling the study of assembly processes at larger time scales and spatial scales. Additionally, Liu et al. [19] used CGMD to investigate the spreading characteristics of collector droplets at the coal-water interface. Because molecular assembly is a macroscopic process, besides simulation and prediction, it is characterized by various methods, such as scanning probe microscopy, electrochemistry, spectroscopy, X-ray diffraction, and transmission electron microscopy. The arrangement, orientation, and spatial conformation of molecules on a solid support can be analyzed by the sum-frequency generation spectrometer (SFG). Champika Weeraman et al. [20] used SFG technology to demonstrate the relationship between the molecular conformation of chemisorbed dodecyl mercaptan ligands and the surface curvature of spherical gold nanoparticles. Wang et al. [21] used SFG technology to study the adsorption state of dodecylamine on the quartz surface at various pH levels. All of this provides the possibility of studying the compound collector's structure and assembly behavior on the LRC surface.
Therefore, this paper mainly studied the assembly behavior and structure of the compound collector made up of hydrocarbon oil and carboxylic acid at the LRC-water interface through SFG combined with CGMD, clarified the effect of the carboxyl group position on the assembly mechanism of the compound collector, and provided a theoretical reference and guidance for the creation of compound collectors and the regulation of molecular assembly on the LRC surface.
The trajectory results of the CGMD simulations were analyzed, and coal-water interface adsorption structure diagrams at different time nodes from 0 ns to 1000 ns were selected to determine the accumulation, adsorption behavior, and assembly process of the compound collector molecules with different carboxyl group positions at the LRC-water interface. Figures 1 and 2 respectively show the adsorption structures of the compound collector molecules with different carboxyl group positions at different time nodes at the coal-water interface, where, because of the three-dimensional periodic boundary conditions and for ease of observation, Figure 1k or Figure 2k is a quadruple graph of Figure 1j or Figure 2j in the X × Y boundary range. By comparing the molecular structures of the reagent at different time nodes, it is found that in the process of adsorption onto the LRC surface, intermolecular forces drive the compound collector molecules through an assembly process (Figure 1c-j or Figure 2c-j), and the assembly structure tends to form semi-ordered clusters of several to a dozen molecules so as to minimize the dynamic energy. Before 50 ns, that is, when the compound collector molecules gradually diffuse and rearrange in the form of individuals or clusters after adhering to the LRC surface, the LRC surface has a larger spreading area for D-DA, with a carboxyl group at the carbon chain's termination, than for D-BA, with a carboxyl group at the carbon chain's middle. This shows that the molecular rearrangement process of D-DA on the LRC surface occurs more quickly. When the final assembly structure is formed, comparing Figure 1j and Figure 2j, the adsorption of the molecules in D-BA is more orderly in the first layer. Compared with D-DA, there are fewer crossed molecules on its surface. The assembly process between carboxylic acid molecules and D molecules appears to be better controlled when the carboxyl group is in the carbon chain's middle. In D-DA, because the D molecule and the carboxyl group have a stronger non-bonding connection, the methyl group at the other end of the carboxylic acid molecule is arranged far away from the D molecule, making the assembly structure less parallel and orderly than that of D-BA.
Caption (Figures 1 and 2): Blue represents the coarse-grained water particles, long-chain alkanes are represented by a green stick model, and the carboxyl group is represented by a red ball. In (a), the boundaries of the cells are represented by white lines. (a) is the overall view of the simulation system, and (b) is the top view of (a). (k) is a quadruple graph of (j) in the X × Y boundary range. Note that in (b-k), water is not shown to better show the assembly of D-BA.
Analysis of Action Mechanism during Assembly
The last 200 ns of the trajectory file were selected to analyze the interaction energy and radial distribution function (RDF) [22] between the D and carboxylic acid molecules in the different kinds of compound collectors, and to explore the action mechanism of the different compound collectors in the process of coal-water interface assembly. As seen from the RDF results in Figure 3a, for the compound collectors with different carboxyl group locations, all RDF curves have two significant peaks within a distance of 10.0 Å, the first peak near 5.0 Å and the second peak near 8.5 Å. If the distance between the centers of two molecules is less than 10 Å, they are considered neighbors and can form small clusters or aggregates [23], so the two significant peaks in the RDF curve within a distance of 10.0 Å represent the possibility of carboxylic acid molecules forming clusters around the D molecule. For the two peaks of the RDF, the order of peak intensity between carboxylic acid molecules and D molecules at different carboxyl group positions is as follows: DA < BA. Moreover, Figure 3b shows the RDF curve between carboxyl groups and D molecules. The RDF peak intensity between carboxyl groups and D molecules is also stronger, and the peak distances of the RDF peaks between the different carboxyl groups and D molecules are 8.125 Å (D-DA) and 7.725 Å (D-BA), respectively. This shows that the BA molecule, with a middle-placed carboxyl group on its carbon chain, interacts more strongly with the D molecule as a whole, and the interaction distance is also closer. Therefore, when forming the final assembly structure, the first-layer adsorption of the molecules in D-BA is more parallel and ordered.
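To illustrate how an RDF of this kind is computed from bead positions, a minimal sketch is given below. The box handling is simplified (cubic box, minimum-image convention) and the coordinates are random placeholders rather than CGMD trajectory data.

```python
# Minimal center-to-center radial distribution function g(r) between two bead sets.
import numpy as np

def rdf(pos_a, pos_b, box, r_max=15.0, n_bins=150):
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for a in pos_a:
        d = pos_b - a
        d -= box * np.round(d / box)                  # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r[r > 1e-6], bins=edges)[0]
    shell_vol = 4.0 / 3.0 * np.pi * np.diff(edges ** 3)
    density_b = len(pos_b) / box.prod()
    ideal = shell_vol * density_b * len(pos_a)        # expected counts for an ideal gas
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / ideal

rng = np.random.default_rng(1)
box = np.array([60.0, 60.0, 60.0])                    # box lengths in angstroms
acid = rng.uniform(0, 60, (200, 3))                   # carboxylic acid bead positions
dodecane = rng.uniform(0, 60, (200, 3))               # dodecane bead positions
r, g = rdf(acid, dodecane, box)
```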
In order to better explain the interaction between the D and the carboxylic acid molecules in the compound collector, the interaction energy between the D and carboxylic acid molecules was calculated. Here, E_total, E_dodecane, E_acid, and E_coal-water are the energy of the entire system and the energies of the dodecane molecules, the carboxylic acid molecules, and the coal-water subsystem, while E_dodecane+coal-water, E_acid+coal-water, and E_dodecane+acid are the total energies of dodecane + coal/water, carboxylic acid + coal/water, and dodecane + carboxylic acid, respectively. The interaction energy between the D molecules and the carboxylic acid molecules is then
E^inter_(dodecane & acid) = E_dodecane+acid − (E_dodecane + E_acid).
Dodecane and carboxylic acid molecules interact with each other more strongly the higher the absolute value of E^inter_(dodecane & acid). The interaction energies E^inter_(dodecane & DA) and E^inter_(dodecane & BA) are −114.42 and −116.35 kcal/mol, respectively. At the coal-water interface, the interaction energies between carboxylic acid molecules at different carboxyl group positions and D molecules differ only slightly, while the absolute value of the interaction energy between BA molecules and D molecules is larger than that between DA molecules and D molecules. The results show that carboxylic acid molecules with a carboxyl group in the middle of the carbon chain interact more strongly with the D molecule at the coal-water interface, which also agrees with the observed assembly structure and the RDF calculation results.
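The bookkeeping behind the interaction energy above is simple; the sketch below illustrates it. Because the exact expression in the source text is garbled, the formula follows the reconstructed form given above, and all energy values are placeholders, not results from the study.

```python
# Illustrative interaction-energy bookkeeping; component energies would come from
# evaluations of the CGMD system with only the named molecule groups retained.
def interaction_energy(e_dodecane_acid, e_dodecane, e_acid):
    """Interaction energy between the dodecane and carboxylic acid molecule groups."""
    return e_dodecane_acid - (e_dodecane + e_acid)

e_inter = interaction_energy(e_dodecane_acid=-5116.0, e_dodecane=-2000.0, e_acid=-3000.0)
print(f"E_inter = {e_inter:.2f} kcal/mol")   # placeholder output around -116 kcal/mol
```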
Assembly Structure of Carboxylic Acid Compound Collector with Different Carboxylic Group Positions at LRC/Water Interface
SFG Spectral Analysis
As depicted in Figure 4, the SFG spectra of the compound collectors with different carboxyl group positions were obtained by fitting under different polarization states after adsorption on the surface of the coal chip. For contrast, Figure 4a displays the SFG spectra of the bare coal chip, without adsorption of any substance, in the various polarization states. For the initial coal chip in the ppp polarization state, the C-H vibration exhibits distinct peaks in the following categories: the methyl group's symmetric stretching vibration peak (CH3,ss, 2875 cm−1) and asymmetric stretching vibration peak (CH3,as, 2961 cm−1), and the methylene groups' symmetric stretching vibration peak (CH2,ss, 2852 cm−1) and asymmetric stretching vibration peak (CH2,as, 2924 cm−1). In the ssp polarization state, the primary C-H group vibration peaks are CH3,ss (2869 cm−1), CH3,as (2955 cm−1), and CH2,as (2924 cm−1) [24][25][26]. Generally, in the ssp polarization state, the SFG signal from the solid chip surface is faint, with the methylene group exhibiting only a slight asymmetric stretching vibration. In contrast to the ssp polarization state, the ppp polarization state exhibits significantly greater spectral intensity. Additionally, the compound collector's abundance of methylene groups, owing to their varied locations on the carbon chain, results in a broader peak width for the methylene group vibration peak compared to the methyl group [27].
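The peak positions above come from fitting the measured spectra; a commonly used SFG lineshape is |chi_NR + sum_q A_q / (w - w_q + i*Gamma_q)|^2. The sketch below fits two resonances of that form to a synthetic spectrum; it is an illustration of the fitting idea, not the study's actual fitting procedure or parameters.

```python
# Fit a two-resonance SFG lineshape to a synthetic C-H stretch spectrum.
import numpy as np
from scipy.optimize import curve_fit

def sfg_intensity(w, chi_nr, a1, w1, g1, a2, w2, g2):
    chi = chi_nr + a1 / (w - w1 + 1j * g1) + a2 / (w - w2 + 1j * g2)
    return np.abs(chi) ** 2

wavenumbers = np.linspace(2800, 3000, 400)
# Synthetic "measured" spectrum with CH2,ss- and CH3,ss-like resonances plus noise.
true = sfg_intensity(wavenumbers, 0.05, 4.0, 2852, 8.0, 6.0, 2875, 7.0)
measured = true + np.random.default_rng(2).normal(0, 0.01, true.size)

p0 = [0.05, 3.0, 2850, 10.0, 5.0, 2878, 10.0]          # initial guesses
popt, _ = curve_fit(sfg_intensity, wavenumbers, measured, p0=p0)
print("fitted peak centres:", popt[2], popt[5])
```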
Upon adsorbing the compound collectors with varied carboxyl group positions onto the solid chip, the respective vibration peaks of these groups are displayed in Figure 4d,e and Table 1. Combined with the chart, it can be seen that the main difference in the ppp polarization state is the detected CH2,ss vibration peak positions, which are 2829 cm−1 (D-DA) and 2852 cm−1 (D-BA), respectively. Compared with the original chip, the CH2,ss vibration peak position on the chip surface after D-DA treatment is more shifted. This also shows that the DA molecule, with a carboxyl group at the carbon chain's termination, requires a lower energy for the assembly process with the D molecule, while the BA molecule, with a carboxyl group at the carbon chain's middle, requires a higher energy for the assembly process due to its strong non-bond interaction with the D molecule. At the same time, in the ppp and ssp polarization states, the wave number of the symmetric stretching vibration peak (ss) of the chip treated with D-DA is smaller than that of the chip treated with D-BA, while the wave number of the asymmetric stretching vibration peak (as) of the chip treated with D-BA is smaller than that of the chip treated with D-DA, which is also caused by the difference in the position of the carboxyl group. The carboxyl group has a great influence on the vibration frequency of the methyl or methylene groups around it due to its strong electron-withdrawing effect. Therefore, a carboxyl group at the carbon chain's termination has a greater influence on the displacement of the methyl/methylene symmetric stretching vibration peak, and a carboxyl group at the carbon chain's middle has a greater influence on the displacement of the methyl/methylene asymmetric stretching vibration peak.
To better comprehend how the different compound collector molecules differ in their assembly structures at the coal-water interface, the final output structure of the dynamics calculation for each system was selected for analysis. The orientation of the polar groups of the different compound collector molecules in space differs during assembly. Based on the characteristics of coarse-grained modeling, the orientation distributions of the polar groups of the different carboxylic acid molecules relative to the model plane of LRC were calculated. The directional angle θ is defined as the angle formed by the normal vector and the vector from the coarse particle adjacent to the carboxyl group to the carboxyl group coarse particle (see Figure 5). Figure 6 shows the distribution of carboxyl groups of the carboxylic acid molecules with different carboxyl group positions relative to the LRC plane. It can be seen that the carboxyl group orientation angles in the carboxylic acid molecules with different carboxyl group positions are 62.5° (DA) and 65.24° (BA), respectively. The orientation distribution angle of the carboxyl groups in BA molecules is larger, indicating that BA molecules with carboxyl groups at the carbon chain's middle spread at a flatter angle on the LRC surface than DA molecules with carboxyl groups at the carbon chain's termination, that the carboxyl groups are more closely associated with the LRC surface, and that the adsorption on the low-rank coal surface is more stable in the polar water phase. This also indicates that the carboxyl group at the carbon chain's middle interacts more strongly with the LRC surface.
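As a small illustration of the directional angle θ defined above (the angle between the surface normal and the vector from the bead adjacent to the carboxyl bead towards the carboxyl bead), a sketch is given below. The bead coordinates are placeholders, not CGMD output.

```python
# Orientation angle between the LRC surface normal and a carboxyl-group bond vector.
import numpy as np

def orientation_angle(neighbor_bead, carboxyl_bead, normal=(0.0, 0.0, 1.0)):
    v = np.asarray(carboxyl_bead, float) - np.asarray(neighbor_bead, float)
    n = np.asarray(normal, float)
    cos_theta = np.dot(v, n) / (np.linalg.norm(v) * np.linalg.norm(n))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

theta = orientation_angle(neighbor_bead=[10.0, 10.0, 5.0], carboxyl_bead=[10.5, 10.2, 5.4])
print(f"theta = {theta:.1f} degrees")
```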
Evaluation of Orientation Angle and Conformational Change in Methyl Groups at LRC/Water Interface
In the above section, the overall orientation angle and conformational changes of the carboxyl groups in the carboxylic acid molecules were calculated by simulation. To better explain the overall interfacial adsorption conformation of the compound collector molecules, the methyl group orientation angle distribution at the gas-solid interface after the different compound collectors adsorbed on the surface of the solid chip was calculated by combining the SFG spectral features with the formula for measuring the orientation angle of interfacial molecular groups. The orientation angles corresponding to the methyl groups on the surface of the solid chip are 43.06° (original), 49.30° (D-DA), and 46.74° (D-BA), respectively, before and after the adsorption of the compound collectors with different carboxyl group positions. Combined with the orientation angle distribution of the carboxyl groups obtained by CGMD, although the overall orientation angle of the carboxyl group of the BA molecule is larger, that is, it lies more parallel to the LRC surface because the carboxyl group is in the carbon chain's middle, the carbon chains on both sides of the carboxyl group form a certain angle, so the methyl group on the chip surface after D-BA treatment has a smaller orientation angle. That is, the methyl groups of some BA molecules tend to be distributed vertically with respect to the LRC surface. In general, D-BA spreads more flatly on the LRC surface than D-DA, and the effective adsorption area of the assembled structure on the LRC surface is larger.
AFM Adsorption Morphology Characterization
As shown in Figure 7, (a) is the surface of the chip without any adsorbed reagent, while (b) and (c) are the surfaces of the chip after adsorption of the compound collectors with different carboxyl group positions (D-DA and D-BA, respectively); each panel includes a plan view and a 3D view of the chip surface. The atomic force microscopy (AFM) images change markedly after the compound collectors with different carboxyl group positions are adsorbed on the chip surface. The plan views show bright spots of varying number, shape, and size, and in the 3D images the average roughness at the locations of the bright spots is greater than that of the original chip surface (9 nm). The many mountain-shaped bright spots indicate that the spots on the chip surface are adsorbed compound collector. Comparing the AFM images in (b) and (c), more bright spots appear on the chips treated with the different compound collector solutions, so the adsorption amount cannot be judged directly. However, comparison of the 3D images shows that the average height of the bright spots for D-DA adsorption is greater than that for D-BA, which again reflects the special molecular topology of the BA molecule and echoes its conformation at the LRC/water interface.
Flotation Results
Figure 8 shows the flotation results for the pure coal samples with the compound collectors having different carboxyl group positions. As the dosage of the compound collector increases, the yield of flotation clean coal increases to different extents, and at the same dosage the clean coal yield obtained with D-BA, whose carboxyl group is at the carbon chain's middle, is higher than that obtained with D-DA, whose carboxyl group is at the carbon chain's termination. At dosages of 0.6 kg/t and 1 kg/t there is a large gap between the two clean coal yields: the yields obtained with D-BA are 30% and 12% higher than those obtained with D-DA, respectively. At a dosage of 2 kg/t, the flotation clean coal yield reaches 98.11% (D-BA) and 95.21% (D-DA), and almost all of the valuable material has been floated. In general, D-BA, with the carboxyl group at the carbon chain's middle, gives a higher clean coal yield than D-DA, with the carboxyl group at the carbon chain's termination.
In the pure coal flotation experiments, when the collector dosage exceeds 2 kg/t a high clean coal yield is already obtained, and the yield increases only very slowly with further increases in dosage. Therefore, in the flotation tests on the actual coal sample, the compound collector dosage was set at 3 kg/t and the flotation behavior of the two collectors on the actual LRC sample was investigated. Figure 9 displays the flotation results for the actual coal samples using the compound collectors with different carboxyl group positions. The chart shows that the combustible matter recovery follows the order D-BA (89.56%) > D-DA (85.18%), while the clean coal ash follows the order D-DA (11.59%) > D-BA (10.89%). Under the same flotation conditions, D-BA therefore achieves both a higher combustible matter recovery and a lower clean coal ash content. In summary, the flotation performance of the compound collectors with different carboxyl group positions on LRC follows the order D-BA > D-DA. This also confirms that, in the actual flotation process, the assembly behavior and final assembly structure of D-BA, with the carboxyl group at the carbon chain's middle, are more beneficial to the improvement of flotation efficiency.
Materials
In this investigation, two LRC samples, raw coal and block refined coal, were used for the experiments. Ultra-low-ash coal with an ash content of 2.38%, obtained through a float-and-sink experiment, served as the pure coal sample. Figure 10 shows the narrow-scan XPS C 1s result for the pure coal sample. The hydrophilic functional group content is very high, which is typical of LRC, and the ratio of the oxygen-containing functional group contents is about 3:1. The raw coal and pure coal samples were ground into powder, and −0.5 mm powder samples were obtained with a 0.5 mm sieve for the flotation tests. For the other tests, since the equipment requires a smooth test surface, a corresponding coal chip was custom-made by the Biolin Company according to the XPS results of the coal samples and used to study the assembly structure and behavior of the compound collectors on the LRC surface. This study mainly explored the influence of the carboxyl group position of the carboxylic acid on the compound collector at the LRC/water interface, so two different carboxylic acids were selected: decanoic acid (DA), with the carboxyl group at the carbon chain's termination, and 2-butyloctanoic acid (BA), with the carboxyl group at the carbon chain's middle. Each carboxylic acid was combined with dodecane in a 1:4 mass ratio to obtain the compound collectors D-DA and D-BA.
Models
The all-atom LRC surface model, 107 × 123 Å² (X × Y), was constructed by randomly and proportionally grafting oxygen-containing functional groups onto a single graphene layer, following the modeling method described in the literature [5]. Graphene oxide/graphene systems are usually described with a Martini force field when their mesoscopic dynamic behavior is studied [28-30], and this force field matches such systems well. Therefore, the Martini 2.0 force field [31,32] was also used in this investigation to simulate the CGMD of the LRC-compound collector-water system. The Martini model is based on a many-to-one mapping rule that groups the atoms of the molecules in the system into coarse-grained particles. Following the literature [19], the coarse-grained mapping of the all-atom LRC molecules, compound collector molecules, and water molecules was carried out and the corresponding force-field types were assigned. In addition, during the modeling process 10% of the P4 water beads were replaced with BP4 beads [32] to prevent the water from crystallizing at ambient temperature. Figure 11 shows the mapping structures of the different coarse-grained molecules.
Computational Method
The CGMD computations were carried out with the Mesocite module of Materials Studio 7.0. First, an initial LRC-water-compound collector coarse-grained system (X × Y × Z: 107 × 123 × 190 Å³) was built; the system contained a 120 Å-thick vacuum slab to suppress periodic-boundary effects along the Z axis, and the mass ratio of dodecane molecules to carboxylic acid molecules in each compound collector was 4:1. The initial structure was relaxed with the smart optimization strategy to avoid a high-energy starting configuration, and CGMD with a 20 fs time step and a total simulation time of 1000 ns was then run on the optimized structure. The specific simulation parameters and details, such as the choice of ensemble and the interaction calculation methods, are given in the Supplementary File S1.
SFG Experiments
Theoretical, technical, and analytical background on SFG has been widely described elsewhere [33-35]. The sum-frequency spectroscopy system from EKSPLA (Lithuania) consisted of five main parts: a picosecond laser system, a frequency-doubling system, an optical parametric system, a signal-generation system, and an acquisition system. The picosecond pulsed laser (wavelength 1064 nm) generated tunable infrared light (1000-4000 cm−1) and visible light (λ = 532 nm) for the SFG experiment after frequency doubling and optical parametric processes. The two beams were incident on the sample interface at the same time to produce the sum-frequency signal. The polarization of the incident light and of the sum-frequency signal light was set to either the s or the p direction (that is, the electric field was either perpendicular or parallel to the plane of incidence), and a specific polarization of the infrared light (s or p) was chosen to obtain the sum-frequency vibration spectrum. The combination of these three polarization directions is referred to as the polarization combination of the sum-frequency vibration spectrum. By convention, the polarization combinations are named in the order sum-frequency, visible, infrared; for instance, ssp denotes s-polarized sum-frequency signal light, s-polarized visible incident laser, and p-polarized infrared incident laser. Different polarization combinations give different spectral information. In this experiment, a reflective co-propagating configuration was used. The infrared and visible laser beams were incident at the sample interface at incidence angles of (60° ± 1°) and (55° ± 1°), respectively. The delay was adjusted so that the two laser beams overlapped in space and time, and the sum-frequency signal light in the reflection direction was detected. The incident visible laser energy was about 200 µJ, and the infrared laser energy was about 150 µJ. The scanning step of the infrared light was 2 cm−1, and each data point was obtained by accumulating and averaging the signals of 100 laser pulses. The intensity of the SFG signal in the output spectrum was normalized to the incident laser energies (that is, the SFG output energy was divided by the infrared and visible incident laser energies and normalized to the gold film signal). The experiment was carried out in a clean room at constant temperature [(22.5 ± 0.5) °C] and constant humidity (40%).
The SFG test samples were coal chips onto which the compound collectors had been adsorbed from solution. Residual, unadsorbed reagent solution on the chip surface was rinsed off with ethanol and a large amount of ultra-pure water, after which the chip was dried with high-purity nitrogen and placed on the sample stage for the SFG test. The C-H vibration region (2800 cm−1 to 3000 cm−1) was measured, in accordance with the composition of the compound collectors. Two polarization combinations, ppp and ssp, were measured for each part of each sample. The fitting details of the SFG spectra are given in the Supplementary File S1.
Calculation of Methyl Orientation Angle of the Interfacial Molecule
A major advantage of studying interfaces with SFG is that, in addition to correctly identifying and assigning vibrational spectral peaks by polarization analysis, the orientation angle of the corresponding groups can be calculated from the ratio of the spectral peak intensities under different polarization combinations [36-38]. Generally, the angle between the molecular symmetry axis and the positive z-axis of the laboratory coordinate frame (that is, the interface normal) is defined as the molecular orientation angle θ; it quantitatively characterizes the adsorption state of molecules at the interface and is an important parameter for studying their adsorption configuration. In this investigation, the orientation angles of the methyl groups of the different compound collectors on the coal chip surface were calculated.
First, the SFG signal strength is given by Equation (1):

I(ω) = [8π³ ω² sec²β / (c³ n(ω) n(ω₁) n(ω₂))] |χ(2)_eff|² I(ω₁) I(ω₂), (1)

where ω, ω₁, and ω₂ are the frequencies of the signal light, visible light, and infrared light, respectively; β is the exit angle of the signal light; χ(2)_eff is the second-order effective polarizability of the interface; c is the speed of light; and n(ω_i) is the wavelength-dependent refractive index of the bulk phase. It can be seen from Equation (1) that, when the experimental configuration is unchanged, the SFG signal intensity directly reflects the second-order effective polarizability of the interface. To determine the orientation angle conveniently, χ(2)_eff can often be expressed as a functional of the orientation angle [36], where N_s is the interfacial molecular density, d is the strength factor, c is the generalized orientation parameter, and r(θ) is the orientation functional, which contains the information on the orientation angle of the molecular group and on the orientation angle distribution. "⟨⟩" denotes the ensemble average over the probability distribution of the orientations of the interfacial molecules; the orientation angle is determined here using a δ distribution. When calculating the orientation angle, the c and d values must be computed first; for a given experimental system and experimental conditions, c and d are constants. The methyl group, which has C3v symmetry, under the ssp polarization combination is used as a model to compute the c and d values, where r is the methyl group's depolarization ratio (here r = 3.4 [39]) and α(2)_ccc is the non-zero hyperpolarizability tensor element of the methyl symmetric stretching vibration mode. Through a series of conversions, the c and d values of the second-order polarizability of the C3v-symmetric methyl group under the ssp polarization combination are obtained; the c and d values under the ppp polarization combination are obtained in the same way.
After scanning the sample surface to obtain the SFG spectra, the intensity of each spectral peak under the different polarization combinations is obtained by fitting; the second-order nonlinear polarizabilities of a given group under two polarization combinations are then selected and their ratio (e.g., χ_q,ppp / χ_q,ssp) is calculated. The group's orientation angle can then be obtained by combining this ratio with Formula (3). The general formula for calculating the orientation angle follows from the expressions above:

χ_q,ppp / χ_q,ssp = (d_ppp ⟨cos θ⟩ − c_ppp ⟨cos³ θ⟩) / (d_ssp ⟨cos θ⟩ − c_ssp ⟨cos³ θ⟩) (7)
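To illustrate how Equation (7) is used in practice, the minimal sketch below scans candidate orientation angles for the one whose predicted ratio best matches the fitted χ_q,ppp/χ_q,ssp value, assuming a δ orientation distribution. The c and d constants and the measured ratio in the example are placeholders, not the values derived for this experimental geometry.

import numpy as np

def chi_ratio(theta_deg, d_ppp, c_ppp, d_ssp, c_ssp):
    """Right-hand side of Equation (7) for a delta orientation distribution."""
    ct = np.cos(np.radians(theta_deg))
    return (d_ppp * ct - c_ppp * ct**3) / (d_ssp * ct - c_ssp * ct**3)

def solve_orientation_angle(measured_ratio, d_ppp, c_ppp, d_ssp, c_ssp):
    """Scan theta in (0, 90) deg and return the angle whose predicted ratio
    is closest to the fitted chi_ppp/chi_ssp ratio."""
    thetas = np.linspace(0.5, 89.5, 1000)
    ratios = chi_ratio(thetas, d_ppp, c_ppp, d_ssp, c_ssp)
    return thetas[np.argmin(np.abs(ratios - measured_ratio))]

# Illustrative numbers only: the c and d constants must be derived for the
# actual experimental geometry and Fresnel factors, as described in the text.
theta = solve_orientation_angle(measured_ratio=0.5,
                                d_ppp=0.30, c_ppp=0.55,
                                d_ssp=0.45, c_ssp=0.35)
print(f"estimated methyl orientation angle: {theta:.1f} deg")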
AFM Measurements
AFM can help to analyze the adsorption morphology of surfactants or polymers on mineral surfaces [22,40,41]. The samples were coal chips onto which the different compound collector solutions had been adsorbed, prepared in the same way as the coal chips for the SFG tests. A NanoWizard 4 AFM (JPK, Berlin, Germany) was used to image the chip samples with the adsorbed collectors in situ in Quantitative Imaging (QI) mode, with a chip surface carrying no adsorbed reagent as the blank control. QI mode is a force-spectroscopy-based mode in which a complete force curve is recorded at every pixel of the sample; because the probe is fully retracted between pixels, there is almost no lateral force in this mode. Figure 12 shows the NanoWizard 4 AFM test device and a schematic diagram of the scanning principle. The probe used had a resonance frequency of 75 kHz, a spring constant of 2.8 N/m, a setpoint of 0.2 V, and a scanning rate of 1.66 Hz.
Flotation Experiments
A 0.5 L XFD single-cell flotation machine was used for the LRC flotation tests, with a pulp concentration of 60 g/L. First, the coal used for flotation was the pure coal sample, and the collecting performance of the different hydrocarbon oil-fatty acid compound collectors on the LRC was investigated. The collector dosages in the flotation tests were 0.3, 0.6, 1, 2, 3, 4, and 5 kg/t, with sec-octanol as the frother at a dosage of 0.3 kg/t in each test. During flotation, the aeration rate was 0.25 m³/h and the impeller speed was 1800 r/min. The pulp was conditioned for three minutes, the compound collector was added and stirred for three minutes, and after the frother was added and stirred for 30 s, the air valve was opened and the floated clean coal was collected for three minutes. The collected coal samples were filtered, dried, and weighed after flotation, and the yield of flotation clean coal was computed.
Next, the pure coal sample was replaced by the actual coal sample. Based on the flotation behavior of the pure coal sample, an appropriate compound collector dosage was selected. The dosages of the other flotation reagents, the pulp concentration, and the remaining parameters were the same as in the pure coal flotation, and the effects of the different compound collectors were compared. After flotation, to evaluate the ash content of the coal samples, the collected clean coal and tailings were dried, weighed, and burned in a muffle furnace. The ash content and the combustible matter recovery (Formula (8)) were calculated to assess the flotation performance:

Combustible matter recovery (%) = M_C (100 − A_C) / [M_F (100 − A_F)] × 100 (8)

where M is the weight, A is the ash content, and the subscripts C and F denote the flotation clean coal and the feed, respectively.
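As a quick numerical illustration of Formula (8), a minimal sketch with made-up weights and ash contents (not measured values from this study):

def combustible_matter_recovery(m_clean, ash_clean, m_feed, ash_feed):
    """Formula (8): combustible matter recovery in percent.
    m_* are weights (g); ash_* are ash contents (%)."""
    return m_clean * (100.0 - ash_clean) / (m_feed * (100.0 - ash_feed)) * 100.0

# Illustrative numbers only.
print(combustible_matter_recovery(m_clean=26.0, ash_clean=10.9,
                                  m_feed=30.0, ash_feed=15.0))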
Conclusions
In this work, the assembly behavior and structure of compound collectors with different carboxyl group positions at the LRC-water interface were investigated by CGMD combined with SFG. The CGMD results showed that the BA molecule as a whole interacts strongly with the D molecule, and the interaction distance is also shorter. Therefore, when the final assembly structure forms, the first adsorbed layer of molecules in D-BA is more parallel and ordered. The orientation distribution angle of the carboxyl groups in the BA molecules was larger, indicating that BA molecules, with the carboxyl group in the middle of the carbon chain, spread more flatly on the LRC surface as a whole and adsorb more firmly to the LRC surface in the polar water phase than DA molecules, with the carboxyl group at the carbon chain's termination. The SFG study found that a carboxyl group at the carbon chain's termination has a greater impact on the shift of the methyl/methylene symmetric stretching vibration peak, while a carboxyl group at the carbon chain's middle has a more pronounced effect on the shift of the methyl/methylene asymmetric stretching vibration peak. Moreover, after D-BA treatment the methyl groups on the chip surface had a smaller orientation angle, because the carboxyl group is at the carbon chain's middle and the carbon chains on either side of it form a certain angle. In general, D-BA spread more flatly on the LRC surface than D-DA, and the effective adsorption area of the assembled structure on the LRC surface was larger. The AFM 3D images showed that the average height of the bright spots for D-DA adsorption was greater than that for D-BA, which also confirmed the special molecular topology of the BA molecule and echoed its conformation at the LRC/water interface.
The flotation experiments on the pure coal and actual coal samples indicated that D-BA achieves a higher combustible matter recovery and clean coal yield than D-DA. This also confirms that, throughout the actual flotation process, the assembly behavior and final assembly structure of D-BA, with the carboxyl group at the carbon chain's middle, at the LRC-water interface are more beneficial to the improvement of flotation efficiency. It further shows that research on interfacial assembly behavior and assembly structure can serve as a valuable resource and direction for the design of effective compound collectors, and that the combination of SFG and CGMD is an effective means of studying interfacial assembly.
Figure 1. Adsorption and assembly structure of the compound collector molecule D-DA at the coal-water interface. Before 50 ns, D-DA molecules gradually diffuse and rearrange as individuals or clusters after adhering to the LRC surface (b-f); from 50 ns to 1000 ns, the D-DA molecules complete the rearrangement and assembly (f-j). Blue denotes the coarse-grained water particles, long-chain alkanes are shown as a green stick model, and the carboxyl group is shown as a red ball. In (a), the cell boundaries are indicated by white lines; (a) is the overall view of the simulation system and (b) is the top view of (a). (k) shows (j) replicated four times within the X × Y boundary range. In (b-k), water is not shown, to better display the assembly of D-DA.
Figure 2. Adsorption and assembly structure of the compound collector molecule D-BA at the coal-water interface at different time nodes (blue denotes the coarse-grained water particles, which are hidden at some time nodes for easier observation).
Figure 3. (a) RDF curves between the D and carboxylic acid molecules in the different compound collectors. (b) RDF curves between the D and carboxyl groups in the different compound collectors.
Figure 4. (a) Sum-frequency vibration spectra of the coal chip without any adsorbed reagent in different polarization states; (b,c) sum-frequency vibration spectra of the compound collectors with different carboxyl positions adsorbed on the coal chip surface in different polarization states; (d,e) total peak lines fitted to the sum-frequency vibration spectra of the different polarization states (red lines are the fitted total peak lines).
Figure 5. Schematic diagram of the azimuth angle of the carboxyl group (the red arrow indicates the normal vector; the black arrow indicates the vector from the coarse particle adjacent to the carboxyl group to the carboxyl-group coarse particle).
Figure 6. Orientation distribution of the carboxyl groups of carboxylic acid molecules with different carboxyl positions.
Figure 7. (a) Chip surface topography without adsorption of any reagent; (b,c) chip surface topography after adsorption of the compound collectors D-DA and D-BA, respectively.
Figure 8. Flotation results of the compound collectors with different carboxyl positions on pure coal samples.
Figure 9. Flotation results of the compound collectors with different carboxyl positions on actual coal samples.
Figure 10. The C 1s fitting spectra of the pure coal sample.
Figure 11. Mapping structures of the different coarse-grained molecules (red, gray, and white balls represent O, C, and H, respectively).
Table 1. Assignments of the sum-frequency vibration spectra of the compound collectors with different carboxyl positions adsorbed on the coal chip surface under different polarizations. | 12,597.2 | 2024-02-28T00:00:00.000 | [
"Chemistry",
"Environmental Science",
"Materials Science"
] |
AlgoLabel: A Large Dataset for Multi-Label Classification of Algorithmic Challenges
While semantic parsing has been an important problem in natural language processing for decades, recent years have seen a wide interest in automatic generation of code from text. We propose an alternative problem to code generation: labelling the algorithmic solution for programming challenges. While this may seem an easier task, we highlight that current deep learning techniques are still far from offering a reliable solution. The contributions of the paper are twofold. First, we propose a large multi-modal dataset of text and code pairs consisting of algorithmic challenges and their solutions, called AlgoLabel. Second, we show that vanilla deep learning solutions need to be greatly improved to solve this task and we propose a dual text-code neural model for detecting the algorithmic solution type for a programming challenge. While the proposed text-code model increases the performance of using the text or code alone, the improvement is rather small highlighting that we require better methods to combine text and code features.
Introduction
Recent years have seen an increased interest in semantic parsing, especially due to the advances of data-driven methods using large corpora and deep learning architectures [1,2]. However, in addition to semantic parsing, which has been an important Natural Language Processing (NLP) task for decades, several new studies aim to generate complex snippets of code, such as Python or C++, directly from natural language [3,4]. While semantic parsing and code generation are similar, there are several important differences, mainly related to the complexity of the artificial language that needs to be generated. Semantic parsing aims to generate queries or logical forms expressed in a simpler artificial language. Code generation, in contrast, targets a programming language with not only a more complex syntax, but also a larger set of tokens, far more intricate semantics, and high-level programming constructs.
We consider that in order to efficiently generate code from natural language, it is first important to solve some intermediate tasks related to high-level programming constructs, such as algorithmic thinking, data structures, and algorithm design techniques. To this end, a first step is to be able to understand the algorithmic solution required to solve a programming challenge. Thus, we define a multi-label classification task using a large set of challenges gathered from several relevant online resources for competitive programming. We introduce AlgoLabel, a multi-modal text-code dataset that contains both problem statements and C++ code snippets with solutions for the problems. This dataset can be employed for tagging programming statements with the correct algorithmic solution using the text and code, but also for more complex semantic parsing using real-world problem statements and code snippets.
Our main contributions are twofold. First, AlgoLabel is larger than existing datasets for this task [5] and has been carefully curated to a small number of balanced classes, with data splits produced by iterative stratification [6]. Second, it is a multi-modal text-code dataset, and we show that a dual text-code classifier achieves better results than text or code alone. We hope that the AlgoLabel dataset and the proposed multi-modal solution will open up new research directions in multi-modal text-code research.
The paper is organized as follows. Section 2 contains an overview of the most promising directions in code generation from text and other related tasks using text-code datasets. In Section 3 we define the proposed multi-label classification task using both problem statements and code snippets, continuing with a detailed description of the AlgoLabel dataset in Section 4. The proposed text-code multi-modal architecture for predicting the solution of an algorithmic challenge, called AlgoLabelNet, is presented in Section 5. Section 6 presents the performance of AlgoLabelNet compared with several other strong baselines, while Section 7 provides a discussion of the current limitations and possible improvements.
Code Generation
The domain of code generation refers to converting natural language descriptions to executable logical forms. We may differentiate between existing challenges in the field based on the complexity and generality of the logical forms and the level of abstraction reached in the natural language descriptions.
We would like to identify two datasets that are directly related to our task: AlgoLISP [2] and NAPS [7]. AlgoLISP leverages algorithmic challenges automatically synthesized from a small set of computer science homework assignments. The aim of AlgoLISP is to test the ability of learning to compose basic programming routines from simple instructions. Since they are automatically generated, the descriptions exhibit limited vocabulary and variability. NAPS features problem solutions from programming competitions. The associated natural language statements, obtained via crowd-sourcing, directly specify the precise order in which code instructions have to be called.
Other manually annotated datasets, such as Django [4] and CoNaLa [3], tackle more general programming tasks (e.g., I/O operations, graph plotting, interactions with the OS). Notably, the annotations follow a similar imperative structure, describing the methods that need to be called and their associated arguments. Alternatively, large collections of code-description pairs can be obtained automatically by scraping open source code repositories [8,9]. However, while the code snippets obtained with this approach can be arbitrarily complex, the descriptions tend to be vague or incomplete.
On the other hand, statements from programming competitions focus on comprehensively presenting the tasks themselves instead of the steps needed to solve them. Consequently, while this makes them significantly more challenging to understand, even for experienced human programmers, it also provides a more realistic reflection of real-world use cases.
Semantic Code Representations
Several methods have been published for computing code embeddings: continuous vectors which encapsulate the semantics of a code snippet. Research in this area has been driven by the possibility of improving downstream tasks such as automatic code review, API discovery [10], or detecting encryption functions in malware [11]. Code2Vec [10] extracts paths from the abstract syntax tree (AST) of the source code. These paths are then merged into a single distributed representation by using an attention mechanism. SAFE [11] learns Word2Vec [12] representations for assembly (byte-code) instructions. The instructions corresponding to a function are then reduced to a vector representation via a self-attentive network [13]. Recently, methods which leverage transformers for obtaining contextualized embeddings for code have also been explored [14,15]. Remarkably, transformer-based architectures that have shown very good performance on translation tasks between natural languages [16] can substantially outperform rule-based systems when trained to convert a code snippet from one programming language to another [17].
Multi-modal approaches have also been explored, but to a lesser extent. They have an important advantage as they can provide a method to encode both code snippets and text descriptions to enable applications such as source code retrieval and source code captioning [18,19]. These multi-modal models may be jointly trained to generate natural language summaries of code and code snippets from natural language queries, improving performance on both tasks tackled separately [20].
Algorithm Label Prediction
Investigations into the landscape of competitive programming have revealed that the difficulty of the proposed algorithmic challenges has been consistently increasing [21,22]. The challenge of predicting algorithm labels from natural language descriptions has been introduced recently [5,23]. The task is treated as a multi-label classification problem because algorithmic challenges may have multiple relevant labels, as detailed in Section 3. AlgoLabel improves on prior work by also tackling two related tasks: providing algorithmic labels for solution implementations and for pairs comprising both the problem statement and the corresponding solutions.
The problem of classifying text with multiple labels has received significant attention from researchers [24]. Modern solutions tackle this problem by building complex, deep neural networks [25][26][27]. One of the latest developments in the NLP community has been to leverage pre-trained Bidirectional Encoder Representations from Transformers (BERT) [28] for downstream tasks, including multi-label document classification [29].
Task Definition
We ground our exploration in three classification experiments. The first experiment is to assign one or more labels to a programming word problem, given the standard elements received in most competitive programming competitions: the natural language description of the statement, the description of the input and output formats, and the time and memory constraints [5]. The second experiment is to classify in the same manner the source code corresponding to a problem solution given as a code snippet. In this formulation we use as inputs data from three different life stages of a solution: the tokens from the original source code, the abstract syntax tree (AST), and its byte-code representation. Finally, we aim to merge the two research threads in a multi-modal setting, by providing annotations for dual pairs consisting of problem statements and their associated code solutions. An example is depicted in Figure 1.
For all experiments, we chose four representative target labels: math, graphs, implementation, and dynamic programming (dp) & greedy. The first two tags pertain to the general knowledge required to solve the problem. In particular, we have grouped under the math label problems which require specific mathematical insight, such as those already annotated as requiring knowledge of probabilities, number theory, game theory, geometry, etc. The graphs label describes problems which require modeling the data using a graph structure. Note that the graph is generally not referenced explicitly within the text description. The next tag, implementation, refers to problems for which the main difficulty lies in converting the abstract solution to actual code. This is a subjective tag which covers a broad selection of problem types: from simple problems which require the solver to follow a set of instructions (e.g., "simulation"-type problems) to problems covering well-known topics that nevertheless require the correct implementation of complex data structures or algorithms. Finally, dp & greedy depicts tasks which can be tackled using either dynamic programming or a greedy choice.
The task is inherently a multi-label classification problem as problem statements might have a tag related to the required general knowledge to solve it (e.g., math, graphs), but also to the method necessary to devise an efficient algorithmic solution (e.g., implementation, dp & greedy). Nevertheless, there might be statements with only one tag and others with more than two depending on the nature of the problem and on the quality of the tagging process.
Data Collection
The main difficulty we encountered when building the AlgoLabel corpus was finding open resources which provide a wide range of problem statements from programming competitions, paired with high quality labels and correct solution implementations. One platform which meets all these criteria is Codeforces (https://www.codeforces.com (accessed on 24 June 2020)), a popular online judge which hosts weekly programming contests. These contests range in difficulty, from educational challenges aimed for beginners to particularly difficult tasks designed for skilled coders preparing for competitions.
The data extracted from Codeforces represents the core part of our dataset, with 6374 problems. We filtered out problems which required interaction with the online judge and those with an unconventional format (e.g., without a problem statement) or theme (e.g., quantum programming).
Each problem is tagged with labels provided either by the problem writer or by high-rated contestants; all of them can be considered experts or at least highly knowledgeable in algorithms and problem solving. There is, however, an inherent level of noise in the available labels, which can be explained by the subjective nature of some tags and the possibility of approaching a problem from multiple angles.
We extended the dataset by leveraging Competitive Programming [30], a book which provides a list of high quality label annotations for tasks from various national and international contests. These tasks are hosted on two other popular programming e-judge platforms, Kattis (https://open.kattis.com/ (accessed on 24 June 2020)) and OnlineJudge (https://onlinejudge.org/ (accessed on 24 June 2020)), and feature a similar format to the ones on Codeforces.
Additionally, for each problem statement from Codeforces we extracted on average 2.48 correct solutions written in C++. Notably, Codeforces competitions test not only the efficiency of the algorithmic approach but also the ability of the contestants to quickly write code. Therefore, submissions often feature macros or other instructions which reduce the size of the code and the time spent on programming. They may also feature unused, pre-written classic algorithms. These approaches tend to affect code readability, adding to the difficulty of applying automated methods to derive meaning from code.
We also collected 22,655 solutions from another online judge, infoarena (https://infoarena.ro (accessed on 24 June 2020)). The majority of submissions extracted from infoarena were not coded within the time constraints of a contest, leading to more readable solutions, with fewer irrelevant snippets. While most problems on infoarena are not labeled, the collected solutions can still be used to improve data-driven models on classification tasks through semi-supervised techniques.
Finally, the code corpus also contains 3860 solutions, implemented by university students as programming assignments for the Algorithm Design course at University Politehnica of Bucharest. The students were specifically evaluated on the quality of their coding style, which provided an incentive to write clean, consistent, and well-documented code. We did not include the original problem statements associated to these problems since they were not written in English. All extracted code submissions are written in various dialects of C++.
Text Dataset
In Table 1 we report the number of extracted problems that have associated labels in the AlgoLabel dataset, the average number of labels for each labeled problem, and the number of samples without relevant labels. The latter are provided without a label in order to be leveraged by semi-supervised techniques or to build better language models or representations [12] for this task. We compare AlgoLabel with the previously published datasets CFML10 and CFML20 [5] in Table 2. The problems in CFML10 represent a subset of the problems encountered in CFML20, which was extracted from a single source, Codeforces. While in our classification experiments we only leverage 6279 of the total 13,508 samples in AlgoLabel, we believe the remaining examples, both labeled and unlabeled, can drive future exploration in the field. Problems from the Codeforces platform have an associated tag which is designed to estimate their difficulty based on the average performance of the participants in the contest (for more details, visit codeforces.com/blog/entry/62865 (accessed on 24 June 2020)). The difficulty tag is a numeric value, ranging from 800 (easiest) to 3500 (hardest). We have considered problems with a rating below 1200 to be "easy", between 1200 and 1500 to be "medium", and the remaining problems "hard".
Notably, problems with the implementation label appear most frequently among the challenges rated as "easy", while graphs are more often encountered among "hard" tasks. We accounted both for the problem difficulty rating and for the overall distribution of labels in the dataset when we split the data into subsets for training, development, and testing. In order to obtain balanced sets we applied the iterative stratification technique [6,31]. We included the remaining problems, extracted from sources other than Codeforces, in the training set.
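A minimal sketch of such a stratified split is shown below, assuming the labels have already been binarized into a 0/1 indicator matrix; it uses the iterative_train_test_split helper from scikit-multilearn, which implements the iterative stratification cited above. The toy data and the 20% held-out fraction are illustrative, not the exact AlgoLabel settings.

import numpy as np
from skmultilearn.model_selection import iterative_train_test_split

# X: one feature row per problem (here just an index into the problem list);
# Y: binary indicator matrix over the four target labels
#    (math, graphs, implementation, dp & greedy).
X = np.arange(10).reshape(-1, 1)
Y = np.array([[1, 0, 0, 1], [0, 1, 0, 0], [1, 0, 1, 0], [0, 0, 0, 1], [1, 1, 0, 0],
              [0, 0, 1, 0], [1, 0, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 0]])

# Hold out ~20% while keeping each label's frequency as balanced as possible;
# the same call can be repeated on the held-out part to carve out dev and test.
X_train, Y_train, X_test, Y_test = iterative_train_test_split(X, Y, test_size=0.2)
print(X_train.shape, X_test.shape)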
In Table 3, we present general statistics pertaining to the data split for text classification. Each problem sample is separated into three distinct sections: the problem statement and two additional sections describing in natural language the expected format of the problem input and output. The length and structure of the three sections are consistent across the splits; however, the vocabulary encountered during training is remarkably more expansive, reflecting the larger size of the training set. As can be observed from Table 4, graphs and math problems tend to feature a more specialized vocabulary, with terms that depict relevant concepts appearing more often. Conversely, implementation-type problems are more general.
Code Dataset
In Table 5 we present general statistics for the extracted code solutions. The code samples from Codeforces comprise the largest number of solutions, annotated with a difficulty rating and several labels. On infoarena there are fewer available tags and therefore most submissions, although relevant for this task, are unlabeled. The subset with university assignments has the highest degree of redundancy, with 3860 solutions associated with merely 31 problems. When separating the data for the code classification task, in order to allow for a fair comparison between the two research threads, we accounted for the way we performed the split for the natural language classification dataset. Therefore, the subsets used for development and testing contain solutions associated with the same problems from the original text data split. We also added to the training set all the solutions which lacked a difficulty rating.
Although the quality of the coding style varies across the three platforms, the input features we used are designed to mitigate this issue. Thus, we process each source on three different levels of abstraction. The first approach was to extract and anonymize the code tokens. While this is the simplest technique, it is also the most vulnerable to obfuscation and to natural variations in implementations, such as the order of instructions. Secondly, we selected paths from the abstract syntax tree, according to the Code2Vec approach for computing semantic representations for code [10]. Finally, we leverage a pre-trained model to derive SAFE embeddings from the source byte-code [11]. With this method we obtain a sequence of distributed representations, one for each compiled function. In Table 6 we report the average counts of these features across the data splits.
Text Preprocessing
We split each problem statement into sentences and individual words using NLTK [32] and we remove stopwords. Notably, we do not apply lemmatization or stemming as we have observed this hurts the performance of the models.
A particular feature we had to account for was the presence of mathematical formulas used to specify problem input constraints or other relations relevant for the problem statement. First of all, we noticed that mathematical symbols appeared inconsistently (e.g., the symbol '≤' could appear either as the LaTeX command '\le' or as the character itself), because we extracted text from different types of sources (HTML and PDF files). Therefore, we replaced analogous symbols with a unique token.
Numeric constants can provide a useful hint on the expected complexity of the solution and, by extension, the algorithmic technique that needs to be employed. However, there is no meaningful distinction between constants of the same order of magnitude when computing the asymptotic complexity of code. We normalized numeric constants by replacing them with fixed placeholders according to their number of digits.
Moreover, as exemplified in Table 7, we simplify the surface form for three types of formulas that appeared most frequently in the dataset, by grouping together common expressions that shared the same meaning. Thus, we denote the fact that a variable x is defined to have an upper bound limit of n using the expression range(x, n). Likewise, if a sequence x is described textually as having n elements, we replace the snippet with the expression sequence(x, n). We believe that being able to automatically extract this type of information about individual variables can enable more complex reasoning models.
Table 7. Examples of formulas with normalized surface form.
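A minimal sketch of this normalization step is shown below; the placeholder scheme (numK tokens) and the regular expressions are illustrative stand-ins for the ones actually used, and only the range(x, n) pattern from Table 7 is covered.

import re

def normalize_statement(text: str) -> str:
    # Unify analogous math symbols extracted from HTML/PDF sources.
    text = text.replace(r"\leq", "<=").replace(r"\le", "<=")
    # Replace numeric constants with placeholders keyed on their digit count.
    text = re.sub(r"\b\d+\b", lambda m: f"num{len(m.group())}", text)
    # Rewrite bound constraints such as "1 <= x <= n" as range(x, n).
    text = re.sub(r"num\d+\s*<=\s*(\w+)\s*<=\s*(\w+)", r"range(\1, \2)", text)
    return text

print(normalize_statement(r"1 \le x \le 100000 and the array has 200000 elements"))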
Code Preprocessing
We used clang-format [33] to normalize the surface form of each code sample. Then we applied cppcheck [34] to statically determine unused functions, which we remove from the representation. Additionally, we eliminate comments and headers. We split the source code into distinct tokens with an open-source code tokenizer [35]. We apply astminer [36] with default parameters to extract AST paths in a format compatible with Code2Vec [10]. We extract at most 400 AST paths for each solution. Afterwards, we compile the solution to byte-code to derive SAFE function embeddings using a pre-trained model [11].
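The command-line tools in this pipeline can be driven from Python; the sketch below covers only the first two steps (formatting and unused-function detection) for a single, hypothetical file. The clang-format and cppcheck flags shown are standard ones rather than the exact options used for AlgoLabel, and the astminer and SAFE steps are omitted.

import subprocess

src = "solution.cpp"  # hypothetical path to one submission

# Normalize the surface form in place with clang-format.
subprocess.run(["clang-format", "-i", src], check=True)

# Ask cppcheck to report functions that are never used, so they can be stripped.
report = subprocess.run(["cppcheck", "--enable=unusedFunction", "--quiet", src],
                        capture_output=True, text=True)
unused = [line for line in report.stderr.splitlines() if "unusedFunction" in line]
print(unused)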
Non-Neural Models
Several classifiers that do not leverage neural networks, but that provide good results on textual classification tasks, were used: Random Forest [37], SVM [38], and XGBoost [39]. Their input consisted of TF-IDF features computed using scikit-learn [40], and we employed grid search to tune their hyperparameters. The Random Forest model comprised 200 decision trees with a maximum depth of 150, while the XGBoost solution consisted of 200 trees with a maximum depth of 20.
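A minimal sketch of this family of baselines is shown below, assuming the statements are already preprocessed into strings and the labels binarized. The Random Forest settings mirror the reported ones, while the TF-IDF configuration, the one-classifier-per-label wrapper, and the toy data are illustrative choices.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline

docs = ["count the number of shortest paths between two nodes",
        "maximize the sum of chosen segments under a budget"]
labels = [[0, 1, 0, 0],   # math, graphs, implementation, dp & greedy
          [0, 0, 0, 1]]

model = make_pipeline(
    TfidfVectorizer(max_features=20000, ngram_range=(1, 2)),
    MultiOutputClassifier(RandomForestClassifier(n_estimators=200, max_depth=150)))
model.fit(docs, labels)
print(model.predict(["find the minimum spanning tree of the road network"]))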
Neural Models
We first trained Word2Vec embeddings [12] on the AlgoLabel training set and on the remaining samples that are not provided with a label. The parameters were initialized using the Xavier method [41] and we used the Adam optimizer [42]. We trained for 10 epochs, with an early stopping mechanism, using mini-batches of 128 samples and a cross-entropy loss. We applied L2 regularization and dropout to avoid overfitting.
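As an illustration of the embedding step, a minimal gensim sketch is shown below; the vector size of 100 matches the layer sizes quoted later, while the window, min_count, and toy corpus are placeholder choices.

from gensim.models import Word2Vec

# Tokenized problem statements (labeled and unlabeled samples alike can be used).
corpus = [["count", "shortest", "paths", "between", "nodes"],
          ["maximize", "sum", "of", "segments", "under", "budget"]]

w2v = Word2Vec(sentences=corpus, vector_size=100, window=5,
               min_count=1, workers=4, epochs=10)
embedding = w2v.wv["paths"]   # 100-dimensional vector for one token
print(embedding.shape)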
AlgoLabelNet
For the text classification experiments we truncate the size of the problem statement, input, and output sections to 250 tokens. We encode each section separately, using the same bidirectional LSTM [43]. We have augmented the output representation of the encoder using a soft-attention mechanism over the entire input sequence [44]. Next, we concatenate the three inputs and pass the result successively through two fully connected layers. The last layer has a sigmoid activation function for label prediction.
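A minimal Keras sketch of this architecture is shown below, assuming the three sections arrive as padded token-id sequences. The 250-token limit and the size of 100 follow the text; the vocabulary size, the exact soft-attention formulation, and the optimizer settings are simplifications rather than the actual configuration.

from tensorflow.keras import layers, Model

VOCAB, MAX_LEN, DIM, N_LABELS = 30000, 250, 100, 4

def attend(seq):
    # Soft attention: score each timestep, softmax over time, weighted sum.
    scores = layers.Dense(1, activation="tanh")(seq)    # (batch, time, 1)
    weights = layers.Softmax(axis=1)(scores)            # attention weights over time
    context = layers.Dot(axes=1)([weights, seq])        # (batch, 1, 2*DIM)
    return layers.Flatten()(context)

embed = layers.Embedding(VOCAB, DIM)
encoder = layers.Bidirectional(layers.LSTM(DIM, return_sequences=True))

inputs, encoded = [], []
for name in ("statement", "input_desc", "output_desc"):
    inp = layers.Input(shape=(MAX_LEN,), name=name)
    inputs.append(inp)
    encoded.append(attend(encoder(embed(inp))))         # shared biLSTM encoder

x = layers.Concatenate()(encoded)
x = layers.Dense(DIM, activation="relu")(x)
outputs = layers.Dense(N_LABELS, activation="sigmoid")(x)  # multi-label prediction

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()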
In order to replicate the model for text classification proposed by Athavale et al. [5] we have applied a convolutional neural network on the concatenated input sections. The output is passed through a ReLU activation function, followed by a max pooling layer.
For the code classification experiments we only adapt the encoder to the type of available input features for code. The code tokens are encoded using the same procedure as we did for text, using a bidirectional LSTM, however we set the maximum size to 745. Likewise, we pass the SAFE function embeddings to a distinct bidirectional LSTM, truncated to size 20. The AST paths are aggregated using an attention mechanism, following the methodology proposed in Code2Vec [10].
For all biLSTM models, including AlgoLabelNet, we fix the size of all embeddings, hidden, and fully-connected layers to 100. These hyperparameters were chosen by using the validation set. As an additional neural baseline, we used the BERT base [28] uncased implementation available in the Huggingface library [45].
Metrics
For evaluation, we apply two metrics that are standard for multi-label classification: Hamming loss and the F1 (micro) score. Hamming loss [46] computes the proportion of incorrectly predicted labels across all examples. Micro-averaged F1 is computed from the individual true positive, false positive, and false negative counts across labels.
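Both metrics are available in scikit-learn; the toy example below shows how they are computed on binary indicator matrices over the four target labels (values invented for illustration).

```python
import numpy as np
from sklearn.metrics import f1_score, hamming_loss

y_true = np.array([[1, 0, 0, 1],
                   [0, 1, 0, 0],
                   [1, 0, 1, 0]])
y_pred = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [1, 0, 0, 1]])

print(hamming_loss(y_true, y_pred))               # 3 wrong label slots out of 12 -> 0.25
print(f1_score(y_true, y_pred, average="micro"))  # micro-F1 over all label decisions
```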
Text Classification
We capture the performance of the baseline methods in Table 8. Non-neural models leverage bag-of-words features, which cannot fully capture the task complexity. In this experiment, the proposed AlgoLabelNet model achieves the best F1 score of 0.62, which still leaves significant room for improvement. On the other hand, the performance of a pre-trained BERT base model fine-tuned on our dataset was poor, with an F1 score of 0.40. We believe this score could be improved by further pre-training on computer science and algorithm books or other similar texts. We also replicated the model proposed by Athavale et al. [5], which uses a CNN encoder, and obtained an F1 score of 0.55. Table 8. Performance on the text classification task, measured using Hamming loss (lower is better) and F1 score (higher is better).
Ablation study
We also report the impact of removing either the statement or the input/output sections from the model input. Remarkably, the model performs better with only the input/output format description than when it only receives the actual content of the problem statement. This suggests the model is prone to exploiting language cues (e.g., input size constraints or types of input and output) rather than the text of the problem statement. We explore this issue in more depth in the error analysis section.
In another experiment, we have concatenated the three sections (statement, input, and output) and passed the resulting sequence to a single biLSTM encoder instead of processing them separately. However, this approach proved detrimental to performance for all target labels, yielding an average F1 of 0.52.
Additionally, we measure the impact of enriching the word embeddings with data derived from unlabeled problem statements. When this additional data is withheld, both metrics degrade significantly. This suggests that learning contextual word embeddings is crucial to getting closer to solving this task.
Label Analysis
In Table 9 we present the precision, recall, and F1 values achieved by XGBoost and AlgoLabelNet for each class. The fact that we obtain better scores for the graphs label is in line with our observation regarding the specialized nature of the vocabulary for these types of problems. Notably, two models with worse overall F1 scores (Random Forest and XGBoost) achieved a lower Hamming loss than AlgoLabelNet. This can be explained by the fact that the Hamming loss formula penalizes equally the situation where a label is wrongly predicted and the case where a correct label is missing from the result. In other words, true positives and true negatives impact the score equally. On the other hand, the F1 metric does not directly account for the number of true negatives encountered.
The samples from the test set feature only 1.39 labels on average from the four chosen target labels. Consequently, a model may trade off recall and F1 score for a better Hamming loss by being biased to assign lower probabilities to all labels. Compared to AlgoLabelNet, XGBoost achieved significantly lower recall for all labels except for the "graphs" category.
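The toy comparison below (invented numbers) illustrates this trade-off: a conservative model that predicts very few labels can obtain a better Hamming loss than a bolder model, even though its micro-F1 is clearly worse.

```python
import numpy as np
from sklearn.metrics import f1_score, hamming_loss

# Sparse ground truth, roughly one to two labels per sample.
y_true = np.array([[1, 0, 0, 0],
                   [0, 1, 1, 0],
                   [1, 0, 0, 0],
                   [0, 0, 1, 0]])

# Conservative model: misses three true labels but adds no false positives.
y_conservative = np.array([[1, 0, 0, 0],
                           [0, 0, 0, 0],
                           [1, 0, 0, 0],
                           [0, 0, 0, 0]])

# Bolder model: recovers every true label but adds four false positives.
y_bold = np.array([[1, 0, 0, 1],
                   [0, 1, 1, 0],
                   [1, 1, 0, 0],
                   [0, 1, 1, 1]])

for name, pred in [("conservative", y_conservative), ("bold", y_bold)]:
    print(name,
          "Hamming =", round(hamming_loss(y_true, pred), 3),
          "micro-F1 =", round(f1_score(y_true, pred, average="micro"), 2))
# conservative: Hamming 0.188, micro-F1 0.57 -- bold: Hamming 0.25, micro-F1 0.71
```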
Code Classification
For the code classification task we evaluate several baselines and report the results in Table 10. The simplest solution uses a bidirectional LSTM to encode the source code tokens. Despite achieving competitive results in our benchmark, this approach overestimates the importance of variable names. Thus, when changing the original variable names to anonymised placeholders, we notice a significant performance degradation. On our test set, both Code2Vec and SAFE obtain similar scores for the two chosen metrics. Our best neural model, called AlgoCode, merges the two approaches by concatenating the outputs of the two encoders and reaches an F1 score of 0.56. Notably, this score is lower than the one achieved by AlgoLabelNet that uses the textual description of the problems.
Ablation study
We improved the representation of the SAFE embedding encoder by adding an attention mechanism. With this change, the model achieved an F1 score of 0.50, up from the initial 0.46. We noticed a similar performance gain when varying the maximum number of AST paths for Code2Vec (from 300 to 500; see Table 10). Table 10. Performance on the code classification task, highlighting the different input features: byte-code (BC), AST paths, source code tokens, and anonymised code tokens.
In Table 11 we present the aggregated code classification performance of the best AlgoCode model for each label. Similar to the natural language challenge, the graphs category is the easiest to recognize, while implementation lags behind according to our metrics.
Dual Text-Code Classification
For this final experiment, we combined our two best models in a single unitary framework. Thus, we have a neural model which takes both the natural language description of a problem and its corresponding coded solution and classifies the problem accordingly into one or more algorithmic labels. As natural language inputs we use the statement, input and output sections. From the source code, we leverage the SAFE embeddings and the Code2Vec distributed representation. All these inputs are encoded as previously presented, then concatenated and passed to a fully connected feedforward layer.
We restricted the training dataset to only Codeforces problems, since this is the only section of our dataset that features both problems and their solutions. The size of the test and development set remains unchanged, since we keep the same problem distribution from the text classification experiment. For each problem we randomly selected a corresponding solution.
As depicted in Table 12, despite the reduction in the size of the training set, the dual text-code model achieves better aggregated results than the two models evaluated separately. For every label, except for implementation, the F1 score improves compared to the results obtained on a single type of input. However, the improvement achieved by the dual model is small compared to AlgoLabelNet, suggesting the code brings little information in addition to the text, at least given the current code features.
Text Classification
The thought process behind understanding a problem statement often resembles solving a riddle: in a competition setting, in many cases, the author of the statement tries to hide the problem requirements behind a story. The chosen story may sometimes reference a relevant real-life application.
Thus, the statement often begins with a prologue, which provides the setting of the story. Typically, since it does not reference any constraints, the prologue can be safely ignored, without affecting the ability to understand the problem. This section is followed by sentences that depict the actual requirements. Among these, we may distinguish the ones that state the problem objective (e.g., "Find the minimum value for X"). Additionally, the statement may also contain hints regarding the automatic evaluation platform on which the problem is hosted. For example, a problem may include tips related to how to implement a solution in a specific programming language: particular functions that should be called, the fact that the output requires a 64-bit representation, etc. As in the case of the prologue, these sentences provide no insight regarding the solution for the problem to a human reader; however, they may confuse a data-driven neural model.
Unravelling the relevant facts from the narrative is a non-trivial but essential challenge, particularly for real-world applications. In our experiments, we have observed that the input and output sections, which are typically written in a more formal language, have a larger impact on the classification score than the statement. This finding was also reported in a previous study [5].
Case study: Graph problems
Problems which can be modeled using a graph data structure are typically easier to identify due to the specialized vocabulary used (e.g., explicit references to vertices or graph edges). However, in situations where the graph modelling is not obvious from the problem statement, the proposed models struggle to recognize the type of the problem.
In the majority of misclassified graph problems from the test set, the graph is not explicitly provided as input. For example, the statement may describe an initial state (e.g., a string), how to transition from one state to another (e.g., permute the letters in the given string) and enquire about the path that leads to a desired final state. If the problem introduces a novel concept as a state, such as an image, a symbolic expression or a number sequence, the models fail to recognize the abstract nature of the task.
Specialized graph data structures may constitute only an auxiliary component of the solution, used to optimize certain operations. We investigate this scenario in Figure 2, where we encounter the statement of a problem (For more details, visit https://codeforces.com/problemset/problem/566/A (accessed on 24 June 2020)) annotated with the graph label by the author. The neural baseline assigns a low probability for the graph label (0.19) and a high probability for dp&greedy (0.75). The low graph probability can be explained by the fact that there is no explicit graph provided as input. Nevertheless, the solution requires the construction of a trie data structure to efficiently store and process the collection of input strings. Instead, guided by the presence of expressions such as "largest common" or "maximum", the model makes the assumption that this is a combinatorial optimization problem. Although the label is missing from the dataset, the assumption is actually correct in this case: the official solution does indeed apply a greedy algorithm on the trie in order to derive the correct answer.
Figure 2. Attention scores for the statement of a graph problem that was misclassified as dp & greedy. Stopwords 'the', 'of', 'to' are filtered and every word is converted to lowercase prior to training the model. Remaining words without an associated embedding are marked with brackets.
Code Classification
It may seem easier, given its unambiguous nature, to extract semantic knowledge from code than it is from natural language. However, in practice, approaches that work well for natural languages need to be carefully tailored to account for the specific structure of programming languages. In our code classification experiments, we have encoded the solution as it was represented during three different compilation stages: as a text sequence, as a collection of AST paths, and as byte-code.
As described in Section 5.2, processing the original snippet as a text sequence is laborious, requiring several steps to eliminate uninformative segments from the input. Code styling conventions mandate self-explanatory names for variables; however, in a competition environment, solution authors frequently rely on short, generic names that can be typed faster (e.g., i, var, res). The model needs to account for the type as well as the multiple contexts in which a variable is referenced, in order to understand its role in the solution. Notably, in a programming language that allows access to arbitrary locations in memory, a variable may be referenced indirectly.
The second approach we considered consisted of context paths derived from the abstract syntax tree. These paths are more generic and may be better at capturing the high-level structure of the solution. However, a weakness of the current method [10] is that novel AST paths, not available in the training set, cannot be represented at test time. Moreover, this approach is sensitive to the number of AST paths considered and the maximum height allowed for a path.
Finally, we have encoded byte-code representations of each function using SAFE embeddings. These embeddings were pre-trained on a related but different semantic classification task. We believe we can improve results on our task by pre-training SAFE or a similar model on a specialized labeled dataset comprising algorithmic solutions. At the same time, learning unsupervised contextual code embeddings, as presented in recent literature, by training different transformer models on collections of very large code repositories [14,15,17], may hold the key to improving semantic understanding of code.
An interesting extension to our work would be to encode program execution traces, as suggested by Wang and Su [47]. Exploiting runtime information provides the opportunity to capture more complex program semantics, and thus outperform syntax-based methods. However, this approach requires both a runtime to execute the code as well as access to test input data, which may not always be available and that is not part of the AlgoLabel dataset at this time.
Leveraging the solution, in addition to the problem statement, increases the classification performance compared to using the problem statement alone. However, this improvement is small, mainly because the code classification task yields significantly poorer results than text classification and because the multi-modal text-code training set is smaller than the unimodal (text or code) datasets.
Computational Efficiency
The memory space used by the neural architectures that we have experimented with is independent of the input size, being determined by a small set of hyper-parameters: the size of the hidden state for the LSTM encoder, the number of filters and their size for the CNN module, and the size of the fully connected layer used to transform each AST path for the Code2Vec attention method. Regarding the time complexity, we can identify precise asymptotic upper bounds for training the non-neural baselines. On the other hand, for the solutions employing neural networks it is difficult to predict the number of training steps required to achieve good performance.
From a practical perspective, the main resource consumption difference between the specified encoders is the potential for parallelization: the LSTM module has to scan the input sequentially, while the other encoders apply independent computations that may be performed efficiently in parallel.
Annotation issues
The labels for the problem statements were provided by their authors and field experts. However, there are several issues that may be identified with the annotation process. First of all, given the number of people involved and the subjective nature of some labels, such as implementation, it is possible for labeling inconsistencies to occur in the dataset. Thus, different annotators may select different labels as relevant for a given problem, even while considering the same solution. Moreover, problems may also accept multiple, fundamentally different solutions. Finally, we have also identified a few erroneously added labels in our dataset.
Explainability
The proposed neural models are only able to highlight the key words or phrases that have determined a particular prediction. We cannot use them to automatically infer why a particular phrase is relevant. Moreover, solving hard problems requires several reasoning steps. Identifying these steps goes beyond analysing the surface description of the problem statement.
Data variation
Another issue with the present experiment is the small number of labels considered. A more fine-grained study, covering rare labels, could provide valuable insights. Finally, the proposed dataset is still relatively small, which may prevent more data-hungry models, such as BERT, from leveraging it successfully.
Conclusions
In this paper we have introduced AlgoLabel, a new multi-modal dataset comprising problem statements for algorithmic challenges and their associated code solutions. We believe this new resource will encourage further research in extracting algorithmic knowledge from text, a necessary stepping stone towards general semantic parsing. A tool that can recognize the problem requirements from an informal story, and thus help design a solution, may prove of significant value, even to an experienced programmer. Moreover, we have shown that efforts towards deriving semantic understanding from either text or code can benefit from jointly modeling data from the two domains.
Furthermore, we have investigated several baselines for the task of multi-label classification of problem statements (AlgoLabelNet) and their corresponding source code (AlgoCode). AlgoLabelNet leverages a biLSTM encoder model to separately encode the three sections of a problem statement, with word embeddings pre-trained on a larger collection of unlabeled algorithmic challenges, provided in our dataset. For the code experiments, we have captured the solution representation from three different compilation stages and explored the advantages and disadvantages of each representation. Additionally, we have experimented with a dual text-code neural model, which achieves improved performance over considering the code or the text alone.
Hematological and gene co-expression network analyses of high-risk beef cattle defines immunological mechanisms and biological complexes involved in bovine respiratory disease and weight gain
Bovine respiratory disease (BRD), the leading disease complex in beef cattle production systems, remains highly elusive regarding diagnostics and disease prediction. Previous research has employed cellular and molecular techniques to describe hematological and gene expression variation that coincides with BRD development. Here, we utilized weighted gene co-expression network analysis (WGCNA) to leverage total gene expression patterns from cattle at arrival and generate hematological and clinical trait associations to describe mechanisms that may predict BRD development. Gene expression counts of previously published RNA-Seq data from 23 cattle (2017; n = 11 Healthy, n = 12 BRD) were used to construct gene co-expression modules and correlation patterns with complete blood count (CBC) and clinical datasets. Modules were further evaluated for cross-populational preservation of expression with RNA-Seq data from 24 cattle in an independent population (2019; n = 12 Healthy, n = 12 BRD). Genes within well-preserved modules were subject to functional enrichment analysis for significant Gene Ontology terms and pathways. Genes which possessed high module membership and association with BRD development, regardless of module preservation (“hub genes”), were utilized for protein-protein physical interaction network and clustering analyses. Five well-preserved modules of co-expressed genes were identified. One module (“steelblue”), involved in alpha-beta T-cell complexes and Th2-type immunity, possessed significant correlation with increased erythrocytes, platelets, and BRD development. One module (“purple”), involved in mitochondrial metabolism and rRNA maturation, possessed significant correlation with increased eosinophils, fecal egg count per gram, and weight gain over time. Fifty-two interacting hub genes, stratified into 11 clusters, may possess transient function involved in BRD development not previously described in literature. This study identifies co-expressed genes and coordinated mechanisms associated with BRD, which necessitates further investigation in BRD-prediction research.
Introduction
Despite decades of research involved in discovering novel management tools, developing interventional systems, and advancing antimicrobial therapeutics, bovine respiratory disease (BRD) remains the leading cause of morbidity and mortality in beef cattle operations across North America [1][2][3]. Due to its widespread prevalence, BRD is considered one of the most economically devastating components of beef cattle production systems [2][3][4]. BRD is a polymicrobial, multifactorial disease complex, incorporating infectious agents, host immunity, and environmental elements as predisposing factors [5][6][7]. Previous research over the past several decades has greatly detailed these factors and risks associated with BRD, yet there is minimal evidence that overall rates of disease have improved [5,[8][9][10]. Furthermore, diagnostic evaluation of BRD often relies on visual signs attributed to the disease complex, which are commonly non-specific to airway and lung disease, and lack clinical sensitivity [11,12]. Therefore, data driven approaches which capture the biological intricacies associated with clinical BRD development and provide candidate molecular targets capable of stratifying or predicting risk of disease and/or production loss would offer a more precise method of managing BRD.
Clinical BRD progression and severity often presents as an acute inflammatory disease [13]. However, molecular and cellular changes precede physiological changes in terms of disease development. As such, identifying consistent molecular and/or cellular components that relate to BRD development would allow for the development of rapid diagnostics capable of being performed with cattle at the time of facility arrival. Such a tool could facilitate precision medicine practices in stocker and feedlot operations and improve both speed and success of targeted therapy. Accordingly, hematological samples are ideal, as they represent a relatively noninvasive, cost effective, and readily obtainable source that reflects dynamic biological processes throughout the body [14,15].
Previous research has investigated cellular and molecular components that may indicate or predict clinical BRD. Richeson and colleagues, utilizing complete blood count (CBC) variables and castration status at facility arrival, identified significant associations with BRD in calves with comparatively decreased numbers of eosinophils and increased numbers of erythrocytes [16]. When evaluating the relationships between cytokine gene expression and CBC data in cattle with concurrent BRD, Lindholm-Perry and colleagues discovered that cattle with BRD possessed a comparative increase in numbers of neutrophils, decrease in numbers of basophils, and increased expression of CCL16, CXCR1, and CCR1 [17]. Recent RNA sequencing studies, performed by both our group and others, have identified mechanisms and candidate biomarkers in whole blood associated with BRD development [18][19][20]. However, these studies primarily sought to identify differentially expressed genes (DEGs) between cattle that were or were not treated for BRD based on clinical signs. Focus on identifying DEGs meant that much of the data generated by these studies was neglected. Therefore, we aimed to leverage global gene expression patterns across high-risk cattle, and incorporate available cellular-level hematological data from the same cattle, to infer mechanisms associated with BRD development with a more holistic approach.
As gene expression operates in tandem with biological regulatory networks and complexes, investigation of gene co-expression levels may reveal transcriptional coordination, distinguish protein production relationships, and measure cellular composition and function relevant to specific disease states such as BRD [21,22]. This analysis approach falls into the field of systems biology, where, in contrast to reductionist biology, molecular components are pieced and scaled together to better understand disease and generate novel hypotheses [23,24]. In this respect, we sought to build networks of co-expressed genes, utilizing the full structure of previously published gene expression data [20], and discover relationships between gene expression and cellular hematological components, which may elucidate and/or further confirm genes and mechanisms related to BRD development or resistance.
Animal enrollment
All animal use and procedures were approved by the Mississippi State University Animal Care and Use Committee (IACUC protocol #17-120) and carried out in accordance with relevant IACUC and agency guidelines and regulations. This study was carried out in accordance with Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines (https://arriveguidelines.org). This study was conducted in accompaniment with previous work focused on differential gene expression analysis and candidate biomarker validation [20]; the RNA-Seq data of these animals were previously deposited in the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO) database under accession number GSE161396. Briefly, 24 samples (n = 12 BRD, n = 12 Healthy) from the 2017 study were previously selected based on randomized stratification of vaccine and oral anthelminthic administration upon facility arrival (d0), and 24 samples from the 2019 study were randomly selected with equal distribution of clinical BRD development within 28 days of arrival [20]. All cattle within each population (year) were of proportional arrival weight (S1 Table) and age (estimated 4-6 months). All animals enrolled in these two groups were commercial cattle, with unknown genetic characteristics and background; this is a typical attribute of newly received stocker cattle in commercial production systems. Of the 24 cattle from the 2017 population having RNA-Seq data, one individual (ID: 162-2017_S24; GSM4906455) was not incorporated into the network analysis due to missing CBC data. The following clinical data were recorded for each animal: at-arrival fecal egg counts per gram via modified-Wisconsin procedure (FEC-d0), body weight in pounds (WT) at arrival, Day 12, Day 26, and Day 82, average daily weight gain at each time point (ADG), growth rate (slope of weight over days recorded; GR), at-arrival castration status (Sex), at-arrival rectal temperature (Temp-d0), development of clinical BRD within 28 days post-arrival (BRD), number of clinical BRD treatments (Treat_Freq), and timing to first BRD treatment (Risk_Days). Ages (not recorded) were estimated to be similar upon facility arrival. Clinical data for these cattle are found in S1 Table.
Hematology analysis
Approximately 6 mL of whole blood was collected at arrival into K3-EDTA glass blood tubes (BD Vacutainer; Franklin Lakes, NJ, USA) via jugular venipuncture. Blood samples were stored at 4˚C and analyzed the same day of collection with the flow cytometry-based Advia 2120i hematology analyzer (Siemens Healthcare Diagnostics Inc., Tarrytown, NY, USA), testing for the following parameters: white blood cells (WBC; K/μL), erythrocytes (RBC; M/μL), hemoglobin (HGB; g/dL), hematocrit (HCT; %), mean corpuscular volume (MCV; fL), mean corpuscular hemoglobin (MCH; pg), mean corpuscular hemoglobin concentration (MCHC; g/dL), red blood cell distribution width (RDW; %), and platelets (PLT; K/μL). Blood smear staining was performed with a Hematek 3000 Slide Stainer (Siemens Healthcare Diagnostics Inc., Tarrytown, NY, USA) via Wright-Giemsa stain reagents. Stained blood smears were evaluated for leukocyte distribution via a manual 300-count white blood cell differential by trained clinical pathology technical staff at Mississippi State University College of Veterinary Medicine. Neutrophil, eosinophil, basophil, monocyte, and lymphocyte percentages were recorded, with accompanying neutrophil-to-lymphocyte ratios (NL Ratio). Hematology data for these cattle are found in S2 Table.
RNA-Seq data processing and normalization
The gene-level raw count matrix generated from our previous research was utilized for this study [20]. Raw gene counts were imported to R v4.0.4 and processed with the filterByExpr toolkit [32], removing genes with a total count of less than 200 and counts-per-million (CPM) below 1.0 across a minimum of 12 libraries. Libraries were normalized with the trimmed mean of M-values method (TMM) [33,34] and converted into log2-counts per million values (log2CPM). A total of 12,795 genes were identified after count processing and were utilized for weighted network analysis.
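Filtering and normalization were done in R (filterByExpr and TMM from edgeR); the Python fragment below is only a rough illustration of the count-based filtering rule and the log2CPM transform, with the TMM normalization factors treated as given.

```python
import numpy as np
import pandas as pd

def filter_and_log2cpm(counts: pd.DataFrame, norm_factors=None,
                       min_total=200, min_cpm=1.0, min_libraries=12):
    """counts: genes x samples raw count matrix; norm_factors: per-sample TMM
    factors (computed with edgeR in the study). Returns log2CPM for the genes
    that pass the total-count and CPM thresholds."""
    lib_sizes = counts.sum(axis=0).to_numpy(dtype=float)
    if norm_factors is not None:
        lib_sizes = lib_sizes * np.asarray(norm_factors, dtype=float)
    cpm = counts.div(lib_sizes, axis=1) * 1e6

    keep = (counts.sum(axis=1) >= min_total) & ((cpm >= min_cpm).sum(axis=1) >= min_libraries)
    return np.log2(cpm.loc[keep] + 0.5)   # small offset avoids log(0)
```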
Weighted gene co-expression network analysis (WGCNA)
Weighted network analysis was performed with the R package WGCNA v1.70.3 [35]. Clinical and hematology trait data were compiled and aligned to each respective sample library. To remove any outlier sample, canonical Euclidean distance-based network adjacency matrices were estimated and used to identify outliers based on standardized connectivity. Estimated adjacency matrices had their network connectivity standardized, where the z-score normalized network connectivity (Z.k) for each sample i is calculated by mean-center scaling of the raw network connectivities (k) [36]: Z.k_i = (k_i - mean(k)) / sd(k). Samples with a standardized connectivity < -5.0 were considered outliers and removed from further analysis; no samples were considered outliers in this study (S1 Fig). An adjacency matrix was constructed from the calculated signed Pearson coefficients between all genes across all samples. We utilized signed networks as they better capture gene expression trends (up- and down-regulation) and classify co-expressed gene modules which improve the ability to identify functional enrichment, when compared to unsigned networks [24,35-37]. Soft thresholding was used to calculate the power parameter (β) required to exponentially raise the adjacency matrix, to reach a scale-free topology fitting index (R²) of >80%; β = 8 was selected for this study. The relationship between each unit β and R² is seen in S2 Fig. Co-expression modules were constructed with the automatic, one-step blockwiseModules function within the WGCNA R package, using the following parameters: power = 8, corType = "pearson", TOMType = "signed", networkType = "signed", maxBlockSize = 12795, minModuleSize = 30, mergeCutHeight = 0.25, and pamRespectsDendro = FALSE; all other parameters were set to default. Constructed co-expression modules were assigned a color by the WGCNA R package, with any gene not assembling into a specific module placed in the "grey" module. Module-trait associations were identified with Pearson correlation between the module eigengene (ME; the first principal component of the co-expression matrix [38]) and the clinical and hematology data. Modules were considered weakly or strongly correlated with a trait when having a p-value ≤ 0.10 and |R| ≥ 0.3, or a p-value ≤ 0.05 and |R| ≥ 0.4, respectively. Color scaling was performed with the Bioconductor package viridis v0.6.1 [39] to allow ease of visual interpretation for individuals with color blindness.
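The analysis itself was run with the WGCNA R package; the numpy fragment below is a minimal sketch of two of the quantities described above: the standardized sample connectivity used for outlier screening (with an assumed, simplified distance-to-adjacency transform) and the signed gene adjacency raised to the selected soft-thresholding power.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def standardized_sample_connectivity(expr: np.ndarray) -> np.ndarray:
    """expr: samples x genes log2CPM matrix. Returns Z.k per sample;
    Z.k < -5 flags an outlier. The distance-to-adjacency transform here is a
    simplified stand-in for WGCNA's default."""
    dist = squareform(pdist(expr, metric="euclidean"))
    adjacency = 1.0 - dist / dist.max()
    k = adjacency.sum(axis=1) - 1.0          # drop the self-edge on the diagonal
    return (k - k.mean()) / k.std()

def signed_gene_adjacency(expr: np.ndarray, beta: int = 8) -> np.ndarray:
    """Signed adjacency a_ij = ((1 + cor_ij) / 2) ** beta between genes (columns),
    with beta chosen for a scale-free topology fit above 80%."""
    cor = np.corrcoef(expr, rowvar=False)
    return ((1.0 + cor) / 2.0) ** beta

# expr = np.loadtxt("log2cpm_2017.tsv")   # hypothetical 23 x 12795 matrix
# outliers = np.where(standardized_sample_connectivity(expr) < -5.0)[0]
```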
Cross-population module preservation analysis
Based on our previous work, it can be inferred that host gene expression captured at facility arrival is variable across BRD severity cohorts [20,40,41]. Therefore, we assessed whether the at-arrival co-expression patterns and modules found in this study were well preserved across an RNA-Seq data set from an independent population of cattle. We investigated cross-populational module preservation across the whole blood transcriptomes of cattle previously assessed for differential gene expression (GSE161396; 2019 population (n = 24)) with the modulePreservation function found within the WGCNA R package. The gene-level raw count matrix from previous analysis [20] was utilized and processed, filtered, and normalized with procedures identical to those applied to the 2017 RNA-Seq data set (see RNA-Seq data processing and normalization section); a total of 12,803 genes were identified in the 2019 data set after count processing and normalization. Permutation testing (n = 200 permutations) was conducted to assess the significance of module preservation across the 2017 and 2019 RNA-Seq data sets, utilizing the two composite statistical measurements Zsummary and medianRank [36,42]. Briefly, the identified modules within the test network are randomly permuted n times, where, for each permuted index, the mean and standard deviation are calculated for defining the corresponding Z statistic [42,43]. Through the combination of additional preservation statistics (average of Zdensity and Zconnectivity), the calculated Zsummary statistic determines the level of mean connectivity among all genes within a module (i.e., network density) across the two data sets [24,42]. Higher Zsummary values indicate a stronger level of module preservation between data sets, but the statistic depends on the number of genes within the module (i.e., module size) [42]. To further evaluate preservation in a module size-independent manner, medianRank scores are calculated from the mean connectivity and density measurements observed for each module and assigned a rank score [42]. Lower medianRank values indicate a stronger level of module preservation between data sets. For this study, any module possessing Zsummary ≥ 10 and medianRank ≤ 5 was considered highly preserved.
Functional enrichment analysis of preserved modules
WebGestalt 2019 [44] (WEB-based Gene SeT AnaLysis Toolkit; accessed September 13, 2021) was utilized for over-representation analysis to identify enriched Gene Ontology (GO) biological processes, cellular components, molecular functions, and pathways from genes found in each module considered well preserved. Pathway enrichment analysis was performed with the pathway database Reactome [45]. Human (Homo sapiens) gene orthologs and functional databases were utilized for GO term and pathway enrichment analyses. Over-representation analysis parameters within WebGestalt 2019 included between 3 and 3000 genes per category, Benjamini-Hochberg (BH) procedure for multiple hypothesis correction, adjusted p-value (FDR) cutoff of 0.05 for significance, and a total of 10 expected reduced sets of the weighted set cover algorithm for redundancy reduction.
BRD-associated hub gene identification and network analyses
Hub genes are those genes found within a module (eigengenes) that possess high connectivity and may exhibit a greater degree of biological significance with respect to significantly associated clinical traits, when compared to all other eigengenes [38,46,47]. Here, we sought to identify hub genes from modules which are significantly associated with any of the clinical BRD categories (BRD, Treat_Freq, and Risk_Days). This was performed in the WGCNA R package with two procedures. First, the Pearson correlation between gene expression and module eigengenes was calculated, resulting in the level of module membership (kME) for each gene. Second, the Pearson correlation between individual gene expression level and clinical trait was calculated, resulting in the level of gene significance (GS) for each gene. Any gene possessing kME and GS values ≥ 0.7 and ≥ 0.3, respectively, was considered a hub gene for the clinical trait [36]. All BRD-associated hub genes were used for network construction of known and predicted protein-protein interactions with the Search Tool for the Retrieval of Interacting Genes (STRING) database v11.5 [48], utilizing bovine (Bos taurus) annotations. STRING analysis was performed with the physical subnetwork setting, where edges only display protein interactions that have evidence of binding to or forming a physical complex. Any interaction above a combined score (confidence) of 0.200 was incorporated into the complete network prior to network clustering; disconnected nodes were removed from the network. The Markov Cluster (MCL) algorithm was utilized for network clustering due to its superior performance in complex extraction without the need of additional parameter tuning [49]. Hub genes within the interaction network were placed into distinct clusters based on MCL clustering of the distance matrix acquired from the combined interaction scores, using an MCL inflation parameter of 1.4.
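A compact pandas sketch of the hub-gene rule is shown below; it assumes that module membership and gene significance are taken as absolute Pearson correlations, which matches common WGCNA practice but is not stated explicitly above.

```python
import pandas as pd

def hub_genes(module_expr: pd.DataFrame, module_eigengene: pd.Series,
              trait: pd.Series, kme_cutoff: float = 0.7, gs_cutoff: float = 0.3):
    """module_expr: samples x genes expression for one module; module_eigengene
    and trait: per-sample values. Returns the genes passing both cutoffs."""
    kme = module_expr.corrwith(module_eigengene)   # module membership (kME) per gene
    gs = module_expr.corrwith(trait)               # gene significance (GS) for the trait
    mask = (kme.abs() >= kme_cutoff) & (gs.abs() >= gs_cutoff)
    return kme.index[mask]

# hubs = hub_genes(steelblue_expr, steelblue_eigengene, brd_status)  # hypothetical inputs
```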
Statistical analysis
Clinical and hematology data (described in animal enrollment and hematology analysis) were compared between cattle treated for naturally-acquired clinical BRD within the first 28 days following facility arrival (BRD) and those never diagnosed nor treated (Healthy). Residual normality was assessed in R v4.0.4 with the Shapiro-Wilk test [50], with an a priori level of significance set at 0.10; neutrophil percentage (Neu%), eosinophil percentage (Eos%), basophil percentage (Baso%), lymphocyte percentage (Lymph%), neutrophil-to-lymphocyte ratio (NL ratio), FEC-d0, MCHC, RDW, and Sex were considered non-normally distributed. Differences in normally distributed variables between BRD and Healthy cattle were assessed with the Student's t-test. Differences in non-normally distributed variables were assessed with the Welch's t-test; differences between the two groups with respect to Sex were assessed with Pearson's chi-square test with Yates' continuity correction. Differences between BRD and Healthy cattle were considered significant at a p-value ≤ 0.05.
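The scipy calls below mirror this testing scheme on toy values; the numbers are invented and only the choice of test per branch follows the description above.

```python
import numpy as np
from scipy import stats

brd = np.array([7.1, 6.8, 7.4, 7.0, 6.9])        # toy values for one variable, BRD group
healthy = np.array([6.2, 6.5, 6.4, 6.1, 6.3])    # toy values, Healthy group

# Shapiro-Wilk on the residuals (values centered by their group mean), alpha = 0.10.
residuals = np.concatenate([brd - brd.mean(), healthy - healthy.mean()])
_, p_normality = stats.shapiro(residuals)

if p_normality >= 0.10:
    _, p_value = stats.ttest_ind(brd, healthy)                    # Student's t-test
else:
    _, p_value = stats.ttest_ind(brd, healthy, equal_var=False)   # Welch's t-test

# Castration status (Sex) versus group: chi-square with Yates' continuity correction.
sex_table = np.array([[8, 4],     # hypothetical counts per group
                      [6, 6]])
chi2, p_sex, _, _ = stats.chi2_contingency(sex_table, correction=True)
```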
Statistical analysis of clinical and hematological parameters
Descriptive statistics for the clinical and hematological data are provided in Table 1. Regarding the hematological parameters, average values of Lymph%, RDW, and PLT were outside of the internal reference intervals for both BRD and Healthy cattle. In this study, RBC was considered significantly higher at arrival in BRD cattle compared to Healthy cattle; no other parameter was considered significantly different between the two groups. Regarding clinical data, BRD cattle possessed significantly lower weight gain by end of study (ADG-d82; 2.273 lbs/day in BRD and 2.946 lbs/day in Healthy) and lower calculated slopes of weight gain over time (Growth Rate; 2.370 in BRD and 2.995 in Healthy); no other clinical parameter was considered significantly different between the two groups. We did not include the at-arrival weight (WT-d0) as an analysis variable because there was no significant difference between the Healthy (mean: 477.0, s.d.: 24.8) and BRD (mean: 475.3, s.d.: 26.9) cohorts.
Weighted gene co-expression network construction
The remaining filtered genes (n = 12,795) were used for WGCNA network and module construction. The resulting network identified a total of 41 color-coded modules of co-expressed genes, excluding the grey module which incorporates uncorrelated genes (n = 1,235) (Fig 1). Across the 41 assigned modules, the turquoise module possessed the largest number of co-expressed genes (n = 2,503) and the lightsteelblue1 module possessed the smallest number of co-expressed genes (n = 38); the average size of each module was approximately 282 genes. The complete list of genes and module assignment is found in S3 Table.
Fig 1. Automated block-wise module detection grouped interconnected genes into 41 unique color-coded modules, excluding the grey module (uncorrelated genes). The x-axis corresponds to the gene-module assignment and the y-axis (Height) depicts the calculated distance between co-expressed genes from hierarchical average linkage clustering.
Module-trait relationship with hematological and clinical datasets
Pearson correlation heatmaps were generated to assess the relationship between all modules and hematological clinical datasets. Regarding hematological data, several significant relationships of interest exist (Fig 2). The tan module possessed the highest number of significant correlations with the hematological data (8), followed by turquoise, pink, lightgreen, and lightcyan modules (7). With respect to RBC, considered significantly higher at arrival in BRD cattle compared to Healthy cattle, six modules were strongly correlated: Violet was weakly associated with Treat_Freq (R = -0.38, p = 0.07). Regarding production traits (ADG-d12, ADG-d26, ADG-d82, and GR), ten modules possessed significant correlations: darkgreen, skyblue, darkturquoise, darkmagenta, purple, yellowgreen, orange, orangered4, darkred, and lightyellow. However, to mitigate unexplained variation which may confound differences in ADG-d12 and ADG-d26, coupled with the lack of significance between disease cohorts, eight modules correlating with ADG-d82 and GR were prioritized. Darkred was
Fig 2. Pearson correlations between each of the unique color-coded modules and clinical traits are visualized and represented as a heatmap. Each row represents a distinct co-expression module, and each column represents clinical traits as follows: at-arrival fecal egg counts per gram via modified-Wisconsin procedure (FEC-d0), body weight in pounds (WT) at arrival, Day 12, Day 26, and Day 82, calculated average daily weight gain at each time point (ADG), growth rate (slope of weight over days recorded; GR), at-arrival castration status (Sex), at-arrival rectal temperature (Temp-d0), development of clinical BRD within 28 days post-arrival (BRD), number of clinical BRD treatments (Treat_Freq), and timing to first BRD treatment (Risk_Days). Cells are represented by how positive (yellow/white) or negative (purple/black) the correlation is between module and clinical trait, respectively.
The medianRank and Zsummary values across all modules are depicted through the scatterplot x- and y-axes, respectively. Zsummary values ≥ 10.0 and medianRank values ≤ 5.0, indicated by dashed lines, denote that a module identified with the 2017 gene expression data is well preserved across the 2019 gene expression data.
Functional enrichment analysis of well-preserved modules
To explore the functionality and biological relevance of the five well preserved modules, we performed over-representation analysis with all genes from each module (black, purple, lightgreen, tan, and steelblue; S4 Table). Analysis of genes from the black module revealed 47 biological process terms, 49 cellular component terms, 17 molecular function terms, and five significantly enriched pathways. Biological processes identified from genes within the black module were related to neutrophil activity and degranulation, aldehyde metabolism, nitrogen compound response and catabolism, and cellular transport. Cellular components identified from genes within the black module involved intracellular and extracellular vesicles, secretory granules, cellular junctions, and lysosomes. Molecular functions identified from genes within the black module involve cytokine, enzyme, and calcium-dependent protein binding, aldehyde dehydrogenase (NAD) activity, and interleukin-1 receptor activity. Enriched pathways identified from genes within the black module involved neutrophil degranulation, metabolic disease, and signaling via tyrosine kinase receptor.
Analysis of genes from the purple module revealed 54 biological process terms, 46 cellular component terms, 16 molecular function terms, and 40 significantly enriched pathways. Biological processes identified from genes within the purple module involved mitochondrial processes (cristae formation, respiratory chain complex assembly), non-coding RNA processing and maturation, cellular protein transport, and metabolic processes and biosynthesis. Cellular components identified from genes within the purple module involved cell substrate and adhesion junction, ribosomes, cytoplasmic side of endoplasmic reticulum, mitochondrial inner membrane and envelope, and the 48S preinitiation complex. Molecular functions identified from genes within the purple module involved mRNA/rRNA binding, ubiquitin ligase inhibition, ATP synthase activity, and NADH dehydrogenase. Enriched pathways identified from genes within the purple module involved infectious disease/viral infection, amino acid metabolism, translation initiation/termination, rRNA processing, and ATP synthesis and respiratory electron transport.
Analysis of genes from the lightgreen module revealed 38 biological process terms, 49 cellular component terms, three molecular function terms, and one significantly enriched pathway. Biological processes identified from genes within the lightgreen module involved leukocyte/ neutrophil differentiation, activation, and degranulation, tissue remodeling, cell secretion and exocytosis, phagocytosis and micropinocytosis, dendritic cell activation, and interleukin-8 secretion. Cellular components identified from genes within the lightgreen module involved lysosome, secretory/azurophil granule, vesicular/vacuolar membrane, granule lumen, and macropinosome. Molecular functions identified from genes within the lightgreen module involved symporter activity, potassium-chloride symporter activity, and phosphatidylinositol binding. The single enriched pathway identified from genes within the lightgreen module was neutrophil degranulation.
Analysis of genes from the tan module revealed 35 biological process terms, 32 cellular component terms, four molecular function terms, and two significantly enriched pathways. Biological processes identified from genes within the tan module involved B-cell activation, receptor signaling, and regulation, immunoglobulin production, cytokine production, positive regulation of interferon-gamma production, and mononuclear cell proliferation. Cellular components identified from genes within the tan module involved MHC class II protein complex, lytic vacuole membrane, clathrin-coated endocytic vesicle, endosomal membrane, and B-cell receptor complex. Molecular functions identified from genes within the tan module involved MHC class II receptor activity, MHC class II protein complex binding, and peptide antigen binding. Enriched pathways identified from genes within the tan module were antigen activates B-cell receptor (BCR) leading to generation of second messengers and CD22-mediated BCR regulation.
Analysis of genes from the steelblue module revealed three biological process terms, three cellular component terms, no molecular function terms, and no significantly enriched pathways. Biological processes identified from genes within the steelblue module were cell surface receptor signaling pathway, negative regulation of fibroblast growth factor receptor signaling pathway, and antigen receptor-mediated signaling pathway. Cellular components identified from genes within the steelblue module involved side of membrane, plasma membrane part, and alpha-beta T cell receptor complex.
BRD-associated hub gene identification and in silico protein-protein interaction and clustering analyses
Hub gene identification analysis included co-expressed genes from the following modules: violet (54), orange (68), royalblue (100), mediumpurple3 (41), and steelblue (59). The kME and GS value cutoffs within each module resulted in 24, 46, 30, 22, and 32 BRD-associated hub genes from the violet, orange, royalblue, mediumpurple3, and steelblue modules, respectively (S5 Table). These resulting hub genes were further utilized for physical subnetwork protein-protein interactions and network clustering. After removal of all disconnected nodes, the interaction network demonstrated significant connectivity between 52 proteins across 11 distinct clusters with high inter-nodal connectivity (Fig 5); these gene products and their combined interaction scores are found in S6 Table. These connected gene products demonstrate possible at-arrival biomolecular complexes associated with BRD development and severity.
Fig 5. Interaction score analysis reveals 52 genes, with high intramodular and BRD-trait relationship, which possess high connectivity. Interconnected gene products (nodes) were further grouped into distinct clusters based on their interaction scores (edges). Edge thickness represents the level of interaction confidence between nodes.
Discussion
While at-arrival management practices are somewhat dependent upon anticipated risk of BRD development, both inter-and intra-herd level disease prevalence is highly variable [5,51]. To counter this variability, beef production systems will often administer antimicrobials and/or immunostimulants at arrival to reduce the risk of clinical BRD development and associated production losses [52,53]. However, immunostimulant administration alone as a metaphylactic protocol for controlling BRD appears to have minimal impact on rates of morbidity [54][55][56]. Metaphylactic use of antimicrobials at arrival reduces risk of morbidity and mortality across beef production systems, however this management practice is variable in efficacy, in both rates of disease across cattle populations and in pharmacological choice, and the practice is suspected to drive expansion of antimicrobial resistance, a growing societal concern [52,57,58]. Given this background, our research group and others have focused on evaluating host transcriptomes at arrival, to better characterize host-driven mechanisms and develop candidate mRNA biomarkers associated with clinical BRD outcomes [18,19,20]. These studies have provided valuable information regarding cattle treated based on clinical signs of BRD, but these studies heavily rely on semi-objective evaluation of BRD cases and may miss underlying subclinical or misdiagnosed disease. As such, the underlying host mechanisms involved in BRD development remain disputed. Therefore, to identify at facility arrival genes and mechanisms which may represent the variable development of BRD cases and leverage the total expression profile of individual cattle, we employed a systems biology approach with weighted co-expression network analysis. This methodology allows us to identify networks of genes exclusively coexpressed, and to evaluate said networks in a reduced manner in order to identify molecules and mechanisms of interest for future BRD prediction studies. Importantly, co-expression network analysis serves as a complementary, yet distinct, approach to identifying genes and mechanisms associated with disease status, when compared to differential expression analyses. The network approach performed in this study evaluates and identifies genes that are strongly coordinated in terms of expression, and determines correlation with overlapping metadata (clinical data), whereas differential expression analyses typically follow a pairwise approach to determine level of effect and probability of gene differences between groups. Co-expression network analyses consider greater biological context when evaluating gene expression differences, compared to more traditional pairwise approaches. Additionally, through utilization of hematological parameters, we could capture changes in the cellular composition of whole blood as they may relate to cellular and gross pathophysiology across individuals.
While we recognize that dynamic changes captured in whole blood may not completely encompass biomolecular characteristics seen within lung tissue, whole blood serves as a practical and easily obtainable sample for respiratory and inflammatory disease diagnostics [14,59]. After initial statistical assessment of CBC data, we identified that both BRD and Healthy cattle possessed comparable lymphocytosis, thrombocytosis, and erythrocytic macrocytosis; the distribution of these values were not considered significantly different between the two groups. Notably, mild to moderate levels of dehydration, a common condition in newly arrived postweaned beef animals, may cause elevated changes in hematocrit levels and lymphocytes [60,61]. Lymphocytosis and thrombocytosis may also result from host responses to infection or inflammation. Additionally, reticulocytosis (i.e., immature erythrocytosis) is the most common cause of erythrocytic macrocytosis [60] and was noted as a common feature found across all blood samples submitted for analysis. While these cattle did not possess physiological nor hematological evidence of hemolysis or blood loss upon facility arrival, this finding may be associated with early regenerative anemia, systemic inflammation, or mineral deficiencies [60][61][62]. Furthermore, blood-borne pathogens were not reported from blood smear assessment. Nevertheless, it does not rule out the possibility of mild/subclinical intraerythrocytic pathology or asymptomatic convalescence that may result in these increased hematological changes. Such pathology is often caused by parasitic diseases such as anaplasmosis, a common infectious disease of cattle across the United States [63,64]. It is plausible that these findings indicate that cattle at facility entry are undergoing similar physiological changes as it relates to stressful and/or pathogenic events (long-distance transportation, co-mingling, etc.) and underlying genomic mechanisms serve to resolve or prolong deleterious physiological conditions that result in BRD.
With respect to distributions, we found that the majority of variables tested for module correlations were normally distributed. Of the nine (of the 26 total) non-parametric testing variables, six demonstrated relative linearity upon visual distribution assessment (data not shown; Neu%, Lymph%, NL Ratio, MCHC, RDW, and FEC-d0). Moreover, the non-parametric nature of Eos%, Baso%, and Sex is perceived to be due to data sparsity and relative rarity of the expected cell counts attributed to eosinophils and basophils in cattle (Table 1) [65]. We elected to utilize Pearson correlation models as they can measure discrete and continuous datasets without need for transformation, and preserve linearity from the raw data structure when assessing these variables with gene co-expression modules. Additionally, calculated Pearson correlations from WGCNA can better handle datasets with missing or censored data and is highly computationally efficient [66]. We identified that RBCs were significantly increased in cattle that would go on to develop BRD versus those that did not. Although this result was identified in a relatively small number of cattle, it corresponds with the work of Richeson and colleagues [16]. As discussed within their prior research, elevated RBCs may indicate dehydration and subsequent predisposition with BRD development [5,16]. Interestingly, we were able to identify one well-preserved co-expression module which possessed significant correlations with RBCs, RDW, PLT, BRD, and Risk_Days (i.e., shorter time to first treatment): steelblue. Upon further investigation, we discovered that the genes within this module were related to antigen receptor-mediated signaling (BLK, CD247, CD276, CD3G, GATA3, and PLEKHA1) and negative regulation of fibroblast growth factor receptor signaling (CREB3L1, GATA3, and WNT5A), and specifically components of alpha-beta T-cell receptor complexes (CD247 and CD3G). The upregulation of IL-7R and associated signaling molecules, which include CD3G and CD247, initiate NOTCH-dependent proliferation of T-cell precursors [67]. Furthermore, elevated levels of BLK and GATA3 tend to skew the immune response towards Th2-type immunity [68][69][70]. In terms of RBC relationship, previous research has demonstrated that Th2-stimulated bone marrow T-cells promote erythroid differentiation and lead to the development of erythroblasts [71]. Additionally, CXCL12, also identified within the steelblue module and previously identified as a differentially expressed gene associated with BRD development [20], is involved in Th2-cell migration and immune response [71,72]. HNRNPH3, found within the steelblue module, has previously been identified as a key transcription factor associated with clinical BRD [18]. Lastly, several genes identified in the steelblue module were also found in the "turquoise" module identified by Hasankhani and colleagues [24], which enriched positive regulation of activated T-cell proliferation and Th1/Th2-cell differentiation pathways. While this study cannot elucidate the exact mechanistic components nor temporality of molecular events, it suggests that promotion of Th2-mediated T-cells at arrival shares a common mechanism with RBC elevation and risk of BRD development. Our previous research has indicated that genes elevated at arrival in cattle that eventually develop BRD interact, and may enhance, TLR-4 and IL-6 responses [20, 40,73], which may contribute to the coexpressed pattern related to Th2-mediated T-cell development [74]. 
Overall, this pattern of Th2-mediated immunity is strongly associated with clinical BRD development and timing to first treatment, and may further strengthen the depiction that early Th2 responses indicate clinical disease development and lung pathology [75,76].
While steelblue was the only well-preserved BRD-associated module detected, four other modules were determined to be well-preserved across populations and warranted specific functional enrichment investigation: black, purple, lightgreen, and tan. Genes within the black module, largely involved with neutrophil activation and degranulation, IL-1 activity, and metabolic disease, were only significantly associated with hemoglobin and erythrocyte parameters (HGB, HCT, MCV, and MCH); notably, the black module did not possess any significant associations with clinical variables. This may indicate that neutrophilic and IL-1 activity was not indicative of BRD within this population of cattle, and/or additional disease-associated variables were not recorded in this study. Genes within the purple module, associated with increased eosinophil percentage, decreased neutrophil-lymphocyte ratio, decreased MCV and MCH, increased at-arrival fecal parasitic egg count, and increased growth rate (weight gain over 82 days), largely enriched for mitochondrial function, aerobic metabolism, and RNA processing. Importantly, this module possessed a positive association with weight gain independent of BRD development. Previous research has investigated many of these ribosomal protein-encoding genes for their potential for immune effector capacity [77] and cell regulation [78]; however, this marks the first time, to our knowledge, that they have been implicated in contributing to weight gain potential in high-risk cattle. Notably, one gene (RPS26) has been previously identified as a differentially decreased marker in the diseased lungs of cattle experimentally challenged with BRD-associated pathogens [79,80]. Similar to the black module, genes identified within the lightgreen module were associated with hemoglobin and erythrocyte parameters, but additionally positively correlated with neutrophil percentage and neutrophil-lymphocyte ratio, and negatively correlated with basophil percentage; likewise, the lightgreen module did not possess significant associations with clinical variables. Lastly, the tan module, which possessed several significant hematological associations and was negatively correlated with castration status at arrival, contained genes enriched for B-cell receptor complexes and regulation and interferon-gamma production. Unfortunately, the underlying physiological impact of co-expressed genes identified within the black, lightgreen, and tan modules was not captured in this study. As this study was primarily focused on BRD development and severity, the genes within these three modules may possess a role in other disease complexes or immune-mediated events, such as gastrointestinal or apoptotic/necrotic diseases.
Utilizing hub gene and interaction network analyses, we further identified genes related to BRD development and severity. Here, we detected and mapped 52 genes into a protein-protein interaction network, further stratified into 11 distinct clusters based on their combined interaction scores. This procedure helps describe, in a more holistic manner, the physical relationships that multiple BRD-associated gene products possess with one another. We may infer that these interactions carry accompanying transient functions involved in BRD development not previously described in the literature. As such, these predicted protein-protein network interactions may indicate potential modular units which participate in BRD development or resistance [81,82]. Further evidence of the associative importance of these genes to BRD development exists, as CXCL12 [20], TLL2 [20], ALOX15 [18,20,40], and LOC100298356 [73,79,80,83] have been previously identified as differentially expressed when comparing cattle with and without BRD development. Specifically, CXCL12, TLL2, and LOC100298356 are considered drivers of innate surveillance and inflammation associated with BRD development, whereas ALOX15 encodes an enzyme involved in specialized proresolving mediator biosynthesis and is associated with cattle that do not develop clinical BRD in high-risk systems [18,20,40,79,83]. Collectively, we detected these previously identified differentially expressed genes, associated with BRD development, through an independent approach. This overlap emphasizes the potential of these genes and mechanisms to serve a predictive role for BRD. Proteomic approaches have detailed that proteins infrequently operate as single biological entities and, when involved in similar biological functions, interact in dynamic, yet organized complexes [84][85][86][87]. As such, these findings provide candidate protein complexes related to BRD development and severity, which warrant further investigation for confirmation in larger populations of cattle and for novel therapeutic target development.
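The clustering of a weighted interaction network can be illustrated with a short sketch. This is not the software used in this analysis; it assumes a hypothetical edge list of gene pairs with combined interaction scores, and uses modularity-based community detection from networkx as a stand-in for whatever clustering algorithm was actually applied.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical weighted edges: (gene_a, gene_b, combined interaction score).
# Gene pairs and scores are placeholders, not values from this study.
edges = [
    ("CD247", "CD3G", 0.95), ("CD247", "BLK", 0.70), ("CD3G", "BLK", 0.60),
    ("CXCL12", "GATA3", 0.50), ("GATA3", "WNT5A", 0.55),
    ("ALOX15", "TLL2", 0.41), ("TLL2", "LOC100298356", 0.45),
    ("ALOX15", "LOC100298356", 0.50),
]

graph = nx.Graph()
graph.add_weighted_edges_from(edges)

# Partition the weighted network into clusters by modularity maximization.
clusters = greedy_modularity_communities(graph, weight="weight")
for i, members in enumerate(clusters, start=1):
    print(f"cluster {i}: {sorted(members)}")
```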
Conclusions
This study was conducted to utilize systems biology methodology to further establish genes, mechanisms, and coordinated biological complexes associated with dynamic hematological changes and BRD development. Utilizing our previously published RNA-Seq data and WGCNA, we identified five well-preserved modules of highly co-expressed genes with significant associations with hematological and clinical traits in cattle at facility arrival. The "steelblue" module, containing genes involved in the alpha-beta T-cell receptor complex and negative regulation of fibroblast growth factor receptor signaling, possessed significant positive correlations with erythrocyte count, platelet count, red cell distribution width, and BRD diagnosis, and a negative correlation with days at risk for BRD. The "purple" module, containing genes involved in mitochondrial processes and non-coding RNA processing and maturation, possessed significant correlations with increased eosinophil percentage, decreased neutrophil-lymphocyte ratio, and increased growth rate (weight gain over time). Protein-protein interaction network and clustering analyses of BRD-related hub genes identified possible at-arrival biological complexes strongly associated with BRD development; many of these hub genes have been described as differentially expressed genes in previous BRD research. Through this holistic molecular approach, we provide genes, mechanisms, and predicted protein complexes associated with BRD development and performance which warrant future analyses targeted at predicting BRD at facility arrival.
Supporting information S1 | 8,701 | 2022-02-19T00:00:00.000 | ["Biology"] |
A machine learning based approach for user privacy preservation in social networks
With the development of Internet technology, service providers can provide users with personalized services to enrich the user experience; however, this often requires a large amount of users' private data. Meanwhile, the protection of this private data and the evaluation of the risk posed by leaked datasets have become matters of great concern to many people. To resolve these issues, in this paper, we develop a machine learning-based approach in online social networks (OSNs) to efficiently correlate leaked datasets and accurately learn millions of users' confidential information. Moreover, a trust evaluation model is developed in OSNs to identify malicious service providers and secure users' social activities via direct trust computing and indirect trust computing. Extensive experiments are conducted using real-world leaked datasets, and the results demonstrate the efficiency and effectiveness of the proposed approach in terms of user privacy protection and the accuracy of privacy leakage evaluation.
Online social networks (OSNs) promise to offer personalized and contextual services for users and to reshape our daily lives [1][2][3]. In current OSNs (e.g., Facebook, QQ, and Wechat), users are recognized by identities (e.g., real name, username, nickname, email address, and cellphone number) when engaging with these Internet and mobile social services [4][5][6][7][8]. However, many online services such as dating and shopping websites, and offline services such as grocery delivery, require users to provide some of their identification information. As reported, hundreds of millions of users' confidential records such as usernames, emails, passwords, and network activities (e.g., which Tencent QQ groups they have joined) on several Chinese websites including Tencent QQ have been leaked over the past few years [9,10].
Different from existing security mechanisms developed for traditional Internet services, the privacy leakage issue of users in large-scale OSNs exhibits special features. On one hand, apart from the direct privacy leakage caused by a user's improper operations or network intrusions, private information can also be indirectly or unconsciously leaked by the user's friends or other third parties [11]. For example, the photo wall and public chat history of a user's friends may reveal the user's gender, age, and name. On the other hand, given the above identity information of one user, other confidential information (such as gender and password) of this user can be inferred by misbehaving companies and hackers via data mining and in-depth data analytics [12]. Hence, an ongoing challenge is to protect users' privacy in OSNs while evaluating the potential risks of privacy violation after identification information is leaked.
Existing works on privacy leakage of Internet users, however, cannot work well in the large-scale social Internet context. Firstly, existing practices in industry mainly rely on nicknames (or aliases) to protect the anonymity of Internet users, and real names in OSNs are still vulnerable to privacy leakage [13][14][15][16][17]. For example, a user's information such as age and political affiliation can be accurately determined by a third party by aggregating information provided by the user's online friends, even when the user does not intend to make it available to the public. Secondly, the correlations of sensitive user attributes, which can be learned from leaked datasets to build user profiles [18], have not been well studied for preventing privacy leakage in large-scale OSNs. Thirdly, existing studies mainly focus on general Internet services, while few of them consider social features in the analysis of privacy leakage of public social Internet applications. Moreover, despite recent efforts in studying privacy leakage in OSNs, little attention has been given to evaluating the risk of leaked datasets. Therefore, it is still an open and vital issue to efficiently preserve users' private information from misbehaving companies and hackers in large-scale OSNs.
In this paper, to resolve the above issues, we first develop a machine learning based approach for user privacy preservation and efficient evaluation of privacy leakage from leaked datasets via feature extraction and user attribute correlation. With the obtained user attributes (e.g., real name, OSN identity (ID), age, gender, birth date, email address, and social relationships), a support vector machine (SVM) based prediction algorithm is devised to evaluate the potential privacy leakage in the presence of distinct Internet services offered by third parties. Moreover, we build a distributed trust evaluation model from direct trust and indirect trust calculation to filter and detect malicious OSN service providers with consideration of users' social features. Finally, we conduct extensive experiments to demonstrate the efficiency of the proposed approach in terms of detection and classification accuracy. The results also show that the learned user profiles enable attackers to successfully launch a variety of attacks such as spoofing attacks and password guessing attacks. The main contributions of this paper are summarized as follows:
- We investigate user privacy preservation in OSNs from a machine learning perspective. To assess users' privacy violation from publicly leaked datasets, we develop a user profiling system (UPS) to accurately correlate and learn user attributes. We collected 16 leaked datasets and studied the privacy issues of 611,140,530 users.
- We feed the learned user profiles into search engines and obtain these users' other information which is publicly available on Internet applications such as Renren and Qzone. We develop an SVM-based prediction algorithm to evaluate the potential for malicious third parties to obtain a large number of user attributes such as real name, gender, age, and social connections from the leaked datasets and online Internet services. Besides, by leveraging users' social features, a trust model is established to detect malicious social service providers.
- We conduct extensive experiments using real-world leaked datasets to demonstrate that the proposed approach can attain satisfactory detection and classification accuracy. In addition, the results also show that the learned user profiles can enable attackers to successfully launch a variety of attacks such as spoofing attacks and password guessing attacks.
The rest of the paper is organized as follows. Section 2 summarizes related work. In Section 3, we describe the system overview. In Section 4, we introduce our methods to learn users' attributes from leaked datasets and online Internet services. In Section 5, we evaluate the performance of the proposed approach. Finally, we give discussions of the proposed approach in Section 6, and present concluding remarks in Section 7.
Privacy preservation in OSNs
Previous works on OSNs such as Facebook and Twitter focused on determining users' private information such as age, gender, sexual orientation, and political affiliation [6,13,14,16,17,19]. [20] matched user accounts across social networks based on username and display name to help build better user profiles. [21] observed and analysed the phenomenon that different generations prefer different names, and used it to infer users' age ranges from their names. [22] tried to classify user age ranges using 1-grams constructed from tweets. [13,23,24] used the ages of friends to determine the age of a given user. The basic idea is to determine one user's private attributes by aggregating information from the user's online friends, even though the user does not intend to make those attributes available to the public. To deal with OSNs (i.e., microblogging) where age information is scarce, [25] proposed a framework that exploits public content (i.e., tweets, microblogs) and interaction information to infer the hidden ages of users. In addition to inferring users' confidential information, social network information has also been used for applications such as friend/interest recommendation [1,[26][27][28][29][30], sentiment analysis [31][32][33][34], spammer detection [36,37], and user activity classification [38]. Distinct from previous research on OSNs, our work utilizes SVM-based approaches to automatically identify the real names of users in anonymous OSNs, where users' private and public information are gathered via Social Engineering Engines (SEEs). To the best of our knowledge, we are the first to study the problem of automatically identifying users' Chinese real names in anonymous OSNs.
Security in OSNs
For the passwords publicly leaked from Chinese and international websites in recent years, previous works focused on password security topics such as password guessing [5,8,35,[39][40][41][42][43], password strength [2,4,7,[44][45][46][47][48], and password creation policies [3,[49][50][51]. [9,52,53] conducted measurement studies on large-scale leaked Chinese password datasets. To be specific, Li et al. [9] studied the differences between passwords from Chinese and English speaking users. Wang et al. [53] made a substantial step forward in understanding the underlying distributions of passwords, and Ji et al. [52] conducted a large-scale measurement study on the crackability, correlation, and security of leaked passwords. Li et al. [54] studied the usage of personal information in passwords and its security implications, demonstrating that passwords may contain users' private information. Focusing on the passwords of Chinese web users, [42] and [55] showed the popular structures of Chinese passwords, in which Pinyin plays an important role, and improved password guessing efficiency accordingly. For the leaked dataset QQ GRP, You et al. [10] studied topology statistics (e.g., degree distribution) of the bipartite graph consisting of QQ accounts and QQ groups. These previous works on password security studied users' habits in choosing passwords and demonstrated the correlations between users' private information and their passwords. However, they paid little attention to how to obtain the private information. We observe that the QQ email is the main form of users' emails in leaked datasets; therefore, using QQ emails (86.7% of which are directly constructed from QQ IDs), we can connect a user's private information leaked in different datasets (e.g., joined groups in the QQ GRP dataset and username in the Renren dataset). We also propose several methods to extract users' private information by correlating the leaked datasets, and show the serious potential risks to users whose privacy is leaked in these datasets.
System model
In this section, we first introduce an overview of the system. Then, descriptions of the datasets used are presented. After that, we present several attacks on Internet users based on learned personal information. Figure 1 shows the overview of our user profiling system (UPS). UPS learns users' attributes or profiling information from both leaked datasets and online public social information. In UPS, we first train an SVM-based classifier to learn users' profiles from social network datasets (e.g., identifying the real name of a user from the names the user used in different groups). Besides, based on the public social information collected from online social platforms (e.g., Facebook and Tencent QQ), more private and sensitive user profiling information can be inferred in our UPS. For example, using the information (i.e., group name and group introduction) of the groups that a user joined, the educational background and interests of the user can be inferred. Furthermore, we also leverage the homophily property of social networks to predict the real age and gender of a user based on his/her friends in the groups he/she joined. In this paper, we mainly learn family names, birth dates, cellphone numbers, citizen IDs' last 12 digits, email addresses, and user names from email and password datasets.
Dataset descriptions
In recent years, the large-scale user datasets leaked in China have involved 406.2 million accounts in total, and most of these leaked datasets are publicly available on the Internet. In specific, hundreds of millions of Chinese Internet users' information such as real name, email, password, gender, and age can easily be obtained or inferred by third parties as well as attackers. In this work, two large-scale social network datasets (i.e., QQ GRP and QQ PSW), which were collected from Tencent QQ, are considered in the privacy leakage analysis. Both QQ GRP and QQ PSW are leaked datasets of QQ users on the Internet. The detailed descriptions of the QQ GRP and QQ PSW datasets are given below.
- The QQ GRP dataset includes 85.3 million QQ groups' information such as group ID, group name, group brief introduction, their group members' names displayed in the QQ groups, genders, and ages. There are 318 million distinct QQ accounts in QQ GRP.
- The QQ PSW dataset includes 300 million QQ accounts' passwords and the corresponding QQ IDs and IPs.
Fig. 1 The overview of the user profiling system (UPS)
The proposed approach
In this section, we first analyse the potential attacks arising from password analysis and guessing on the leaked OSN datasets. Then, we develop novel methods to extract and learn users' attributes, in terms of real name and age, from the leaked datasets. These user attributes learned from the leaked datasets enable attackers to collect further user information from social Internet services. Furthermore, a trust-based secure online service provider (SP) selection mechanism is proposed via direct trust computing and indirect trust computing.
Password analysis and guessing
We observe that a large number of Chinese Internet users' passwords contain the users' personal information such as family name and birth date. Based on this knowledge, attackers can use methods such as [5] to leverage users' personal information learned from Internet online services and leaked datasets in Sections 4 and 5 to accelerate the speed of guessing the users' passwords. Moreover, in what follows we observe that users' personal information can also help attackers to identify and select users with weak passwords.
Let $\mathcal{P}$ be the universal set of all possible passwords. Then, any leaked dataset of passwords can be viewed as a subset $S$ of $\mathcal{P}$ with a specific probability distribution $P = \{p_i \mid i \in S\}$, where $p_i$ is the probability of a password $i \in \mathcal{P}$ appearing in $S$. Let $l = |S|$ and, without loss of generality, assume $p_1 \ge p_2 \ge \ldots \ge p_l$. We use the following metrics introduced in [2] to measure the password strength of users with different attributes such as gender and age. Here, the min-entropy is defined as
$$H_\infty(P) = -\log_2 p_1,$$
which denotes the worst-case security metric against an attacker.
Next, the guesswork measures the expected number of guesses needed to find the password of an account when guessing in the optimal order (decreasing password probability). Formally, we have
$$\hat{G}(P) = \sum_{i=1}^{l} i \cdot p_i,$$
and $G(P)$ denotes the bit/entropy form of $\hat{G}(P)$ as defined in [2]. Besides, the β-success-rate $\lambda_\beta(P)$ measures the expected success probability of finding the password of an account within β guesses:
$$\lambda_\beta(P) = \sum_{i=1}^{\beta} p_i.$$
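As a rough illustration of how these metrics can be computed from an empirical password distribution, consider the following sketch. The password list is a toy example, and the guesswork is reported as the raw expected number of guesses rather than its bit/entropy form.

```python
import math
from collections import Counter

def password_metrics(passwords, beta=10):
    """Min-entropy, expected guesswork, and beta-success-rate for an empirical distribution."""
    counts = Counter(passwords)
    total = sum(counts.values())
    # Probabilities in optimal (decreasing) guessing order.
    probs = sorted((c / total for c in counts.values()), reverse=True)

    min_entropy = -math.log2(probs[0])                      # worst-case metric
    guesswork = sum(i * p for i, p in enumerate(probs, 1))  # expected guesses, optimal order
    beta_success = sum(probs[:beta])                        # success probability within beta guesses
    return min_entropy, guesswork, beta_success

print(password_metrics(["123456", "123456", "password", "qwerty"], beta=2))
```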
Real name identification
In this subsection, we focus on studying the private information of 313 million QQ users in QQ GRP. Let U_QQ denote the set of users, and let QQ ID be the unique identifier of a QQ user. According to our analysis of the special features of Chinese names, among the 341,826 QQ users in U_QQ, we observe that 199,664 (199,664/341,826 = 58.4%) users use their real names in at least one QQ group. Among the 750 million QQ display names in QQ GRP, 577 million names are classified as nicknames based on filtering rules defined on the length and the first and second characters of a name. To tackle this problem, we develop a novel method, RNI (Real Name Identification), to classify the remaining names using both their content and OSN features. Formally, we let u denote a QQ user in a QQ group and dn denote the name u displays in the QQ group. For each QQ ID-name pair (u, dn) among the 174 million display names in QQ groups, its feature vector x = [e, s]^T is described as follows.
Firstly, let e ∈ R^h be the content features, a bag-of-words representation, where h is the number of Chinese characters appearing in names. Secondly, we consider the OSN features s ∈ R^2, whose two elements s_1 and s_2 measure the tendency of u to use dn in nickname-dominated QQ groups and real-name-dominated QQ groups, respectively. For each group G_j, we first label some obvious nicknames by hand-crafted rules, i.e., by checking the display names' length and first and second characters. Let t_j be the number of these labeled nicknames in G_j, and let N(u, dn) denote the set of QQ groups that user u joins with name dn. The features s_1 and s_2 of (u, dn) are then defined in terms of t_j and |G_j| over the groups in N(u, dn), where |G_j| is the number of users belonging to group G_j. The core idea behind the usage of OSN features in our method RNI for identifying real names is that QQ users usually tend to use their real names in friendship-driven QQ groups such as classmate and colleague groups, while preferring nicknames in interest-driven groups in order to protect their privacy. For the QQ users in U_QQ, we have their real names found in Hotel Guest, and all nicknames used by these QQ users in QQ GRP. Therefore, we construct a dataset LU_QQ of feature vectors {x_i, i = 1, ..., m} consisting of 199,664 real names (with label y_i = 1) and 225,419 nicknames (with label y_i = -1). In what follows, we build a supervised learning model with LU_QQ as the training data, and then use the model to predict the labels of the remaining 174 million names displayed in QQ groups, which hand-crafted rules cannot differentiate.
Thirdly, for efficient real name identification, we build the RNI model based on an SVM, due to its capacity for learning from high-dimensional feature spaces:
$$\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{m}\xi_i \quad \text{s.t.}\quad y_i(w^\top x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0,$$
where ξ_i are slack variables and C is a penalty parameter. After obtaining w* and b* by solving (8), for each QQ ID-name pair (u, dn), we define its real name score as
$$\mathrm{score}(u, dn) = w^{*\top} x + b^*,$$
where x is the feature vector of (u, dn) defined previously.
In the testing stage, let Φ_u denote the set of names that QQ user u used in different QQ groups. The most likely true name in Φ_u is
$$\pi_u = \arg\max_{dn \in \Phi_u} \mathrm{score}(u, dn).$$
Since u has at most one real name, there are two cases when identifying real names and nicknames in Φ_u. Case 1: if the real name score of π_u is larger than 0, then π_u is the real name and the names in Φ_u \ {π_u} are all nicknames. Case 2: otherwise, all names in Φ_u are nicknames.
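A minimal sketch of this classification step is given below. It is not the original implementation: the display names, OSN feature values, and labels are invented, and scikit-learn's LinearSVC stands in for the soft-margin linear SVM used by RNI.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Hypothetical labeled (user, display name) pairs: content is the name itself,
# OSN features (s1, s2) are nickname-group / real-name-group tendencies, and
# labels are +1 (real name) or -1 (nickname). All values here are invented.
names = ["张伟", "李娜", "小可爱", "夜空中最亮的星"]
osn_features = np.array([[0.1, 0.8], [0.2, 0.7], [0.9, 0.1], [0.8, 0.05]])
labels = np.array([1, 1, -1, -1])

# Bag-of-characters content features e, concatenated with OSN features s -> x = [e, s].
vectorizer = CountVectorizer(analyzer="char")
content = vectorizer.fit_transform(names).toarray()
X = np.hstack([content, osn_features])

model = LinearSVC(C=1.0)   # soft-margin linear SVM, standing in for the RNI classifier
model.fit(X, labels)

# Real-name score per pair; among one user's candidate names, the argmax with a
# positive score is reported as the real name, otherwise all candidates are nicknames.
scores = model.decision_function(X)
best = int(np.argmax(scores))
print(scores, names[best] if scores[best] > 0 else None)
```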
Age prediction
Generally, most QQ users provide their age information when setting up their profile page (either by providing a birth date or by giving a calculated age). However, not all ages are precisely true, as some users might give wrong ages. In this subsection, our target is to predict the real age of a user based on his/her friends in the groups he/she joined. Intuitively, QQ groups (especially classmate groups) usually consist of users with similar ages due to the homophily property of social networks. For a group G_j, we first filter out users with intentionally wrong ages, i.e., less than 4 or greater than 100. Formally, we define the set of users in G_j with abnormal ages as G_j^ab = {u : u ∈ G_j ∧ (a_u ≤ 4 ∨ a_u ≥ 100)}. Then, we calculate the mean μ_j and standard deviation σ_j of the user ages in G_j − G_j^ab (excluding G_j^ab). Users with ages more than 3σ_j greater or less than μ_j are also added to G_j^ab. The expected average age for the users in G_j, computed without user u, is denoted μ_{j,\u}, where n_{j,\u} = |G_j − G_j^ab| − 1 when u ∈ G_j − G_j^ab, and n_{j,\u} = |G_j − G_j^ab| when u ∈ G_j^ab. According to the observation that users in the same group have similar ages, we estimate the real age of a user u as
$$\hat{a}_u = \sum_{j:\, u \in G_j} \rho_{u,j}\, \mu_{j,\backslash u},$$
where ρ_{u,j} is a weight measuring how much a group G_j, to which user u belongs, contributes to the estimation of â_u. Intuitively, a group including more users with normal ages should contribute more than a small group, and a group with a smaller variance in ages should contribute more than a group with a larger variance. Therefore, ρ_{u,j} is defined in terms of n_{j,\u} and σ_{j,\u}, where σ_{j,\u} denotes the standard error of the ages of the users in G_j − G_j^ab − {u}. As classmate groups are more likely to contain users of similar ages, we also introduce another estimate that follows the same steps but uses only classmate groups.
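The group-based age estimation described above can be sketched as follows. The group memberships and displayed ages are toy values, and the specific weighting (larger, lower-variance groups count more) is an assumption, since the exact form of ρ_{u,j} is not reproduced here.

```python
import numpy as np

def estimate_age(user, groups, ages, k=3.0):
    """Estimate a user's real age from the groups they joined (minimal sketch).

    groups: dict group_id -> set of member user ids
    ages:   dict user id -> displayed age
    """
    estimates, weights = [], []
    for gid, members in groups.items():
        if user not in members:
            continue
        # Keep members with plausible displayed ages, excluding the target user.
        others = [ages[m] for m in members if m != user and 4 < ages[m] < 100]
        if len(others) < 2:
            continue
        mu, sigma = np.mean(others), np.std(others)
        # Drop remaining outliers beyond k standard deviations, then take the group mean.
        kept = [a for a in others if abs(a - mu) <= k * sigma]
        if len(kept) < 2:
            continue
        mu_loo, sigma_loo = np.mean(kept), np.std(kept) + 1e-6
        estimates.append(mu_loo)
        # Assumed weight: larger, lower-variance groups contribute more.
        weights.append(len(kept) / sigma_loo ** 2)
    if not estimates:
        return None
    return float(np.average(estimates, weights=weights))

groups = {"class_of_2008": {"u1", "u2", "u3", "u4"}, "hiking": {"u1", "u5", "u6"}}
ages = {"u1": 1, "u2": 27, "u3": 28, "u4": 29, "u5": 35, "u6": 36}
print(estimate_age("u1", groups, ages))   # corrects the obviously wrong displayed age of u1
```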
Trustworthy online service provider selection mechanism
In general, for a targeted Internet service (e.g., online chatting or an online game), there exists a set of service providers (SPs) that can offer the same service to users with different quality of service (QoS). Let M = {1, ..., m, ..., M} denote the set of online service providers for the desired Internet service. A trust approach is employed to assess the trustworthiness of SPs in providing online services. Here, the trust that each user u puts in service provider m is constructed from two aspects, namely, direct trust and indirect trust [6]. Let direct_{u,m} and indirect_{u,m} denote the direct trust and indirect trust of user u in SP m, respectively. In specific, for each user, the direct trust originates from direct historical experience in interacting with SP m. Meanwhile, the indirect trust, which is also known as the social reputation, evolves from aggregated experience based on other users' experience, such as the recommendations given by his/her friends. With the assistance of indirect trust evaluation, each user u can acquire more profiling information about SP m, especially when their direct interactions are infrequent.
1) Direct Trust Evaluation.
The direct trust value of user u in SP m is associated with the user's satisfaction with the online service. In specific, the service satisfaction degree is quantified by the rating value ra_{u,m,h} ∈ [0, 1] that the user gives to the h-th service offered by SP m. Here, ra_{u,m,h} = 1 indicates an absolutely positive experience, while ra_{u,m,h} = 0 represents an absolutely negative one. Furthermore, let N_{u,m} be the total number of services that user u has obtained from SP m. Then, for user u, his/her direct trust value in SP m can be calculated as the normalized, accumulative weighted sum of all historical ratings [15], i.e.,
$$\mathrm{direct}_{u,m} = \frac{\sum_{h=1}^{N_{u,m}} \omega(\tau_h)\, ra_{u,m,h}}{\sum_{h=1}^{N_{u,m}} \omega(\tau_h)},$$
where ω(τ_h) is the exponential time decay [6,58]. By utilizing exponential time fading, the weight of past experience is gradually reduced while recent experience is assigned a relatively higher weight. The explicit form of the exponential decay function is
$$\omega(\tau_h) = e^{-\alpha(\tau - \tau_h)},$$
where α denotes the exponential decay rate, τ_h is the time of the h-th service, and τ is the current time. Obviously, we have direct_{u,m} ∈ [0, 1].
2) Indirect Trust Evaluation. Users typically have different social relations such as friends, workmates, and strangers. Let L = {1, ..., l, ..., L} be the set of social relations among users. Besides, users with different social relations can have different trustworthiness degrees or social intimacy degrees [19]. We use ϕ_l > 0 to represent the trustworthiness degree [6] or social intimacy degree [56] between two users with the l-th social relation. ϕ_l is a trust factor indicating to what extent intimacy exists between two users, and can be obtained similarly to [56,57]. A binary variable q_{u,u',l} is utilized to denote whether two users have the l-th relation: q_{u,u',l} = 1 if users u and u' have relation l; otherwise, q_{u,u',l} = 0. In general, adversarial users may give fake recommendations to mislead the trust evaluation process. Hence, the credibility of each rating needs to be assessed to capture recommendation reliability. For user u, the credibility ϒ_{u,u'} of a recommendation given by another user u' is determined by their social relation (through ϕ_l and q_{u,u',l}) and their rating similarity sim_{u,u'}. Obviously, we have ϒ_{u,u'} ∈ [0, 1]. Generally, the higher the similarity between two users' rating profiles, the higher the credibility of the recommendation. Here, we employ the Pearson correlation coefficient (PCC) to calculate the similarity between two users u and u' by comparing their ratings on the SPs that both of them have interacted with, i.e.,
$$sim_{u,u'} = \frac{\sum_{m \in M_{u,u'}} (ra_{u,m} - \overline{ra}_u)(ra_{u',m} - \overline{ra}_{u'})}{\sqrt{\sum_{m \in M_{u,u'}} (ra_{u,m} - \overline{ra}_u)^2}\ \sqrt{\sum_{m \in M_{u,u'}} (ra_{u',m} - \overline{ra}_{u'})^2}},$$
where M_{u,u'} represents the set of SPs which have offered online services to both users u and u', ra_{u,m} and ra_{u',m} denote the ratings of users u and u' on SP m, and the bars denote the average ratings that user u and user u' have ever given, respectively. The indirect trust value indirect_{u,m} of user u in SP m is then computed as the credibility-weighted aggregation of other users' ratings of SP m. As a consequence, the global trust value of user u in SP m can be attained by combining the direct and indirect trust values:
$$\Theta_{u,m} = \theta\, \mathrm{direct}_{u,m} + (1 - \theta)\, \mathrm{indirect}_{u,m},$$
where θ > 0 is a dynamic weight factor calculated as θ = N_{u,m}/(1 + N_{u,m}). Note that as the number of interactions N_{u,m} increases, θ increases, and so does the effect of direct trust in the global trust calculation. This conforms to the fact that direct trust becomes more reliable with enough interactions. Obviously, we have Θ_{u,m} ∈ [0, 1]. After trust assessment for each SP m, each user selects the SP with the highest global trust value to receive the Internet service.
In the best case, the computational complexity of our proposed trust mechanism is O(M), while in the worst case, the computational complexity is O(U * M). Here, U and M are the number of social users and online service providers, respectively.
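The trust computation can be illustrated with a short sketch. The normalization of the decayed rating sum and the convex combination of direct and indirect trust follow the description above, while the decay rate, ratings, and indirect trust value are hypothetical inputs.

```python
import math

def direct_trust(ratings, now, alpha=0.1):
    """Exponentially time-decayed average of a user's historical ratings of one SP.

    ratings: list of (rating in [0, 1], service time) tuples.
    """
    if not ratings:
        return 0.0
    num = sum(r * math.exp(-alpha * (now - t)) for r, t in ratings)
    den = sum(math.exp(-alpha * (now - t)) for _, t in ratings)
    return num / den

def pcc_similarity(ratings_u, ratings_v):
    """Pearson correlation of two users' ratings over the SPs both have used."""
    common = set(ratings_u) & set(ratings_v)
    if len(common) < 2:
        return 0.0
    xu = [ratings_u[m] for m in common]
    xv = [ratings_v[m] for m in common]
    mu, mv = sum(xu) / len(xu), sum(xv) / len(xv)
    num = sum((a - mu) * (b - mv) for a, b in zip(xu, xv))
    den = math.sqrt(sum((a - mu) ** 2 for a in xu) * sum((b - mv) ** 2 for b in xv))
    return num / den if den else 0.0

def global_trust(direct, indirect, n_interactions):
    theta = n_interactions / (1 + n_interactions)   # weight shifts toward direct trust with experience
    return theta * direct + (1 - theta) * indirect

# Toy usage: two past ratings of one SP, plus an assumed indirect (reputation) value.
d = direct_trust([(0.9, 1.0), (0.6, 5.0)], now=6.0)
s = pcc_similarity({"sp1": 0.9, "sp2": 0.4}, {"sp1": 0.8, "sp2": 0.5})
print(round(d, 3), round(s, 3), round(global_trust(d, indirect=0.7, n_interactions=2), 3))
```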
Ground truth dataset
The Hotel Guest dataset is regarded as a reliable information source, which serves as ground truth for our user profiling methods. In specific, the Hotel Guest dataset includes a large number of real-names of hotel guests, as well as other private information. Given the truthfulness of the Hotel Guest dataset, we use it to construct training data and evaluate the prediction accuracy of test data. To identify attack targets, we use the email address as the unique identifier of one person, who may leave the same email when registering at different websites [42,59].
Result analysis
We use the dataset LU_QQ to train the model, and then use the model to predict the labels of the 174 million names displayed in QQ groups, which hand-crafted rules cannot differentiate. We identify the real names of 128 million QQ IDs. To evaluate the correctness of these results, we evaluate our RNI model on data derived from U_QQ. We randomly sample 10% of QQ users in U_QQ and use their real names and nicknames as training data; the names of the remaining QQ users in U_QQ are used as test data. We repeat this process 10 times and report the average performance in terms of precision, recall, and F1, defined as
precision = #detected real names / #reported real names,
recall = #detected real names / #real names,
F1 = 2 · precision · recall / (precision + recall),
where the reported real names refer to the real names given by our method RNI, which may include false results (i.e., nicknames wrongly identified as real names). The results are shown in Table 1, which includes results when using only OSN features, only content features, or both. The precisions and recalls reported in Table 1 are quite high, which means that a large number of real names can be correctly identified and potentially used to reveal further user information. The results also show that RNI based on content features exhibits higher recall but lower precision than RNI based on OSN features. When combining the two sets of features, the performance of RNI is improved with higher precision. Moreover, we looked into the falsely detected real names that should be nicknames, and found that nearly 40% of them actually include real family names within the full nicknames. The above results reflect that RNI can accurately identify real names in QQ GRP, and that the real names of the 128 million QQ users we identified are highly reliable. Figure 2 shows the age distribution of 313 million QQ users in QQ GRP. As seen in Fig. 2, it is a reasonable distribution. We can observe that 86% of QQ users are between 11 and 40, while nearly half of QQ users' ages are between 21 and 30. In addition, we can see that 7.8% of QQ users provide ages younger than 3 years, and 2.4% of users provide ages older than 100 years. It is clear that most of those younger than 3 years or older than 100 years intentionally provide wrong ages.
In order to evaluate our estimation, we calculate the real ages of the 341,826 users in U_QQ based on the birth dates shown in their citizen IDs in the Hotel Guest dataset. These real ages are accurate and are used as ground truth. Figure 3a shows the histogram of these users' real ages. For comparison, we also show their ages displayed on QQ. Comparing the displayed QQ ages with the real ages, in the age group [0-20] many QQ users set their displayed age older than their actual age, while in the age group [21-40] many users set it younger than their actual age. Our estimation has a distribution close to the actual age distribution. Here, we estimate the age of each user u as â_u and compare two error measures: δ, the absolute difference between the actual age and the QQ displayed age, and δ̂, the absolute difference between the actual age and our estimated age.
We can see that our method exhibits smaller errors. That is to say, we can make a good prediction of user ages based on their friends in the same groups, even when some of them intentionally set their ages to wrong values. We are also interested in studying the users who intentionally provide a wrong age. Actually, about 10% of users in U_QQ set their age to values smaller than 4 or larger than 99. Let U_ab denote the set of these users who displayed wrong ages on QQ. Figure 3b shows the CCDF of the age estimation errors when applying our method to the users in U_ab. We can see that our method can estimate the ages of 80% and 90% of the users in U_ab with an error smaller than 5 and 10 years, respectively. The above results demonstrate the capability of our method for estimating QQ users' real ages. Finally, we used our method to estimate the real ages of all QQ users (not all of them are in U_QQ and have ground truth real ages). The distribution of estimated ages is shown in Fig. 2. Compared with the displayed ages, we have no estimates less than 10 or greater than 90, but more in the interval [21-30]. This is because our method corrects the ages of many users in [21-30] who provided wrong ages.
Next, we randomly sample 200 QQ IDs from QQ GRP and search these QQ IDs on the search engine of the QQ chatting system. As shown in Fig. 4, we observe that a large fraction of these QQ users make their attributes such as birth date and blood type publicly available to strangers on the QQ chatting system. In addition to the age and gender leaked in QQ GRP, we find that 31%, 50%, 69%, 27%, 73%, 77%, and 81% of the sampled 200 QQ users reveal their emails, blood types, birth dates, hometowns, locations, Chinese zodiac signs, and astrology zodiac signs to the public, respectively. Chinese zodiac signs and astrology zodiac signs can be used to determine the year and month of birth, which can be used to narrow the search criteria on other OSNs' search engines. Finally, we evaluate the performance of our method in mining personal information (e.g., family names, birth dates, cellphone numbers, and citizen IDs' last 12 digits from their passwords, email addresses, and user names) on the ground truth dataset U. Table 2(b) shows the identified fraction of users who use their personal information in their passwords/email addresses. The accuracy of our method is shown in Table 2(c). We can see that 60%, 43%, and 76% of the family names in Pinyin, birth dates, and cellphone numbers we learned from email addresses are correct, while 29%, 31%, 14%, and 38% of the family names in Pinyin, birth dates, cellphone numbers, and citizen IDs' last 12 digits we learned from passwords are correct. Table 2(d) shows the result of our method on the 335 million pairs of email addresses and passwords. We see that family names are widely used in email addresses (31%) and passwords (22%). Birth dates and mobile numbers are often used in passwords, at around 10%. Such a significant fraction of detected personal information should catch our attention. People are advised not to use this information in their emails and passwords, as attackers can launch spoofing attacks on the 335 million email addresses with quite a lot of correct personal information.
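A simple sketch of how known personal information can be matched against a password is shown below. The matching rules (verbatim substring checks on the Pinyin family name, the birth date, and the trailing phone digits) are illustrative assumptions rather than the exact heuristics used in this work, and the profile values are invented.

```python
import re

def personal_info_in_password(password, profile):
    """Flag which known attributes appear verbatim inside a password (minimal sketch).

    profile: dict with hypothetical keys 'family_name_pinyin',
    'birth_date' (YYYYMMDD), and 'cellphone'.
    """
    hits = []
    pw = password.lower()
    name = profile.get("family_name_pinyin", "").lower()
    if name and name in pw:
        hits.append("family_name")
    birth = profile.get("birth_date", "")
    if birth and (birth in pw or birth[2:] in pw):        # full date or YYMMDD
        hits.append("birth_date")
    phone = profile.get("cellphone", "")
    if phone and re.search(re.escape(phone[-8:]), pw):    # trailing digits are the most reused
        hits.append("cellphone")
    return hits

print(personal_info_in_password("wang19900712", {
    "family_name_pinyin": "wang", "birth_date": "19900712", "cellphone": "13812345678"}))
```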
Discussion
Our work has value for future studies on both privacy information in OSNs and password security. Our study of automatically identifying users' Chinese real names in anonymous OSNs enables us to identify users and correlate them with public or private information through Social Engineering Engines (SEEs). The information gathered through SEEs facilitates the inference of users' private information on OSNs and the generation of more precise user embeddings for diverse applications such as friend/interest recommendation, sentiment analysis, spammer detection, and user activity classification. As for password security, we draw conclusions in terms of password strength and guessing. First, our work can support further study of password strength for different groups of Chinese users. According to the QQ groups that users joined, we can obtain user profiles including age, educational background, interests, and so on. Based on these profiles, we are able to evaluate password strength for people in different groups. This can help to identify groups with low password strength and give them proper suggestions. Second, we propose several methods to correctly mine users' private information, which has been demonstrated to be highly relevant to users' passwords. Therefore, our work helps to address the key challenge in password guessing: how to choose the most effective password candidates. Furthermore, our work can make a good prediction for some private information even when it is intentionally set to a wrong value.
Conclusion
To the best of our knowledge, this is the first work to comprehensively study the leaked datasets of recent years in China through social network analytics. We evaluate the risk of leaked datasets and show that third parties such as attackers and misbehaving companies are able to successfully correlate these leaked datasets and accurately learn users' profiles such as real name, OSN IDs, age, gender, birth date, social connections, cellphone numbers, and citizen IDs. By studying these leaked datasets together, we find that privacy information leakage and password cracking facilitate each other. To mitigate the risks of these leaked datasets, we note that the privacy settings offered by some Internet service providers are weak for the users involved in these data leaks, and that more measures are needed (e.g., removing the profiles from search engines) to protect these users beyond simply changing passwords. Finally, we also give our suggestions for both Internet service providers and users. | 8,254.4 | 2021-03-09T00:00:00.000 | ["Computer Science"] |
Extracellular vesicles from vaginal Gardnerella vaginalis and Mobiluncus mulieris contain distinct proteomic cargo and induce inflammatory pathways
Colonization of the vaginal space with bacteria such as Gardnerella vaginalis and Mobiluncus mulieris is associated with increased risk for STIs, bacterial vaginosis, and preterm birth, while Lactobacillus crispatus is associated with optimal reproductive health. Although host-microbe interactions are hypothesized to contribute to reproductive health and disease, the bacterial mediators that are critical to this response remain unclear. Bacterial extracellular vesicles (bEVs) are proposed to participate in host-microbe communication by providing protection of bacterial cargo, delivery to intracellular targets, and ultimately induction of immune responses from the host. We evaluated the proteome of bEVs produced in vitro from G. vaginalis, M. mulieris, and L. crispatus, identifying specific proteins of immunologic interest. We found that bEVs from each bacterial species internalize within cervical and vaginal epithelial cells, and that epithelial and immune cells express a multi-cytokine response when exposed to bEVs from G. vaginalis and M. mulieris but not L. crispatus. Further, we demonstrate that the inflammatory response induced by G. vaginalis and M. mulieris bEVs is TLR2-specific. Our results provide evidence that vaginal bacteria communicate with host cells through secreted bEVs, revealing a mechanism by which bacteria lead to adverse reproductive outcomes associated with inflammation. Elucidating host-microbe interactions in the cervicovaginal space will provide further insight into the mechanisms contributing to microbiome-mediated adverse outcomes and may reveal new therapeutic targets.
Vaginal microbial communities are associated with a spectrum of outcomes in gynecological and reproductive health. When dominated by Lactobacillus, vaginal communities are rich with lactic acid, antimicrobial substances, immunomodulatory compounds, and other bacterial factors that are believed to protect against sexually transmitted infection (STI), bacterial vaginosis (BV), and spontaneous preterm birth [1][2][3]. In contrast, in Lactobacillus-deficient communities, a diverse array of strict and facultative anaerobes deteriorates epithelial integrity by producing mucin-degrading enzymes, cytolysins, and other pro-inflammatory compounds [4][5][6]. These communities often include Gardnerella vaginalis and species of Mobiluncus, Prevotella, and Sneathia. High-diversity, Lactobacillus-deficient, anaerobe-dominated vaginal microbial communities have been associated with increased risk for many adverse reproductive outcomes including BV, HIV, spontaneous preterm birth, endometriosis, and infertility [7][8][9][10][11]. A common anaerobic bacterium implicated in these adverse outcomes is G. vaginalis, with its high within-species genomic diversity and pathogenic potential [8][9][10][11]. Another associated species, identified by recent work from our laboratory using a large pregnancy cohort, is the anaerobic bacterium Mobiluncus curtisii/mulieris 12. Despite our decades-long understanding that anaerobic vaginal bacteria confer risk of poor reproductive outcomes, the precise mechanisms and host-microbe interactions driving these adverse outcomes have remained elusive.
An emerging body of literature provides evidence that a diverse set of bacteria produce extracellular vesicles (EVs) 13,14. Similar to eukaryotic EVs, bacterial EVs (bEVs) facilitate the transfer of biomolecules between cells and have been implicated in bacteria-bacteria and bacteria-host interactions including antibiotic resistance, biofilm formation, quorum sensing, regulation of host immunity, and maintenance of epithelial integrity 15. Although the role of bEVs in the reproductive tract has been less studied, recent work has suggested that bEVs may contribute to reproductive health and disease [16][17][18][19][20][21]. We now hypothesize that bEVs from common vaginal anaerobes have discrete effects in the lower reproductive tract and are thus drivers of adverse reproductive outcomes.
In this study, we sought to characterize and functionally assess bEVs produced by Lactobacillus crispatus as a representative microbe associated with optimal reproductive outcomes, and by G. vaginalis and M. mulieris as anaerobic bacteria associated with adverse reproductive outcomes. Our objectives were to (1) characterize in vitro-produced bEVs by morphology and proteomic analysis, (2) assess internalization of bEVs by vaginal and cervical epithelial cells, (3) determine the ability of bEVs to induce immune responses in epithelial and immune cells, and (4) evaluate the role of TLR2 in bEV-induced immune responses in epithelial cells.
Results
L. crispatus, G. vaginalis, and M. mulieris produce extracellular vesicles
Bacterial EVs were isolated from L. crispatus, G. vaginalis, and M. mulieris culture supernatants using differential ultracentrifugation as previously described 16,17,[21][22][23]. Transmission electron micrographs indicate the presence of spherical and cup-shaped structures of varying sizes (Fig. 1a). Images of the NYC culture medium alone show no such structures, indicating no contaminating vesicles of non-bacterial origin (Fig. 1a). These results were confirmed by ZetaView, which indicated a nanoparticle size range of 90-420 nm in diameter for all bacterial samples (Fig. 1b). The mean ± standard deviation vesicle diameter was 159.5 ± 61.7 nm, 146.0 ± 50.9 nm, and 156.4 ± 55.0 nm for L. crispatus, G. vaginalis, and M. mulieris isolates, respectively.
bEVs from L. crispatus, G. vaginalis, and M. mulieris contain distinct protein cargos
Having confirmed production of bEVs from L. crispatus, G. vaginalis, and M. mulieris, we next sought to characterize their proteomic cargos. Gel electrophoresis showed the presence of distinct protein profiles between the three bEVs, and the absence of proteins in NYC culture medium (Supplemental Fig. 1). By liquid chromatography-mass spectrometry and peptide analysis by MaxQuant, we identified 1745 proteins across all samples, 650 and 21 of which originated from contaminants from horse (a component of NYC media) and human sources, respectively. A total of 491, 336, and 247 proteins of bacterial origin were from G. vaginalis, M. mulieris, and L. crispatus, respectively (Fig. 2a). The portion of these proteins with orthologous functions between bacterial species is totaled in the overlapping areas of Fig. 2a; the proteins common to all three samples are listed individually in Supplemental Table 1. These shared proteins, including ribosomal proteins and metabolic enzymes, are consistent with reports of bEV proteomes from multiple species 16,19,24.
Finally, we categorized proteins based on predicted biological function (Table 1), similarly to previous reports 16,17,21. L. crispatus, G. vaginalis, and M. mulieris bEVs each contained proteins associated with many metabolism- and replication-associated functions. Only M. mulieris-derived bEVs contained any proteins related to cytoskeleton function, and only L. crispatus-derived bEVs did not contain any proteins related to defense mechanisms or cell motility functions.
L. crispatus, G. vaginalis, and M. mulieris bEVs rapidly internalize within cervical and vaginal epithelial cells
Having characterized the proteomic cargo of L. crispatus, G. vaginalis, and M. mulieris bEVs, we next determined whether these bEVs could be internalized in cervical and vaginal epithelial cells. Using confocal microscopy to visualize bEVs labeled with rhodamine B isothiocyanate (RBITC) and epithelial cells stained for E-cadherin, we observed cellular uptake of EVs after 1, 4, and 24 h of exposure to 10^9 bEVs (equivalent to 5 × 10^3 bEVs/cell, consistent with previous studies) 16,27 (Fig. 3). At each timepoint assessed, bEVs from each vaginal bacteria were indicated by both punctate signal and diffuse fluorescence, suggesting the digestion of some bEVs and subsequent release of cell-permeant RBITC from the bEV interior to the cell cytoplasm. No fluorescence in the red channel was observed after epithelial cell exposure to an equivalent concentration of RBITC alone at any timepoint (Fig. 3a). bEVs from L. crispatus were present at the cell surface of ectocervical, endocervical, and vaginal epithelial cells at 1 h, localized intracellularly by 4 h, and were largely cleared by 24 h (Fig. 3b). Similarly, bEVs from G. vaginalis were mostly observed at the cell interface at 1 h, within the cell by 4 h, and no longer abundant by 24 h (Fig. 3c). bEVs from M. mulieris were evident at the epithelial cell surface at 1 h but demonstrated an increased tendency to aggregate at the cell periphery at 4 h and distributed intracellularly by 24 h (Fig. 3d). When present intracellularly, bEVs from all three species appeared in cytoplasmic and perinuclear regions. Some differences in cell shape, like increased size and protrusions of the cell surface, were also observed at times of high bEV uptake. No differences in uptake between ectocervical, endocervical, and vaginal epithelial cell types were apparent in the time periods studied. Rapid endocytosis of bEVs by epithelial cells was confirmed by live imaging over 1 h for each bEV type and epithelial cell type (Supplemental Fig. 2 and Supplemental Videos 1-12).
Cervical and vaginal epithelial cells produce bacterial-specific cytokine responses to bEVs
Understanding that vaginal and cervical epithelial cells can quickly internalize L. crispatus, G. vaginalis, and M. mulieris bEVs, we next sought to determine the immune responses induced by these bEVs from each cell type.We first assessed whether bEVs increase IL-8 expression in a dose-dependent manner.Cervical and vaginal epithelial cells (200,000 per well) were exposed to each bEV isolate at 10 7 , 10 8 , and 10 9 bEVs/well for 1, 4, and 24 h.These doses are equivalent to 50, 500, and 5,000 bEVs/cell, which is consistent with previous work 27 .Cytokine responses were most robust after 24 h (Supplemental Fig. 3) and all subsequent assessments used this timepoint.In ectocervical cells at the highest dose of G. vaginalis bEVs, IL-8 expression increased 1.7-fold from non-treated (NT) cells.Exposure to the 10 8 and 10 9 doses of M. mulieris bEVs resulted in a 1.6-and 2.1-fold increase in IL-8 expression compared to NT, respectively.IL-8 expression was not significantly changed compared to NT after exposure to bEV preparations from NYC culture medium (control) and L. crispatus bEVs at any dose, G. vaginalis bEVs at the two lowest doses, and M. mulieris bEVs at the lowest dose (Fig. 4a).
A heightened dose-dependent response to G. vaginalis and M. mulieris bEVs was found in endocervical cells (Fig. 4b).Exposure to the 10 8 and 10 9 doses of G. vaginalis bEVs resulted in a 4.5-and 13.3-fold increase in IL-8 expression compared to NT, respectively.Doses of 10 7 , 10 8 , and 10 9 M. mulieris bEVs led to a 3.4-, 8.6-, and 13.8-fold increase in IL-8 expression compared to NT, respectively.The dose-dependent response to G. vaginalis and M. mulieris bEVs was similar in vaginal epithelial cells (Fig. 4c).The two highest doses of G. vaginalis bEVs led to a 2.1-and 5.6-fold increase in IL-8 expression from NT, respectively.Increasing doses of M. mulieris bEVs resulted in a 3.5-and 8.2-fold increase in IL-8 expression compared to NT, respectively.The observed changes in IL-8 expression were not due to bEV cytotoxicity and induction of cell death, as demonstrated by the lack of significant changes in lactate dehydrogenase release after 24 h of exposure to each bEV preparation at the highest dose (10 9 ) across the three epithelial cell types (Fig. 4d).
To more comprehensively assess bEV-induced immune responses from cervical and vaginal epithelial cells, we conducted a 29-plex Luminex array on cell media after 24 h of exposure to each bEV preparation (Fig. 4e-g).Three analytes, EGF, eotaxin, and IL-4, were undetectable across all samples.For the remaining 26 cytokines, expression after exposure to bEVs preparation from NYC culture medium (control) was not significantly different compared to non-treated ectocervical, endocervical, or vaginal epithelial cells.Therefore, Fig. 4e-g shows fold-changes in cytokine expression after bEV exposure relative to NYC culture medium controls.In ectocervical cells, exposure to G. vaginalis bEVs resulted in significant increases of 8 cytokines: G-CSF, GM-CSF, IL-13, IL-6, IL-7, IL-8, MIP-1α, and TNFα (Fig. 4e).Ectocervical cell exposure to M. mulieris bEVs led to significant overexpression of 9 cytokines: G-CSF, GM-CSF, IL-13, IL-6, IL-7, IL-8, MIP-1α, MIP-1β, and TNFα (Fig. 4e).Exposure to L. crispatus bEVs did not result in significant changes in cytokine levels.The fold-change in cytokine expression and corresponding adjusted p-values, for each epithelial cell type, are listed in Supplemental Table 5.
Again, exposure to L. crispatus bEVs did not induce significant overexpression of any cytokine.
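For readers who want a concrete picture of the fold-change and multiple-testing step reported above, the sketch below computes per-cytokine fold-changes and Benjamini-Hochberg adjusted p-values from replicate measurements. It is illustrative only: the cytokine values are invented, and a two-sample t-test stands in for whatever statistical test was applied in the actual analysis.

```python
import numpy as np
from scipy import stats

def cytokine_response(treated, control):
    """Fold-change and Benjamini-Hochberg adjusted p-values per cytokine (minimal sketch).

    treated, control: dict cytokine -> list of replicate concentrations.
    """
    names = sorted(treated)
    fold = {c: np.mean(treated[c]) / np.mean(control[c]) for c in names}
    pvals = np.array([stats.ttest_ind(treated[c], control[c]).pvalue for c in names])
    # Benjamini-Hochberg adjustment (step-up, enforcing monotonicity from the largest p-value).
    order = np.argsort(pvals)
    adj = np.empty_like(pvals)
    m = len(pvals)
    running_min = 1.0
    for rank, idx in enumerate(order[::-1]):
        running_min = min(running_min, pvals[idx] * m / (m - rank))
        adj[idx] = running_min
    return {c: (round(fold[c], 2), round(adj[i], 4)) for i, c in enumerate(names)}

treated = {"IL-8": [820, 900, 760], "TNFa": [55, 60, 48]}
control = {"IL-8": [100, 120, 95], "TNFa": [50, 52, 49]}
print(cytokine_response(treated, control))
```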
L. crispatus, G. vaginalis, and M. mulieris bEVs induce immune responses from monocytes
Given that epithelial cells expressed a pro-inflammatory cytokine response to G. vaginalis- and M. mulieris-derived bEVs, we next assessed the ability of these bEVs to induce immune responses from immune cells common to the cervicovaginal space. Using THP-1 monocytes as a representative immune cell type [28][29][30], we exposed cells to each L. crispatus, G. vaginalis, and M. mulieris bEV for 24 h and then analyzed cell culture media with the 29-plex Luminex cytokine/chemokine array, as was done with cervicovaginal epithelial cells, to investigate the activation of an innate immune response. No significant changes in cytokine levels were induced by bEV preparations from NYC culture medium (control) relative to non-treated monocytes. Exposure to bEVs from L. crispatus altered the levels of just 2 cytokines, while bEVs from G. vaginalis and M. mulieris induced significant increases in 22 and 26 cytokines relative to NYC-treated cells, respectively (Fig. 5). All of the cytokines overexpressed after exposure to G. vaginalis were also increased by M. mulieris bEVs: EGF, G-CSF, GM-CSF, IFNγ, IL-10, IL-12p40, IL-15, IL-17A, IL-1RA, IL-1β, IL-2, IL-4, IL-6, IL-7, IL-8, IP-10, MCP-1, MIP-1α, MIP-1β, TNFα, TNFβ, and VEGF. Exposure to M. mulieris bEVs additionally increased expression of eotaxin (761-fold), IFNα2 (33.9-fold), IL-12p70 (54.3-fold), and IL-1α (10.6-fold). A full list of cytokine expression levels with fold-changes and adjusted p-values can be found in Supplemental Table 6.
Cytokine response to L. crispatus, G. vaginalis, and M. mulieris bEVs is mediated through TLR2-activated signaling pathways
Studies from our lab and others have shown that G. vaginalis induces an innate immune response through TLR2 31,32. Therefore, we sought to assess whether the intracellular immune pathways activated by L. crispatus, G. vaginalis, and M. mulieris bEVs were dependent on TLR2 activation. Using HEK-TLR2 reporter cells, we found that only G. vaginalis and M. mulieris bEVs induced activation of NF-kB (Fig. 6a) and subsequent release of IL-8 (Fig. 6b) relative to non-treated cells. Both responses were found to be dose-dependent (Supplemental Fig. 4). In contrast, L. crispatus bEVs did not activate either NF-kB or IL-8 at any dose. Cell death was not significantly affected by treatment with any bEV type compared to non-treated cells (Fig. 6c).
Discussion
In this study, we have comprehensively characterized bEVs derived from clinically relevant vaginal bacteria, i.e., L. crispatus, G. vaginalis, and M. mulieris. These bacteria are known to be associated with reproductive health and disease. bEVs from these vaginal bacteria carry potent bacterial cargo capable of internalizing within cervical and vaginal epithelial cells. Ascribing a novel mechanism by which vaginal anaerobes drive adverse reproductive outcomes, bEVs from G. vaginalis and M. mulieris induce immune responses from both epithelial and immune cells. Similarly, providing biological rationale for the association between Lactobacillus spp. and reproductive health, bEVs from L. crispatus do not induce immune activation in these same cell types. Collectively, our results provide new insights into the molecular mechanisms by which these vaginal bacteria can alter the cervicovaginal environment, leading to diverse adverse reproductive outcomes.
Novel to this study, we demonstrated bEV production from M. mulieris and are among the first to describe bEV production from L. crispatus and G. vaginalis 16,17,19 .Similar to previous reports, we isolated bEVs by differential ultracentrifugation and confirmed the presence of bEVs from cultures of all three bacteria grown in vitro under optimized conditions using electron microscopy and nanoparticle tracking analysis.bEVs ranged in size from 90 to 420 nm in diameter, similar to EVs derived from eukaryotic cells and other bacteria 15 .This characteristic size suggests bEV biogenesis by blebbing of the inner membrane rather than explosive cell lysis, which results in EVs up to 800 nm in diameter and death of the originating cell 13,14,33 .In contrast, blebbing-derived cytoplasmic membrane vesicles are not dependent on cell death and in fact can promote bacterial survival through communication with bacterial and host cells 14,[34][35][36][37][38] .Aligned with cytoplasmic biogenesis, proteomic analysis indicated that L. crispatus, G. vaginalis, and M. mulieris bEVs are enriched in cytoplasmic proteins relative to their live bacterial counterparts.While cytoplasmic proteins composed the majority of each bEV proteome, cell wall-, cell membrane-, and extracellular-associated proteins were also present in our analysis, similar to previous reports for Grampositive bacteria including Gardnerella and Lactobacillus isolates 16,17 .
Several proteins of functional interest were identified by proteomic analysis, providing insight into mechanisms of microbiome-mediated pathogenicity and protection of the vaginal microenvironment. G. vaginalis bEVs contain vaginolysin, a pore-forming toxin capable of inducing cell lysis in cervicovaginal epithelial cells and red blood cells, and which is present in higher concentrations in women with indicators of bacterial vaginosis 25. M. mulieris, a flagellated motile bacterium, produces bEVs that contain several flagellin-family proteins (flagellin domain protein, basal body rod proteins, hook-basal body protein, and motor switch protein) which may contribute to stimulation of an immune response mediated by TLR5 39. Additionally, M. mulieris bEVs contain phage proteins (capsid and tail) which are capable of immune stimulation 40. Both G. vaginalis and M. mulieris bEVs contain CRISPR-associated proteins (CasA, CasC, CasD, and CasE), which are responsible for DNA targeting; iron-cluster proteins (SufB, SufC, SufD, and NifU), which are critical to electron transport but can contribute to oxidative stress 41; and penicillin-binding protein, which can bind penicillin and other β-lactam antibiotics to promote antibiotic resistance 26. Comparisons of the bEV proteome across different bacterial species associated with adverse reproductive outcomes are novel and critical to further our understanding of the molecular pathways involved in microbe-host interactions in reproduction. These insights should inform future treatment strategies, as the impact of bEVs appears to be associated with negative outcomes and likely worsened by current antibiotic therapeutics [42][43][44][45][46].
Whereas proteins carried by G. vaginalis and M. mulieris bEVs are likely to promote a host inflammatory response, proteins in L. crispatus bEVs may serve to protect the epithelial barrier. Bacterial surface layer (S-layer) proteins, for example, mediate adherence to epithelial cells and protect against degradative enzymes in the mucus 47. L. crispatus is the only known vaginal species to produce an S-layer, which has been previously linked to high adherence to cervicovaginal epithelial cells and antagonism to pathogens of the genitourinary tract 48,49. Incorporation of this protein into L. crispatus bEVs suggests that there might be enhanced EV binding to epithelial cells, facilitating cellular internalization. Similarly, bacterial proteins with Ig-like domains (distinct from human antibodies) have many functional roles, including adhesion. These proteins may be useful for facilitating bEV-cell contact and reducing motility of flagellated bacteria, protecting the vaginal microenvironment 50,51. Lastly, bacteriocin helveticin-J family protein is an antimicrobial agent that inhibits the growth of closely related Lactobacillus species. Helveticin may contribute to L. crispatus regulation and dominance of the vaginal microbiome if bEVs are degraded extracellularly or internalized by competing bacteria, which must be investigated in future studies 52.
A major advantage of intercellular communication by bEVs compared to soluble factors is the ability to carry cargo to a destination cell. Here, we found that bEVs from each vaginal bacterium tested were internalized by vaginal (VK2), endocervical, and ectocervical epithelial cells within 1-4 h of exposure. While some previous studies have found VK2 uptake of live L. crispatus, G. vaginalis, and G. vaginalis-derived EVs within this time period, our study importantly adds the characterization of cervical epithelial cells 17,53. Compared to vaginal epithelial cells, cervical epithelial cells have different embryological origins and cellular functions, and have previously demonstrated decreased responsiveness to microbial stimulation 32. Despite these different embryological origins, our findings support the ability of both cervical and vaginal cells to quickly internalize bEVs. Whether the mechanism of entry or the incorporation of bEV cargo is similar between the different epithelial cell lines requires further investigation. Further studies must investigate cervicovaginal mucus as a natural barrier to bEV uptake in vivo, although the small size and other biological properties of EVs may still enable cellular interaction. Inhibitors of endocytosis and cytoskeletal restructuring, such as cytochalasin-D, or of bacterial EV production, such as indole and amidine, are additional potential therapeutic strategies to prevent or treat the effects of EV uptake by cervical and vaginal epithelial cells 53,54.
One main function of cervical and vaginal epithelial cells is to recruit an inflammatory response to bacterial threats. Several previous studies have shown a robust multi-cytokine response to live G. vaginalis and other BV-associated bacteria in vaginal, endocervical, and ectocervical epithelial cells, although the number of elevated cytokines and their increase in expression vary widely between bacteria and epithelial cell type [55][56][57]. Our laboratory has also previously found that culture supernatants from G. vaginalis and M. mulieris can recapitulate the pro-inflammatory response in the three epithelial cell lines, partially dependent on TLR2 activation and signaling 32,58. In the present study, we observed robust inflammatory signaling from each epithelial cell line upon exposure to G. vaginalis and M. mulieris bEVs but not L. crispatus bEVs. In contrast to prior experiments with live and supernatant G. vaginalis exposure, which found vaginal epithelial cells to have the most potent inflammatory response, our study found that endocervical cells were the most sensitive responders to bEVs 32. This result reinforces the specificity of epithelial cell type and bacterial product in mediating host-microbe interactions in the reproductive tract.
While epithelial cells in the reproductive tract can contribute to local inflammation, resident immune cells also play a critical role in immune regulation in the cervicovaginal space. Demonstrating the potential role of immune cells in host-microbe interactions, we show that stimulation of monocytes by G. vaginalis and M. mulieris bEVs induced a potent cytokine response; this response in monocytes was increased relative to epithelial cells at the same bEV/cell ratio, and nearly every measured cytokine was significantly increased after 24 h of bEV exposure. THP-1 cells have been previously used to study immune reactions to cervicovaginal bacteria 27,59,60. In a previous study, exposure of THP-1 cells to live G. vaginalis resulted in cell death, production of cytokines and reactive oxygen species, and markers of NLRP3 inflammasome-mediated pyroptosis 60. In another study, L. crispatus induced differentiation of THP-1 cells into a dendritic-like phenotype 59. Our study adds to prior work by revealing how bEVs from common vaginal bacteria, and not just the live bacteria, can activate local immune cells. Our findings support the role of bEVs from L. crispatus, G. vaginalis, and M. mulieris, and certainly other vaginal bacteria, in mediating host-microbe interactions, specifically by driving a pro-inflammatory response.
While there are likely diverse mechanisms by which vaginal bacteria produce and release bEVs and by which these bEVs induce molecular effects in cells, this study does demonstrate an important role for TLR2 signaling in the observed cytokine response to G. vaginalis and M. mulieris EVs but not to L. crispatus EVs. We have previously shown that live L. crispatus and G. vaginalis and their supernatants activate TLR2 signaling 32. TLR2 is activated by bacterial components including lipoproteins, which are ubiquitous to all bacteria and highly expressed in the cytoplasmic membrane of Gram-positive bacteria including L. crispatus, G. vaginalis, and M. mulieris; of note, the latter two bacteria usually stain Gram-negative 31. While all of these vaginal bacteria have a Gram-positive cell wall, lipoproteins were only found in the proteome of G. vaginalis and M. mulieris EVs, not L. crispatus EVs. As such, our proteomic findings are consistent with our in vitro findings that bEVs from L. crispatus do not activate TLR2. Therefore, bEVs represent an important mechanism by which L. crispatus can evade immune recognition by host cells, and by which host cells recognize and respond to many non-optimal vaginal bacteria.
Our study is limited by the use of ATCC, rather than clinical, strains of the selected vaginal bacteria. The selected ATCC strains of G. vaginalis and M. mulieris were originally isolated from women with bacterial vaginosis 61,62, and each selected strain shares a high degree of genetic similarity with other clinical strains of the same species [63][64][65]. While the selected ATCC strain of L. crispatus has previously been shown to produce a high amount of lactic acid and be protective against Chlamydia trachomatis 66, even phylogenetically closely related strains of L. crispatus have been shown to differ in production of lactic acid, bacteriocins, and other antimicrobial compounds 67. Recent characterization of multiple clinical strains of G. vaginalis, and even of different Gardnerella species, has shown marked variation in virulence properties (biofilm formation, sialidase activity, and antibiotic resistance), which may be linked to clinical outcomes 68. Further study of the bEVs produced by clinical strains directly associated with adverse outcomes would provide insight into strain-specific differences in bacterial function and effects on reproductive outcomes. Additionally, our study only assesses three vaginal bacterial species associated with health and disease. While G. vaginalis and M. mulieris have been widely implicated in adverse reproductive outcomes, further studies will need to address whether additional non-optimal species (Prevotella, Sneathia) produce bEVs with functional importance; whether biogenesis, cargo, and functionality of bEVs are altered in polymicrobial states; and whether more complex models of the cervicovaginal epithelium and resident immune cell populations, especially models that include mucus, would reveal additional functional activities of bEVs. Metabolomic, RNA, and DNA cargo of bEVs must also be evaluated. Such investigation is currently technically challenging but would reveal bEV biology at the molecular scale and potentially lead to novel therapeutic strategies. Finally, recent results have demonstrated the ability of maternal gut microbiota-derived EVs to reach the fetal environment; analogously, whether vaginal bEVs can travel to the uterus and/or fetus must be studied 24.
Overall, our study demonstrates that L. crispatus, G. vaginalis, and M. mulieris produce bEVs that carry specific proteomic cargo, are internalized within cervical and vaginal epithelial cells, and induce immune responses from epithelial and immune cells. bEVs represent a biological modulator for cellular crosstalk and for protected delivery of unstable cargo like proteins and genetic material to distant cells and tissues. Our results suggest that proteins from G. vaginalis and M. mulieris are delivered to host epithelial and immune cells by bEVs, inducing inflammation and potentially contributing to adverse reproductive outcomes. In contrast, several proteins from L. crispatus bEVs may confer protection to the host epithelium. This study provides a novel contribution by directly comparing the physical, biochemical, and immunogenic properties of bEVs from different vaginal bacterial species, enabling greater insight into protective and pathogenic host-microbe interactions in the vaginal environment. Further studies must evaluate bEVs as contributors to microbiome-mediated adverse outcomes and may reveal new therapeutic targets in the female reproductive tract.
Bacterial culture and isolation of extracellular vesicles

Bacteria were grown at 37 °C in an anaerobic glove box (Coy Labs, Grass Lake, MI). G. vaginalis (ATCC 14018), L. crispatus (ATCC 33197), and M. mulieris (ATCC 35243) were grown in New York City (NYC) III broth supplemented with 1% horse serum (Gibco, Thermo Fisher Scientific) and pre-cleared of extracellular vesicles by overnight ultracentrifugation at 100,000 x g and 4 °C. Bacterial growth was measured and quantified by colony forming unit (CFU) assays.
To remove bacterial cells and cell debris, cultures were centrifuged at 3500 x g for 30 min, filtered through a 0.04 μm filter (Fisher Scientific), and centrifuged again at 30,000 x g for 33 min. bEVs were then isolated from the cleared supernatant by ultracentrifugation at 100,000 x g for 70 min and washed once in phosphate buffered saline (PBS). Finally, the pellet containing bEVs was resuspended in 100 μL of 10 mM HEPES and 25 mM NaCl. Particle analysis and concentration assessment using ZetaView Nanoparticle Tracking Analysis (Particle Metrix) were performed with 2 μL of each sample. Samples were stored at −80 °C until use.
Transmission electron microscopy
A 5 µL volume of sample was applied to a thin carbon grid that had been glow discharged for 2 min using a Pelco Easyglow instrument. Then 5 µL of freshly made 2% uranyl acetate stain solution was applied and incubated with the sample for 2 min on the grid. Excess sample and stain were blotted away with Whatman filter paper. The staining process was repeated, and the grid was allowed to dry until imaged.
TEM micrographs were collected using a Tecnai T12 TEM microscope at 100 keV. The images were recorded on a Gatan OneView 4K × 4K camera. Each image was collected by exposing the sample for 4 s, and a total of 100 dose-fractionated images were collected and combined into a single micrograph. The data were collected at −1.5 to 2 microns underfocus at 30,000-40,000× magnification.
Protein extraction
Samples were solubilized in 55 µL of extraction buffer containing 5% sodium dodecyl sulfate (SDS, Affymetrix), 8 M urea (Bio-Rad), 100 mM Tris-HCl pH 8.0 (Rockland), and protease inhibitor cocktail (Roche cOmplete, EDTA-free). To shear DNA and ensure complete solubilization, samples were sonicated for 10 min at 10 °C in a Covaris R230 focused-ultrasonicator with the following settings: Dithering: Y = 3.0, Speed = 20.0, PIP: 360.0, DF: 30, CPB: 200. Samples were centrifuged at 3000 x g for 10 min to clarify the lysate. 1 µL of each sample was taken to estimate protein concentration by in-gel staining with Bradford Coomassie solution and intensity analysis with GelAnalyzer 19.1, using a serial dilution of an in-house generated E. coli lysate standard. All samples were processed in parallel from the same experiment.
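As a worked illustration of the standard-curve step described above, the sketch below fits a linear relation between band intensity and the known concentrations of a serially diluted lysate standard and then inverts it to estimate sample concentrations. It is a minimal sketch, assuming example intensity values and an approximately linear intensity-concentration relation over the range used; the actual quantification in this study was performed with GelAnalyzer.

```python
import numpy as np

# Hypothetical band intensities (arbitrary units) for a serial dilution of the
# E. coli lysate standard with known concentrations (µg/µL); values are illustrative.
std_conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
std_intensity = np.array([1.1e4, 2.0e4, 4.3e4, 8.5e4, 1.7e5])

# Fit a straight line: intensity = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_intensity, 1)

def estimate_concentration(sample_intensity):
    """Invert the standard curve to estimate sample concentration (µg/µL)."""
    return (sample_intensity - intercept) / slope

print(estimate_concentration(6.0e4))  # e.g. a bEV sample band intensity
```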
In-solution digestion

100 µg of each sample was digested per the S-Trap Micro (Protifi) manufacturer's protocol 69. Briefly, proteins were reduced in 5 mM TCEP (Thermo), alkylated in 20 mM iodoacetamide (Sigma), then acidified with phosphoric acid (Aldrich) to a final concentration of 1.2%. Samples were diluted with 90% methanol (Fisher) in 100 mM TEAB, then loaded onto an S-Trap column and washed three times with 50/50 chloroform/methanol (Fisher) followed by three washes of 90% methanol in 100 mM TEAB. A 1:10 ratio (enzyme:protein) of trypsin (Promega) and LysC (Wako) suspended in 20 µL of 50 mM TEAB was added, and samples were digested for 1.5 h at 47 °C in a humidity chamber. After incubation, peptides were eluted with an additional 40 μL of 50 mM TEAB, followed by 40 μL of 0.1% trifluoroacetic acid (TFA) (Pierce) in water, and finally 40 μL of 50/50 acetonitrile:water (Fisher) in 0.1% TFA. Eluates were combined, and organic solvent was removed by vacuum centrifugation. Samples were then desalted using an Oasis HLB µElution plate (30 µm, Waters). Wells were conditioned two times with 200 µL of acetonitrile and equilibrated three times with 200 µL of 0.1% TFA. Samples were applied, washed three times with 200 µL of 0.1% TFA, and eluted directly into autosampler vials in three increments of 65 µL of 50:50 acetonitrile:water. Eluates were then dried by vacuum centrifugation and reconstituted in 0.1% TFA containing iRT peptides (Biognosys, Schlieren, Switzerland). Peptides were quantified by A280 measurement on a NanoDrop 1000 (Thermo) and adjusted to 0.4 µg/µL for injection.
Mass spectrometry data acquisition
Samples were analyzed on a Q Exactive HF mass spectrometer (Thermo Fisher Scientific, San Jose, CA) coupled with an Ultimate 3000 nano UPLC system and an EasySpray source. Peptides were separated by reverse-phase (RP)-HPLC on an Easy-Spray RSLC C18 column (2 µm, 75 μm i.d. × 50 cm) at 50 °C. Mobile phase A consisted of 0.1% formic acid and mobile phase B of 0.1% formic acid/acetonitrile. Peptides were eluted into the mass spectrometer at 300 nL/min, with each RP-LC run comprising a 95 min gradient from 1 to 3% B in 5 min and 3-45% B in 90 min. The mass spectrometer was set to repetitively scan m/z from 300 to 1400 (R = 120,000), followed by data-dependent MS/MS scans (R = 15,000) on the twenty most abundant ions with a normalized collision energy (NCE) of 27, using dynamic exclusion with a repeat count of 1 and a repeat duration of 30 s. The FTMS full scan AGC target value was 3e6, while the MSn AGC target was 2e5. The MSn injection time was 32 ms; microscans were set at one. Rejection of unassigned, 1, 6-8, and >8 charge states was enabled.
System suitability and quality control
The suitability of the Q Exactive HF instrument was monitored using QuiC software (Biognosys, Schlieren, Switzerland) for the analysis of the spiked-in iRT peptides. Meanwhile, as a measure of quality control, we injected a standard E. coli protein digest prior to and after injecting the sample set and collected the data in Data Dependent Acquisition (DDA) mode. The collected data were analyzed in MaxQuant, and the output was subsequently visualized using the PTXQC package to track the quality of the instrumentation 70,71.
Fluorescent labeling, immunocytochemistry, and confocal imaging

Ectocervical, endocervical, and vaginal epithelial cells (n = 3 samples per condition) were plated at 2.0 × 10⁵ cells/well in 4-chamber slides (ibidi 80426) coated with 0.1% gelatin. bEVs were stained with rhodamine B isothiocyanate (RBITC, 0.2 mg/mL in 20 mM HCl) for 30 min, collected by centrifugation at 100,000 x g for 25 min, and washed once in PBS. Freshly stained bEVs (10⁹ bEVs/well) were added to the cells. Cells were fixed with 10% formalin at 1 h, 4 h, or 24 h and incubated with an E-cadherin primary antibody (ab231303, 1:50 dilution) and then a secondary antibody (ab150105, 1:750 dilution) for 1 h at room temperature each. Cells were washed three times with cold 1x PBS for 5 min between steps. Slides were washed and dried for 30 min in the dark. Mounting medium (Dako, Agilent Technologies, Santa Clara, CA, USA) was added to each slide, and a glass coverslip was placed on top. Slides were stored at 4 °C (until imaged by a Zeiss 880 confocal microscope in the Cell & Developmental Biology Microscopy Core Facility) and at −20 °C for long-term storage.
In a subset of samples for live imaging, cells were plated at 2.0 × 10⁵ cells/dish on 35 mm high glass-bottom μ-dishes (ibidi 81156) coated with 0.1% gelatin. After 24 h, Abcam Cytopainter staining solution (ab138891) was added to cells for 30 min and then washed three times in fresh media. Freshly stained bEVs (10⁹ bEVs/sample) were added to the cells. After another 30 min, cells were moved to a temperature-controlled chamber and imaged by the Zeiss 880 confocal microscope. At 5 independent locations per sample, 1 image was captured every minute for 30 min and compressed into a video file at 6 frames per second.
Cell death, ELISA, and Luminex assays

Ectocervical, endocervical, vaginal, and THP-1 cells (n = 3 samples per condition) were plated at 2.0 × 10⁵ cells/well in 24-well plates containing cell media without antibiotics. The next day, the cells were treated with bEV preparations (10⁹ bEVs/well) from L. crispatus, G. vaginalis, M. mulieris, or NYC culture medium in cell medium for 24 h. At the end of each experiment, cell culture medium was collected for analysis of cell death using a lactate dehydrogenase (LDH) assay, IL-8 production by ELISA, or multiple cytokine expression by Luminex.
The release of lactate dehydrogenase (LDH) from cells (n = 3-9 independent experiments per cell type) was measured using the CytoTox 96 nonradioactive cytotoxicity assay (Promega, Madison, WI). Absorbance values were recorded from a colorimetric plate reader at 490 nm.
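For readers unfamiliar with how LDH absorbance values are typically converted to a cytotoxicity readout, the snippet below applies the commonly used percent-cytotoxicity formula (background-corrected experimental release divided by background-corrected maximum release). This is a hedged sketch with made-up absorbance values; the exact calculation recommended by the CytoTox 96 kit should be followed in practice.

```python
def percent_cytotoxicity(a490_treated, a490_medium_bkg, a490_max_lysis):
    """Standard percent-cytotoxicity calculation from LDH absorbance at 490 nm.

    a490_treated    -- absorbance of medium from bEV-treated cells
    a490_medium_bkg -- absorbance of cell-free culture medium (background)
    a490_max_lysis  -- absorbance after complete lysis of untreated cells
    """
    corrected_exp = a490_treated - a490_medium_bkg
    corrected_max = a490_max_lysis - a490_medium_bkg
    return 100.0 * corrected_exp / corrected_max

# Example with hypothetical absorbance readings
print(percent_cytotoxicity(0.21, 0.08, 1.35))  # roughly 10% cytotoxicity
```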
The expression of IL-8 was measured using a commercially available ligand-specific ELISA kit based on a quantitative sandwich enzyme immunoassay technique, with reagents from R&D Systems (Minneapolis, MN).
A 29-plex human cytokine/chemokine (HCYTMAG-60K-PX29) magnetic bead Luminex panel (EMD Millipore, Billerica, MA) was run on cell media (n = 3 samples from a representative experiment). All samples were run in duplicate, per the manufacturer's protocol, on the FLEXMAP 3D Luminex platform (Luminex, Austin, TX). Absolute quantification in pg/mL was obtained using a standard curve generated by a five-parameter logistic (5PL) curve fit using xPONENT 4.2 software (Luminex). Fold change values were calculated between treatment groups and the non-treated and NYC EV controls. For fold change calculations, if cytokine levels in the control group were undetectable, a minimum detectable level of 0.01 pg/mL was assigned. Heatmaps were created using GraphPad.
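To make the fold-change convention explicit, the following sketch reproduces the calculation described above: cytokine concentrations that are undetectable in the control group are floored at 0.01 pg/mL before dividing. Column names and values are illustrative, not taken from the study data.

```python
import pandas as pd

MIN_DETECTABLE = 0.01  # pg/mL, assigned when the control value is undetectable

def fold_change(treated_pg_ml, control_pg_ml):
    """Fold change of a cytokine relative to the control condition."""
    control = control_pg_ml if control_pg_ml > 0 else MIN_DETECTABLE
    return treated_pg_ml / control

# Hypothetical Luminex readout (pg/mL) for one cytokine across conditions
data = pd.DataFrame({
    "condition": ["non-treated", "NYC medium", "G. vaginalis bEV"],
    "IL8_pg_ml": [0.0, 1.2, 310.0],
})
control_value = data.loc[data.condition == "non-treated", "IL8_pg_ml"].iloc[0]
data["fold_change_vs_untreated"] = data.IL8_pg_ml.apply(
    lambda x: fold_change(x, control_value))
print(data)
```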
Detection of TLR2-dependent NF-κB and IL-8

HEK-hTLR2 cells were plated at 7.5 × 10⁴ cells/well in 96-well plates containing DMEM + 10% heat-inactivated FBS without antibiotics (n = 3 samples per condition). The next day, the cells were treated with EVs in DMEM cell culture media for 24 h. For detection of a nuclear factor kappa-B (NF-κB) response (SEAP reporter), cell culture supernatants were incubated with QUANTI-Blue solution (InvivoGen) for 1 h, pictures were taken of the plate, and absorbance was read at 630 nm on a SpectraMax i3x plate reader (Molecular Devices). For detection of an IL-8 response (Lucia luciferase reporter), cell culture supernatants were mixed with QUANTI-Luc solution (InvivoGen), and luminescence was read immediately on a SpectraMax i3x plate reader. Additionally, cell death was measured as described above.
Statistical analysis
All statistical analyses were carried out in GraphPad Prism (GraphPad Software Inc, Version 9.0). A p-value < 0.05 was considered statistically significant. For data that were normally distributed, one-way analysis of variance (ANOVA) was performed. If statistical significance was reached, then pairwise comparison with a Tukey post-hoc test was performed for multiple comparisons. If data were not normally distributed, then the nonparametric Kruskal-Wallis test was used, and multiple comparisons were done using Dunnett's test.
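The analyses were run in GraphPad Prism; purely for illustration, the sketch below shows how the same two branches (ANOVA with Tukey's post-hoc for normally distributed data, Kruskal-Wallis otherwise) could be reproduced with open-source Python tools. Group labels and values are invented, and the normality check shown (Shapiro-Wilk) is an assumption about a reasonable workflow rather than a description of the study's actual procedure.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical IL-8 measurements (pg/mL) for three treatment groups
groups = {
    "non-treated":  np.array([10.0, 12.0, 11.0]),
    "L. crispatus": np.array([11.0, 13.0, 12.0]),
    "G. vaginalis": np.array([95.0, 110.0, 120.0]),
}
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])

# Rough per-group normality check (assumption: Shapiro-Wilk as the test)
normal = all(stats.shapiro(v).pvalue > 0.05 for v in groups.values())

if normal:
    f_stat, p = stats.f_oneway(*groups.values())
    if p < 0.05:
        # Tukey post-hoc test for all pairwise comparisons
        print(pairwise_tukeyhsd(values, labels, alpha=0.05))
else:
    h_stat, p = stats.kruskal(*groups.values())
    print(f"Kruskal-Wallis p = {p:.3g}")
```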
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Fig. 1 | Size, morphology, and concentration of extracellular vesicles derived from L. crispatus, G. vaginalis, and M. mulieris grown in NYC culture medium. a Transmission electron microscopy indicates small, spherical vesicles isolated from bacterial culture samples but not NYC culture medium. b Particle size distributions from Nanoparticle Tracking Analysis indicate the presence of vesicles between 90 and 420 nm in diameter. All scale bars are 500 nm.
Fig. 2 | Proteomic analysis of extracellular vesicles derived from L. crispatus, G. vaginalis, and M. mulieris grown in NYC culture medium. a Out of 1074 total bacterial proteins identified, 90 orthologs were shared between all three species' isolates, and 247, 491, and 336 were specific to L. crispatus, G. vaginalis, and M. mulieris isolates, respectively. b Select shared and distinct proteins are of functional interest; protein abundance (proteins per million) is shown by heat map. c Proteins were classified according to predicted subcellular localization.
Fig. 6 | TLR2-specific pathways activated by L. crispatus, G. vaginalis, and M. mulieris bEVs. In HEK-TLR2 reporter cells exposed to bEVs for 24 h, G. vaginalis and M. mulieris bEVs induce significantly increased expression of (a) NF-κB and (b) IL-8. c Exposure to any bEV treatment does not induce cell death relative to non-treated controls, as measured by lactate dehydrogenase release. Error bars indicate standard deviation and p-values were calculated via one-way ANOVA with Tukey's correction for multiple comparisons.
Table 1 |
Functional classification of proteins carried by bEVs derived from L. crispatus, G. vaginalis, and M. mulieris grown in NYC culture medium | 8,710.8 | 2024-03-21T00:00:00.000 | [
"Medicine",
"Biology"
] |
Social intervention in improving smallholders welfare in realizing eco-friendly palm oil plantations
The Indonesian government continues to strive to increase the production, productivity, and quality of oil palm. Several studies on welfare impacts have shown that some smallholders benefit greatly in terms of income. Meanwhile, there are challenges to improving the welfare of smallholders, such as financial and knowledge barriers. Therefore, this article aims to identify the contribution of social interventions to improving farmer welfare and realizing environmentally friendly oil palm plantations. Social intervention can be interpreted as an effort to help in the form of planned changes to individuals, groups, and communities, which can come from government, corporations, third parties, communities, and individuals. This article contributes to defining and gathering perspectives on social intervention for improving smallholder welfare. The study employs an approach combining literature review and descriptive analysis. These social intervention approaches, ideally embedded in a community strategy developed by the stakeholders in palm oil practice, have the potential to improve smallholders' welfare in realizing eco-friendly palm oil plantations. To complement previous research, this study is important because it focuses specifically on enhancing smallholder welfare through a social intervention approach. Social intervention can take the form of legal intervention, financial schemes and support, community programs, and corporate social responsibility (CSR) programs. These can help palm oil farmers, especially smallholders, to increase productivity while paying attention to environmental sustainability.
Introduction
Palm oil is currently still a mainstay commodity and has the largest biomass in Indonesia. Palm oil plantation ownership ranges from State Plantations (PBN) and Large Private Plantations (PBS) to Smallholder Nucleus Plantations (PIR) [1]. Indonesia and Malaysia produce about 85% of the palm oil traded worldwide, with estimated demand growth of 5% per annum [2].
Jambi Province is one of the areas known for its extensive palm oil plantations in Indonesia. Over the past two decades, oil palm has been recorded as the second largest plantation crop in Jambi after rubber. The expansion of palm oil plantations in Jambi has continued, reaching more than 500 thousand hectares, with the largest area in Muaro Jambi Regency (96,587 hectares in 2018). The number of oil palm farmers in Jambi has also increased, making them the second largest group of plantation farmers since 2005. Palm oil production in Jambi in 2018 was recorded to be seven times greater than the production of rubber, reaching more than 2.2 million tons [3]. The growth in demand for palm oil has increased. Therefore, Indonesian Sustainable Palm Oil (ISPO) and Roundtable on Sustainable Palm Oil (RSPO) certification requirements have been strengthened to make palm oil plantations more sustainable in terms of production units and development. Oil palm farmers need to implement actions that can support the increase in sustainable palm oil production and preserve and conserve biodiversity [4].
The Indonesian government continues to strive to increase the production, productivity, and quality of oil palm through several programs, including (a) increasing production through intensification (providing fertilizers, using quality certified seeds, and other inputs), extensification (utilizing available land), and rejuvenation (plantation revitalization); (b) increasing productivity through the development of palm oil plantations with government contributions and productivity missions to produce 35 Fresh Fruit Bunches per hectare per year; and (c) improving product quality through the implementation of oil palm development with a sustainable palm oil label and product standardization such as ISPO and RSPO certification [4]. Assessments of farmers in Jambi show that they have been practicing sustainable palm oil cultivation by considering economic, ecological, and social dimensions. The sustainability status of the economic dimension (farmer income and welfare) and the social dimension (education and planting tradition) is sustainable, and the ecological dimension (planting other trees) is entirely sustainable [1].
Palm oil plantations often come into conflict with the environment due to plantation expansion, which frequently leads to increased greenhouse gas (GHG) emissions. Data from 1988 to 2014 for one area in Jambi Province show that the area of palm oil plantations increased six-fold from the initial year, while forest area decreased by more than 50% [5]. The Indonesian government has made efforts to make plantation activities and oil palm production more environmentally friendly and sustainable. The government has set a national target to exploit the full potential of oil palm biomass residue for renewable energy and emission reduction. However, policy implementation is progressing slowly. In 2015, Indonesia produced around 150 Mt of residual oil palm biomass, a significant source of GHG emissions. At the same time, this represents lost opportunities for economic benefits from bio-based products that could be further processed to add value [6]. Efforts to realize sustainable and environmentally friendly oil palm plantations cannot rely on the government alone; they require the involvement of many parties, namely individual farmers, communities, organizations, and companies.
Smallholders in Jambi are generally not aware of the ISPO program unless there is promotion and counseling from the government; nevertheless, most smallholders have implemented palm oil management practices that comply with ISPO standards. The practices used are numerous but not fully standardized, because not all ISPO practices directly benefit smallholders. Therefore, a strategy for implementing ISPO for smallholders requires a gradual process as well as financial and technical support. The introduction of ISPO practices on a large scale can be seen as an intermediary step towards achieving internationally recognized certification of Indonesian palm oil [7]. Another study reveals smallholders' main problems and challenges in implementing sustainable palm oil. The identification of gaps between the standard requirements for sustainable palm oil and current practices shows that gaps remain both in special requirements that are difficult to meet and in basic requirements such as land ownership and plantation management [8].
In Indonesia, oil palm has increased wealth and food balance for poor farming families. The adoption of oil palm has improved the quality of households among smallholders in Jambi Province. One of the reasons for adopting oil palm is the small amount of labor required for plantation processes, which allows farmers or planters to engage in other economic activities [9]. Several studies on welfare impacts have shown that some smallholders benefit greatly in terms of income. The challenges associated with the palm oil industry vary, particularly for smallholders, making certification schemes such as these a means to help improve smallholders' welfare and promote cooperation between smallholders and companies. Thus, there is cooperation and shared responsibility among various stakeholders in improving the welfare of small farmers. Welfare encompasses not only the financial or socioeconomic perspective of farmers but also other dimensions of basic human welfare, such as the physical and natural, or socioecological, dimensions [10].
As Indonesia is one of the main suppliers of palm oil, it is critical to optimize palm oil production, both by extending palm oil plantations without deforestation and by improving palm oil cultivation. The benefits of oil palm in improving the welfare of farmers are clear, but many challenges remain. Efforts to improve the welfare of oil palm farmers require social intervention directed at both groups and communities, which can come from the government, companies, third parties, communities, and individuals. These efforts can take the form of developing farmer knowledge, funding for palm oil cultivation, and so on, which can increase the productivity and welfare of farmers [6].
Another study revealed that Jambi Province still faces substantial challenges in achieving certification standards for palm oil plantation sustainability. One such challenge is that smallholders often do not have a good understanding of their position in the oil palm business, so when stakeholders make decisions about palm oil plantations, smallholders are likely to be affected by those decisions. This indicates information, knowledge, and capability gaps among smallholders [11]. Therefore, the large potential and opportunities, as well as the many challenges faced by oil palm farmers in Jambi, require further identification of the contribution of social interventions in improving farmer welfare and realizing environmentally friendly palm oil plantations. This research aims to identify the contribution of social interventions to improving farmer welfare and realizing environmentally friendly palm oil plantations.
Method
This study was conducted by reviewing previous literature to identify the contribution of social interventions in improving farmer welfare and realizing environmentally friendly palm oil plantations. The study area analyzed and reviewed was Jambi Province, with a major focus on smallholder farmers. The documents reviewed include government documents, newspapers, journals, and books. The data obtained from the documents were then reviewed and analyzed.
The research followed an approach combining literature review and descriptive analysis. A comprehensive understanding of palm oil potential and the identification of stakeholders and social interventions were acquired through literature analysis. Descriptive analysis was used to relate the connections among the stakeholders. The analysis contributes perspective on solving the economic, social, and ecological problems faced by palm oil smallholders.
Results and discussion
The increase in the number of smallholders in Jambi every year makes smallholders one of the stakeholders who greatly influence the development of the palm oil sector. Fig. 1 shows that smallholders in Jambi own about 68% of the total oil palm plantation land in Jambi. It is also estimated that land owned by smallholders in Jambi increased by 6% from 2018 to 2020. Therefore, because smallholdings dominate the land area in Jambi, managing conflicts involving smallholders becomes very important. The increase in the amount of oil palm land owned by farmers needs to be accompanied by efforts to improve the quality of the farmers. Social intervention is one way to improve the quality of smallholders to achieve prosperity and sustainable palm oil practices. Oil palm farmers in Indonesia generally have 2 hectares of land to cultivate. However, there are several problems faced by palm oil farmers. One of them is the low productivity of smallholder plantations compared to private and government plantations [13]. This can be addressed through more available capital, adequate agricultural extension, and technology transfer for better planting, through which smallholder productivity can increase [14]. Low productivity is not necessarily attributable to the capabilities of palm oil farmers; other contributing factors are the condition of plantation land, infrastructure, management, and post-harvest handling. Thus, social intervention for palm oil farmers is needed to increase productivity while paying attention to environmental sustainability.
Social intervention can be interpreted as an effort to help in the form of planned changes to individuals, groups, and communities. Changes must be measured and evaluated as a form of accountability to see what changes occur. The involvement of farmers is one factor in realizing the sustainability of palm oil plantations. Participation from various parties to realize environmentally-friendly palm oil plantations is indispensable. Social intervention is one of the efforts to improve farmers' welfare to realize environmentally-friendly palm oil plantations.
This form of social intervention can come from a variety of sources or stakeholders. One form of social intervention comes from Non-Governmental Organizations (NGOs). Social intervention by NGOs contains at least several dimensions, such as (1) the creation of an engagement space for the stakeholders so that the parties know the relationships between the different dimensions that concern each of them; (2) the creation of a connecting space so that the parties can fully participate and increase their empowerment; and (3) the creation of interdependence spaces to become the basic foundation for the governance structure [15]. Social intervention presented by NGOs can be the initial basic structure through which the parties' involvement is recognized. Recognition of the parties' presence and existence is necessary so that each party knows its role and can see different perspectives of interest in order to achieve the same goals.
The Indonesian government has implemented social interventions to help the sustainability of palm oil plantations. One form of intervention by the government is through finance schemes. The government provides subsidies and credit, or regulates financial institutions in the provision of credit [16]. Intervention in the form of subsidies has also been carried out by the Thai government, which provided financial subsidies with low interest rates to increase palm oil production [13]. Another form of intervention by the government is through regulation and law enforcement. Some of the government interventions of concern to be applied to palm oil farmers are land ownership arrangements in forest areas, the introduction of land certification programs, value-chain improvement programs in the raw material market, and investment in field extension [17].
Social interventions can also arise from private parties or corporations. One form of social intervention from companies is corporate social responsibility (CSR). CSR activities are seen as a commitment by a company to voluntarily fulfill the responsibilities that arise from the community's expectations regarding the activities carried out by the company. CSR becomes a mechanism to bridge business development and community relationships. CSR also plays a role in improving collaboration with governments, institutions, and other private parties to ensure long-term economic sustainability [18]. Generally, CSR's goal is to make a positive contribution to the development of the environment and the community around the company. This can help improve regional development, improve education, and improve community welfare [19]. CSR implementation contributes to the fulfillment of people's needs and enhances the company's reputation. Nevertheless, the implementation of a CSR strategy should involve the company's shareholders, considering that the activities are carried out over the long term while also maximizing the company's long-term profits [20]. The private sector, including companies affiliated with oil palm farmers, can provide social intervention through assistance. Smallholders affiliated with a company were better able to cope with certification challenges, particularly as they received technical assistance and knowledge transfer. This is different from independent farmers who do not receive assistance. A total of 57% of the farmers in the sample (n = 194) had never received training or extension services; those who received training often received it only once [21]. Social intervention in the form of assistance by the private sector is able to increase oil palm farmers' understanding of sustainable palm oil practices.
Social interventions made for palm oil farmers do not only provide direct benefits in the form of palm oil plantation techniques. There are also interventions related to the sustainability of palm oil combined with livestock farming. A company in Sumatra initiated an empowerment program in 1996. Each household head was given three farm animals, which were herded in the oil palm plantation, with additional feeding from palm oil waste and kernel cake. By 2003, the number of livestock in the scheme had doubled. Meanwhile, the harvested area per worker had increased from 10 to 15 ha, and their associated income had increased [22]. Thus, the implementation of social interventions can increase economic potential.
One of the efforts towards the sustainability of palm oil plantations is the fulfillment of ISPO and RSPO certification. This certification assesses not only palm oil mills but also oil palm farmers. In fact, not all oil palm farmers can meet certification requirements. Smallholders who are already RSPO certified can meet up to 70% of the certification requirements, but non-certified smallholders have the largest gap between RSPO requirements and field practice [23]. Based on research conducted by Rietberg and Slingerland, there were 56 audit findings in 10 recertification reports for independent farmers [24]. The biggest challenges of the RSPO certification requirements for palm oil smallholders are Principle 2 (regulation and law), Principle 4 (conservation and environment), and Principle 6 (community and employees) [23]. Given their limited knowledge of the certification requirements, assistance is necessary for oil palm farmers. Assistance and technical support in palm oil plantation management for smallholders can increase agronomic productivity [25]. Assistance is a form of social intervention that can be carried out by the government, NGOs, or private institutions in supporting oil palm farmers towards sustainability certification.
One obstacle faced in the implementation of social intervention is its level of sustainability. Ideally, social intervention becomes a long-term effort and a sustainable system. However, some social intervention programs face challenges in terms of funding support. Mentoring of farmer groups often encounters limits in funding and duration. Indeed, the assistance provided is only initial capital that must be developed by the farmer group rather than becoming a dependency. Nevertheless, programs built as a form of social intervention must also be integrated in terms of legality, sustainability, and productivity so that they do not stand alone. Social intervention needs to return to its main goal of making positive and accountable changes; this provides a solid foundation when social intervention encounters limitations in its implementation. A solid social foundation will realize community development. By focusing more on community development, instead of focusing solely on economic development, communities can integrate land-use planning [26].
The development of the palm oil farming community aims to encourage farmers' ability and independence in palm oil plantation management activities so that they can be in line with the sustainability requirements of the stakeholders in palm oil [27][28][29]. Social intervention at the level of the palm oil farming community can be one of the methods used for farmer development, through a series of strategies and processes carried out by the interventionist based on a desire and commitment to help progressively improve farmer welfare. Applying social intervention models in the development of local communities changes local communities' living conditions, especially in the social and economic fields. Among these changes are increased community income, clearer livelihoods, regional development, and stronger relationships or social interactions as conflict is reduced [27].
Conclusion
The increase in palm oil plantation area in Indonesia, combined with the low productivity of smallholder plantations, has an impact on the life and survival of farmers. The limited ability of farmers, especially smallholders, to obtain better resources to develop their plantations requires social intervention from governments, companies, communities, or related individuals. Social intervention can take the form of financial assistance, commitment to environmental and community development, and support for the plantation process. Social intervention in the form of assistance by stakeholders is able to increase smallholders' understanding of sustainable palm oil practices, and its implementation can increase economic potential. Support in the form of social intervention will encourage farmers to improve and to produce high plantation yields while minimizing the environmental issues that arise from the palm oil plantation process. Further studies are needed to show the impact of the studied social interventions within an Information and Communication Technology (ICT) approach. A suggestion would be to elaborate the correlation among stakeholders in integrated social intervention through ICT. Thus, the ideal type of social intervention approach can be identified according to the characteristics of the smallholders' region.
"Economics"
] |
SQUAB I: The first release of Strange QUasar candidates with ABnormal astrometric characteristics from Gaia EDR3 and SDSS
Given their extremely large distances and small apparent sizes, quasars are generally considered to be objects with near-zero parallax and proper motion. However, some special quasars may have abnormal astrometric characteristics, such as quasar pairs, lensed quasars, and AGNs with bright parsec-scale optical jets, which are scientifically interesting objects, for example as binary black hole candidates. These quasars may come with astrometric jitter detectable in Gaia data, or significant changes in position at different wavelengths. In this work, we aim to find these quasar candidates from Gaia EDR3 astrometric data combined with Sloan Digital Sky Survey (SDSS) spectroscopic data to provide a candidate catalog to the science community. We propose a series of criteria for selecting abnormal quasars based on Gaia astrometric data. We obtain two catalogs containing 155 sources and 44 sources, respectively. They are potential candidates for quasar pairs.
INTRODUCTION
Since the discovery of the first quasar in 1963 (Schmidt, 1963), this type of extremely distant active galactic nucleus (AGN) has gradually become a focus of astronomical research. In astrometry, a large number of evenly distributed quasars can be used to establish a celestial reference frame (Ma, 1997; Ma et al., 2009; Mignard et al., 2018; Charlot et al., 2020) because they have almost zero proper motions and point-like shapes. On the other hand, quasars are also a critical pathway to explore the evolution and mergers of galaxies in astrophysics (Begelman et al., 1980; Shen et al., 2021).
There are many surveys concerning the identification of quasars, such as the Large Bright Quasar Survey (Hewett et al., 1995), the 2dF Quasar Redshift Survey (2QZ, Croom et al. 2004), the quasars from the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST, Luo et al. 2012), and the Sloan Digital Sky Survey (SDSS, Pâris et al. 2018; Lyke et al. 2020). A large number of quasars have also been identified through astrometric and mid-infrared methods (see, e.g., Secrest et al. 2015; Guo et al. 2018). The total number of identified quasars has exceeded one million, and these quasars have been collected and compiled into various catalogs (see, e.g., Véron-Cetty and Véron 2010; Souchay et al. 2019; Liao et al. 2019; Flesch 2021). Among these confirmed quasars, some spectroscopically identified quasars show abnormal astrometric characteristics in the Gaia high-precision astrometric observations (Wu et al., 2021). These abnormal quasars have large proper motions or significant astrometric noise, which means that they are not suitable for establishing the celestial reference frame. Shen et al. (2019) emphasize that quasars with significant astrometric noise may be dual quasars. These dual quasars are precursors of binary supermassive black holes, which play an important role in the study of galaxy evolution and gravitational waves (GWs). At present, most of the known dual AGN are at low redshifts or have large physical separations (>20 kpc), and only several known small-separation dual quasars are at high redshifts (Chen et al., 2022), while Gaia's high-precision astrometric data have not yet been seriously considered.
Gaia is an astrometric satellite launched by the European Space Agency (ESA) on 19 December 2013 (Prusti et al., 2016). At present, Gaia has provided high-precision astrometric data for more than 1.8 billion sources in the G magnitude range from 3 to 21 mag. With accurate position data and a large number of identified quasars, Gaia has been committed to establishing its own optical non-rotating celestial reference frame (CRF) (Mignard et al., 2018). Lindegren et al. (2018) selected 556,869 quasars from the third realization of the International Celestial Reference Frame (ICRF3) and the AllWISE AGN catalog (Secrest et al., 2015) to establish Gaia-CRF2 (see also Mignard et al. 2018). In Gaia Early Data Release 3 (EDR3), the AGN catalog, which contains 1,614,173 sources, is obtained by cross-matching with 17 external AGN catalogs. The systematic errors in EDR3 have been greatly improved compared with DR2. The astrometric properties of the EDR3 quasars show that no significant residuals are found globally (Liao et al., 2021a,b), which provides us with a unique opportunity to select abnormal quasars in EDR3.
In EDR3, there are 585 million 5-parameter and 882 million 6-parameter sources with measurements of parallax and proper motion, while the remaining 344 million 2-parameter sources have only positional data. The quasars used by Gaia were obtained by a cross-match of the full Gaia catalog with the external QSO/AGN catalogs, and the matched sources were further selected to have parallaxes and proper motions compatible with zero within five times the respective uncertainty (Klioner et al., 2021). Therefore, among the common sources of Gaia EDR3 and the 14th data release of the SDSS Quasar catalog (SDSS DR14Q, Pâris et al. 2018), 308,601 of 367,516 quasars are contained in the Gaia EDR3 AGN catalog. Of the remaining 58,915 quasars, 206 sources are ruled out due to excessive proper motion or parallax, and 58,707 quasars are excluded simply because they do not have measurements of proper motion and parallax. To make full use of the positional information of these 2-parameter quasars, we need to judge the reliability of their astrometric information through other criteria.
In this paper, we try to explore the selection of quasars with abnormal astrometric characteristics using different combinations of appropriate astrometric parameters in addition to parallaxes and proper motions. In this way, we can not only evaluate the 5-parameter or 6-parameter sources more comprehensively but also appropriately select the 2-parameter sources to further expand the sample of quasars we can use in Gaia. Note that we are not selecting quasars with good observation parameters. On the contrary, we want to mark the quasars with poor astrometric parameters, which will provide some candidates for studying galaxy evolution and binary black holes. This paper is organized as follows. In section 2, we introduce the data and criteria for selecting quasars with abnormal astrometric characteristics. We show the results and evaluate these quasars in section 3. In section 4, we make some discussions about the extension of the catalogs and the identification of quasar pairs, and the conclusions are given in section 5.
Data used
As addressed in the previous section, the AGN catalog in Gaia EDR3 (GEAC hereafter) is obtained by cross-matching with 17 external AGN catalogs. GEAC contains 1,215,942 5-parameter sources and 398,231 6-parameter sources. Besides, to calculate the rotation of the Gaia reference frame, the Gaia team selected 429,249 5-parameter solution quasars as frame rotator sources (FRS hereafter, Brown et al. 2021). Therefore, FRS is currently the most reliable quasar catalog in Gaia, and will be used as a comparison sample to evaluate the astrometric parameters of other quasar candidates.
As mentioned in the previous section, many compiled quasar catalogs already exist, and spectroscopically classified quasars from SDSS contribute a large proportion of them. Considering the reliability of SDSS and its indispensable imaging and spectroscopic data, we decided to use the SDSS quasar catalog as our input catalog to select the abnormal quasars. SDSS Data Release 16 (DR16, Jönsson et al. 2020) is the latest data product from the Apache Point Observatory Galactic Evolution Experiment (APOGEE)-2/Sloan Digital Sky Survey-IV (Blanton et al., 2017). The quasar catalog of SDSS DR16 (Lyke et al., 2020) comprises two catalogs: the quasar-only catalog and the "superset" of objects targeted as quasars. The "superset" of all SDSS-IV objects targeted as quasars contains 1,440,615 sources, and the quasar-only catalog contains 750,414 quasars. Due to its high completeness (99.8%) and low contamination (0.3%-1.3%), we choose the quasar-only catalog as our initial sample of quasars (SDSS DR16Q hereafter).
The selection criteria
With a large number of quasars identified by SDSS spectroscopy, after cross-matching with Gaia EDR3 within a 1″ radius, we obtain 489,402 common sources in Gaia EDR3 and SDSS DR16Q. Among them, there are 153 SDSS quasars with two Gaia matches, two SDSS quasars with three Gaia matches, and one Gaia source with two SDSS quasar matches. We then exclude two SDSS quasars whose corresponding four Gaia matched sources all show significant proper motion or parallax. These sources are compiled into the type A catalog of quasars with abnormal astrometric characteristics (Catalog A hereafter). These multiply-matched sources are potential quasar pairs or star-quasar pairs, which will be further discussed later in this paper.
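As an illustration of this cross-match step, the sketch below uses astropy to find all Gaia EDR3 counterparts within 1″ of each SDSS DR16Q position, which also exposes quasars with more than one Gaia match. The file names and column names are placeholders, not the actual data products used for the published catalogs.

```python
import numpy as np
from astropy.table import Table
import astropy.units as u
from astropy.coordinates import SkyCoord, search_around_sky

# Placeholder file names; columns assumed to hold RA/Dec in degrees (ICRS)
sdss = Table.read("dr16q.fits")        # SDSS DR16Q quasar-only catalog
gaia = Table.read("gaia_edr3.fits")    # Gaia EDR3 sources

sdss_coords = SkyCoord(sdss["RA"], sdss["DEC"], unit="deg")
gaia_coords = SkyCoord(gaia["ra"], gaia["dec"], unit="deg")

# All pairs separated by less than 1 arcsecond
idx_sdss, idx_gaia, sep2d, _ = search_around_sky(sdss_coords, gaia_coords,
                                                 1.0 * u.arcsec)

# SDSS quasars with more than one Gaia counterpart (Catalog A candidates)
unique, counts = np.unique(idx_sdss, return_counts=True)
multi = unique[counts > 1]
print(f"{len(unique)} matched quasars, {len(multi)} with multiple Gaia matches")
```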
For the remaining 489,285 quasars with only one Gaia source matched within a 1″ radius, to lower the possibility of star contamination in cross-matching, we exclude those sources with significant proper motion or parallax using the criteria mentioned in the previous section. We then select several astrometric parameters emphasized in Lindegren et al. (2021) to evaluate their accuracy and reliability. These parameters characterize the goodness of the point spread function (PSF) model fitting of each source and the reliability of the observational data. We introduce them and describe in detail the criteria of our selection in the following parts.
astrometric_gof_al represents the goodness-of-fit statistic of the astrometric solution for the source in the along-scan direction. The Gaia EDR3 documentation proposes a rough value of this criterion to distinguish between good and bad fitting of the data: if astrometric_gof_al is greater than 3, it may indicate a bad fit. We have analyzed the reliability of this criterion by checking the statistical distribution of astrometric_gof_al for FRS sources. Only about 3% of quasars in FRS have excessive astrometric_gof_al (>3), as indicated in Fig. 1, which means this criterion can select some extreme quasars while ensuring that most reliable quasars are excluded. Fig. 2 shows that the median line of astrometric_gof_al is almost parallel to the x-axis, so there is no obvious correlation between astrometric_gof_al and the brightness of the source when G < 20.9 mag. With these studies in mind, we choose astrometric_gof_al > 3 as one of the criteria to select quasars with abnormal astrometric characteristics.

astrometric_excess_noise represents the disagreement between the Gaia observations of a source and the best-fitting standard astrometric model, and a large value signifies that the residuals are statistically larger than expected. There is no doubt that astrometric_excess_noise is an important indicator of whether the source is astrometrically "well-behaved", but we need to make sensible cutoffs to ensure that the sources we select are reliable and logical. With its high accuracy and reliability, FRS is an ideal reference to determine the noise criterion. As seen in Fig. 3, as the magnitudes of the sources become fainter, the observational noise of the sources also rapidly increases. The 99.9% quantile line retains most of the reliable quasars, and the blue points outside this line show obvious bias from the whole sample. Therefore, the red curve may be an empirically feasible criterion. We choose 20.9 mag as the magnitude limit of this criterion since there are only 138 FRS sources fainter than this limit. We plot the quasars of SDSS DR16Q in the same figure and find 1982 of them meet this 99.9% quantile criterion.

Another parameter that can be used to evaluate the astrometric noise is astrometric_excess_noise_sig, which represents the significance of the excess noise. Since the excess noise can absorb all kinds of modeling errors, such as point spread function (PSF) calibration errors and geometric instrument calibration errors (Lindegren et al., 2012), astrometric_excess_noise_sig is important for evaluating whether the noise is caused by the structure of the source. The Gaia documentation recommends that astrometric_excess_noise_sig > 2 indicates that the given noise is probably significant. We have not found any obvious correlation between the significance and magnitude in FRS, so astrometric_excess_noise_sig > 2 can be a sensible cutoff to ensure the excess noise is applicable for all magnitudes.

ipd_gof_harmonic_amplitude measures the amplitude of the variation of the goodness-of-fit of the image parameters as a function of the position angle of the scan direction. A large amplitude might indicate the source has more than one optical center. Quasar pairs, or AGN with bright parsec-scale optical jets, may lead to a relatively large amplitude for these sources, and the positioning accuracy of these quasars could be affected by the multiple centers. We hope to use the same method as for the excess noise to obtain a suitable criterion.
As seen in Fig. 4, ipd_gof_harmonic_amplitude does not appear to correlate with magnitude, and the 99% quantile line is almost a straight line parallel to the x-axis. The criterion we select for this parameter is ipd_gof_harmonic_amplitude > 0.26 when G < 20.9 mag.
ipd_frac_multi_peak is another important parameter for evaluating whether the source is a binary. It gives the percentage of successful IPD (Image Parameters Determination) windows with more than one peak, so we can preliminarily judge whether a source is a visually resolved double star based on this parameter. In principle, all sources with a percentage greater than zero should be selected as abnormal quasar candidates; in total, we found 32,578 sources in the FRS whose ipd_frac_multi_peak is greater than zero, with only 3,215 (10%) of them greater than one. The large number of sources with ipd_frac_multi_peak = 1 may increase the contamination of our final catalog, whereas ipd_frac_multi_peak > 1 can be used to select some extreme quasars efficiently. We therefore take ipd_frac_multi_peak > 1 as the criterion: in this case, 3392 quasars (0.7% of SDSS quasars) are selected.
With the considerations above, we combine these cuts into the criteria of Eq. 1 for selecting abnormal quasars in SDSS DR16Q. We finally obtained 44 quasars that met all of the criteria, and these quasars are included in the type B catalog of abnormal quasars (Catalog B hereafter).
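The individual cuts can be combined straightforwardly in code. The sketch below is an illustrative Python/pandas implementation, not the authors' pipeline: the column names are standard Gaia EDR3 ones, the cuts are simply combined with a logical AND as described in the text, and the empirical 99.9% quantile curve of astrometric_excess_noise versus G is re-estimated from the FRS table in magnitude bins rather than taken from Eq. 1.

```python
import numpy as np
import pandas as pd

def noise_threshold(frs: pd.DataFrame, g_mag, nbins: int = 40, q: float = 0.999):
    """Empirical 99.9% quantile of astrometric_excess_noise per G-magnitude bin,
    estimated from the FRS sample and interpolated at the requested magnitudes."""
    bins = np.linspace(frs["phot_g_mean_mag"].min(), 20.9, nbins + 1)
    centres = 0.5 * (bins[:-1] + bins[1:])
    quantiles = (frs.groupby(pd.cut(frs["phot_g_mean_mag"], bins), observed=False)
                    ["astrometric_excess_noise"].quantile(q).to_numpy())
    return np.interp(g_mag, centres, quantiles)

def select_abnormal(qso: pd.DataFrame, frs: pd.DataFrame) -> pd.DataFrame:
    """Keep quasars that pass every cut discussed in the text (logical AND)."""
    g = qso["phot_g_mean_mag"]
    bright     = g < 20.9
    bad_fit    = qso["astrometric_gof_al"] > 3
    noisy      = ((qso["astrometric_excess_noise"] > noise_threshold(frs, g))
                  & (qso["astrometric_excess_noise_sig"] > 2))
    multi_peak = ((qso["ipd_gof_harmonic_amplitude"] > 0.26)
                  & (qso["ipd_frac_multi_peak"] > 1))
    return qso[bright & bad_fit & noisy & multi_peak]
```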
RESULTS
In Table 1 and Table 2 we detail the contents of our catalogs. The sky distribution of the two catalogs is shown in Fig. 5. There are 108/309 (35.0%) 2-parameter Gaia sources in Catalog A, and for Catalog B the rate is 26/44 (59.1%). Therefore, for the two whole catalogs, the position errors are obviously greater than those of the Gaia FRS and SDSS DR16Q, as expected (see Fig. 6). For the 5-parameter and 6-parameter sources in Catalogs A and B, the normalized proper motion and parallax distributions are shown in Fig. 7. Compared to the almost zero parallaxes and proper motions of the Gaia FRS, the sources in Catalogs A and B have worse astrometric solutions. The Gaia celestial reference frame (Gaia-CRF3) is materialised by 1,614,173 quasars in GEAC (Brown et al., 2021); we find 111 sources in common between GEAC and Catalog A, and 16 in common with Catalog B, which we recommend removing from GEAC. Fig. 8 shows the redshift distributions of these two catalogs: the distributions of Catalog A and SDSS DR16Q are almost consistent. However, the sources in Catalog B are concentrated more at low redshift, and almost no sources in Catalog B have a redshift in the range 0.5-0.8.
As mentioned above, the spectroscopically identified SDSS DR16Q has a contamination of 0.3%-1.3%, estimated by visual inspection of the spectra of a randomly chosen sample (Lyke et al., 2020). In Catalog A, of the 155 SDSS spectroscopically identified quasars, 43 have been visually inspected: 36 are Quasars, while 7 are identified as BAL Quasars. Of the remaining 112 sources with only spectral identification, 98 are included in LQAC5 (Souchay et al., 2019), and the remaining 14 quasars are newly identified by SDSS DR16Q. In Catalog B, 10 of the 44 SDSS quasars have been visually inspected, and all of them are Quasars. Of the remaining 34 sources with only spectral identification, 24 are included in LQAC5, and the remaining 10 quasars are newly identified by SDSS DR16Q. Therefore, we believe the quasars in our catalogs are reliable.
We have checked the SDSS images of the sources in Catalog A and B. Some of them show obvious characteristics of a binary system, so these quasars may be potential quasar pairs. The details of the two catalogs are given below.
Catalog A
The sources in Catalog A have more than one matched source in Gaia or SDSS within a 1″ radius. They may be quasar pairs, star-quasar pairs, active galactic nuclei with obvious jets, or lensing objects. For the two sources with three Gaia sources matched, the Simbad Astronomical Database (Wenger et al., 2000) shows that there is a significant lensing effect near these two sources; their SDSS IDs are [...]. To eliminate the interference of foreground stars, we mark 5-parameter and 6-parameter sources with significant parallaxes and proper motions, which might be star-quasar pairs: if at least one source in a pair has |ϖ/σ_ϖ| > 5, |µ_α*/σ_µα*| > 5, or |µ_δ/σ_µδ| > 5, the pair is marked as a star-quasar pair. According to this criterion, 62 pairs are preliminarily identified as star-quasar pairs.
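As an illustration only (not the authors' code), the star-quasar flag described above can be evaluated with standard Gaia EDR3 column names; the 5-sigma thresholds are those stated in the text.

```python
import numpy as np

def significant_motion(row, nsig: float = 5.0) -> bool:
    """True if the parallax or either proper-motion component is significant
    at more than `nsig` sigma (the star-quasar pair flag described above)."""
    pairs = [("parallax", "parallax_error"),
             ("pmra", "pmra_error"),
             ("pmdec", "pmdec_error")]
    return any(
        np.isfinite(row[v]) and np.isfinite(row[e]) and abs(row[v]) > nsig * row[e]
        for v, e in pairs
    )

# A pair is marked as a star-quasar pair if any of its Gaia matches passes:
# is_star_quasar_pair = any(significant_motion(m) for m in pair_members)
```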
There are 64 extended sources and 91 point-like sources in Catalog A. Fig. 9 shows several bright sources in Catalog A. Most of the point-like sources have only one optical center, except Fig. 9 (B), but the high-precision Gaia observations indicate that there is more than one source within 1″ of each SDSS position; therefore, more observations are needed to identify whether they are quasar pairs. Among the extended sources, some exhibit obvious galaxy structures, such as Fig. 9 (F) and (H), while other extended sources may be caused by bright jets. In addition, the mean redshift of the extended sources is 1.19, while the average is 1.69 for the point-like sources. Therefore, these point-like sources are very important for studying high-redshift quasar pairs.
Catalog B
The sources in Catalog B are abnormal quasars whose astrometric observation parameters deviate significantly from those of the entire sample. In Gaia EDR3, every catalog entry must correspond to a single source: if multiple detections are found within a 0.18″ radius, the database keeps only one source and sets the flag "duplicated_source". Although this flag does not definitively indicate that the source is a binary, it can be used as a reference to assess the reliability of the catalog. The proportion of duplicated sources in Catalog B is 11/44 (25%), while the ratios in SDSS DR16Q and the Gaia FRS are 0.9% and 0.6%, respectively, which shows that our selection criteria are effective. In Fig. 10, panels (A), (B), (E), and (F) show four quasars with the flag "duplicated_source", while the remaining four panels show quasars without this flag. Due to the low resolution of SDSS, there is no obvious difference between the images of duplicated and non-duplicated sources. Therefore, to further confirm whether these sources are quasar pairs, higher-resolution observations are needed, or perhaps a method that combines spectra and light curves could be effective. Among the 25 extended sources, J115517.34+634622.0 is the only one with a redshift greater than 0.5; its redshift is 2.9. The SDSS image of the source, shown in Fig. 10 (H), also exhibits a distinct dual optical center. Consistent with Catalog A, the average redshift of the point-like sources is 1.71, and the average redshift of the extended sources, excluding J115517.34+634622.0, is 0.21. The large redshift gap between the extended and point-like sources may arise because the extended structures of distant high-redshift point sources are too faint to be observed.
Extended catalogs with different combinations of the criteria
As mentioned above, 0.9% of SDSS DR16Q quasars carry the flag "duplicated_source". Although the proportion is small, the absolute number is large: 4472 SDSS quasars with G < 20.9 mag are duplicated sources, which indicates that Catalog B has poor completeness. In Eq. 1, to improve the reliability of the catalog, we only select the sources that meet all the criteria. In fact, each criterion can also be used individually to select quasars with abnormal astrometric characteristics.
To select different kinds of abnormal sources, we consider three subsets of the criteria in Eq. 1: (1) sources meeting criteria (i), (ii), (iii), and (vi); (2) sources meeting criteria (iv), (v), and (vi); (3) sources meeting criteria (iv), (v), and (vi) but not criteria (i), (ii), and (iii). These three samples are compiled into Extended Catalogs 1, 2, and 3 of Catalog B (hereafter EB1, EB2, and EB3, respectively). According to their respective selection criteria, the sources in EB1 have bad fitting results in Gaia EDR3, and the sources in EB2 may be visually resolved binaries. EB3 contains sources with a high multi-peak percentage but low excess noise, which suggests that another source lies near each EB3 source. Table 3 gives some statistical information on the three catalogs. Consistent with Catalogs A and B, the catalogs with more extended sources have lower average redshifts. There are hundreds of sources in common between the extended catalogs of Catalog B and GEAC. Fig. 11 shows that the sources in EB2 have slightly worse position precision than the SDSS DR16Q sources, and the position precision of sources in EB1 is even worse than that of EB2. The number of duplicated sources in Table 3 also shows that we select only a small part of the abnormal quasars.
Figure 11. The cumulative distribution histograms for σ_α* (A) and σ_δ (B) of sources in the Gaia FRS, SDSS DR16Q, EB1, EB2, and EB3.
In addition to the above criteria, the renormalized unit weight error (ruwe) may also be used to select binaries. In Gaia Data Release 2 (DR2), ruwe > 1.4 indicates that the source is a non-single star; however, this value is set to null for the 2-parameter sources in EDR3. As Lindegren et al. (2021) emphasized, both the ruwe and the excess noise quantify the disagreement between the Gaia observations and the best-fitting model. Fig. 12 shows that the ruwe of the sources in Catalog B is greater than that of the FRS sources, which means that our criteria for selecting binaries are effective. The ruwe of the sources in EB1 is significantly higher than that of the other samples, which shows that the excess noise and ruwe are consistent with each other. Therefore, ruwe is also a reliable indicator that can be used to select binary objects and may play an important role in our future releases.
Apart from SDSS DR16Q, there are many other reliable quasar catalogs, such as the Large Bright Quasar Survey (Hewett et al., 1995), the INT Wide Angle Survey (Sharp et al., 2001), and the quasars from the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST). With our method of selecting astrometrically abnormal quasars, a large number of quasar pair candidates will be selected.
Identification of Quasar Pairs and Lensed Quasars
In Section 2 we described the selection criteria and obtained two samples of abnormal quasars, denoted Catalogs A and B. It is also interesting to explore the nature of these sources: whether they are quasar pairs, lensing images, or contain jet-like structures. With high-resolution observations from the Hubble Space Telescope (HST), we could first resolve the general structures of these abnormal quasars; further analysis of the corresponding spectra and light curves will be needed for a detailed classification.
About a dozen sources in Catalogs A and B have been identified as lensed quasars in the literature. For example, Fig. 13 shows the SDSS image (left) and the HST optical image (right) of the lensed quasar SDSS 111816.94+074558.2 in Catalog A, already reported by Impey et al. (1998) and Weymann et al. (1980). The higher resolution of HST compared with SDSS is a clear advantage when resolving the structure of these abnormal quasars. We also note that two similar pioneering works by Shen et al. (2021) and Chen et al. (2022) reported 2 and 43 AGN pair candidates, respectively, using the method of varstrometry (i.e., excess astrometric noise). Among the sources selected in this paper, 8 sources in Catalog A, 2 sources in Catalog B, and 5 sources in EB1 have been reported as AGN pair candidates in their papers. However, both the target samples and the selection criteria differ somewhat, and we will compare our results with theirs in future work.
CONCLUSION
By cross-matching with other quasar catalogs, Gaia EDR3 provides high-precision astrometric data for a large number of quasars, and a list of 1,614,173 quasar candidates has been obtained, which can be used to establish the celestial reference frame in the optical band. However, during the selection process, many spectroscopically identified quasars showed abnormal astrometric characteristics, such as significant parallaxes and large proper motions. These quasars may exhibit astrometric jitter detectable with Gaia data. Therefore, with several Gaia parameters describing the goodness of the data fitting, quasars with abnormal astrometric characteristics can be selected. The selected quasars form a group of quasar pair candidates.
Figure 13. SDSS (left) and HST optical image (right) of the lensed quasar SDSS 111816.94+074558.2. For the SDSS image, the resolution is 0.015 arcsec per pixel with 512 pixels in total, while for the HST image the FOV is 7.516″ × 6.359″; the 1″ scale is marked by the light blue line segment in the right panel.
We propose a series of criteria for selecting abnormal quasars based on Gaia astrometric data. Gaia EDR3 contains 344 million 2-parameter sources, which have only positional parameters; our criteria therefore do not rely on complete parallax and proper-motion data, but on the goodness of fit to the observational data. With these criteria, two catalogs are obtained. Catalog A contains 155 SDSS quasars with more than one Gaia source matched within a 1″ radius. Catalog B contains 44 SDSS quasars whose Gaia observations are significantly different from the best-fitting standard astrometric model. The percentages of extended sources in Catalogs A and B are 41.3% and 56.8%, respectively, and in both catalogs the mean redshift of the extended sources is significantly smaller than that of the point-like sources.
Although some of the SDSS images show obvious double-star features, there are still many sources in our catalogs for which it is not possible to determine whether they are quasar pairs at the resolution of SDSS. Therefore, more high-resolution observations are needed to determine the fraction of quasar pairs in the catalogs. In addition to SDSS DR16Q, many other quasar catalogs need to be checked, so further efforts are needed to improve the selection criteria.
There are 127 common sources between the GEAC quasars and our Catalog A and B, which should be excluded from GEAC for the purpose of establishing a reference frame. Besides, hundreds of common sources between Extended Catalogs of Catalog B and GEAC also show large position errors. The aspects of morphology and astrometric variability were crucial for selecting the quasars to form the reference frame (Ma et al., 2009). A perturbation in the disk of the host galaxy can cause a significant offset to the photocenter in the Gaia observations (Popović et al., 2012). Andrei et al. (2012) used the morphological indexes in the Gaia Initial QSO Catalog to indicate such influences. The host detection and characterization for about 1 million quasars will be released in the future release of Gaia DR3. It might be interesting in the future to see if there is any correlation between the morphological parameters and the astrometric parameters mentioned in the current paper. | 6,402.2 | 2022-03-07T00:00:00.000 | [
"Physics"
] |
Temsirolimus Inhibits Proliferation and Migration in Retinal Pigment Epithelial and Endothelial Cells via mTOR Inhibition and Decreases VEGF and PDGF Expression
Due to their high prevalence, retinal vascular diseases including age related macular degeneration (AMD), retinal vein occlusions (RVO), diabetic retinopathy (DR) and diabetic macular edema have been major therapeutic targets over the last years. The pathogenesis of these diseases is complex and not yet fully understood. However, increased proliferation, migration and angiogenesis are characteristic cellular features in almost every retinal vascular disease. The introduction of vascular endothelial growth factor (VEGF) binding intravitreal treatment strategies has led to great advances in the therapy of these diseases. While the predominant part of affected patients benefits from the specific binding of VEGF by administering an anti-VEGF antibody into the vitreous cavity, a small number of non-responders exist, and alternative or additional therapeutic strategies should therefore be evaluated. The mammalian target of rapamycin (mTOR) is a central signaling pathway that eventually triggers up-regulation of cellular proliferation, migration and survival and has been identified to play a key role in angiogenesis. In the present study we show that both retinal pigment epithelial (RPE) cells as well as human umbilical vein endothelial cells (HUVEC) are inhibited in proliferation and migration after treatment with temsirolimus at non-toxic concentrations. Previous studies suggest that the production of VEGF, platelet derived growth factor (PDGF) and other important cytokines is triggered not only by hypoxia but also by mTOR itself. Our results indicate that temsirolimus decreases VEGF and PDGF expression significantly on RNA and protein levels. We therefore believe that the mTOR inhibitor temsirolimus might be a promising drug in the future, and it seems worthwhile to evaluate complementary therapeutic effects with anti-VEGF drugs for patients not profiting from anti-VEGF monotherapy alone.
Introduction
Age related macular degeneration (AMD), macular edema (ME) following retinal vein occlusions (RVO) and diabetic macular edema (DME) as a complication of diabetic retinopathy (DR) are major reasons for severe vision loss and legal blindness in the western world [1][2][3]. With an expected increase in patients suffering from diabetes mellitus, as well as the rising mean age of the population over the next decades, even more patients will be affected in the future, resulting in a tremendous socio-economic burden [4].
The pathogenesis of retinal vascular disease is complex and not yet fully understood [5,6]. However, cellular proliferation and vascular leakage are found in AMD, ME following RVO, as well as in DME, resulting in pathologic fluid accumulation in the macular area. Moreover, (sub)retinal neovascularization is a severe complication of AMD, and evidence exists that this event is mainly triggered by proliferation and migration of endothelial and retinal pigment epithelial cells [7]. In addition, it is well known that besides a mechanical defect in different structures, such as the endothelium in retinal vessels and the outer blood retinal barrier formed by the retinal pigment epithelium and Bruch's membrane, a common underlying mechanism includes the increased production of angiogenic and inflammatory components due to increased hypoxic retinal conditions [8][9][10][11]. Although it is sufficiently clear that the three diseases differ in their development and pathological sequence, it has been shown that hypoxia and the deregulation of a large number of growth factors such as platelet derived growth factor (PDGF), placenta derived growth factor (PlGF), connective tissue growth factor (CTGF) and particularly vascular endothelial growth factor (VEGF) play a crucial role in their etiopathologies [12][13][14][15]. Based on this knowledge, selective intravitreal inhibition of VEGF has become a safe and effective primary treatment approach in the therapy of neovascular AMD, ME following RVO, and DME, as well as in several other ocular conditions characterized by macular edema. Today, the intravitreal injection of VEGF antibodies such as bevacizumab and ranibizumab or fusion proteins such as aflibercept [16,17] has therefore become common clinical practice. Large studies (e.g. RESOLVE, MARINA or VIEW) could clearly show that a significant number of patients suffering from macular edema improved in terms of edema resolution as well as visual acuity [18][19][20]. However, a number of patients do not improve with this treatment, and for these cases alternative treatment options should be investigated.
Hypoxia plays a central role in the development and progression of retinal vascular diseases. A number of studies have clearly linked hypoxia with several retinal diseases characterized by retinal ischemia and subsequent pathological angiogenesis involving the up-regulation of VEGF and other VEGF-related polypeptides such as PlGF [21][22][23]. The binding of these VEGF-family proteins to their receptor tyrosine kinases VEGFR-1, VEGFR-2 and VEGFR-3 is followed by numerous downstream effects leading to proliferation, cell migration, vascular permeability and endothelial inflammation. The strongest angiogenic effect of VEGF in vivo is linked to its binding to VEGFR-2 [24]. One of the most important signaling pathways downstream of VEGFR-2 is PI3K/Akt [25,26], which subsequently activates the mammalian target of rapamycin (mTOR) [27]. mTOR is a serine-threonine protein kinase that plays an important role in signal transduction pathways controlling cell growth and angiogenesis and has been a target of many cancer therapy approaches in vitro and in vivo [28,29]. Temsirolimus is an ester derivative of sirolimus with enhanced pharmaceutical properties, including improved stability and solubility, and binds to a cytoplasmic protein; this complex binds and inhibits mTOR and has proven its potency in several clinical and laboratory studies [30][31][32][33].
We therefore evaluated, in an in-vitro setting, the effect of temsirolimus on cellular events associated with retinal vascular diseases such as neovascular AMD and DR, using retinal pigment epithelial cells and human umbilical vein endothelial cells as a common model for endothelial cells in vision research.
Ethics
The methods of securing human tissue were humane, complied with the Declaration of Helsinki, and were approved by the local ethics committee and institutional review board at Ludwig-Maximilians-University in Munich, Germany (file number AKIRB-20123). Samples were procured from our tissue bank (''Bayerische Gewebebank Bavarian Tissue Banking GmbH'', http://www.klinikum.uni-muenchen.de/Augenklinik-und-Poliklinik/de/forschung-lehre/arbeitsgruppen/hornhautbank/index.html) and written informed consent was obtained before donor preparation from the donor or the next of kin.
Human RPE Cell Culture
Primary RPE cells from four human donors (aged 34, 47, 51, and 59 years old, obtained 3-10 h postmortem) without any history of eye disease were obtained from the Eye Bank of Ludwig Maximilian University and were prepared as previously described. [34] Dulbecco modified Eagle medium (DMEM; Biochrom, Berlin, Germany) supplemented with 10% fetal calf serum (FCS; Biochrom) was used as the cell culture medium. Primary human RPE cells of passage 3 to 7 were used for experiments.
HUVEC Culture
Cultures of human umbilical vein endothelial cells (HUVEC) were purchased from Promocell (Heidelberg, Germany) and cultured according to the manufacturer's instructions. Endothelial cell growth medium (ECGM; Promocell) with 5% FCS (Biochrom) was used as the cell culture medium. The experiments were conducted on HUVECs of passages 3 to 5.
Treatment of Cell Cultures
For methylthiotetrazole (MTT; 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide) assays, cells were seeded in 6-well plates. For all other cell culture experiments, HUVECs or RPE cells were seeded in 35-mm tissue culture dishes and cultured to confluence in darkness. None of the cell culture substrates were coated, apart from the polycarbonate membrane used in the Boyden chamber assay, which was coated with fibronectin.
As a result of viability testing and additional molecular pretesting, temsirolimus at a final concentration of 0.05 mg/mL was chosen for all further experiments regarding growth factor expression.
Hypoxic Stimulation
For hypoxia experiments, primary human RPE cells and HUVECs were grown to confluence. Human RPE cells were then washed and incubated overnight in serum-free medium. For hypoxic stimulation, dishes were placed in an incubator with 1% O2, 5% CO2, and 94% N2 in a humidified atmosphere for 24 h. The controls were kept in a humidified atmosphere of 5% CO2 in air at 37 °C for the same time period.
MTT Assay
The MTT assay was used to determine the cell survival rate. The MTT assay, which is well established for the assessment of cell viability, was performed as described by Mosmann, with some modifications [34,35]. The medium was removed, the cells were washed with phosphate-buffered saline (PBS), and 1000 µL MTT solution (1.5 mL MTT stock, 2 mg/mL in PBS, plus 28.5 mL DMEM) was added to each well. RPE cells or HUVECs were incubated at 37 °C for 1 h. The formazan crystals that formed were dissolved by the addition of 1000 µL DMSO per well. Absorption was measured with a scanning multiwell spectrophotometer at 550 nm (Molecular Probes, Eugene, OR, USA). The results are expressed as the mean percentage of proliferation relative to the control, with the control set to 100% to allow easier interpretation of the results.
Evaluation of Cellular Viability
To investigate the effects of different concentrations of temsirolimus on the viability of HUVECs and RPE cells, the cells were brought to confluence on uncoated well plates, kept under serum-free conditions for 24 h, and then treated with temsirolimus at concentrations of 0.005, 0.05, 0.5, 1, 2.5, 5, 7.5, 10, 12.5, 15, 17.5, and 20 mg/mL for another 24 h. Then the MTT assay was performed. The control cells were HUVECs or RPE cells of the same passage.
Cell Proliferation
To determine the effect of temsirolimus on cellular proliferation, HUVEC and RPE cells were seeded into plain 6-well plates at a density of 3 × 10^4 cells/well with medium containing 10% fetal calf serum. After four hours, final concentrations of 0.005, 0.05 and 0.5 mg/mL of temsirolimus and two controls were established in the culture plate. Cells were then incubated under standard cell culture conditions for 48 hrs. Then MTT solution (1.5 mL of 2 mg/mL in PBS plus 28.5 mL MEM) was added, followed by 1 hour incubation at 37 °C, before 500 µL DMSO was added to dissolve the formazan crystals. Absorption was measured with a scanning multiwell spectrophotometer at 550 nm (Molecular Probes, Eugene, OR, USA). The control cells were HUVECs or RPE cells of the same passage. Controls were set to 100% to simplify reading of the results.
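As a small illustration of this normalisation (not the authors' analysis code), absorbance readings can be expressed as a percentage of the mean control absorbance; the readings below are invented placeholders.

```python
import numpy as np

def percent_of_control(treated_od, control_od):
    """Express A550 readings as a percentage of the mean control reading."""
    return 100.0 * np.asarray(treated_od, dtype=float) / np.mean(control_od)

control_od = [0.82, 0.79, 0.85]   # hypothetical A550 values, untreated control
treated_od = [0.51, 0.48, 0.55]   # hypothetical A550 values, one temsirolimus dose
print(percent_of_control(treated_od, control_od).round(1))  # -> [62.2 58.5 67.1]
```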
Migration Boyden Chamber Assay
Migration was observed using microchemotaxis chambers (Neuroprobe, Gaithersburg, Maryland, USA) as described by Boyden, with a few modifications [36]. The chamber consists of a lower and an upper part divided by a polycarbonate membrane (Nuclepore) with pores of 8 µm in diameter. In order to induce chemotaxis, 200 µL medium containing 1 ng/mL VEGF-165 (VEGF-165, Sigma-Aldrich, St Louis, Missouri, USA) was filled into the lower half of the chamber and covered with the polycarbonate membrane previously coated with fibronectin (coating time 24 hrs at 8 °C at a concentration of 2 µg/cm²). The upper half contained 1 × 10^5 HUVEC or RPE cells in 750 µL of the corresponding cell culture medium containing 5% FCS. Five chambers were prepared with three different concentrations of temsirolimus (0.005 mg/mL, 0.05 mg/mL, 0.5 mg/mL) and two controls containing the same amount of solvent. Cells were allowed to migrate for 24 hrs under standard cell culture conditions. Then all non-migrated cells on top of the membrane were removed with a cotton bud, and the remaining cells on the bottom side were fixed with methanol for 10 mins. After fixation, a hematoxylin and eosin (HE) stain was used to make cells visible at a magnification of ×200 using an inverted phase-contrast microscope (Leica Microsystems GmbH, Solms, Germany). Cells in five representative areas were photographed using a digital camera (Leica, Solms, Germany) and manually counted using the Leica LAS particle counting tool. The assay was repeated four times on four different days. The results are presented as % of control, with controls set to 100% in order to simplify result interpretation.
RNA Isolation and Real-Time Polymerase Chain Reaction
Total RNA was isolated from HUVEC and RPE cells by the guanidinium thiocyanate-phenol-chloroform extraction method (Stratagene, Heidelberg, Germany) and quantification of VEGF (VEGF-A) and PDGF (PDGF-BB) mRNA was performed with specific primers using a LightCycler System (Roche Diagnostics, Mannheim, Germany) as described in our previous work [37].
Primers and probes were detected with ProbeFinder 2.04. Table 1 lists the primers used for RT-PCR. The level of VEGF and PDGF mRNA was determined as the relative ratio (RR), which was calculated by dividing the level of the respective mRNA by the level of the 18S rRNA housekeeping gene in the same samples. The ratios are expressed as decimals.
Protein Extraction and Western Blotting
HUVEC and RPE cells grown on 35-mm tissue culture dishes were washed twice with ice-cold PBS, collected, and lysed in RIPA cell lysis buffer. After centrifugation for 30 min at 19,000 × g in a cold microfuge (5810R; Eppendorf, Hamburg, Germany), the supernatant was transferred to fresh tubes and stored at −70 °C for future use. The protein content was measured by the bicinchoninic acid protein assay (Pierce, Rockford, IL, USA). Denatured proteins (1–2 µg) were separated by electrophoresis under reducing conditions using a 5% sodium dodecyl sulfate (SDS)-polyacrylamide stacking gel and a 12% SDS-polyacrylamide separating gel, transferred by semidry blotting onto a polyvinyl difluoride membrane (Roche), and probed with a mouse anti-VEGF antibody or a mouse anti-PDGF antibody (both Santa Cruz Biotechnology Inc., Santa Cruz, CA, USA), as described previously [34]. Chemiluminescence was detected with an imager (LAS-1000; RayTest) and recorded as light units. Exposure times ranged between 1 and 10 min. Quantification was performed on a computer (AIDA software; RayTest). Western blot lanes were manually labeled (''profile'') and analyzed with the one-dimensional (1D) analysis mode of AIDA. Brightness was detected by the program and divided by the height of the profile. The resulting graphs were manually marked at the peak and the volume below was calculated as the integral by the software. An even protein load in each lane was confirmed by staining of the polyvinyl difluoride membranes with Coomassie Brilliant Blue after the blotting procedure.
Detection of VEGF and PDGF Secretion by HUVEC and RPE Cells
Cultures of HUVECs and RPE cells were grown to confluence and treated as described above. Levels of VEGF and PDGF in the culture supernatants were determined by enzyme-linked immunosorbent assays (ELISAs). The supernatants were collected after 24 h, and the levels of VEGF-A and PDGF were quantified using a VEGF (VEGF-A) or PDGF (PDGF-BB) Quantikine ELISA Kit (R & D Systems, Wiesbaden, Germany) according to the manufacturer's instructions.
Statistical Analyses
All data were analyzed with SPSS 13.0 for Windows (SPSS, Chicago, IL, USA). For all statistical tests, P < 0.05 was considered significant. Results of the MTT assay for evaluating proliferation are presented as mean (SD) units of absorbance. Ten individual samples per group were measured in triplicate. For the toxicity study, the results of the assay are presented as mean values displayed as bars with standard deviation compared to the control. Results from VEGF and PDGF ELISAs are presented as mean (SD) ratios of each tested probe, normalized to the control.
Results of the RT-PCR are presented as mean (SD) ratios of the investigated mRNA and 18S rRNA. All experiments were performed at least in triplicate and repeated three times.
For statistical analyses a Wilcoxon test with a correction for multiple testing was used.
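For illustration only, such a comparison could be carried out as in the following Python sketch; the Holm correction and the example values are assumptions, since the correction method and raw data are not reported here.

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

control = np.full(10, 100.0)            # control normalised to 100%
treated = {                              # hypothetical %-of-control values
    "temsirolimus 0.005": [78, 81, 74, 80, 83, 76, 79, 82, 77, 75],
    "temsirolimus 0.05":  [65, 70, 62, 68, 66, 71, 64, 69, 63, 67],
    "temsirolimus 0.5":   [52, 55, 49, 57, 53, 50, 56, 54, 51, 48],
}

# Paired Wilcoxon signed-rank test of each group against its control,
# followed by a multiple-testing correction (Holm, assumed here).
pvals = [wilcoxon(np.asarray(v, float), control).pvalue for v in treated.values()]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for name, p, sig in zip(treated, p_adj, reject):
    print(f"{name}: adjusted p = {p:.4f}, significant = {sig}")
```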
Effects of Various Concentrations of Temsirolimus on Viability, Proliferation and Migration of HUVECs and Primary RPE Cells
HUVEC and RPE Cell Viability. To exclude the possibility that the detected molecular interactions of temsirolimus in HUVEC and RPE cells depend on toxic effects, the viability of cells was investigated after exposure to a range of temsirolimus concentrations. Results of the MTT assay showed no significant decrease in cellular viability in the cell cultures after 24 h of exposure to temsirolimus at concentrations between 0.005 and 7.5 mg/mL for HUVECs and 0.005 and 12.5 mg/mL for RPE cells (Fig. 1, Wilcoxon test with a correction for multiple testing). In HUVEC, concentrations of 10 mg/mL and higher showed an increasing reduction of cell viability; however, >60% of the cells still showed activity. It must be noted, though, that an earlier study came to slightly different results: Zhang et al. investigated the effect of temsirolimus and found signs of toxicity to HUVECs at a concentration of 0.1 mg/mL [38]. They used this concentration since another group had shown that sirolimus significantly inhibited proliferation in five different cell lines at this concentration; however, no toxicity was observed in that study at the same concentration for these cells [39,40]. Moreover, the intravenous dose of temsirolimus (Torisel, Pfizer, NY, USA) in the treatment of renal cell carcinoma is significantly higher (30 mg). Therefore, further studies are necessary to clarify this issue.
Beginning with a concentration of 12.5 mg/mL, cellular viability was reduced by >60% in a dose dependent manner. In RPE cells, concentrations from 15 mg/mL temsirolimus onwards led to a decrease in viability. However, even at the highest tested concentration of 20 mg/mL, >60% of cells were still active (Fig. 1).
Proliferation of HUVEC and RPE Cells.
Cell proliferation is one of the key factors involved in neovascularisation. Therefore, we assessed HUVEC and RPE cell proliferation under different concentrations of temsirolimus (0.005, 0.05, and 0.5 mg/mL). At all tested temsirolimus concentrations and in both tested cell lines, HUVEC and RPE cells, a significant reduction in optical density was observed compared to the untreated control (Wilcoxon test, p < 0.05) (Fig. 2A and 2B).
Migration of HUVEC and RPE Cells. The effects of temsirolimus on cellular migration were analyzed in vitro with a chemotactic approach. Chemotactic migration towards VEGF-165, the VEGF isoform most prevalent in the eye, was evaluated using the Boyden chamber. In both evaluated cell lines, HUVEC and RPE cells, a significant, dose-dependent reduction in cell migration was observed for all temsirolimus concentrations (0.005, 0.05, and 0.5 mg/mL) compared to the control (Fig. 3A, 3B and 3C). Of note, there was also a significant difference between the three tested temsirolimus concentrations, with the strongest effect at the highest concentration of 0.5 mg/mL temsirolimus.
Hypoxia Induced VEGF and PDGF Protein and mRNA Expression in HUVECs and RPE Cells
Expression of VEGF and PDGF mRNA. Exposure to hypoxia for 24 h led to a significant increase in VEGF and PDGF mRNA expression in both HUVECs and RPE cells. Treatment of both cell lines with 0.05 mg/mL temsirolimus significantly reduced this hypoxia-induced increase in VEGF (Fig. 4A and 4B) and PDGF mRNA (Fig. 5A and 5B) after 24 h.
Expression of VEGF and PDGF Protein
Western Blotting from Cell Lysates. To verify that the hypoxia-induced increase of VEGF and PDGF mRNA transcription translates into increased protein synthesis, whole cellular protein extracts of untreated control cells, cells exposed to hypoxia without temsirolimus treatment, and cells treated with 0.05 mg/mL temsirolimus and exposed to hypoxia were analysed by western blotting. Under hypoxic conditions, a marked increase in both VEGF and PDGF expression was detected compared with the control. In HUVECs, a 4.1x and 3.5x increase was observed for VEGF and PDGF, respectively, while for RPE cells this increase was even slightly stronger, with 4.4x for VEGF and 3.8x for PDGF compared to the control. However, when HUVEC and RPE cells were treated with 0.05 mg/mL temsirolimus and then exposed to hypoxia, expression of both VEGF and PDGF protein was noticeably lower than after exposure to hypoxia alone. In the presence of temsirolimus the increase of VEGF and PDGF was reduced to 1.2x and 0.9x in HUVECs and to 1.9x and 2.0x in RPE cells (Figure 6).
Detection of VEGF and PDGF Secretion in HUVECs and RPE Cells. To investigate the effect of temsirolimus on VEGF and PDGF (PDGF-BB) secretion after exposure to hypoxia, levels of VEGF (Fig. 7A and 7B) and PDGF (Fig. 8A and 8B) in cell culture supernatants were quantified using ELISA after 24 h. A significant increase in the secretion of VEGF and PDGF by cultured HUVECs and RPE cells was noted after 24 h of exposure to hypoxia. Treatment of either cell line with 0.05 mg/mL temsirolimus reduced the secretion of VEGF and PDGF significantly compared to controls.
Discussion
Without any doubt, intravitreal inhibition of VEGF has revolutionized the therapy of a number of retinal vascular diseases [41,42]. The selective inhibition of VEGF and its isoforms is one of the major strengths of this treatment approach, as it is accompanied by only very few side effects [43]. It is, however, conceivable that a broader therapeutic approach involving the inhibition of key signaling pathways might be of interest, especially for those patients who do not sufficiently benefit from the inhibition of a single molecule such as VEGF [44].
The mammalian target of rapamycin (mTOR) is one of the key signaling complexes that have been identified in conditions linked with angiogenic features such as increased cellular proliferation and migration [45], and evidence has been provided that mTOR inhibition could be a suitable therapeutic strategy for neovascular, degenerative retinal diseases [46].
We therefore investigated whether temsirolimus, a rapalog (rapamycin analog), has inhibitory effects on RPE cells and human umbilical vein endothelial cells (HUVEC) at non-toxic concentrations in an in-vitro setting. As a first important result of our experiments we could clearly show that temsirolimus concentrations between 0.005 mg/mL and 7.5 mg/mL for HUVECs and between 0.005 mg/mL and 12.5 mg/mL for RPE cells did not decrease viability (Figure 1). Bearing in mind that these results are only in vitro, our data point towards a broad therapeutic range of the substance and indicate that the concentrations used for our further experiments (0.005 mg/mL - 0.5 mg/mL) can be regarded as safe and non-toxic. mTOR is found in two protein complexes: mTOR complex 1 and 2 (mTORC1 and mTORC2). Once a growth factor such as VEGF binds to its receptor, particularly VEGFR-2, a signaling cascade is started, leading to activation of PI3K and subsequent activation of protein kinase B (Akt) [29]. The effect of this is an activation of mTORC1 and a consecutive up-regulation of the eukaryotic initiation factor 4E-binding protein (4E-BP1) and the ribosomal protein S6 kinase (S6K) [47]. The phosphorylation of these proteins eventually leads to increased levels of several proteins that are important in cellular proliferation, migration and angiogenesis in RPE and vascular endothelial cells [48][49][50][51][52], crucial cellular features in the pathogenesis of retinal vascular diseases such as neovascular AMD or proliferative DR [53]. In our experiments, a significant dose-dependent inhibition of proliferation (Figure 2) as well as migration (Figure 3) was seen in RPE cells as well as in HUVECs after treatment with three different concentrations of temsirolimus, suggesting that a significant reduction of these properties can be reached by directly targeting mTOR without specifically blocking VEGF or other involved cytokines. However, in previous investigations using cancer cell lines, the inhibition of mTOR not only reduced cell proliferation and migration but also decreased the activity of PI3K/Akt itself, suggesting susceptibility of even upstream mediators [54,55]. The pathogenesis of ME in retinal vascular disease is multifactorial and mainly results from a breakdown of the blood-retina barrier (BRB) separating the neurosensory retina from the vascular components of the eye. A disruption of the BRB involves an abnormal inflow of fluid into the neurosensory tissue that often causes residual accumulation of fluid in the intraretinal layers of the macula [56].
It is well known that hypoxia is one of the leading causes of increased expression of VEGF and its isoforms, leading to the breakdown of the BRB [5,57]. VEGF is expressed by all vascularized retinal tissues, and there is clear evidence that, in response to hypoxia, augmented expression of VEGF occurs in retinal pericytes, RPE and endothelial cells [58,59].
We therefore induced elevated VEGF and PDGF levels in both RPE and endothelial cells by exposing them to hypoxic conditions and analyzed a possible effect of temsirolimus on VEGF and PDGF levels at the RNA and protein level. Our results indicate that a significant reduction of VEGF and PDGF levels can be achieved by intervening with temsirolimus at the level of mTOR, a downstream mediator of VEGFR-2 and VEGF, following the increased growth factor expression induced by hypoxia in both cell types (Figures 4 and 5 and Table S1). We furthermore showed that this reduction eventually led to a decrease in protein levels for both cytokines, qualitatively (Figure 6) as well as quantitatively (Figure 7).
Mizukami et al. previously postulated, however, that the production of VEGF and its receptors is induced not only by hypoxia (via stabilization of HIF-1), cellular receptors such as EGFR and IGF-IR, and other growth factors (e.g. PDGF), but may also be triggered by PI3K and its downstream signaling components, including mTOR itself [60][61][62][63][64]. The strong decrease of VEGF expression in our experiments might therefore also support the idea that mTOR is directly involved in the expression of VEGF and other growth factors.
Our experiments clearly show that temsirolimus has a broad therapeutic range and even low concentrations were capable of inhibiting proliferation and migration significantly in RPE and endothelial cells. Our results additionally suggest that temsirolimus is able to directly decrease the expression of growth factors such as VEGF and PDGF both on RNA and protein levels.
These findings are particularly interesting, as the combination of an anti-VEGF drug and temsirolimus might have complementary therapeutic effects for patients who do not benefit from repeated intravitreal anti-VEGF injections. This combination, however, has not been studied extensively, and therefore no valid statement regarding synergistic effects with anti-VEGF drugs can be made. Due to the very low concentrations needed to achieve significant effects, and given currently ongoing studies evaluating the intravitreal biocompatibility of sirolimus [65], another mTOR inhibitor of the same family, we believe that the toxic side effects described for oral administration in the treatment of different cancers would be less likely [66].
Temsirolimus is a well-tolerated and approved drug for renal cell carcinoma [67]. Our results provide first evidence that inhibiting mTOR, a central signaling pathway in the complex cascade underlying neovascularization and angiogenesis, with temsirolimus leads to a decrease in the production of the key cytokine VEGF as well as other growth factors such as PDGF, and also reduces cellular events typically associated with angiogenesis.
Our very early results on the effect of mTOR inhibition in this context are promising, although they are solely in vitro, with typical limitations including the investigation of mono cell cultures and cells that are not in their proper physiological state (Figure S1). We nonetheless believe that temsirolimus might have a role in the treatment of ocular neovascular diseases in the future, and further investigations should be pursued to gain additional information regarding clinical outcome as well as biocompatibility. | 6,202 | 2014-02-26T00:00:00.000 | [
"Biology",
"Medicine"
] |
A software framework for FCC studies: status and plans
The Future Circular Collider (FCC) is designed to provide unprecedented luminosity and centre-of-mass energies. The physics reach and potential of the different FCC options (e+e−, pp, ep) have been studied and published in dedicated Conceptual Design Reports (CDRs) at the end of 2018. Conceptual detector designs have been developed for these studies and tested with a mixture of fast and full simulations. The investigations for all options have been conducted using a common software framework called FCCSW. In this paper, after summarising the improvements implemented in FCCSW to achieve the results included in the CDRs, we present the current development plans to support the continuation of the physics potential and detector concept optimisation studies in view of future strategic decisions, in particular for the electron-positron machine.
Introduction
The Future Circular Collider (FCC) is a project designed to ultimately provide pp collisions at the largest centre-of-mass energy foreseeable, which is currently of the order of 100 TeV. Following a scenario similar to the LEP/LHC machines, the full FCC integrated program [1] features in sequence a high-luminosity e+e− electroweak, flavour, Higgs, and top factory, followed by a ≥ 100 TeV pp collider. The two FCC phases are commonly referred to as FCC-ee and FCC-hh, respectively.
A Conceptual Design Report (CDR) for the FCC project was prepared at the end of 2018 and submitted as input to the 2019 Update of the European Strategy for Particle Physics. In view of the CDR, a dedicated software framework called FCCSW has been developed and used to study the physics potential of the proposed collider. For the preparation of the CDR, a total of about 250 TB of data have been simulated.
This paper reports on the status and evolution plans of FCCSW. It is organised as follows. In the next section we describe the driving considerations behind FCCSW and its core components, including the event data model and the geometry description. In the following section we describe the main components of the computing workflow, including Monte Carlo generation, simulation, reconstruction and analysis. The software infrastructure which served the CDR preparation is presented in Sec. 4. Future developments and the relation with ongoing HEP-wide common projects are discussed in Sec. 5. Finally, concluding remarks are presented in Sec. 6.
The FCC Core Software
FCCSW is the result of a process started in 2014, just after the FCC project kick-off. From the beginning the aim was to have one software stack supporting all the collider phases (ee, hh, eh) and detector concepts, which brought the challenge of supporting a broad range of event complexity. The framework had to support physics and detector studies with parametrised, fast, and full simulation, also allowing a mixture of the three. It had to be modular enough to allow for evolution, so that component parts could be improved separately. Finally, it had to support multiple analysis paradigms, with C++ and Python on an equal footing. The strategy to meet these challenging requirements has been to adopt existing solutions from the LHC and to follow ongoing common projects, such as those developed under the AIDA EU project [2], in particular, as mentioned below, in streamlining the event data model.
The result of this process is schematically shown in Figure 1.
The Event Data Model and PODIO
The first FCC studies focused mostly on the FCC-hh phase. The event data models of the closest LHC detectors, ATLAS and CMS, were considered overly complex and potentially unfavourable for I/O performance, in particular in the long term. FCC decided to adopt a different approach, following a project called PODIO [3] that was then starting within AIDA-2020. PODIO is an Event Data Model toolkit allowing the automatic creation of Plain Old Data (POD) structures starting from a high-level description of the required types. Automation allows consistent and homogeneous implementations to be generated, minimising mistakes. The separation into high-level and low-level layers provides support for different backends; keeping the persistent layer as PODs keeps the memory model simple, enabling fast I/O and efficient vectorisation.
FCC is the first project to use PODIO and the experience has been overall quite successful. However, the tool still lacks some features that are essential for a running experiment, such as schema evolution, optimisations of the memory layout and of the I/O performance. In view of the adoption of PODIO in the context of the ongoing common software stack initiative (see Sect. 5), the development of these features in PODIO has been proposed as a follow-up project in the context of the new edition of AIDA.
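The toy sketch below only mimics this idea of generating consistent data classes from a high-level description; it is not PODIO itself (which generates C++ classes and their I/O from YAML descriptions), and the data-model entries shown are invented.

```python
from dataclasses import make_dataclass

# Hypothetical high-level description of two datatypes (names are invented).
datamodel = {
    "MCParticle": [("px", float), ("py", float), ("pz", float), ("pdg_id", int)],
    "TrackerHit": [("cell_id", int), ("energy", float), ("time", float)],
}

# Generate one plain data class per entry, so all types stay consistent.
generated = {name: make_dataclass(name, fields) for name, fields in datamodel.items()}

hit = generated["TrackerHit"](cell_id=42, energy=0.013, time=1.2)
print(hit)  # TrackerHit(cell_id=42, energy=0.013, time=1.2)
```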
Detector Description: DD4hep
All detector geometries are implemented using the detector description toolkit DD4hep [4,5]. This allows for runtime configuration of detector dimensions, materials and parameters, read from "compact files" in xml format. Standalone compact files are provided for each detector subsystem, and are combined in a "master" compact file to provide a full description of the detector option. The extent to which parameters can be changed at runtime depends on the specific detector implementations, but the overall dimensions can be adapted for almost all models to allow them to fit a given geometrical context. This approach also allows the "plug-and-play" interchange of existing detector subsystems; for example, the same tracking system can be evaluated with different solutions for the calorimeter technology.
The underlying software framework: Gaudi
FCCSW is based on Gaudi [6], a framework developed originally for the LHCb experiment, but which has found more widespread use in other experiments including ATLAS. Recent efforts to adapt Gaudi for concurrent data processing [7] make it a good candidate for use with evolving computing hardware. Figure 2 shows the way Gaudi is used in the FCC software; Sec. 3 gives a more detailed account of the components used.
Monte Carlo generation and MDI
Monte Carlo Generators are provided by the GenSer project, and mostly used standalone, using common data formats to produce files readable by FCCSW. An exception is Pythia8 [8], the main tool to simulate hard scatter events and hadronisation for FCC-hh. Gaudi components for Pythia8, conversion to and from HepMC [9] and user-configurable single particle input to simulation have been added to FCCSW. All of these components can make use of additional tools to smear vertex positions with any given spatial and temporal distribution.
The different FCC options have different sources of background: FCC-hh will have extreme levels of pileup collisions (⟨µ⟩ = 1000), while the main concern for FCC-ee is beam backgrounds. FCCSW provides components that can handle and overlay such background data, eliminating the need to generate and simulate large events on the fly.
Simulation
In order to address the different needs within the design study, different simulation libraries are integrated as components in FCCSW. Delphes [10] is used for fast, parametrised simulations of the detector response. A Delphes card parametrising the response of FCC-hh is included with FCC software.
Geant4 is used for detailed studies of the detector response. Full detector simulation is computationally expensive, but the FCC software infrastructure allows mixing of full and fast simulation based on detector regions [11]. Gaudi components to specify user actions, physics lists and outputs are also available in FCCSW.
Reconstruction
Reconstruction software tends to be experiment-specific, and avoiding code duplication for the different FCC options poses some challenges. Nevertheless, efforts have been made to use and develop generic reconstruction libraries in FCC. "A Common Tracking Software" (ACTS) [12], developed within ATLAS, is one such library that encapsulates tracking code that used to be experiment-specific. Similarly, TrickTrack makes part of the CMS pixel track seeding code available in an experiment-independent library. For calorimeter reconstruction, a sliding-window and a topo-clustering algorithm have been implemented, adapted from ATLAS approaches.
Analysis
The analysis and n-tuple production of simulated data was initially done using the Heppy framework [13], written in Python. While flexible, the performance of the Python implementation in processing the large volume of data required was problematic. The RDataFrame classes, part of the ROOT library [14], however, provide a strong alternative concept, combining the performance of compiled C++ code with the flexibility of an interpreter. In order to use RDataFrame for FCC analyses, the analysers and I/O routines of Heppy were ported to C++.
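As a minimal illustration of the declarative RDataFrame model (shown here via PyROOT, not the actual FCC analysis code), assuming a placeholder input file, tree name "events" and branch "jet_pt":

```python
import ROOT

df = ROOT.RDataFrame("events", "fcc_sample.root")   # placeholder file name
hist = (df.Filter("jet_pt.size() > 1", "at least two jets")
          .Define("leading_jet_pt", "jet_pt[0]")
          .Histo1D(("leading_jet_pt", ";p_{T} [GeV];events", 100, 0.0, 500.0),
                   "leading_jet_pt"))
# The event loop runs lazily, only when the result is first accessed:
print("selected events:", hist.GetEntries())
```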
Software infrastructure
A central experiment software stack provides many benefits to developers and users, as it reduces the overhead of much necessary infrastructure. All FCC packages follow best practices established by the HEP Software Foundation (HSF), and make use of modern CMake build configurations and C++ features where applicable. Releases and nightly builds are built with the package manager Spack and deployed in a dedicated CVMFS repository (/cvmfs/fcc.cern.ch). A dedicated website (https://cern.ch/fccsw) lists resources, technical documentation and tutorials for users.
Future developments and Key4hep
The publication of the CDR marks a new phase for the FCC design studies, for which the current organisation of software developments has provided a good foundation. Further studies leading up to a technical design report will require more detailed and comprehensive simulation and reconstruction workflows. In order to enhance collaborations and efficiently use R&D resources, a new common project called Key4HEP has been agreed upon with other future collider projects, including CLIC, ILC and CEPC. This aims to establish a common, ready-to-use ("turnkey") software stack to which all participating studies can contribute [15]. Figure 3 shows the proposed re-organisation of packages in experiment-specific, inter-experiment and completely generic parts.
Conclusions
FCC software has effectively supported the studies leading up to the CDR, emphasising collaboration and re-use of existing packages. With the start of a new phase for FCC, more detailed studies, in particular for e+e−, will be required. To this end, developers will closely follow and participate in new common activities under the Key4HEP organisation.
"Physics",
"Computer Science"
] |
Cytokine activity of the non-catalytic EMAP-2-like domain of mammalian tyrosyl-tRNA synthetase
Cytokine activity of the isolated recombinant C-terminal domain of mammalian tyrosyl-tRNA synthetase (TyrRS), which is homologous to a tumor-derived cytokine, endothelial and monocyte activating polypeptide (EMAP-2), has been studied. It was shown that the C-domain induced a ~2-fold increase of monocyte chemotaxis. This effect is comparable with the values of chemotaxis induction by the EMAP-2 cytokine and proEMAP-2. The truncated catalytic form of bovine TyrRS (2 × 39 kDa) has no effect on monocyte chemotaxis. The C-domain of TyrRS also induced a ~3-fold increase in tissue factor activity in cultured human endothelial cells. A hypothesis is put forward that the isolated C-domain of mammalian TyrRS may be released upon proteolytic cleavage of TyrRS by some protease activated under stress conditions, and functions as a mediator via signal transduction through interaction with a putative EMAP-2 receptor.
liver both as the full-length protein and as a proteolytically modified active enzyme form (2 × 39 kDa), which lacks its C-terminal polypeptide extension [9, 10]. Both distinct molecular forms of TyrRS displayed similar catalytic constants in the tRNA aminoacylation reaction [9, 10]. Moreover, the dispensable C-terminal polypeptide extension of bovine TyrRS made a significant contribution to the non-specific affinity of this enzyme for RNA [11].
A novel putative cytokine, EMAP-2, modulates a variety of properties of endothelial cells, monocytes and leukocytes in vitro, and induces an acute inflammatory reaction in vivo [12, 16]. Based on the sequence similarity of the mammalian TyrRS C-domain and EMAP-2, we have put forward the hypothesis that the isolated C-domain may also display cytokine activity similar to the EMAP-2 cytokine [8]. In order to verify this hypothesis, we have cloned and expressed the isolated non-catalytic C-terminal domain of bovine TyrRS [17].
In this work we have discovered several cytokine-like activities of the isolated C-domain of TyrRS in vitro, which are compared with the properties of the recombinant EMAP-2 cytokine.
Materials and Methods. Protein isolation. Cloning and bacterial expression of the C-terminal domain of bovine TyrRS were performed as described earlier [17]. The BamHI cDNA fragment encoding residues D322-S528 of bovine TyrRS was cloned into the pET15h vector for bacterial expression, and the recombinant protein was expressed in BL21(DE3) Escherichia coli cells harboring the pEYCD2 plasmid. The supernatant was loaded onto a Ni-NTA column and the 6His-tagged recombinant protein was eluted with 300 mM imidazole. Recombinant human EMAP-2 and proEMAP-2 were isolated and characterized as previously described [18].
Monocyte chemotaxis. Freshly-isolated monocytes were suspended in RPMI-1640 medium with 10% foetal calf serum at 2 × 10⁶ cells per ml. Solutions of proteins (TyrRS C-domain, 39 kDa TyrRS and EMAP-2) or the chemotactic peptide formylmethionyl-leucyl-phenylalanine (fMLP) (31 μl) at the indicated concentrations were placed in the bottom wells of ChemTx micro-chemotaxis plates (Neuro Probe, Inc.) in triplicate. The filter was placed on top of the solution in such a way as to provide fluid continuity between the upper and lower chambers. An aliquot of monocyte suspension (27 μl) was then placed on top of the filter above each well, and the plates were incubated with lids on for 1.5 hr at 37 °C in air/5% CO2. After incubation, cells binding to the membrane were fixed by the addition of 15 μl of ice-cold 20% formaldehyde in phosphate-buffered saline, and those that had migrated to the underside of the membrane were counted with a haemocytometer.
Procoagulant activity assay. Human umbilical vein endothelial cells (HUVEC) were isolated essentially by the method of Jaffe et al. [19]. Confluent endothelial monolayers at passages 2 to 3 were used to assess tissue factor-dependent procoagulant activity [20]. Coagulation was initiated by the addition of 100 μl of 30 mM CaCl2 solution at 37 °C, and the time for visible fibrin strand/gel formation was determined. Procoagulant activity of endothelial monolayers was expressed as tissue factor equivalents (TFE, pg/10⁶ cells) [20].
Results and Discussion. Purification of the recombinant C-domain of TyrRS, expressed in E. coli cells after induction by IPTG and containing a 6His-tag, was performed by metal-chelation chromatography. According to gel-electrophoresis data, the homogeneity of the recombinant protein was about 95% [17].
Since the EMAP-2 cytokine has been shown to induce migration of monocytes and polymorphonuclear leukocytes in vitro [12,18], we examined the effects of the TyrRS C-domain on monocyte migration using a micro-chemotaxis chamber assay. The addition of the TyrRS C-domain to the lower chamber led to a ~2-fold enhancement of monocyte migration (Fig. 1). Monocyte migration was induced by the C-domain in the range 1 pM-10 nM (between 1 pM and 100 pM this increase was significant, p < 0.05) at levels slightly greater than those induced by recombinant EMAP-2 over a similar concentration range, but not as high as those achieved with 10 nM of the control chemotactic peptide fMLP. In contrast to the isolated C-domain, the truncated 39 kDa form of mammalian TyrRS, which lacks this domain, did not affect monocyte migration (Fig. 1).
A defining biological activity of EMAP-2 is its ability to induce tissue factor-dependent procoagulant activity on the surface of endothelial cells in vitro, and furthermore to potentiate procoagulant activity induced by tumour necrosis factor (TNF) in vitro [12]. Therefore, we studied the abilities of the TyrRS C-domain and the EMAP-2 polypeptide to induce tissue factor-mediated procoagulant activity on the surface of cultured HUVEC. The exposure of endothelial cells to the isolated C-domain for 4 hr at concentrations of 1-100 pM led to a dose-dependent increase in cell surface tissue factor between 0.36 and 0.77 pg TFE/10⁶ cells on endothelial cells (Fig. 2, a). EMAP-2 also induced the enhancement of tissue factor activity in a dose-dependent manner (Fig. 2, b), but maximum activity was observed at a lower mediator concentration of about 1 pM.
Our data suggest that the isolated C-domain of mammalian TyrRS reveals cytokine-like activities, both as a chemotactic factor for monocytes and as an inducer of tissue factor expression on human endothelial cells, similar to the EMAP-2 cytokine. We propose, therefore, that the C-domain of TyrRS could potentially mimic the action of the EMAP-2 cytokine through interactions with complementary sites on the specific receptor.
Moreover, the cellular effects of the TyrRS C-domain that we have observed, in particular the induction of tissue factor activity, cannot be fully explained by an interaction of its N-terminal region with a receptor. As indicated earlier, the chemotactic and tissue factor-inducing activities of EMAP-2 are believed to reside within different regions of the molecule [14]. Since we have also observed tissue factor induction in response to the TyrRS C-domain, it is possible that other functional regions of this C-terminal module, apart from its N-terminal region, could be involved in its cytokine activities.
If the EMAP-2-like domain of TyrRS is released after proteolytic cleavage at the loop connecting the catalytic 39 kDa enzyme core and this C-domain, it could be involved as a mediator in signal transduction through interactions with a putative EMAP-2 receptor. The nature of the EMAP-2 receptor is not known, although cross-linking studies have demonstrated binding of EMAP-2-derived peptides to a 73 kDa protein associated with the monocyte cell surface [21], suggesting the existence of a distinct receptor.
Recently, it was shown that in cultured cells post-translational processing of proEMAP-2 into the mature cytokine EMAP-2 occurred coincidentally with apoptotic programmed cell death [22]. It is well known that during apoptosis some proteases, e.g. the interleukin-1 converting enzyme (ICE, caspase), are activated [22]. It is possible to propose that mammalian TyrRS could be cleaved during the apoptotic proteolytic cascade or another protease activation process.
It is interesting to note that another component of the protein biosynthesis machinery, the auxiliary p43 protein of the multi-synthetase complex, is proposed to be a precursor of the EMAP-2 cytokine [23].
Our results suggest a novel non-canonical function of mammalian aminoacyl-tRNA synthetases in higher eukaryotic cells, which may be associated with signal transduction. Furthermore, this function may only be expressed in conditions where cellular proteases are activated.
Acknowledgements.This work was supported by the Cancer Research Campaign and by a Wellcome Trust Travel Grant to A. I. K.
Fig. 1. Induction of monocyte migration by the recombinant C-domain of TyrRS and EMAP-2 proteins as studied by micro-chemotaxis chamber assay. Cell migration assays were performed as described under «Materials and Methods». Data are shown with standard deviations estimated relative to the medium-alone control. Chemotactic peptide fMLP was used as a positive control | 1,894.2 | 1999-03-20T00:00:00.000 | [
"Biology"
] |
The Cl--channel TMEM16A is involved in the generation of cochlear Ca2+ waves and promotes the refinement of auditory brainstem networks in mice
Before hearing onset (postnatal day 12 in mice), inner hair cells (IHCs) spontaneously fire action potentials, thereby driving pre-sensory activity in the ascending auditory pathway. The rate of IHC action potential bursts is modulated by inner supporting cells (ISCs) of Kölliker’s organ through the activity of the Ca2+-activated Cl--channel TMEM16A (ANO1). Here, we show that conditional deletion of Ano1 (Tmem16a) in mice disrupts Ca2+ waves within Kölliker’s organ, reduces the burst-firing activity and the frequency selectivity of auditory brainstem neurons in the medial nucleus of the trapezoid body (MNTB), and also impairs the functional refinement of MNTB projections to the lateral superior olive. These results reveal the importance of the activity of Kölliker’s organ for the refinement of central auditory connectivity. In addition, our study suggests the involvement of TMEM16A in the propagation of Ca2+ waves, which may also apply to other tissues expressing TMEM16A.
In the developing inner ear, non-sensory inner supporting cells (ISCs) form a transient epithelial structure known as Kölliker's organ (Hinojosa, 1977; Hou et al., 2019). ATP released from ISCs through connexin hemichannels (Mazzarda et al., 2020) activates purinergic receptors in a paracellular manner, leading to cell volume decrease of ISCs and cochlear Ca2+ transients (Babola et al., 2018; Tritsch et al., 2007). It was proposed that the Ca2+-activated Cl--channel TMEM16A, which is expressed in ISCs, might be the pacemaker for spontaneous cochlear activity (Yi et al., 2013). Indeed, spontaneous osmotic cell shrinkage was shown to be mediated by TMEM16A-dependent Cl- efflux, which forces K+ efflux from ISCs and thus the transient depolarization of IHCs (Yi et al., 2013). Thereby, bursting activity of nearby IHCs, which will later respond to similar sound frequencies, becomes synchronized (Eckrich et al., 2018; Harrus et al., 2018; Wang et al., 2015), establishing a possible scenario for tonotopic map refinement in central auditory structures.
Using Ano1 conditional knockout mice, we show that TMEM16A not only modulates ISC volume but also drives the amplification of localized Ca 2+ transients to propagating Ca 2+ waves within the cochlea. Prior to hearing onset, knockout mice show reduced burst firing of neurons in the medial nucleus of the trapezoid body (MNTB), downstream of SGNs and neurons of the cochlear nucleus. Moreover, the frequency selectivity of individual MNTB neurons is diminished shortly after hearing onset (P14) pointing toward reduced refinement of auditory connections. Indeed, neurons from the lateral superior olive (LSO) received twice as many functional MNTB afferents in knockout mice compared to wildtype littermates. Taken together, these results suggest that the Ca 2+ -activated Cl -channel TMEM16A plays a significant role in the propagation of Ca 2+ waves and contributes to the refinement of auditory brainstem circuitries prior to hearing onset.
TMEM16A is required for the generation of cochlear Ca 2+ waves
The Ca2+-activated Cl--channel TMEM16A is expressed in ISCs of Kölliker's organ (for a schematic representation of a part of the organ of Corti, see Figure 1A; for a differential interference contrast [DIC] image, see Figure 1B; Wang et al., 2015; Yi et al., 2013) and is activated by an ATP-induced increase in Ca2+ concentration (Wang et al., 2015; Yi et al., 2013). The opening of TMEM16A triggers Cl- efflux, followed by K+ efflux and cell shrinkage. The ensuing rise of extracellular K+ drives electrical activity of immature IHCs (Wang et al., 2015). To assess the role of TMEM16A in the developing cochlea and the impact of TMEM16A-dependent cochlear signaling on the development of auditory brainstem nuclei, we disrupted Ano1 in the inner ear. This was achieved by mating our floxed Ano1 line (Ano1 fl/fl) (Heinze et al., 2014) with a line expressing Cre-recombinase under the control of the Pax2 promoter (Ohyama and Groves, 2004), which is active in the otic placode (Lawoko-Kerali et al., 2002). This line is subsequently referred to as cKO mice. Ano1 deletion was confirmed by immunohistochemistry (Figure 1-figure supplement 1A) and Western blot analysis (Figure 1-figure supplement 1B). Importantly, organs of Corti of cKO mice showed no obvious morphological defects. The development of the tectorial membrane and the morphology of the hair cells appeared normal before hearing onset (P6), at hearing onset (P12), or in the weeks thereafter (3 and 6 weeks after birth) (Figure 1-figure supplement 1C and D), indicating that TMEM16A and the TMEM16A-dependent activity of Kölliker's organ are not essential for the morphological development of the organ of Corti.
The online version of this article includes the following source data and figure supplement(s) for figure 1: Source data 1. Source data for Figure 1. Video 1. TMEM16A is required for the generation of spontaneous volume changes in Kölliker's organ (KÖ). Time-lapse imaging (one image per second) reveals spontaneous volume changes of inner supporting cells in a wildtype mouse cochlea (P7), which propagate in a wave-like manner up and down the cochlear turn. In contrast, volume changes are almost absent in the cochlea isolated from a cKO littermate (the bottom of the video shows erythrocytes moving in a blood vessel). Images were processed using a custom-written ImageJ macro and the ImageJ software. Each frame was subtracted from an average of five preceding frames to highlight the changes in light scattering caused by the changes in cell volume. The video (seven images per second) shows the top view of an area from an isolated cochlear turn. KÖ, Kölliker's organ; IHC, inner hair cells.
https://elifesciences.org/articles/72251/figures#video1 Video 2. TMEM16A is required for the propagation of cochlear Ca 2+ waves. Time-lapse imaging (one image per second) reveals spontaneous Ca 2+ signals in the inner supporting cells that propagate up and down the cochlear turn in a wildtype mouse cochlea (P7). In contrast, Ca 2+ waves are reduced to local Ca 2+ events in the cochlea of a cKO littermate. Images were processed using a custom-written ImageJ macro and the ImageJ software. Each frame was subtracted from an average of five preceding frames to highlight the changes in Ca 2+ concentration. The video (seven images per second) shows the top view of an area from an isolated cochlea turn. KÖ, Kölliker's organ; IHC, inner hair cells.
TMEM16A-dependent cochlear activity modulates the burst-firing pattern of MNTB neurons
By increasing the K+ concentration, TMEM16A-dependent activity of Kölliker's organ leads to the generation of Ca2+ action potentials in IHCs. This is followed by Ca2+-dependent exocytosis of glutamate at the IHC synapse, which drives burst firing of action potentials in SGNs (Wang et al., 2015). The bursting activity is then relayed to central auditory neurons (Babola et al., 2018; Figure 2A shows a schematic representation of the auditory brainstem) and is believed to be important for the proper development of synaptic contacts and tonotopic maps (Clause et al., 2014; Clause et al., 2017).
Since TMEM16A is neither expressed in SGNs nor in CN, MNTB, and LSO neurons in wildtype mice, and expression in the brainstem was limited to vascular smooth muscle cells (Figure 3-figure supplement 1), we primarily attribute the severely altered burst-firing activity to impaired Kölliker's organ activity in cKO mice.
[Figure 2 legend, partial: spontaneous discharges from individual MNTB neurons recorded before hearing onset (P8) in wildtype and cKO littermates. Dot-plot graphs show the respective interspike interval (ISI) distributions for 100 s of spontaneous discharge activity, with a 10 s period of the original spike trains shown on top of each raster. The wildtype MNTB neuron shows prominent burst firing, which is either strongly reduced (D) or absent (C, E) in cKO mice. (F) Quantification of spike bursting patterns by the coefficient of variation of ISIs yields significant differences between wildtype (n = 14) and cKO units (n = 15) (Mann-Whitney rank-sum test: p=0.006); boxplots indicate medians and 25% and 75% quartiles. Source data 2. Source data for Figure 2H.]
Frequency selectivity of MNTB neurons is reduced in cKO mice
To test whether the changes in burst-firing patterns of cKO MNTB neurons have consequences on neuronal function after hearing onset, auditory brainstem responses (ABRs) were measured at P13-14. cKO mice had similar ABR thresholds in response to stimulation with clicks or tone bursts at 6, 12, and 24 kHz as wildtypes ( Figure 4A and B, Supplementary file 1b; n = 6 WT; n = 7 cKO). In both genotypes, ABRs to click stimuli of various intensities (40-100 dB) mainly consisted of three waves (labeled I-III), which were comparable in latency and amplitude (Figure 4-figure supplement 1, Supplementary file 1c and d). These data indicate that cKO mice have a normal sensitivity to sound stimulation and normal temporal precision of the spiking response to sound onset in the lower auditory pathway.
Next, we assessed whether the disruption of TMEM16A-dependent cochlear activity affects the frequency tuning properties of MNTB neurons. Therefore, the frequency response areas (FRAs) of single MNTB neurons were acquired in four cKO and four wildtype littermates using in vivo electrophysiology and tone burst stimulation. Juxtacellular recordings were performed at P14, that is, shortly after the onset of hearing, to avoid possible compensatory effects of acoustically driven activity on neuron responsiveness (Werthat et al., 2008; Bogart et al., 2011). The characteristic frequencies (CFs) of the recorded MNTB neurons, that is, the frequency at which the neuron is excited with the lowest intensity, ranged between 5.3 and 30.5 kHz and did not differ between the two groups (mean CF ± SEM: WT = 15.4 ± 1.1 kHz [n = 25]; cKO = 16.2 ± 1.3 kHz [n = 32]; p=0.51, Student's t-test). MNTB neurons recorded from wildtype mice showed the typical V-shaped FRAs with acoustically driven excitation sharply narrowing toward lower intensities (Figure 4C). The filter characteristics of the FRAs were quantified by the Q n -value, a measure of the unit's sharpness of tuning, which is calculated as the ratio of CF to bandwidth at 10, 20, and 30 dB above threshold (e.g., Q 10 = CF/BW 10 with BW 10 = bandwidth at 10 dB above threshold). For wildtype mice, the median [25%, 75% quartiles] was Q 10 = 5.5 [4.7, 9.2], Q 20 = 4.6 [3.8, 6.8], and Q 30 = 3.6 [3.4, 5] (Figure 4E). Neurons recorded from the cKO littermates (n = 32) had significantly broader excitatory response areas, that is, lower frequency selectivity, as indicated by significantly lower Q-factors at all three above-threshold levels (Figure 4D and F). Overall, CF threshold levels tended to show a larger variability in knockout compared to wildtype mice (cKO: -7.3 dB SPL to 48.3 dB SPL; WT: -8.4 dB SPL to 18.7 dB SPL). Furthermore, the sound-evoked firing properties of the MNTB neurons in cKO mice were also affected. The rate-level functions at CF showed significantly lower firing rates for sound intensities at and above 10 dB SPL of tone-burst stimulation in comparison to wildtype littermates (effect of strain p<0.001, effect of intensity p<0.001, interaction strain × intensity p=0.006; two-way ANOVA) (Figure 4G). The maximal firing rate of individual neurons in response to any CF/intensity combination was markedly diminished in cKO mice (mean FR ± SEM: WT = 263.6 ± 12.4 action potentials/s, n = 25; cKO = 223.9 ± 11.1 action potentials/s, n = 32; p=0.015, t-test) (Figure 4H). Apparently, MNTB neurons in cKO mice cannot achieve the firing rates that are typically observed in wildtype littermates. Taken together, these data demonstrate that frequency selectivity and sensitivity to acoustic stimulation in single MNTB neurons are impaired upon disruption of Ano1 in the cochlea.
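To make the tuning-sharpness metric concrete, the following Python sketch computes Q10/Q20/Q30 from a tuning curve that has already been reduced to one response threshold per tested frequency. The function name and input format are illustrative and not taken from the authors' analysis code.

```python
import numpy as np

def q_factors(freqs_khz, thresholds_db, levels=(10, 20, 30)):
    """Q_n = CF / bandwidth at n dB above the CF threshold.

    freqs_khz     : tested tone frequencies (1-D array, ascending)
    thresholds_db : response threshold of the unit at each frequency
    """
    cf_idx = int(np.argmin(thresholds_db))   # CF = frequency with the lowest threshold
    cf_khz = freqs_khz[cf_idx]
    cf_thr = thresholds_db[cf_idx]
    q = {}
    for n in levels:
        # frequencies that still drive the unit at (CF threshold + n) dB
        inside = freqs_khz[thresholds_db <= cf_thr + n]
        bw = inside.max() - inside.min()
        q[n] = cf_khz / bw if bw > 0 else np.nan
    return q
```

A sharply tuned unit (narrow bandwidth) yields a high Q-value, which is why the broader FRAs of cKO neurons translate into the lower Q-factors reported above.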
Since spontaneous burst activity of auditory neurons might promote the targeting and refinement of their projections as suggested for the developing visual system (Torborg et al., 2005), we assessed the synaptic and topographic refinement of MNTB and LSO neurons. MNTB neurons were activated via photolysis of caged glutamate. A 405 nm continuous diode laser was used for illumination. Laser flashes were delivered through a light guide of 20 µm diameter, which produced circular spots of 20 µm diameter in the focal plane. Laser pulses of 10 ms duration were delivered with 6 s delay time between uncaging sites. The number of functional inputs on individual LSO neurons was assessed by whole-cell current-clamp recordings. In a pilot experiment, the distance was measured at which glutamate uncaging produces action potentials in MNTB neurons. The mean distance (± SEM) was 19.2 ± 0.8 µm mediolaterally and 20.0 ± 1.3 µm dorsoventrally, indicating that glutamate uncaging was locally restricted to a surface area of 20 × 20 µm 2 and that light scattering that might influence the amount of uncaged glutamate in the tissue was negligible ( Figure 5-figure supplement 1A-C).
In cKO mice, the MNTB input width, defined as the maximal distance of responsive uncaging sites along the mediolateral axis, was twice as large (Figure 5A, B and D; mean input width ± SEM: WT = 18% ± 3% of the MNTB's mediolateral length; cKO = 36% ± 5% of the MNTB's mediolateral length; p=0.0073, Student's t-test). The large increase in MNTB input width in cKO mice reveals that LSO neurons located in the medial high-frequency region of the nucleus not only received input from neurons of the high-frequency (medial) region of the MNTB, but also from neurons in the mid-nuclear region tuned to lower frequencies.
For additional examples comparing the size and width of MNTB input areas between wildtype and cKO mice, see Figure 5-figure supplement 1D and E. These data strongly point toward an impairment of the tonotopic refinement of MNTB-to-LSO projections in cKO mice.
Discussion
Kölliker's organ was identified as the origin of pre-sensory cochlear activity, which was suggested to serve the refinement of auditory circuits (Tritsch et al., 2007). The Ca 2+ -activated Cl --channel TMEM16A, which is expressed in Kölliker's organ shortly before hearing onset, mediates spontaneous osmotic cell shrinkage of ISCs, which forces K + efflux and thus the transient depolarization of IHCs (Wang et al., 2015;Yi et al., 2013). Thereby, bursting activity of nearby IHCs, which will later respond to similar sound frequencies, becomes synchronized (Eckrich et al., 2018;Harrus et al., 2018;Wang et al., 2015), establishing a possible scenario for tonotopic map refinement (Johnson et al., 2011;Tritsch et al., 2007). Here, we demonstrate that disruption of TMEM16A in the inner ear impairs prehearing cochlear activity and spontaneous burst firing as well as sensitivity and frequency selectivity of sound-evoked firing of MNTB neurons. Finally, we show that these alterations in prehearing auditory activity severely impair the refinement of the MNTB-LSO pathway.
Spontaneous activity of Kölliker's organ is characterized by recurrent ATP-dependent changes in cell volume and Ca2+ transients of ISCs. The decreases in cell volume reflect the passive movement of water associated with Cl- efflux upon activation of the Ca2+-activated Cl--channel TMEM16A (Wang et al., 2015; Yi et al., 2013). In WT mice, we detected spontaneous Ca2+ waves and volume changes of ISCs as reported previously (Tritsch et al., 2007). In cKO mice, however, volume changes were absent and Ca2+ waves were reduced to local Ca2+ transients. Changes in cell volume can serve as a mechanical stimulus that activates robust and large ATP release (Praetorius and Leipziger, 2009) and, in turn, ATP release through connexin hemichannels is attributed to the propagation of Ca2+ waves through the cochlear epithelium (Anselmi et al., 2008; Schütz et al., 2010; Mammano, 2013; Jovanovic and Milenkovic, 2020; Mazzarda et al., 2020). Therefore, TMEM16A-dependent volume changes may amplify ATP release via connexin hemichannels and thus promote the propagation of local Ca2+ transients as Ca2+ waves. ATP-mediated amplification of the Ca2+ signals may further activate Ca2+-activated TMEM16A Cl- channels in a positive feedback mechanism. First support for this model comes from our observation that the purinergic receptor blocker suramin abolished Ca2+ waves in WT cochleae (as also shown by Tritsch et al., 2007) but did not affect the size of Ca2+ transients in cKO mice.
To analyze the impact of the disruption of TMEM16A on the auditory pathway, we performed in vivo juxtacellular recordings in the MNTB of cKO mice before and shortly after hearing onset. Burst-firing pattern of MNTB neurons was often completely absent or severely diminished in P9 cKO mice. MNTB neurons from P14 cKO mice also showed markedly diminished firing rates, indicating that they cannot respond to moderate-to-high-intensity stimulation with sufficient firing rates that are normally observed in wildtype littermates. As high sound intensities are coded by high firing rates, interaural-level differences might be affected and hence also sound source localization in cKO mice. Furthermore, individual MNTB neurons recorded from P14 cKO mice responded to sound stimuli from a much larger frequency range compared to wildtype mice. The reduced frequency selectivity could result from superfluous functional projections from globular bushy cells (GBCs) onto MNTB neurons. At P2, MNTB neurons receive on average 9.3 inputs from GBCs, which after a short period of intense competition and pruning end up in one calyx of Held, usually by P9 (Holcomb et al., 2013). Because the elimination of excessive inputs is probably activity-dependent (Holcomb et al., 2013), reduced bursting activity in cKO mice might result in MNTB neurons that receive more than one input and thus show broadened FRAs.
To test whether projection patterns between auditory nuclei in cKO mice are indeed altered, we examined MNTB projections to the LSO, a part of the sound localization pathway in which refinement has been well studied (Clause et al., 2014;Kim and Kandler, 2003;Müller et al., 2019). The necessary spatial resolution was achieved with a fast galvanometric photolysis system, which allowed a fivefold improvement in area resolution compared to fiber optic-based uncaging systems used in previous studies (Clause et al., 2014;Kim and Kandler, 2003;Kim and Kandler, 2010). In cKO mice, the number of functional connections of single LSO afferents (MNTB input area) and the respective mediolaterally covered area, that is, the area along the tonotopic axis (MNTB input width), was twice as large as in wildtype. Since it has been reported that LSO neurons normally lose about 50% of their afferents during the first two weeks of postnatal development of the mouse (Noh et al., 2010), the present results imply that silencing of synaptic connections was strongly diminished in cKO mice.
Processes that underlie sensory network refinement are well studied in the visual system: Before onset of vision, bursting activity of retinal ganglion cells leads to the refinement of their projections to lateral geniculate nucleus neurons (Wong et al., 1993; Penn et al., 1998). This process depends on the duration of bursts, inter-burst intervals, and the synchronization of bursts (Stellwagen and Shatz, 2002; Torborg et al., 2005; Butts et al., 2007), and was proposed to follow Hebbian plasticity rules (Hebb, 1949). Similar mechanisms also apply to the auditory system, where neighboring IHCs show synchronized bursting activity before the onset of hearing (Tritsch et al., 2007). Notably, the firing patterns are less synchronized in cKO mice since simultaneous K+ release is not triggered in the absence of TMEM16A-mediated Cl- efflux (Wang et al., 2015). Because synchronized bursting activity of neighboring IHCs (Tritsch et al., 2007) is faithfully relayed to the brainstem, it is likely that LSO neurons receive desynchronized MNTB inputs in cKO mice. In WT mice, GABA spillover from MNTB axon terminals can lead to the excitation of nearby synapses via pre- and extrasynaptic GABAA receptors (Weisz et al., 2016; Fischer et al., 2019) and thus likely contributes to the synchronization of MNTB inputs. In cKO mice, however, the shorter bursting will diminish GABA spillover, thus further compromising the synchronization of MNTB inputs. Hence, only in WT mice can tonotopically similar axons that overlap in the LSO be simultaneously active and get strengthened, possibly via glycinergic LTP (Bach and Kandler, 2020), whereas tonotopically distinct axons that have less overlap in the LSO get silenced following associative plasticity rules (Hebb, 1949). This assumption is in agreement with the recent discovery that synchronization of Ca2+ signals of neighboring outer hair cells (OHCs) is required for proper refinement of afferent projections onto OHCs as well as the maturation of OHCs (Ceriani et al., 2019). The synchronization of OHC activity is also mediated by ATP release from supporting cells, namely Deiter's cells, during the first two postnatal weeks in the mouse (Jeng et al., 2020).
Besides spontaneous activity of ISCs, IHC-firing patterns are also modulated by cholinergic medial olivocochlear efferents (Glowatzki and Fuchs, 2000; Johnson et al., 2011; Sendin et al., 2014), which transiently innervate immature IHCs (Simmons et al., 1996a; Simmons et al., 1996b; Warr and Guinan, 1979). In mice that lack the α9 subunit of nicotinic acetylcholine receptors (α9 KO mice), this cholinergic input is disrupted (Clause et al., 2014). The fact that α9 KO mice show more subtle changes in the bursting behavior of MNTB neurons in vivo (Clause et al., 2014) than our cKO mice suggests a prevalent role of the activity of ISCs in shaping firing patterns of the early auditory pathway prior to the onset of hearing. Still, our results showed reduced synaptic refinement before hearing onset similar to that of α9 KO mice (Clause et al., 2014) and otoferlin KO mice, which exhibit drastically diminished spontaneous activity due to disrupted glutamate release from IHCs (Roux et al., 2006). While otoferlin KO mice have almost no discernible bursting activity, cKO mice showed a drastic reduction in the number of bursts. In contrast, the number of bursts was not changed in α9 KO mice. Firing rates within bursts did not differ between cKO and WT mice, but were 70-80% higher in α9 KO compared to WT mice. Overall firing rates, however, do not differ between the two KO models. Consequently, we infer that the overall firing rate and firing rates within bursts, as well as the number of bursts, do not influence the physiological refinement of the MNTB-LSO pathway. Notably, the duration of bursts was markedly reduced in both KO mice (50% in α9 KO and 56% in cKO mice). Average ISIs, however, were significantly longer in cKO and shorter in α9 KO mice. Thus, it seems that both subtle and drastic changes in the temporal pattern and/or the duration of bursts can lead to a severe disruption of tonotopic maps. These changes have permanent consequences since tonotopic refinement of MNTB-LSO projections does not continue after hearing onset (Walcher et al., 2011). Thus, α9 knockout mice display diminished sound localization and sound frequency processing after hearing onset as revealed by behavioral studies (Clause et al., 2017).
Whether TMEM16A-mediated prehearing activity has a similar impact on hearing after hearing onset remains to be determined. While ABR measurements in our study did not reveal gross alterations in signaling along the lower auditory pathway, our recordings from individual auditory brainstem neurons showed significantly elevated sound thresholds. We note that the discrepancy between statistically indistinguishable ABR thresholds and significantly elevated auditory thresholds of single MNTB neurons could arise for several reasons. First, by the nature of population vs. single-neuron responses, a direct correspondence cannot be expected. Secondly, we speculate that the poorer tonotopic refinement of the auditory circuitry might have contributed to the higher thresholds of MNTB neurons, involving cochlear excitation from IHCs off the peak of the traveling wave. However, other reasons cannot be excluded, and a trend toward higher ABR thresholds was apparent for 6 and 24 kHz tone bursts.
The elevated sound thresholds and reduced frequency selectivity that we observed in cKO mice at P14 could indicate impaired auditory processing after hearing onset. This is supported by findings showing that mutations in ion channels (hemichannels, gap junctions, and Ca2+ channels) involved in prehearing activity of Kölliker's organ are linked to congenital deafness. Recent publications have established a causal connection between the bursting patterns of the ascending auditory system and the tonotopic refinement of MNTB projections to the LSO (Clause et al., 2017; Müller et al., 2019), and our work supports this notion. Moreover, our results identify the propagation of Ca2+ waves in developing supporting cells as a peripheral mechanism contributing to the refinement of ascending circuits.
Mouse strains
Mouse care and usage were in accordance with the German animal protection laws and were approved by the local authorities (license numbers: 33.9-42502-04-11/0439; TVV 06/09 and TLV UKJ-17-006). Ano1 fl/fl mice were crossed with a Pax2 Cre mouse line (a gift from A. Groves, House Ear Institute, Los Angeles) to obtain Ano1 ear conditional knockout mice. Throughout the article, these mice are referred to as cKO mice. Information about the mouse strains including genotyping primers can be found in Heinze et al., 2014 and Ohyama and Groves, 2004. Ano2 knockout mice were a gift from T. Jentsch, Leibniz Institute for Molecular Pharmacology (Germany), and were described in Billig et al., 2011.
Western blot
Modioli including the SGNs were isolated from Ano2 knockout mice and their wildtype littermates (P5). For immunoblots, the tank blotting Mini-Protean Tetra system (Bio-Rad) and transfer buffer with methanol (25 mM Tris, 192 mM glycine) were used. 10 µg of proteins were separated in 5% acrylamide gels. Proteins were blotted on polyvinylidene membranes (Roti-PVDF, Roth). Protein transfer was controlled by staining with Ponceau S (0.2% Ponceau S, 3% acetic acid). Blocking was done in Tris buffered saline (TBS) supplemented with 0.05% Tween-20 and 5% dry milk powder. TMEM16B was detected using the same antibody as described in the immunohistochemistry section (1:1000). It was incubated overnight at 4°C in TBS-T with 1% dry milk powder. Washing was done with TBS/0.05% Tween-20. The secondary antibody (goat anti-guinea pig IgG antibody-HRP, Merck, 1:1000) was incubated at RT for 2 hr in TBS-T with 1% dry milk powder. Signals were visualized by chemiluminescence (Amersham ECL Prime Western Blotting Detection Reagent, GE Healthcare). Documentation was done with the camera system Stella 3200 (Raytest) and the Xstella software.
For DIC imaging, acutely isolated cochleae were transferred to an inverted microscope (Axio Observer.Z1, Zeiss). A time-lapse image series was taken for 20 min with a ×25 water-immersion objective. One image per second was acquired (1392 × 1040 px) using a CCD camera (Photometrics Cool Snap HQ 2 , Visitron) and the Metamorph imaging software (Visitron).
Ca2+ signals were visualized by Ca2+ imaging of cochleae that had been bath-loaded with 10 µM Fura-2 AM (Invitrogen) and 0.02% pluronic acid F-127 (Invitrogen) in ACSF at RT for 35-45 min. Loading was followed by another 30 min incubation in ACSF. Cochleae were imaged on an upright microscope (Axio Observer.Z1, Zeiss) using a ×25 water-immersion objective. Using the MetaFluor imaging software (Visitron), 400 time-lapse ratio images (1392 × 1040 px) were obtained at a rate of one image per second. To this end, pairs of images were acquired at alternating excitation wavelengths (340/380 nm). The emission was filtered at 500-530 nm.
Images acquired during DIC and Ca2+ imaging were processed using a custom-written ImageJ macro and the ImageJ software (NIH) deposited at https://github.com/alenameis/ANO1-hearingdevelopment.git (copy archived at swh:1:rev:4af9cb8bc927320653b556eab12000e675a61e12; Maul, 2022). Each frame (time point t_n) was subtracted from an average of the five preceding frames: F_diff(t_n) = F(t_n) − (1/5) Σ_{k=1..5} F(t_{n−k}). The mean areas and frequencies of volume changes and Ca2+ events were analyzed within a 10,000 µm² area of Kölliker's organ that was chosen from the center of the field of view. The frequencies of volume changes and Ca2+ events were analyzed by counting the number of events per second. The area of each event was measured in square micrometers using the Metamorph or MetaFluor imaging software for volume changes or Ca2+ signals, respectively.
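A minimal NumPy sketch of the frame-subtraction step is shown below; the published analysis used the ImageJ macro linked above, so the array layout, the float conversion, and the function name here are assumptions made only for illustration.

```python
import numpy as np

def highlight_changes(stack, n_avg=5):
    """Subtract the average of the n_avg preceding frames from each frame,
    so that only changes in light scattering or fluorescence remain."""
    stack = stack.astype(np.float32)          # stack shape: (frames, height, width)
    out = np.zeros_like(stack)
    for t in range(n_avg, stack.shape[0]):
        out[t] = stack[t] - stack[t - n_avg:t].mean(axis=0)
    return out
```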
To be regarded as an event, volume changes had to match the following two criteria: first, events had to show an increase in DIC contrast above 10% of the baseline level. Baseline contrast levels were measured from an area of the cochlea that did not show volume changes (i.e., an area outside of Kölliker's organ and the IHC region). Second, areas with changes in DIC contrast had to be larger than the surface area of one ISC.
To be regarded as an event, Ca 2+ signals had to match the following two criteria: first, events had to show an increase in fluorescence intensity greater than 10% of the baseline level. Baseline fluorescence levels were measured from an area of the cochlea that did not show Ca 2+ events (i.e., an area outside of Kölliker's organ and the IHC region). Second, areas with changes in fluorescence had to be larger than the surface area of one ISC. Videos were generated with the ImageJ software (seven images per second) and annotated using Premiere Pro (Adobe).
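The two event criteria described above (signal more than 10% above baseline, connected area larger than one ISC) could be implemented along the following lines. The 10% threshold is taken from the text, while the ISC surface area, the pixel scale, and the SciPy-based connected-component labeling are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def detect_events(frame, baseline, isc_area_um2, um2_per_px, rel_thresh=0.10):
    """Return connected regions exceeding baseline by more than rel_thresh whose
    area is larger than the surface area of one inner supporting cell (ISC)."""
    mask = frame > baseline * (1.0 + rel_thresh)
    labels, n_regions = ndimage.label(mask)
    events = []
    for i in range(1, n_regions + 1):
        area_um2 = np.count_nonzero(labels == i) * um2_per_px
        if area_um2 > isc_area_um2:
            events.append({"region": i, "area_um2": area_um2})
    return events
```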
In vitro electrophysiological recording and functional mapping
The brain was removed from mice (P9-11) and the forebrain and parts of the cerebellum were dissected to isolate the brainstem. Coronal brainstem slices (300-400 µm) were cut on a vibrating microtome (Leica VT1200S) in ice-cold dissection solution containing (in mM) 50 NaCl, 2.5 KCl, 1.25 NaH2PO4, 3 MgSO4, 0.1 CaCl2, 75 sucrose, 25 D-glucose, 25 NaHCO3, and 2 Na pyruvate (280 mOsmol, pH 7.2). The solution was oxygenated with 95% O2 and 5% CO2 during dissection. A single brain slice per animal was obtained, which contained the MNTB and LSO, to reach a maximum possible preservation of connections between these two nuclei. Slices recovered for 30 min at 36°C and for an additional 30 min at RT in oxygenated recording solution containing (in mM) 125 NaCl, 2.5 KCl, 1.25 NaH2PO4, 2 MgSO4, 2.5 CaCl2, 18 D-glucose, and 25 NaHCO3 (290 mOsmol, pH 7.2). For electrophysiological recordings, brain slices were transferred to a recording chamber, mounted on an upright microscope (Axio Examiner A.1, Zeiss), and perfused with oxygenated recording solution at a speed of 2 ml/min using a pressure-driven perfusion system (ALA-VM8, ALA Instruments). The LSO was identified using a digital camera (C10600 Orca R2, Hamamatsu). Recordings were made from neurons with bipolar morphology from the medial (high frequency) region of the LSO in order to obtain comparable results and because the highest degree of refinement was reported for this frequency region (Sanes and Siverls, 1991). Electrodes had a resistance of 4-5 MΩ (Biomedical Instruments) when filled with intracellular solution containing (in mM) 54 potassium gluconate, 56 KCl, 1 MgCl2, 1 CaCl2, 5 sodium phosphocreatine, 10 HEPES, 11 EGTA, 0.3 NaGTP, and 2 MgATP (285 mOsmol, pH 7.2). Recordings were done in current-clamp in the whole-cell configuration at RT, and LSO neurons were held at −70 mV. Signals were acquired with an Axon CNS MultiClamp 700B amplifier (Molecular Devices), lowpass filtered (3 kHz), and digitized (3 kHz) with a Digidata 1440A analog-to-digital converter (Molecular Devices) using the pClamp10 software (Molecular Devices).
Photolysis of caged glutamate
Presynaptic MNTB neurons were localized by local uncaging of 120 µM MNI-caged glutamate trifluoroacetate (Femtonics) that was added to the recording solution. Uncaging of glutamate was controlled by a fast galvanometric photolysis system (UGA40, Rapp OptoElectronic) that was triggered via a TTL impulse. A 405 nm continuous diode laser (DL405, Rapp OptoElectronic) was used as a light source. Laser flashes were delivered through a light guide of 20 µm diameter (Rapp OptoElectronic) with a ×10 water-immersion objective that produced circular spots of 20 µm diameter in the focal plane. For mapping, a virtual grid was defined over (and around) the MNTB harboring 250-300 grid points (spaced 20 µm apart) using the UGA-40 control 1.1 software (Rapp OptoElectronic). Each grid point represented one uncaging site. Laser pulses of 10 ms duration were delivered with a 6 s delay time between uncaging sites. This resulted in reproducible LSO-MNTB input maps, confirmed by rescanning in some cases. Only one MNTB map was obtained per mouse.
Reconstruction and analysis of MNTB-LSO input maps
Events were detected with template search methods during a 20 ms window starting from the onset of the laser illumination. Peak amplitudes of the postsynaptic potentials (PSPs) were measured (Clampfit 10.2 software, Molecular Devices). Each uncaging site that evoked PSP amplitudes greater than 2 mV in the recorded LSO neuron was considered a functional MNTB-LSO connection and marked as a colored square according to the following color code: yellow = depolarizations > 2 mV, orange > 10 mV, red = action potential. Stimulation sites that evoked PSPs smaller than 2 mV or no PSPs were considered unresponsive MNTB areas and were left blank. The total area covered by all colored squares was defined as the MNTB input area. The analysis of the MNTB input area was done in pixels (1 pixel represents 1 µm²). The MNTB input area was calculated by multiplying the size of an uncaging site (~400 px) by the number of colored squares. The MNTB input area was normalized to the cross-sectional area of the MNTB. MNTB borders were determined by two investigators (one of them blind to the mouse group) using images taken from the slices during the experiment. The result represents the MNTB input area in percent. The MNTB input width was calculated by measuring the maximal distance of stimulation sites that evoked depolarizations greater than 10 mV along the mediolateral axis. This distance was normalized to the mediolateral length of each MNTB. The result expresses the input width in percent along the tonotopic axis of the MNTB. On rare occasions, responses could be elicited from uncaging sites slightly ventral or dorsal to the defined MNTB boundaries in mice of both genotypes. These sites were included in the analysis.
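The input-map quantification just described (functional connection if the PSP exceeds 2 mV; input area normalized to the MNTB cross-section; input width measured over sites with PSPs above 10 mV and normalized to the mediolateral length) can be summarized in a short sketch. Variable names and the tabular input format are assumptions; the original analysis was done in Clampfit and on the raw maps.

```python
import numpy as np

def input_map_metrics(psp_mv, site_x_um, site_area_px, mntb_area_px, mntb_ml_length_um):
    """psp_mv    : peak PSP amplitude evoked from each uncaging site (mV, 1-D array)
       site_x_um : mediolateral coordinate of each uncaging site (µm)
       1 px corresponds to 1 µm² in the maps described in the text."""
    functional = psp_mv > 2.0                 # "colored squares": PSPs > 2 mV
    input_area_pct = functional.sum() * site_area_px / mntb_area_px * 100.0

    strong = psp_mv > 10.0                    # width measured over sites with PSPs > 10 mV
    width_um = site_x_um[strong].max() - site_x_um[strong].min() if strong.any() else 0.0
    input_width_pct = width_um / mntb_ml_length_um * 100.0
    return input_area_pct, input_width_pct
```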
Spatial resolution of glutamate uncaging
The spatial resolution was assessed by direct current-clamp whole-cell recordings from MNTB neurons. The holding potential was -70 mV and the electrode resistance was 3-4 MΩ when filled with intracellular solution containing (in mM) 54 potassium gluconate, 56 KCl, 1 MgCl2, 1 CaCl2, 5 sodium phosphocreatine, 10 HEPES, 11 EGTA, 0.3 NaGTP, and 2 MgATP (285 mOsmol, pH 7.2). A virtual 10 × 10 grid was defined over the recorded MNTB neuron with uncaging points spaced 10 μm apart. The 'action potential-eliciting distance' was defined as the maximal distance from the center of the uncaging site to the center of the cell body at which uncaging produced action potentials in the recorded MNTB neuron. The action potential-eliciting distance was measured for the mediolateral and dorsoventral directions of the MNTB. We used the determined action potential-eliciting distance to define the uncaging parameters for the mapping experiment.
In vivo recordings in the MNTB
Juxtacellular single-unit recordings were performed in mice before or right after hearing onset (P8, P14). Animals were anesthetized with an initial intraperitoneal injection of a mixture of ketamine hydrochloride (0.1 mg/g body weight; Ketavet, Pfizer) and xylazine hydrochloride (5 µg/g body weight; Rompun, Bayer). Throughout recording sessions, anesthesia was maintained by additional subcutaneous application of one-third of the initial dose, approximately every 90 min. The MNTB was approached dorsally and typically reached at penetration depths of 5000-5500 μm.
The experimental protocol for acoustic stimulation and single-unit recording was described in detail previously (Dietz et al., 2012; Sonntag et al., 2009). Briefly, auditory stimuli were digitally generated using custom-written MATLAB software (The MathWorks Inc, Natick) at a sampling rate of 97.7 kHz. The stimuli were transferred to a real-time processor (RP2.1, Tucker-Davis Technologies), D/A converted, and delivered through custom-made earphones (acoustic transducer: DT 770 pro, Beyer Dynamics). Two recording protocols were used: (i) pure tone pulses (100 ms duration, 5 ms rise-fall time, 100 ms interstimulus interval) were presented within a predefined matrix of frequency/intensity pairs to determine the excitatory response areas of single units. CF (sound frequency causing a significant increase of the unit's action potential spiking at the lowest intensity), the respective threshold, Q-values (Q n , a measure of the unit's sharpness of tuning calculated as the ratio of CF to bandwidth at n = 10, 20 and 30 dB above threshold), and maximum discharge rates were computed from the response area and used for the next protocol. (ii) Spontaneous neuronal discharge activity was assessed in the absence of acoustic stimulation to determine the average firing rate and CV of ISIs (ISIs = time that passes between two successive action potentials). Regularity of spontaneous discharge activity was quantified by the CV, calculated as the ratio of the standard deviation of ISIs to the mean ISI. Additional analysis was done for MNTB units recorded in P8 mice, where spontaneous spiking discharges are grouped in bursts followed by periods of greatly reduced discharge activity (Sonntag et al., 2009). We used a statistical test based on the gamma probability distribution of ISIs to determine the number and duration of bursts, and the number of spikes per burst within each single-cell recording.
For the juxtacellular single-unit recordings, glass micropipettes (GB150F-10, Science Products) were pulled on a horizontal puller (DMZ universal puller, Zeitz) to have a resistance of 5-10 MΩ when filled with 3 M KCl. The recorded voltage signals were amplified (Neuroprobe 1600, A-M Systems), bandpass filtered (0.3-7 kHz), digitized at a sampling rate of 97.7 kHz (RP2.1, Tucker-Davis Technologies), and stored for offline analysis using custom-written MATLAB software. Three criteria were used to classify single-unit recordings: (i) changes in the spike height did not exceed 20%, (ii) uniform waveforms, and (iii) a signal-to-noise ratio of at least 8:1. The principal neurons of the MNTB were identified by the complex waveform of the recorded discharges (Guinan and Li, 1990; Sonntag et al., 2009).
Histological verification of the recording site was done by iontophoretic injection of Fluorogold (4 µA for 7 min). The animal was perfused 4-6 hr after injection with 0.9% NaCl solution followed by 5% PFA. The brain was cut on a vibratome (Microm HM650), and the tissue sections (100 µm thick) were visualized under the fluorescent microscope (Axioskop, Zeiss). An example of a recording site is shown in Figure 2I.
Auditory brainstem response
ABR recordings were conducted as described previously (Jing et al., 2013). In brief, mice (P13-14) were anesthetized intraperitoneally with a combination of ketamine (125 mg/kg) and xylazine (2.5 mg/kg). The ECG was constantly monitored to control the depth of anesthesia. The core temperature was maintained constant at 37°C using a temperature-controlled heat blanket (Hugo Sachs Elektronik-Harvard Apparatus). Note that in these small immature mice the temperature probe could not be placed rectally, contributing to the relatively long latencies and greater variability of the ABR waves. ABR peaks IV (~5.3 ms) and V (~7 ms) were very small and were thus excluded from analysis. For stimulus generation, presentation, and data acquisition, the TDT System II (Tucker-Davis Technologies) was used, run by the BioSig32 software (Tucker-Davis Technologies). SPLs were provided in dB SPL root mean square (RMS) (tonal stimuli) or dB SPL peak equivalent (clicks) and were calibrated using a 1/4 inch Brüel and Kjaer microphone (model 4939). Tone bursts (6/12/24 kHz, 10 ms plateau, 1 ms cos² rise/fall) or clicks of 0.03 ms were presented at 40 Hz or 20 Hz, respectively, in the free field ipsilaterally using a JBL 2402 speaker. The difference potential between vertex and mastoid subdermal needles was amplified (50,000×), filtered (400-4000 Hz, NeuroAmp), sampled at a rate of 50 kHz for 20 ms, and averaged over 1300 repetitions to obtain two mean ABRs for each sound intensity. Hearing threshold was determined with 10 dB precision as the lowest stimulus intensity that evoked a reproducible response waveform in both traces by visual inspection.
Statistics and statistical significance
For statistical analysis, the SigmaPlot 12.5 (Systat Software Inc) and GraphPad Prism software were used. Datasets with normal (Gaussian) distribution were assessed by unpaired Student's t-tests (two-tailed) unless stated otherwise. For datasets with non-Gaussian distribution, the nonparametric Mann-Whitney rank-sum test was used. For analysis of the intensity dependence of ABR amplitudes and latencies and of ABR thresholds across frequencies, two-way ANOVA was used. If the data were normally distributed, the results are displayed as means ± standard error of the mean (SEM) and otherwise as medians with the respective 25% and 75% quartiles. Mean cumulative distributions of interevent intervals and distributions of ISIs between wildtype and cKO groups were compared using the two-sample Kolmogorov-Smirnov test and the chi-square test, respectively (see Figure 2). To avoid assigning too much weight to high-rate spiking cells, distributions were created by pooling n random ISIs for each cell within the two groups (n = lowest number of events recorded in any of the cells). A statistical test based on the gamma probability distribution of ISIs was used to determine the number and duration of bursts and the number of spikes per burst within each single-cell recording. In brief, we assumed that action potential firing of MNTB neurons is a random Poisson-like process. In this case, the probability of encountering one ISI within a time period τ is p = 1 − e^(−λτ), where λ stands for the neuronal firing rate. Next, the probability of encountering k ISIs during the time period τ was calculated as the k-fold convolution of the exponential distribution density, yielding a gamma distribution of waiting times for k ISIs. The resulting probability is p = ∫₀^τ λ^k x^(k−1) e^(−λx) / (k−1)! dx. We ran the statistical test through our recorded action potential times, and each stretch of the spike train where the probability stayed lower than 0.01 (p < 0.01) for at least 10 ISIs (k ≥ 10) was recognized as a burst. Significance levels < 0.05 are denoted by *, <0.01 by **, and <0.001 by ***. No statistical methods were used to predetermine sample sizes. This work was supported by the BMBF (01EW1706), by the DFG to CAH (HU 800/10-1), and by grants of the DFG to NS, RR, and TM (priority program 1608). In addition, the work of TM was supported by Fondation Pour l'Audition (FPA RD-2020-10).
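One possible reading of the gamma-ISI burst criterion is sketched below using SciPy's gamma CDF, which equals the integral above for shape k and scale 1/λ. The sliding-window bookkeeping and the rate estimate from the mean ISI are assumptions of this sketch, not the authors' exact implementation.

```python
import numpy as np
from scipy.stats import gamma

def detect_bursts(spike_times, p_thresh=0.01, k=10):
    """Flag stretches where the waiting time for k consecutive ISIs is improbably
    short under a Poisson assumption with rate lam (p < p_thresh)."""
    isis = np.diff(spike_times)
    if len(isis) < k:
        return []
    lam = 1.0 / isis.mean()                       # overall firing-rate estimate
    bursts, start = [], None
    for i in range(len(isis) - k + 1):
        tau = isis[i:i + k].sum()                 # waiting time for k ISIs
        p = gamma.cdf(tau, a=k, scale=1.0 / lam)  # P(waiting time <= tau)
        if p < p_thresh and start is None:
            start = i
        elif p >= p_thresh and start is not None:
            bursts.append((spike_times[start], spike_times[i + k - 1]))
            start = None
    if start is not None:
        bursts.append((spike_times[start], spike_times[-1]))
    return bursts
```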
Ethics
This study was performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All of the animals were handled according to our local authorities (license numbers: 33.9-42502-04-11/0439; TVV 06/09 and TLV UKJ-17-006).
Additional files
Supplementary files
• Supplementary file 1. Distribution of interspike intervals and quantification of auditory brainstem response. (a) Comparison of the distribution of interspike intervals (ISIs) between wildtype and cKO mice. (b) Quantification of auditory brainstem response (ABR) thresholds (mean ± SEM) in response to stimulation with tone bursts at 6, 12, and 24 kHz, or click stimulation. (c) Quantification of peak amplitudes (mean ± SEM) of the first three ABR waves (I-III) in response to click stimuli of various intensities (40-100 dB). (d) Quantification of latencies (mean ± SEM) of the first three ABR waves (I-III) in response to click stimuli of various intensities (40-100 dB).
• Transparent reporting form
Data availability
All data generated or analysed during this study are included in the manuscript and supporting file; Source Data files have been provided and did not change for the revised manuscript. | 9,970.8 | 2022-02-07T00:00:00.000 | [
"Biology",
"Medicine"
] |
Simulation-Based Testing of Automated Driving Systems
Automated Driving Systems (ADS) require extensive safety testing before receiving a road permit. To gain public trust, ADSs must be as safe as a Human Driven Vehicle (HDV) or even safer. Simulation-based safety testing is a cost-effective way to check the safety of ADS. My goal is to compare the safety behavior of ADS with HDV via simulation and to develop a process of selecting testing scenarios that could be useful to build trust and reliability in simulations. Additionally, I aim to translate the performance advantages and disadvantages observed in simulated ADS behavior into real-world safety-critical traffic situations.
My Ph.D. project aims to evaluate ADS safety through simulation-based tests and compare its advantages and disadvantages against a typical human-driven vehicle (HDV). The outcomes from my research could guide transportation regulators in addressing any identified ADS disadvantages. For example, in some cases, making infrastructure and traffic rule adjustments might be more feasible than enhancing ADS performance. An in-depth analysis of available quantitative data can shed light on potential strategies for neutralizing identified ADS disadvantages.
Simulation-based testing provides a controlled, cost-efficient method for evaluating ADS safety compared to on-road trials. However, there are infinitely many potential scenarios, and testing every scenario is unfeasible [5,7]. Therefore, defining a process for scenario prioritization and selection is essential, which is not covered in the existing literature. Additionally, deriving precise conclusions from simulation-based testing can be complex, as it is essential to ensure that simulated elements closely resemble their real-world counterparts. Measuring ADS disadvantages against a typical HDV poses another challenge, given the limited availability of pre-crash HDV data and the lack of knowledge about the precise behavior of a well-driven HDV in both crash and non-crash situations.
RESEARCH GOALS
The research goals of my Ph.D. thesis are as follows:
• RG1 [Scenarios]: To develop a process for prioritizing and selecting scenarios for ADS safety testing.
• RG2 [Simulation Environment]: To create a simulation environment and implement the selected scenarios within a simulator.
• RG3 [Experiments]: To execute the experiments in the simulator and assess the behavior and performance of an ADS compared to an HDV in the scenarios.
RESEARCH APPROACH & CONTRIBUTIONS
The research approach consists of five main steps, as shown in Figure 1: (i) prioritizing and selecting test scenarios, (ii) creating simulation models capturing the behavior of an ADS and an HDV, (iii) running simulations against the selected test scenarios, (iv) defining criteria for performance evaluation, and (v) evaluating the performance of both ADS and HDV.
To initiate my research, I conducted a comprehensive literature survey to understand the current state of the art from research and industry experts regarding the safety testing methods employed for ADS [10]. This review aims to provide an overview of (i) types of ADS, (ii) safety features, (iii) testing methods, and (iv) tools and datasets utilized in ADS safety testing. I outline the planned contributions to fulfill the mentioned research goals. Contribution 1 - aiming to achieve RG1: Since an ADS encounters infinitely many scenarios, it is crucial to prioritize and choose which ones to focus on. Therefore, I defined a process to prioritize and select scenarios using pre-existing textual scenario catalogs and real-world autonomous car video data4 (in progress). I started with a pre-existing scenario catalog from a reputable organization. A set of scenarios is selected that aligns with the specific Operational Design Domain (ODD) of the ego vehicle5. These scenarios are grouped based on the similar critical action of the ego vehicle and target object (vehicles, pedestrians, cyclists, etc.) and then prioritized using accident statistics. Scenarios that are duplicates or fall outside the limitations of simulators are excluded. The remaining scenarios are scored based on the occurrence frequency of elements like actors, driving maneuvers, weather, and lighting conditions in accident datasets; the highest-scoring scenario from each prioritized group is chosen for simulation, as sketched below. Figure 2 shows the proposed process for prioritizing ADS safety testing scenarios. Additionally, I aim to identify critical scenarios where safety drivers intervene in autonomous mode (particularly when uncertain of the Bolt car's response), using the Autonomous Driving Lab's dataset from the University of Tartu, Estonia4. These contributions aim to refine scenario selection for enhanced ADS safety assessment.
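A minimal sketch of the scoring and per-group selection step mentioned above; the dictionary fields and the structure of the accident-statistics lookup are illustrative assumptions, since the actual catalogs are textual documents.

```python
def score_scenario(scenario, accident_freq):
    """Score a scenario by how often its elements (actors, maneuvers, weather,
    lighting) occur in an accident dataset; higher score = higher priority."""
    elements = (scenario["actors"] + scenario["maneuvers"]
                + [scenario["weather"], scenario["lighting"]])
    return sum(accident_freq.get(element, 0) for element in elements)

def select_per_group(scenario_groups, accident_freq):
    """Pick the highest-scoring scenario from each prioritized group."""
    return {group: max(scenarios, key=lambda s: score_scenario(s, accident_freq))
            for group, scenarios in scenario_groups.items()}
```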
Contribution 2 - aiming to achieve RG2: I will use the CARLA 6 simulator and ScenarioRunner 7 for the implementation of the selected scenarios, as they can depict complex scenarios and the movements of entities such as vehicles and pedestrians. Furthermore, I will use the Python API 8 for the simulation of the ADS, as it offers diverse vehicle control models, allowing manipulation of both the environment and the vehicles themselves. The CARLA ROS bridge 9 makes it easy to link CARLA with a third-party ROS-based control system for the ego vehicle. I will use the ROS bridge to connect with Autoware Mini 10 . This contribution will provide a controlled simulation setting and findings about how the ADS behaves and responds to specific scenarios, which can then be compared to an HDV. This ultimately enhances understanding of where an ADS excels and where it may face challenges in diverse real-world driving scenarios.
Footnotes: 4 https://adl.cs.ut.ee/ (the dataset comprises 20 service bags, each containing a video recording of a single loop where a Bolt car operates in autonomous mode along a designated route); 5 the ADS under test; 6 https://carla.org/ ; 7 https://carla-scenariorunner.readthedocs.io/en/latest/
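As a minimal illustration of how a scenario could be instantiated through the CARLA Python API, the sketch below uses only standard API calls; the host/port, blueprint choice, and use of the built-in autopilot are placeholder assumptions, since in the actual setup ScenarioRunner and the ROS bridge to Autoware Mini would drive the ego vehicle.

```python
# Minimal sketch (not the thesis implementation) of spawning an ego vehicle in CARLA.
import carla

client = carla.Client("localhost", 2000)   # assumes a locally running CARLA server
client.set_timeout(10.0)
world = client.get_world()

blueprints = world.get_blueprint_library()
ego_bp = blueprints.filter("vehicle.tesla.model3")[0]   # assumed ego blueprint
spawn_point = world.get_map().get_spawn_points()[0]

ego = world.try_spawn_actor(ego_bp, spawn_point)
if ego is not None:
    ego.set_autopilot(True)   # placeholder for the external ADS controller
```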
Contribution 3 - aiming to achieve RG3: I will define the criteria and metrics for evaluating and assessing the simulated behavior and performance of the ADS and the HDV from Contribution 2, with a specific focus on safety.
FIRST RESULTS (CONTRIBUTION 1)
I applied the proposed process for prioritizing and selecting scenarios to two existing scenario catalogs for ADS safety testing. The first catalog is published by the Land Transport Authority of Singapore 11 and contains 67 real-world traffic scenarios. The second catalog is published by the US Department of Transportation 12 and contains 44 pre-crash scenarios precisely capturing the situations and conditions leading up to an accident or collision. The total number of scenarios in both selected catalogs is 111. Twenty-one scenarios are excluded based on the ODD specific to the ADS. The remaining scenarios are categorized into fifteen distinct groups. Nine duplicated scenarios are removed, and eight scenario groups are prioritized based on crash statistics retrieved from real-world data. Considering the CARLA simulator's limitations, six scenario groups from the prioritized set, containing fifty-one scenarios, are selected for testing the ADS in the CARLA simulator.
RELATED WORK
Several studies performed safety testing of ADS using open-road tests [3,4,8,15], test beds [1,13], and simulators [2,9,11,14,17]; [6,12] are related to scenario prioritization and selection. However, we only include the studies that perform ADS safety testing using simulation, due to space limitations. Son et al. [16] introduced a co-simulation platform that integrates vehicle dynamics, sensors, and traffic modeling, exemplified through use cases such as adaptive cruise control. Matthew et al. [14] proposed a simulation framework that employs an adaptive sampling method to test an entire ADS. Jha et al. [9] presented a fault injection tool that systematically injects faults into the hardware and software of an ADS to evaluate safety and reliability. Ben et al. [2] presented an approach to test ADS via Simulink using multi-objective search and surrogate models to identify critical test cases regarding an ADS's behavior. My approach to simulation-based ADS safety testing differs from current methods in that it takes a black-box perspective when analyzing the ADS's behavior. The primary focus is pinpointing ADS behavior that deviates from HDVs, considering both positive and negative distinctions. My Ph.D. project is expected to be completed by January 2026.
Figure 2: Overview of proposed process for prioritization and selection of scenarios for safety testing of ADS | 1,745.8 | 2024-04-14T00:00:00.000 | [
"Engineering",
"Computer Science",
"Environmental Science"
] |
IMPROVEMENT OF SOIL USING GEOGRIDS TO RESIST ECCENTRIC LOADS.
This paper presents the results of experimental investigations to predict the bearing capacity of square footing on geogrid-reinforced loose sand by performing model tests. The effects of several parameters were studied in order to study the general behavior of improving the soil by using the geogrid. These parameters include the eccentricity value, depth of first layer of reinforcement
INTRODUCTION
Reinforced earth has been very widely applied, with something in excess of one million square meters of wall facing having been erected by the end of 1978, Mickittrick and Darbin (1979). The soil particles in direct contact with the reinforcements tend to slide over them under the effect of the load. The sliding is reduced by the frictional resistance between the particles and the surface of the reinforcement. Consequently, this resistance produces a tensile force along the reinforcing element, which acts as a tie between the particles surrounding it. The soil particles which are in direct contact with the reinforcement are bonded to other particles by the interlocking action. The frictional resistance is thus transferred through the reinforced mass (Vidal, 1969).
The present study was undertaken to investigate the bearing capacity of square footings on geogrid-reinforced sand.The parameters investigated include
EXPERMENTAL TESTS AND TESTS PROCEDURE
A series of model loading tests was conducted inside a steel box of 600 x 600 mm in plan and 700 mm in depth. The box was made of steel plate of 3 mm thickness, stiffened with angle sections, as shown in Plate (1). The internal faces of the box were covered with polyethylene sheets in order to reduce the slight friction that might develop between the box surface and the soil. Static vertical loads were applied using an electrical hydraulic pump. Loads transferred from the pump to a hydraulic jack were carefully recorded by a proving ring installed between the jack and the tested footing.
The footing was loaded at a constant loading rate to failure. The ultimate bearing capacity state was defined as the state at which either the load reached a maximum value while settlement continued without further increase in load, or there was an abrupt change in the load-settlement relationship. Settlement of the footing was measured using two dial gauges fixed at the middle and edge of the footing.
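As a rough illustration of the failure criterion just described, the sketch below picks the ultimate load as the maximum of a recorded load-settlement curve (the simpler of the two stated criteria; detecting an abrupt slope change is not attempted). The readings are invented and do not correspond to the reported tests.

```python
# Illustrative sketch (not from the paper) of the ultimate-bearing-capacity
# criterion: the applied load at which settlement keeps growing without a
# further increase in load, i.e. the maximum of the load-settlement curve.
import numpy as np

def ultimate_load(loads_kN, settlements_mm):
    """Return (ultimate load, settlement at that load) from a model-footing test."""
    i = int(np.argmax(loads_kN))            # load peaks, settlement keeps running
    return loads_kN[i], settlements_mm[i]

loads = np.array([0.0, 0.4, 0.9, 1.3, 1.6, 1.7, 1.7, 1.65])   # made-up readings
settle = np.array([0.0, 0.5, 1.2, 2.1, 3.4, 5.0, 7.2, 9.8])
print(ultimate_load(loads, settle))
```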
The test footing was a square steel plate 60mm in plane and 5mm thick.The value of (Ø) was obtained from the result of triaxial test (UU.test) in accordance with ASTM(D2850-95).
REINFORCEMENT PROPERTIES.
The reinforcement used is a polymer geomesh; a general view of the three types used in the tests is shown in Plate (2). The dimensions of the geogrid samples used in this study are listed in Table (2).
Effect of Depth Ratio
The relative improvement of the soil versus the depth ratio for each value of eccentricity is shown in Fig. (2). The rate of strength improvement is defined as [{(Pr/P) - 1}*100], where Pr and P are the maximum loads for reinforced and unreinforced sand, respectively. The optimum depth ratios (u/B = 0.75, 0.5, 0.25) give the maximum rate of strength improvement for the eccentricity values (e = 0.05B, 0.135B, 0.22B), respectively. It is noted that for depth ratios (u/B = 1.0), the improvement values decrease and approach a constant for the eccentricity values (e = 0.05B, 0.135B and 0.22B). The relative improvement increases as the value of eccentricity decreases.
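The strength-improvement measure used above can be written as a one-line helper; the example values below are illustrative only and are not taken from the reported results.

```python
# Sketch of the strength-improvement measure: ((Pr / P) - 1) * 100, where Pr and
# P are the maximum loads for reinforced and unreinforced sand.
def relative_improvement(p_reinforced, p_unreinforced):
    return ((p_reinforced / p_unreinforced) - 1.0) * 100.0

print(relative_improvement(p_reinforced=3.6, p_unreinforced=2.4))  # 50.0 (%)
```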
It should be pointed out that there is no general consensus regarding the effect of the depth ratio on the relative improvement of the soil. Singh (1988), based on a study of square footings on sands reinforced with mild steel grids (also called "welded mesh"), indicated that the effect of the depth ratio on the bearing capacity was independent of the number of reinforcement layers and that the optimum depth ratio was about 0.25. Selvadurai and Gnanendran (1989), who used geogrids to improve the bearing capacity of footings located on slope fill, showed that when (u < B) the failure path penetrates below the reinforcement, whereas when (u > B) the failure occurs at the soil-geogrid interface (i.e., the failure path is limited to a narrow zone), and a deep location of the geogrid layer at (u > 2B) does not lead to any improvement in either the carrying load or the stiffness of the footing.
Effect of Vertical Spacing of Reinforcement Layers :
Figure (4) shows the relative improvement (%) versus the vertical spacing ratio (z/B = 0.5, 0.75, 1.0 and 1.5) for different eccentricity values (e = 0.05B, 0.135B and 0.22B). This figure illustrates that the maximum improvement for the eccentricity values (e = 0.05B, 0.135B and 0.22B) occurs at z/B = 0.5. It can be seen that the rate of strength improvement equals zero for a vertical spacing ratio of z/B = 1.5 for the different eccentricity values; thus, increasing z/B beyond 1.5 has no effect on the relative improvement of the soil. Similar findings were reported by Fukuda et al. (1987), who tested a concentric load applied to a footing with polymer grid reinforcement and showed that the optimum vertical spacing between reinforcement layers is 2/3B. Guido et al. (1987) indicated that the bearing capacity decreased with increasing vertical spacing ratio for Tensar SS1, SS2 and SS3 geogrids.
Figure (1) Geometric parameters of Reinforced Foundation.
Figure (4) The Relative Improvement Versus Vertical Spacing Ratio for (u/B=Optimum Value and Br/B=3)
The physical and chemical properties for the sample used are listed in Table (3). The technical properties for the sample used are listed in Table (4). | 1,208.2 | 2024-02-24T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Atrial Glutathione Content, Calcium Current, and Contractility*
Atrial fibrillation (AF) is characterized by decreased L-type calcium current (ICa,L) in atrial myocytes and decreased atrial contractility. Oxidant stress and redox modulation of calcium channels are implicated in these pathologic changes. We evaluated the relationship between glutathione content (the primary cellular reducing moiety) and ICa,L in atrial specimens from AF patients undergoing cardiac surgery. Left atrial glutathione content was significantly lower in patients with either paroxysmal or persistent AF relative to control patients with no history of AF. Incubation of atrial myocytes from AF patients (but not controls) with the glutathione precursor N-acetylcysteine caused a marked increase in ICa,L. To test the hypothesis that glutathione levels were mechanistically linked with the reduction in ICa,L, dogs were treated for 48 h with buthionine sulfoximine, an inhibitor of glutathione synthesis. Buthionine sulfoximine treatment resulted in a 24% reduction in canine atrial glutathione content, a reduction in atrial contractility, and an attenuation of ICa,L in the canine atrial myocytes. Incubation of these myocytes with exogenous glutathione also restored ICa,L to normal or greater than normal levels. To probe the mechanism linking decreased glutathione levels to down-regulation of ICa, the biotin switch technique was used to evaluate S-nitrosylation of calcium channels. S-Nitrosylation was apparent in left atrial tissues from AF patients; the extent of S-nitrosylation was inversely related to tissue glutathione content. S-Nitrosylation was also detectable in HEK cells expressing recombinant human cardiac calcium channel subunits following exposure to nitrosoglutathione. S-Nitrosylation may contribute to the glutathione-sensitive attenuation of ICa,L observed in AF.
trical and structural remodeling of the atria. Calcium influx via I Ca triggers calcium release from intracellular stores and is an essential first step in excitation-contraction coupling and a critical determinant of atrial contractility. In atrial myocytes from patients with persistent AF (1) or from animals with pacing-induced AF (2), calcium current density is significantly decreased. The time course of contractile impairment following the onset of AF (3) or rapid atrial pacing (4) parallels the down-regulation of I Ca, and contractile recovery following cardioversion follows a time course similar to that of reverse electrical remodeling (4). Reports characterizing the impact of AF on the expression of the calcium channel pore subunit in human AF and in experimental models have had conflicting results (5,6); the mechanisms by which I Ca,L down-regulation occurs are not fully resolved.
In a porcine model of rapid atrial pacing, superoxide production is increased (7), and nitric oxide production is decreased (8). Oxidant-generating pathways can shift the myocyte intracellular redox environment from its normally reduced state to a more oxidized one. A shift in oxidation state has been implicated in early atrial electrical remodeling in a canine rapid atrial pacing model of AF (9) and in the remodeling accompanying persistent AF (10). In addition to I Ca (11), the ryanodine receptor, RyR2, and the transient outward K ϩ current, I to , are also redox-sensitive (12,13). Each of these currents modulates the atrial action potential and atrial contractility.
Phosphorylation is the signaling pathway most commonly associated with modulation of calcium channel activity. Oxidant generation and redox changes are important elements in the beta-adrenergic regulation of I Ca (14), and altered calcium channel phosphorylation has been shown to contribute to the decrement in calcium channel activity during AF (15). However, intracellular redox state can modulate protein function by several additional pathways. Redox state directly regulates the conformation and activity of proteins. Redox-dependent posttranslational modifications of proteins include nitration of tyrosine residues and S-nitrosylation of cysteine residues, among others. Nitration involves the interaction of peroxynitrite anion or another reactive nitrogen species with protein tyrosine residues and is generally indicative of oxidative damage (16). S-Nitrosylation involves the reversible transfer of NO to a cysteine residue of an acceptor protein and has been considered cytoprotective (17). Two proteins essential for cardiac excitation-contraction coupling, the ryanodine receptor (18) and the L-type Ca2+ channel (19), have been shown to be nitrosylated. It is intriguing that mice conditionally overexpressing neuronal nitric-oxide synthase have lower I Ca densities and contractility than do the wild type controls (20). Disease states, including diabetes (16), hypertension (21), and heart failure (22), all common co-morbidities with AF, are characterized by evidence of increased oxidative stress. Under stressed conditions, cellular and plasma glutathione is oxidized, and cellular glutathione stores become depleted. Glutathione is the most abundant endogenous reducing agent and a critical modulator of cellular redox state. Since glutathione can act as an intracellular NO acceptor, alterations in the level of total cellular glutathione may have important consequences for both the redox-dependent and the NO-dependent regulation of ion channel activity.
* This study was supported by National Institutes of Health Grants RO1 HL-65412 and RO1 HL-73816, American Heart Association Grants 0130309N and 0235045N, and a grant from the Atrial Fibrillation Innovation Center, an Ohio Wright Center Initiative.
2 The abbreviations used are: AF, atrial fibrillation; BSO, L-buthionine sulfoximine; ERP, effective refractory period; NAC, N-acetylcysteine; pF, picofarads; mN, millinewtons.
Studies done in one of our laboratories (1) and others (15,23,24) have shown that I Ca is decreased in atrial myocytes isolated from AF patients. Several years ago, we postulated that changes to the intracellular redox environment of myocytes from the fibrillating atria may contribute to the observed loss of calcium current (25). Here we have evaluated the redox-and glutathionedependent modulation of I Ca in myocytes from patients undergoing cardiac surgery for chronic AF. Because glutathione is an endogenous reducing agent important in maintaining a reduced intracellular environment, we evaluated whether glutathione levels are reduced in AF. We have also tested the impact of pharmacologic depletion of atrial glutathione on atrial function in an in vivo canine model system. Because redox state modulates the activity of cellular kinases and phosphatases, we have examined the calcium current response to -adrenergic stimulation and correlated this response with atrial glutathione content. Many cardiovascular diseases are characterized by glutathione depletion; in this setting, S-nitrosylation of several cellular targets has been reported to increase (26). Thus, we explored calcium channel S-nitrosylation as a potential mechanism that may couple altered glutathione levels with modulation of calcium channel activity and contractility in AF.
MATERIALS AND METHODS
Human Atrial Studies-Table 1 summarizes relevant clinical characteristics of the cardiac surgery patients from whom left atrial appendage specimens were obtained. Left atrial tissue was brought to the laboratory from the operating room within 5 min of excision. Part of the specimen (~300 mg) was used for myocyte isolation, whereas adjacent areas were snap frozen in liquid nitrogen and stored at -80 °C for assessment of glutathione and S-nitrosylation levels. All procedures were performed between 2001 and 2005 with informed consent and approval from the Cleveland Clinic Institutional Review Board.
Myocyte Isolation-Myocytes were isolated from the left atrial appendage (human or canine) using a chunk dissociation technique, as previously described (1). Yields of canine atrial myocytes were in the range of 40-60% calcium-tolerant, rod-shaped, well striated myocytes. Yields of human myocytes were lower (20-40%). Myocytes were maintained at room temperature until use. To determine whether exogenous N-acetylcysteine (NAC; a glutathione precursor) or glutathione modulated I Ca, preparations were divided following isolation, with half stored in an incubation buffer containing 10 mM NAC or glutathione, whereas the remaining cells were held in an incubation buffer without additional antioxidant. The composition of this buffer was as follows: 118 mM NaCl, 25
Canine Glutathione Depletion-L-Buthionine sulfoximine (BSO; Sigma) was dissolved in 30 ml of 0.9% NaCl immediately prior to intravenous administration. Young adult beagle hounds of either sex (5-12 kg) were randomly assigned to either active treatment (0.1 mmol/kg BSO, n = 10) or control dosing (30 ml of 0.9% NaCl injection, n = 10) twice daily for 2 days. On the morning of day 3, in vivo measurements were obtained prior to euthanasia and removal of the heart. All animal protocols were approved by the Institutional Animal Care and Use Committee of the Ohio State University.
In Vivo Physiologic Measurements-Dogs were lightly sedated for all conscious procedures (butorphanol, 0.1 mg/kg intramuscularly). Two-dimensional and M-mode echocardiography was performed at base line and daily thereafter (SSD 1400 (Aloka, Tokyo, Japan) with a 5 MHz transducer). Systolic blood pressure was measured in triplicate using an automated blood pressure device (Vet/BP model 6000; Sensor Devices Inc., Waukesha, WI). Electrocardiograms were recorded using a Biopac MP 100 (Biopac Systems, Santa Barbara, CA); lead I was used for P wave measurements. Electrocardiograms were digitized and stored for off-line analysis (acqKnowledge version 3.5, Biopac Systems, Santa Barbara, CA).
After induction of anesthesia with isoflurane, a bipolar electrode was advanced to the right atrium. Atrial effective refractory periods (ERPs) were determined at cycle lengths of 300, 250, and 200 ms, using a train of eight pacing stimuli at twice the diastolic pacing threshold, followed by an increasingly premature extrastimulus (5-ms decrements) delivered by a programmable stimulator (Medtronic model 5325; Medtronic, Inc.). Inducibility of atrial arrhythmias was determined using a right atrial burst pacing protocol (10 Hz stimulation for 10 s). Inducibility was evaluated in triplicate at 5-min intervals.
In Vitro Physiology Studies: Contractility-In a separate set of experiments, dogs were randomized to treatment with BSO or vehicle control as above (n = 3 per group). Dissection and experimental procedures for trabeculae were as previously described (27). Briefly, using a stereomicroscope, ultrathin trabeculae were dissected from the right atria (n = 20 trabeculae) in a Krebs-Henseleit solution containing 120 mmol/liter NaCl, 5.0 mmol/liter KCl, 2.0 mmol/liter MgSO4, 1.2 mmol/liter NaH2PO4, 20 mmol/liter NaHCO3, 0.25 mmol/liter CaCl2, and 20 mmol/liter 2,3-butanedione monoxime in continuous equilibrium with 95% O2, 5% CO2. Only very small muscles (on average 237 ± 24 µm wide, 162 ± 17 µm thick, and 2.97 ± 0.32 mm long) were used. Muscles were mounted between a force transducer and a length displacement device in the same solution (with omission of 2,3-butanedione monoxime and CaCl2 increased to 1.5 mmol/liter). At 37 °C, stimulation at 1 Hz (20% above threshold) was initiated. After equilibration at optimal length, a force-frequency protocol was performed. Trabeculae were stimulated at frequencies of 1-5 Hz, and data were collected at steady state for each frequency. Next, the response to isoproterenol (1 nmol/liter to 1 µmol/liter) was assessed. Data were collected and analyzed with custom programs written in Labview.
Glutathione Assay-Atrial glutathione levels were measured using colorimetric assays (Oxis Research and Northwest Life Sciences). The intra-assay coefficient of variation was estimated as 0.6%, and the interassay coefficient of variation was estimated as 3%. Frozen (-80 °C) tissue specimens, 20-60 mg each, were homogenized in vendor-recommended buffers for measurement of tissue glutathione.
S-Nitrosylation Measurement-The biotin switch technique (28) was used to assess the extent of nitrosylation of human LAA tissue or HEK cell lysates from cells stably overexpressing the ␣ 1C subunit of the cardiac L-type Ca 2ϩ channel (29). This technique labels nitrosylated cysteines with biotin, so that this posttranslational modification can be visualized. Briefly, left atrial appendage tissue or cells were homogenized in HEN buffer (250 mM Hepes-NaOH, 1 mM EDTA, 0.1 mM neocuproine). Free thiol groups were blocked with 20 mM methyl methane-thiosulfonate in HEN buffer with 1% SDS for 20 min at 50°C. Free methyl methane-thiosulfonate was removed by acetone precipitation at Ϫ20°C for 20 min. After centrifugation at 2000 ϫ g for 10 min at 4°C, the resulting pellet was resuspended in HEN buffer with 1% SDS. SNO groups were reduced with sodium ascorbate (1 mM), and free thiols were labeled with N- [6-(biotinamido)hexyl]-3Ј-(2Ј-pyridyldithio)-propionamide (4 mM; Pierce) for 1 h at 25°C. All steps up to this point were performed with minimal light at 4°C to prevent the loss of endogenous S-nitrosylation. Following biotin labeling, a 20-l aliquot was removed from each sample and added to 2ϫ sample buffer. The sample was subjected to SDS-PAGE under nondenaturing conditions, and Western blot analysis was performed
Demographic and clinical characteristics of patients in study
Variables studied listed include patient age, gender, left atrial dimension, and extent of mitral and tricuspid regurgitation. Patients from whom myocyte I Ca studies were performed (and contributing to Fig. 2) are indicated in column 8 with an asterisk. Comorbid disease conditions (in addition to AF, for the AF/AF and PAF/SR patients) that the patients had and their presurgical medications are also listed. ACEi, angiotensin-converting enzyme inhibitor; Ao., aortic; ARB, angiotensin II receptor blocker; BB, -adrenergic receptor blocker; AVN, atrioventricular node; CAD, coronary artery disease; CHF, congestive heart failure; EF, left ventricular ejection fraction; Htn, hypertension; LA, left atrium; MI, myocardial infarction; MR, mitral regurgitation; MS, mitral stenosis; MV, mitral valve; NA, not available; PAF, paroxysmal atrial fibrillation; PPM, permanent pacemaker; T4, thyroid hormone replacement; TIA, transient ischemic attack; TR, tricuspid regurgitation; tr, trivial; TV, tricuspid valve; VHD, valvular heart disease. using a mouse anti-biotin antibody (Sigma) to label biotinylated, and thus nitrosylated, residues. The remainder of some samples was immunoprecipitated with a polyclonal antibody to the ␣ 1C subunit of the L-type Ca 2ϩ channel (Santa Cruz Biotechnology, Inc., Santa Cruz, CA), and immunoprecipitated protein was analyzed by Western blot analysis using an antibiotin antibody to determine the level of nitrosylation of the ␣ 1C subunit.
Generation of Antibodies-Antibodies to the cardiac alpha-1C subunit (Fig. 6) were raised against synthetic peptides as described for the 2-3 loop antibody in Ref. 5. Further antigenic epitopes comprised the long N terminus (KGTLVHEAQLN) and the alpha/beta interaction domain in the 1-2 loop (KQQLEEDLK-GYLKG) of the cardiac alpha-1C (Swiss-Prot accession number P15381). The antibodies were affinity-purified on the respective peptide antigen columns.
Reagents-Reagents, all analytical grade or higher, were purchased from Sigma, unless otherwise noted.
Statistical Analysis-Statistical analysis was performed using either Student's t test or analysis of variance, as appropriate. Statistical significance was defined as p < 0.05. Data are reported as mean ± S.E.
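For orientation, the reported statistical workflow (Student's t test or analysis of variance, significance at p < 0.05) corresponds to standard routines such as the following; the numbers below are placeholders, not the study's data.

```python
# Hedged sketch of the two-group and multi-group comparisons described above.
from scipy import stats

control = [1.05, 0.98, 1.12, 0.91]      # e.g. glutathione, arbitrary made-up units
af      = [0.52, 0.61, 0.48, 0.70]

t, p_t = stats.ttest_ind(control, af)                       # Student's t test
f, p_f = stats.f_oneway(control, af, [0.70, 0.80, 0.75])    # one-way ANOVA, 3 groups
print(f"t test p = {p_t:.3g}, ANOVA p = {p_f:.3g}")
```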
Glutathione Content Is Decreased in LAA Tissues from AF
Patients and BSO-treated Dogs-Total glutathione content was evaluated in left atrial tissues from cardiac surgery patients with no history of AF and compared with that of persistent AF patients or patients with a history of paroxysmal AF but in normal sinus rhythm at the time of surgery (Table 1). Fig. 1A shows that relative to control patients with no history of AF, left atrial glutathione content was 65% lower in patients with persistent AF. Tissue from patients with a history of paroxysmal AF but in sinus rhythm at the time of surgery had glutathione levels 45% lower than controls, a value intermediate between that of controls and those in AF at surgery. Analysis of variance revealed that differences between groups were highly significant (p < 0.001). Fig. 1B shows the left atrial glutathione content in dogs treated for 48 h with or without BSO. Glutathione levels were lower (23.7%, p < 0.02) in atrial tissues from dogs treated for 48 h with BSO than in control animals (0.74 ± 0.06 versus 0.97 ± 0.06 µmol/g) (Fig. 1B).
Influence of NAC on Human Atrial Calcium Currents (I Ca)-I Ca was recorded from left atrial myocytes isolated from AF patients in AF at the time of surgery (Fig. 2). As described under "Materials and Methods," half of the myocytes from each preparation were stored in a buffer supplemented with 10 mmol/liter NAC, a glutathione precursor. Recordings were obtained under base line conditions and following exposure to isoproterenol (4-min exposure, 1 µM) (Fig. 2, A-D). NAC exposure significantly increased human atrial I Ca in myocytes from AF patients. Base line I Ca was ~70% higher in the myocytes preincubated with NAC relative to those not similarly incubated with NAC (-11.85 ± 1.4 pA/pF versus -6.97 ± 0.9 pA/pF, p < 0.002). Current-voltage relations for the responses to NAC incubation and to isoproterenol stimulation are summarized in Fig. 2E. Although both isoproterenol and NAC increased I Ca, the leftward shift in the current-voltage relations typically observed following adrenergic stimulation was not present following NAC exposure alone. Peak I Ca densities for each group before and after isoproterenol stimulation are summarized and compared in Fig. 2F. Perhaps because of the upregulation of basal I Ca, the relative response of NAC-treated myocytes to isoproterenol was attenuated relative to that of myocytes not treated with NAC (Fig. 2F). These results suggest that NAC and isoproterenol have discrete effects on the calcium channel. Interestingly, in myocytes from two patients with no history of AF, incubation with 10 mM NAC had no significant effect on peak I Ca (-9.0 ± 2.6 pA/pF (n = 3) in control bath, versus -11.1 ± 0.7 pA/pF (n = 3) in NAC, p = 0.68).
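As an illustration of how the reported current densities (pA/pF) can be derived from such voltage-step recordings, the sketch below takes the peak inward current of each sweep and normalizes it to cell capacitance; the traces are synthetic and the function is not the laboratory's acquisition or analysis code.

```python
# Illustrative extraction of peak I_Ca density from voltage-step sweeps.
import numpy as np

def peak_ica_density(sweeps_pA, capacitance_pF):
    """sweeps_pA: array of shape (n_steps, n_samples); returns pA/pF per voltage step."""
    peaks = sweeps_pA.min(axis=1)          # inward current is negative, take the minimum
    return peaks / capacitance_pF

rng = np.random.default_rng(0)
fake_sweeps = -rng.random((8, 1000)) * 800.0   # made-up traces in pA, 8 voltage steps
print(peak_ica_density(fake_sweeps, capacitance_pF=70.0))
```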
FIGURE 1. A, glutathione levels are plotted for human LAA tissues from control patients with no history of AF, from persistent AF patients in AF at surgery, and from paroxysmal AF patients in normal sinus rhythm at the time of surgery. Glutathione content was lower in both AF groups relative to controls (*, p < 0.01); glutathione levels were also lower in patients with persistent AF than in those from patients with paroxysmal AF but in normal sinus rhythm at surgery (HxAF/SR; ‡, p < 0.05). B, total glutathione levels were decreased relative to controls in canine atrial tissue excised from BSO-treated dogs (*, p < 0.02). The number of tissue samples evaluated is indicated within each bar.
Effects of BSO on Canine Atrial Function, Muscle Contractility, and Calcium Currents-To more systematically evaluate the functional impact of glutathione depletion on atrial contractility and calcium currents, 10 dogs were treated with BSO (a gamma-glutamyl synthetase inhibitor) for 48 h prior to echocardiographic examination of cardiac function and ex vivo electrophysiologic and contractile characterization and compared with the same number of control (untreated) animals.
In Vivo Effects of BSO-As summarized in Table 2, there were no base line differences between groups with respect to resting heart rate, P wave duration, left atrial diameter, left atrial contractility, left ventricular fractional shortening, or peak A wave velocity. The only significant difference between groups prior to initiation of treatment was a slightly higher systolic blood pressure in the BSO-treated dogs. There was no significant difference in systolic blood pressure or heart rate between groups after 48 h of treatment. Peak A wave velocity, however, was significantly reduced after BSO treatment (Fig. 3A, paired t test, p < 0.007). Atrial ERPs were assessed at cycle lengths of 300, 250, and 200 ms prior to euthanasia. Analysis of variance from studies at all rates revealed that BSO treatment significantly shortened ERPs compared with the control group (p < 0.001). Abbreviation of the ERP by BSO treatment was greatest at the highest stimulation rates (Fig. 3B). There was no difference in either inducibility or duration of atrial arrhythmias between the two groups, with 2 of 10 control and 2 of 10 BSO-treated dogs having brief (<5 s) inducible atrial fibrillation. One control dog experienced an episode of induced atrial flutter lasting more than 5 min. No spontaneous arrhythmias were detected during routine, conscious electrocardiogram monitoring.
In Vitro Contractility: Effects of BSO-Contractility was further assessed by measuring the force-frequency response of isolated canine atrial trabeculae from control dogs or those pretreated with BSO. Dur- Myocytes were held at Ϫ60 mV and stepped to potentials from Ϫ40 to ϩ30 mV in 10-mV increments. B-E, representative control I Ca traces from a human AF myocyte maintained in a control incubation buffer (B), before and after superfusion with 1 M isoproterenol (C). Shown are similar recordings from a different myocyte incubated (Ͼ1 h) in a buffer supplemented with 10 mmol/liter N-acetylcysteine prior to recording (D) and following superfusion with 1 M isoproterenol (E). Current densities are normalized to cell capacitance (pA/pF). F, current-voltage relations for all myocyte groups studied. The number of myocytes in each group is shown in the legend (6 -10 patients/group). G, summary plot comparison of paired peak I Ca data (before and after isoproterenol exposure) from myocytes isolated from human AF left atria and incubated in a control buffer or in one supplemented with 10 mmol/liter N-acetylcysteine. Statistical significance (unpaired t test) is indicated above the bars. Patients whose myocytes were evaluated in this study are indicated in Table 1.
TABLE 2 Canine base line (pretreatment) in vivo cardiovascular function (means ± S.E.)
SBP, systolic blood pressure; HR, heart rate; LV FS, left ventricular fractional shortening; peak A wave velocity, atrial ejection by echocardiogram.
Effects of BSO on Canine Atrial Calcium Currents-I Ca was measured in myocytes isolated from the atria of control or BSO-treated animals to correlate the decrease in force-frequency response in isolated atrial trabeculae with I Ca changes in individual myocytes. Representative I Ca traces are shown in Fig. 4. Peak I Ca was significantly lower in canine atrial myocytes from BSO-treated animals (p < 0.05). Analysis of the relationship between I Ca density and tissue glutathione content showed that I Ca was inversely correlated with atrial glutathione concentration (r2 = -0.36, p = 0.05; data not shown). BSO treatment had no effect on time-dependent recovery from inactivation or voltage dependence of steady-state inactivation.
To further evaluate the role of glutathione depletion on I Ca, experiments were performed in which myocytes were divided into two populations after isolation, with half incubated (>3 h) in a solution containing 10 mmol/liter glutathione. Glutathione incubation increased I Ca in myocytes from control and BSO-treated animals; however, the response to glutathione incubation was greater in the myocytes from the BSO-treated animals (Fig. 4, C and D). Following incubation with glutathione, there was no difference in peak I Ca between myocytes from control and BSO-treated animals (Fig. 4E, p = 0.24).
FIGURE 3. BSO treatment abbreviates atrial ERP and decreases contractility in a rate-dependent manner. A, summary of echocardiographic measurements shows that A wave velocity (reflecting synchronized atrial contractile activity, measured during spontaneous sinus rhythm) was significantly reduced in BSO-treated dogs (p < 0.007). B, the atrial effective refractory period was significantly reduced in dogs treated with BSO (p < 0.001). Open bars, control group; shaded bars, BSO-treated group. *, a significant difference between treatment groups at a specific cycle length (p < 0.05). C, force-frequency response of active tension development by isolated atrial trabeculae from BSO-treated dogs is impaired compared with controls at all stimulus rates above 1 Hz (p < 0.05).
FIGURE 4. Myocytes in A-D were of similar size, and currents were normalized to cell capacitance. Myocytes were held at -60 mV and stepped to potentials from -40 to +30 mV in 10-mV increments (as in Fig. 2). A, I Ca recorded from a representative control canine myocyte. B, I Ca from a canine myocyte following in vivo treatment with BSO. C, I Ca from a control canine myocyte following incubation (>3 h) with 10 mmol/liter glutathione. D, I Ca from a canine myocyte following in vivo BSO treatment and 10 mmol/liter glutathione incubation. E, summary data of peak I Ca (mean ± S.E.). The number of observations is indicated in each bar. *, difference from control (p < 0.05); #, difference from BSO (p < 0.05).
Canine Atrial Beta-Adrenergic Responses-In myocytes from BSO-treated dogs, isoproterenol-stimulated I Ca was significantly lower than in controls (Fig. 5A, p < 0.05). There was no significant difference in isoproterenol-stimulated I Ca between control and BSO-treated myocytes after incubation with glutathione. Concordant with the I Ca measurements, the contractile response to isoproterenol of BSO-treated atrial trabeculae was also significantly attenuated (representative traces in Fig. 5B). As summarized in Fig. 5C, the control group displayed a robust response, and at the highest concentration of isoproterenol (1 µmol/liter), active force in control preparations had risen from 23.3 ± 3.7 to 78.5 ± 8.9 mN/mm2, representing an average individual increase of 315 ± 72%. After BSO treatment, at 1 µmol/liter isoproterenol, active force rose from 18.1 ± 1.7 to 36.6 ± 4.3 mN/mm2 (113 ± 30%).
S-Nitrosylation of ␣ 1C Subunit of L-type Calcium Channel in Human AF Samples-Glutathione depletion has been associated with an increase in S-nitrosylation of protein thiols (26,30), and S-nitrosylation of the calcium channel has been associated with a decrease in I ca (19). We used the biotin switch technique to evaluate the extent of endogenous S-nitrosylation of proteins in the same control and AF samples used to assess glutathione levels. A biotin-labeled band representing a nitrosylated cysteine residue was detected around 214 kDa in samples from patients with a history of AF (Fig. 6A). Additional biotin-labeled bands were detected above 420 kDa, at 160 kDa, and at ϳ64 kDa. The apparent molecular mass of the band around 214 kDa is consistent with that of the ␣ 1C subunit of the L-type calcium channel. Blots were reprobed with two antibodies to the ␣ 1C subunit (Fig. 6B). An antibody specific for the 2-3 loop preferentially recognized a band greater than 247 kDa; an antibody specific for the N terminus of the ␣ 1C subunit preferentially labeled a protein around 214 kDa. This band appeared in some of the control samples but was less readily detected in the AF samples (Fig. 6B). The composite image (Fig. 6C) shows that the band detected by the calcium channel N terminus antibody ran at the same position as the band representing the nitrosylated protein (Fig. 6A). Biotinylation and/or the antibiotin antibody attenuated detection of the ϳ214 kDa band (Fig. 6B), since this band was evident in all specimens and of comparable density in each lane of a blot not probed for biotin (Fig. 6D). A very similar staining pattern was obtained with an antibody directed against the 1-2 loop of the ␣ 1C (data not shown). Because density of the ␣ 1C subunit was not increased in the samples from AF patients, increased nitrosylation intensity must reflect increased nitrosylation of the calcium channel. Densitometric analysis of the biotinylated Western blots showed that nitrosylation of the ␣ 1C subunit in the AF samples was increased 4.85-fold in the AF versus control samples (p Ͻ 0.03; Fig. 6G).
Lysate from one of the samples that was biotin-labeled was immunoprecipitated with an antibody to the ␣ 1C subunit and subjected to SDS-PAGE and Western blot analysis using an anti-biotin antibody to confirm that the immunoreactive band detected at ϳ214 kDa was the ␣ 1C subunit. An immunoreactive band in the lane loaded with lysate was labeled with biotin and immunoprecipitated with an antibody to the ␣ 1C subunit. No bands were detected in control lanes loaded with either lysate that had not been labeled with biotin or lysate that was immunoprecipitated with rabbit serum alone (Fig. 6E). To further confirm that the ␣ 1C subunit of the human L-type calcium channel can be nitrosylated, HEK cells overexpressing the ␣ 1C subunit were treated with the NO donor S-nitrosoglutathione FIGURE 5. Electrophysiologic and contractile responses to -adrenergic stimulation are attenuated after BSO treatment. A, mean peak I Ca densities Ϯ S.E. for isoproterenol-stimulated currents, with and without exogenous glutathione incubation. The number of observations is indicated within each bar. *, p Ͻ 0.05 compared with myocytes from control dogs. B, representative atrial trabecular tension transient, 1-Hz stimulation, from control and BSOtreated dogs, in the absence and presence of maximal (1 M) -adrenergic stimulation with isoproterenol. C, summary of tension data shows that the effect of isoproterenol on developed tension is reduced in trabeculae from BSO-treated dogs. Squares, control atria; circles, BSO-treated atria. *, p Ͻ 0.05 compared with controls.
(100 mol/liter, 20 min) and subjected to the biotin switch technique in the same manner described for human atrial tissues. Lysate from these cells was immunoprecipitated with an antibody to the ␣ 1C subunit, and then Western blot analysis was performed using a biotin primary antibody. In the presence of S-nitrosoglutathione, the ␣ 1C subunit was nitrosylated (Fig. 6F).
To evaluate the relationship between tissue glutathione content and the extent of nitrosylation of the alpha-1C subunit, glutathione content was plotted against the density of immunoreactive bands detected on Western blots from the corresponding samples (Fig. 6H). Tissues with lower glutathione content had more protein nitrosylation than tissues with greater glutathione content. This inverse correlation was significant (p < 0.03).
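The inverse relationship described here amounts to a simple linear regression of band density on glutathione content; a sketch of such an analysis is shown below, with invented values standing in for the densitometry data behind Fig. 6H.

```python
# Hedged sketch of a glutathione-vs-nitrosylation correlation analysis.
from scipy import stats

glutathione   = [0.4, 0.6, 0.9, 1.1, 1.4, 1.6]   # made-up per-sample glutathione values
nitrosylation = [3.2, 2.8, 2.1, 1.6, 1.1, 0.9]   # made-up relative band densities

fit = stats.linregress(glutathione, nitrosylation)
print(f"r = {fit.rvalue:.2f}, p = {fit.pvalue:.3g}, slope = {fit.slope:.2f}")
```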
DISCUSSION
Redox state modulates numerous signaling pathways in normal and diseased tissues. Cardiac cells maintain a strongly reduced state via production and enzymatic regulation of glutathione, NADPH, and thioredoxin (31). Although transient changes in redox state play a physiological role in cell signaling (32), shifts toward a persistent oxidative state are associated with injury and aging (33,34). Glutathione is the primary intracellular redox buffer. Diseases, including diabetes (35), obesity (36), and heart failure (37) are associated with glutathione depletion and evidence of systemic oxidative stress as well as increased risk of AF (38 -40).
Acute loss of cardiac glutathione occurs during cardiac surgery; glutathione loss correlated with decreased ventricular function following surgery (41). Postoperative AF occurs in 20 -40% of cardiac bypass surgery patients; advanced age and decreased atrial contractile function have been identified as predictors of postoperative AF (42). Here, atrial glutathione content was ϳ40% lower in left atrial samples from AF patients than in tissues from control patients with no history of AF (Fig. FIGURE 6. S-Nitrosylation of the ␣ 1C subunit of the L-type calcium channel is increased in human AF tissue. A, representative anti-biotin Western blot analysis (WB) of biotin-labeled S-nitrosylated residues showing three human tissue samples from control patients and three samples from patients with AF. B, the same blot was stripped and reprobed with antibodies to the 2-3 loop and N terminus of the ␣ 1C subunit of the L-type calcium channel. Lower detection of the ϳ214-kDa band in the AF samples may reflect masking of the epitope by residual anti-biotin antibody. C, this composite image is an overlay of the biotinylated blot A (red) and the ␣ 1C antibody staining (B, black). D, a blot from a separate gel run at the same time as the one in A-C, loaded with the same samples, probed with the same anti-␣ 1C antibodies used in B but not probed with the anti-biotin antibody. E, atrial tissue lysate was immunoprecipitated (IP) with an ␣ 1C subunit-specific antibody, and then Western blot analysis was performed using an anti-biotin antibody. As a control, an unlabeled aliquot of the AF sample was immunoprecipitated with an ␣ 1C subunit-specific antibody, and then Western blot analysis was performed using an anti-biotin antibody. In addition, a biotin-labeled aliquot of the AF sample was immunoprecipitated with rabbit serum, and then Western blot analysis was performed using an anti-biotin antibody. The arrow points to the ␣ 1C subunit band. F, homogenates from HEK cells expressing the ␣ 1C subunit of the L-type calcium channel were treated with 100 mol/liter S-nitrosoglutathione (GSNO) for 20 min and subjected to the biotin switch technique. Lysate from these cells was immunoprecipitated with an ␣ 1C subunit-specific antibody, and then Western blot analysis was performed using an anti-biotin antibody. G, summary densitometric analysis of mean relative band density of S-nitrosylated ␣ 1C subunit of the L-type calcium channel. *, p Ͻ 0.05 compared with controls. H, S-nitrosylation of the ␣ 1C subunit of the L-type calcium channel is inversely related to the atrial glutathione content. The line represents a best linear fit of the relationship for all samples (r ϭ Ϫ0.70, p ϭ 0.025).
1A), consistent with an AF-related increase in atrial oxidative stress (10) and decreased atrial contractile function (3). Increased wall stress leads to increased oxidant production (43,44); valvular dysfunction (increasing atrial pressures and wall stress) in AF patients may contribute to the greater loss of glutathione in these patients. Risk factors, including diabetes, obesity, and age, are also relevant.
Calcium cycling is essential for contraction, and Ca 2ϩ influx via the L-type calcium channel (I Ca ) is a critical determinant of contractility. A decrement in atrial myocyte I Ca has been noted both in AF induced by rapid atrial pacing (2,45,46) and in studies of human atrial myocytes isolated from patients with AF (1,23,24). There is less agreement about the molecular basis for the down-regulation of I Ca . Although several animal studies suggest transcriptional regulation as a primary mechanism for this response (6,45), no change in the expression of the primary pore-forming channel subunits was detected in human AF studies (5,15). This apparent conflict may reflect subtle differences between the rapid atrial pacing models and clinical AF.
The human cardiac calcium channel is modulated by hypoxia and redox state (11). To test the hypothesis that redox modulation contributes to the loss of I Ca in human AF, we evaluated the impact of incubating myocytes isolated from AF patients with NAC (a glutathione precursor). Fig. 2 shows that this treatment resulted in a robust increase in I Ca in myocytes isolated from these patients. Attempts to correlate I Ca with tissue glutathione levels revealed a trend for the lowest current densities to be present in myocytes isolated from tissues with the lowest glutathione content. Incubation of myocytes with NAC led to a significant increase in I Ca both in the absence and in the presence of -adrenergic stimulation (Fig. 2). NAC incubation had little effect on I Ca in myocytes isolated from two patients with no history of AF. Because of the very limited access to appropriate control cases, it is difficult to fully study this issue using human tissues.
To systematically evaluate the impact of lower glutathione levels on atrial I Ca and contractile function, experiments were performed in which dogs were treated for 48 h with BSO, a ␥-glutamyl synthetase inhibitor. BSO decreased atrial glutathione content by ϳ24%, less than the decrement in glutathione noted in human AF tissues (Fig. 1B). Although this is a modest reduction, echocardiography (Fig. 3A) revealed significantly depressed A wave velocity (which reflects atrial contractile function) in BSO-treated animals. Contractile remodeling in AF is strongly associated with electrical remodeling, and early in AF the time course of contractile remodeling parallels that of electrical remodeling (4). Fig. 3B shows that the atrial ERP was reduced at all cycle lengths studied in the BSO-treated dogs. Abbreviation of atrial ERP was closely correlated with loss of contractile force generation in atrial trabeculae isolated from the BSO-treated dogs and a shift in the forcefrequency response from a positive slope to a flat or negative slope (Fig. 3C).
Abbreviation of atrial ERP is consistent with a loss of I Ca (2). I Ca was decreased in canine atrial myocytes isolated from BSOtreated relative to untreated control animals (Fig. 4). The increased density of I Ca of human atrial myocytes following incubation with NAC was paralleled by the response of canine atrial myocytes to glutathione incubation. This response was noted in myocytes isolated from both control and BSO-treated animals. It is notable that atrial glutathione levels in the control dogs were lower than the glutathione levels in control patients (Fig. 1).
In the presence of exogenous glutathione, canine atrial I Ca from the BSO-treated animals was still lower than that recorded from the control animals. Altered calcium channel phosphorylation may contribute to the decrement in I Ca in human AF (15). In the presence of isoproterenol (Fig. 5A), atrial I Ca remained depressed in myocytes isolated from BSO-treated animals. Attenuation of I Ca in treated myocytes in the presence of isoproterenol was paralleled by a diminished contractile response to isoproterenol in isolated atrial trabeculae (Fig. 5, B and C). Following incubation of myocytes from BSO-treated dogs with exogenous glutathione, the isoproterenol-stimulated calcium current density was essentially identical in both groups. Redox-dependent modulation of kinase (48) and/or phosphatase (49) activity may also contribute to the loss of I Ca and contractile response in the BSO-treated animals and, we speculate, in human AF. Although BSO treatment attenuated atrial I Ca and contractility, it did not promote AF. Additional factors (dispersion of repolarization, altered excitability, neural influences, etc.) must also be necessary to promote arrhythmogenesis.
Although we did not evaluate the influence of glutathione or NAC treatment on atrial L-type calcium channel phosphorylation, we did investigate another form of redox-sensitive intracellular signaling. Reactive oxygen and nitrogen species can lead to posttranslational modifications of cellular proteins and alter bioactive lipids (50). It seems likely that posttranslational modification of channels has functional consequences. Oxidant stress is implicated in the modulation of calcium channels (49), sodium channels (51), and the ryanodine receptor (52). Experimental advances have made it possible to evaluate modification of protein thiols (53). Much like protein phosphorylation, the addition of an -NO group to a protein can dynamically modify its function. This type of regulation may provide a mechanism by which changes in oxidation state and NO metabolism dynamically modulate cell function. We found increased levels of nitrosylation in the human LAA samples with decreased levels of total glutathione and a history of AF (Fig. 6). In a similar manner, another study reported that glutathione depletion was associated with increased nitrosylation of mitochondrial proteins, and it was suggested that this posttranslational modification protected cells from irreversible protein oxidation (26).
Here we report that the alpha-1C subunit of the L-type Ca2+ channel is a protein that is often highly nitrosylated in LAA samples from AF patients and that nitrosylation was inversely related to atrial glutathione content. This subunit was recently identified as the predominant protein that is S-nitrosylated in female mice following ischemia-reperfusion injury (19). In that setting, nitrosylation was also associated with decreased I Ca and was suggested to be cardioprotective by attenuating Ca2+ overload-induced cellular injury. Mice conditionally overexpressing nNOS (with increased NO production adjacent to the calcium channel) have lower I Ca densities and contractility than do the wild type controls (20). Proteins in addition to the calcium channel also undergo nitrosylation (Fig. 6A). Ryanodine receptor nitrosylation is implicated in abnormal regulation of calcium release and may contribute to spontaneous electrical activity (54).
AF is associated with both decreased I Ca and evidence of increased oxidant production. As in ischemia-reperfusion injury, a down-regulation of I Ca may be a short term response that protects myocytes from rate-induced calcium overload, preventing more extensive and irreversible proteolytic tissue injury (55). With adequate intracellular glutathione, this response may be mediated by S-nitrosylation of cysteine residues.
Integration of Results into a Coherent Model-These results may be integrated into a coherent framework. Under unstressed conditions, intracellular redox state is maintained in a highly reduced state. Under these conditions, available NO is most likely to interact with the abundant glutathione molecules within the myocyte. In the presence of ischemia-reperfusion, Ca 2ϩ overload, or mitochondrial stress, production of intracellular oxidants is increased. Due to its abundance, these reactive species are likely to interact with glutathione, oxidizing it and releasing NO to interact with other target molecules (e.g. the calcium channel and ryanodine receptor). Nitrosylation of these targets may transiently decrease calcium influx and attenuate the energetic burden associated with calcium cycling. This generally cytoprotective effect may be associated with abbreviated ERP, decreased contractility, and increased risk of reentrant arrhythmia.
Under more persistent oxidant stress, oxidized glutathione may be exported from the cell faster than it is synthesized or recycled, leading to a state of net glutathione depletion. Aging is associated with decreased protection from oxidant injury (56), and aged hearts have lower glutathione levels (47). Under these conditions, intracellular oxidants may be more likely to interact with target proteins, leading to protein nitration (10) or other irreversible oxidation reactions (51). Analysis of tissue glutathione levels suggests that BSO-treated dogs and paroxysmal AF patients might be functionally comparable, whereas glutathione depletion may be greater in persistent AF.
Conclusions-Human AF is associated with oxidant stress and decreased atrial glutathione levels; persistent AF is associated with decreased atrial I Ca . We propose that these observations are linked and that tissue glutathione acts to protect the myocyte from lethal oxidant injury. Glutathione levels also modulate the impact of other oxidants, including nitric oxide, on calcium channel activity. Calcium influx is at least transiently enhanced during the high rate activity of AF. Increased oxidant production during AF oxidizes glutathione, leading to a decrement in cellular glutathione levels. Glutathione levels also impact the adrenergic modulation of the calcium channel.
Although suppression of excessive oxidant production, prevention of ERP changes, and maintenance of atrial contractility appear to be desirable goals, a transient decrement in I Ca may protect the atria from calcium overload-induced apoptosis and/or proteolysis. In this regard, oxidant-mediated changes may parallel the effects of beta-adrenergic receptor antagonists, limiting calcium influx and force generation but improving prospects for long-term survival. | 9,496.2 | 2007-09-21T00:00:00.000 | [
"Biology"
] |
Renormalization Group Analysis of Turbulent Hydrodynamics
Turbulent hydrodynamics is characterised by universal scaling properties of its structure functions. The basic framework for investigations of these functions has been set by Kolmogorov in 1941. His predictions for the scaling exponents, however, deviate from the numbers found in experiments and numerical simulations. It is a challenge for theoretical physics to derive these deviations on the basis of the Navier-Stokes equations. The renormalisation group is believed to be a very promising tool for the analysis of turbulent systems, but a derivation of the scaling properties of the structure functions has so far not been achieved. In this work, we recall the problems involved, present an approach in the framework of the exact renormalisation group to overcome them, and present first numerical results.
Motivation
Since the first work on statistical hydrodynamics by Kolmogorov [20], turbulence basically remained an unsolved problem of classical physics, even though the fundamentals seem to be simple enough -the Navier-Stokes equations just express the conservation of momentum of an incompressible fluid. Nevertheless, experimental results on the moments of velocity differences (the structure functions to be defined below) have yet to be understood on a theoretical basis. While Kolmogorov's assumptions of scale-independence fail, the intrinsic scale-dependence that is responsible for the intermittent exponents have not been deduced from first principles so far. A promising approach seems to be the Renormalization Group (RG), which aims to describe the dependence of the correlation functions of a given field theory on the scale on which the system is observed. Beginning with the works of Forster, Nelson and Stephen [11], numerous attempts have been made to apply the various formulations of the RG to turbulent hydrodynamics, but until today the observables proposed by Kolmogorov could not be deduced in accordance with experiment. For some work on the RG approach to turbulence, see [8,1]. The formulation of the RG most suitable for the study of turbulence is, in our opinion, the so-called Exact Renormalization Group (ERG), due to Wilson [32,33]. Although the distinction between "field theoretic" and "exact" RG is merely artificial, the latter explicitly involves the generating functional of the theory to be studied, which can be simplified and re-formulated to suit the analytic methods involved. It is especially helpful, as we shall see, for the analysis of a theory with constraints, like in our case the incompressibility condition. We will show how to apply the well-known Faddeev-Popov gauge-fixing method to work out the functional integral measure. In this step we differ from previous work, which in one way or another omitted the incompressibility condition and/or the pressure term of the Navier-Stokes equations. The RG-flow, as we shall discuss in detail, can be understood as the continuous way of calculating all Feynman graphs of the theory. Keeping this in mind, we established a numerical algorithm that simulates the RG-flow by integrating out the corresponding graphs. We therefore take advantage of the freedom of choice of a cutoff-function for the propagator. We end up with lengthy, but straightforward rate equations, that can be iterated quickly and up to a high number of involved field operators. We will also show that the predictions of Kolmogorov can be found as the trivial scaling solutions of this theory, and that non-trivial structures in coupling space exist. We have tested the algorithm on well-known theories; the identification of intermittent exponents in turbulence, however, has not yet been accomplished.
Turbulent Preliminaries
We shall introduce the basics that are needed in this work, and sum up Kolmogorov's predictions from 1941 (K41) as far as we aim to falsify them, rather than to give a full account of Kolmogorov's theory. Reviews can be found for example in [25,13,29,24].
Navier-Stokes Equations
We shall mainly work with the full Navier-Stokes Equations (NSE), given by

∂_t v_α + v_β ∂_β v_α = −(1/ρ) ∂_α p + ν ∂² v_α, (1)

where v denotes the D-dimensional velocity field (in Navier-Stokes turbulence, D is either 2 or 3), ν the kinematic viscosity, p the scalar pressure field and ρ the density of the fluid. Eq. (1) expresses the conservation of momentum in an infinitesimal volume element of the fluid. As these are three equations for four degrees of freedom, it is clear that we need an additional constraint, determining the pressure field. In incompressible turbulence, this constraint simply states that the velocity field is divergence-free,

∂_α v_α = 0. (2)

Starting from any given initial condition, the velocity field obeying Eq. (1) has to develop into a constant field, as the dissipative term converts more and more energy into heat.
If we aim to understand fully developed turbulence, statistically homogeneous in space and time, and statistically isotropic in space, we need a mechanism to insert energy into the system, so that an equilibrium flow can develop. The standard way of providing this is to add a stochastic force (stirring force) f_α to Eq. (1) that is long-range correlated:

∂_t v_α + v_β ∂_β v_α = −(1/ρ) ∂_α p + ν ∂² v_α + f_α. (3)

The idea is to bring energy into the flow on large scales, let large structures decay freely into smaller ones until the energy is finally dissipated into heat (Richardson-cascade). We model the stochastic force to be Gaussian distributed, with δ-correlation in time and a long-range correlation function F in space, where ǫ is the local energy dissipation rate (not to be confused with the parameter of the ǫ-expansion) and ∇^{-2} denotes the fundamental solution of the Laplacian, e.g. in 3 dimensions:

∇^{-2}(x, x') = −1/(4π |x − x'|). (6)

Different forms have been tried for F, though in the context of the NSE it is widely believed that the form of the stochastic force does not influence the statistical characteristics of turbulence. On the other hand, it should not be forgotten that e.g. in Burgers turbulence, the intermittent exponents (to be defined below) clearly depend on the choice of the stirring. Eq. (2) is sufficient to eliminate the pressure term, as can be seen when deriving the solenoidal NSE, as we shall do in the following. In a first step, let us operate onto Eq. (3) with a divergence operator. Then the first and the third terms drop out, as the field is divergence-free. We then invert the Laplacian, arriving at the above mentioned condition for the pressure field. One might ask whether the inversion of the Laplacian leads to a unique solution for p. Now, two solutions might at best differ by a harmonic function, which is either constant or growing without limits. The second option is not physical, the first one not relevant as we are only working with pressure differences.
To obtain the solenoidal NSE, insert the "solved" pressure field back into Eq. (3):

∂_t v_α + P_αβ (v_γ ∂_γ v_β) = ν ∂² v_α + P_αβ f_β. (10)

Observe that the operator P_αβ = δ_αβ − ∂_α ∂_β ∇^{-2} is precisely the transverse projector known from electrodynamics; a feature that we will use later on. From now on we shall investigate the solenoidal NSE. It is important to keep in mind that these are only equivalent to the full NSE as long as incompressibility is ensured. Also observe that Eq. (10) is non-local, as it involves the inverse of the Laplacian operator, the integral kernel of which is of the form (6).
Gauge-Invariance
As a preparation for the Faddeev-Popov method that we will need in the following section, we shall now rewrite the formula in a gauge-invariant way. As the transverse operator P projects a field onto its incompressible part, and knowing that the fields we are interested in are transverse a priori, it is easy to see that P_αβ v_β = v_α, so that we are free to replace v by P v in Eq. (3). Now, even if the resulting equation looks more complicated, it is invariant under the same gauge transformation known from electrodynamics,

v_α(x, t) → v_α(x, t) + ∂_α χ(x, t), (13)

for an arbitrary scalar function χ. Constraint (2) is still required, but it now acts as a gauge fixing term.
Structure Functions and Intermittent Exponents
In 1941, Kolmogorov published his famous article, introducing a statistical framework for turbulent hydrodynamics [20]. As the theory is Galilean invariant, he proposed that the observables should be functions of velocity differences; more specifically, he considered the so-called velocity increment

δv(x) = [v(r + x) − v(r)] · x/|x|, (14)

the difference of the velocities at two points separated by a vector x, projected onto the unit vector in x-direction.
Suitable observables appear to be the structure functions of order p. They are defined as the p-th moment of the distribution of the absolute value of the velocity increment:

S_p(x) = ⟨ |δv(x)|^p ⟩. (15)

The average is taken over all points r of a realization of the turbulent flow. In the case of homogeneous turbulence, we suppose this to be equivalent to an average over all histories v(x, t), which we will perform in the functional integral.
Kolmogorov proposed the existence of a smallest length scale λ, also called the "dissipation scale", below which physics is no longer dominated by turbulence, but by dissipation. Dimensional analysis leads to

λ = (ν³/ǫ)^{1/4}, (16)

where ǫ is the (constant) dissipation rate. Assuming that the turbulent cascade of decaying vortices happens on scales much larger than λ, we might hope that observables don't depend on it and are thus self-similar, which means power-law functions of the scale:

S_p(x) ∝ x^{ξ_p}. (17)

Again, Kolmogorov deduced by dimensional analysis that

ξ_p = p/3. (18)

It has long been stressed that the fundamental assumption, namely the independence of the smallest scale λ, is by no means natural, and is in general not fulfilled in critical systems. This could lead to a scale dependent dissipation rate (or viscosity) and the breakdown of scaling law (17). Even though general agreement on this point seems to be common, the scale dependence could not yet be deduced. In case that a typical (macroscopic) length scale L can be identified in the system, the Reynolds number can be defined in terms of L and the parameters of the flow (Eq. (19)); L might be the radius of an obstacle of the flow, or, more useful for our purposes, the correlation length of a choice of the stochastic force. Eq. (19) coincides with the more common definition Re = UL/ν if the typical velocity U is defined appropriately (Eq. (20)).
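To make the definitions above concrete, the following sketch estimates structure functions and scaling exponents from a sampled one-dimensional velocity record; the synthetic signal, the separation grid and the fitting range are placeholders for illustration, not data or code from this work.

```python
import numpy as np

# Estimate structure functions S_p(x) = <|v(r+x) - v(r)|^p> from a sampled
# velocity record and fit power laws S_p ~ x^xi_p. The velocity signal below
# is a synthetic stand-in (a random-phase field with a -5/3-like spectrum).
rng = np.random.default_rng(0)
N = 2**14
k = np.fft.rfftfreq(N, d=1.0 / N)
spectrum = np.zeros_like(k)
spectrum[1:] = k[1:] ** (-5.0 / 6.0)   # E(k) ~ k^(-5/3) => amplitude ~ k^(-5/6)
phases = np.exp(2j * np.pi * rng.random(len(k)))
v = np.fft.irfft(spectrum * phases, n=N)

separations = np.unique(np.logspace(0, 3, 30).astype(int))
orders = [2, 4, 6]
xi = {}
for p in orders:
    S_p = np.array([np.mean(np.abs(np.roll(v, -x) - v) ** p) for x in separations])
    # fit log S_p = xi_p log x + const over the (assumed) inertial range
    coeffs = np.polyfit(np.log(separations), np.log(S_p), 1)
    xi[p] = coeffs[0]

for p in orders:
    print(f"order {p}: fitted exponent {xi[p]:.3f}  (K41 prediction {p/3:.3f})")
```

For a Gaussian random field of this kind the fitted exponents stay close to p/3; intermittency shows up precisely as a deviation from this monofractal behaviour.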
Burgers Equation
As we shall refer to it later, we mention Burgers' equation as a simpler model for turbulence, which can be seen as fully compressible hydrodynamics,

∂_t v_α + v_β ∂_β v_α = ν ∂² v_α + f_α. (22)

Further constraints are not needed, as a pressure term is missing. Burgers turbulence, or Burgulence, is investigated in one to three space dimensions (see e.g. [3]). It is local, and its solutions typically contain structures that resemble the fundamental solution of the Hopf equation,
as shown in Fig. (1). These are regularized by the viscosity term to a finite step width (similar to the behaviour of the KPZ-equation, see [21]).
Fig. 1: Shock structure in a Burgers' configuration Monte-Carlo simulation [10]. The graph shows v(x, t) as a function of x in periodic boundary conditions at constant t.
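As an illustration of such shock structures, the sketch below integrates the one-dimensional Burgers equation from a smooth initial condition; the grid size, viscosity and initial profile are arbitrary demonstration choices and not the settings of the Monte-Carlo study cited above.

```python
import numpy as np

# Integrate the 1D Burgers equation v_t + v v_x = nu v_xx on a periodic grid
# with a simple explicit finite-difference scheme. A smooth sine wave steepens
# into a viscosity-regularized shock, as in Fig. 1.
N, L, nu = 512, 2 * np.pi, 0.01
dx = L / N
x = np.arange(N) * dx
v = np.sin(x)                      # smooth initial condition
dt = 0.2 * min(dx, dx * dx / nu)   # conservative explicit time step
t, t_end = 0.0, 2.0
while t < t_end:
    v_x = (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)
    v_xx = (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx**2
    v = v + dt * (-v * v_x + nu * v_xx)
    t += dt

# the steepest gradient marks the (viscosity-regularized) shock
v_x = (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)
print("maximum |dv/dx| after t = 2:", np.max(np.abs(v_x)))
```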
As published in a different article [10], we investigated a generating functional of Burgers equation by means of Monte-Carlo methods. These investigations have shown that the generating functional formalism is suitable for the study of turbulent systems, even if almost-singular structures are involved.
Exponents for Burgers Turbulence
In general, the intermittent exponents of burgulence strongly depend on the form of the stirring force; for an overview see e.g. [14]. In the canonical case, that of freely decaying burgulence in 1+1 dimensions, analytic calculations lead to ξ_p = min(p, 1).
This scaling can be interpreted as the signature of a bifractal decay channel.
Generating Functional
The basis of any analysis using the Exact Renormalization Group (ERG), first introduced by Wilson, or many other field theoretical methods, is the generating functional. For turbulence several attempts have been made to define it as a functional integral, all in one way or another omitting the condition of incompressibility. We shall begin with the derivation of the Martin-Siggia-Rose functional [23] for the solenoidal NSE, and then show how to respect condition (2) by means of a Faddeev-Popov procedure.
Fine-Grained Distribution
Our starting point is the so-called fine-grained probability distribution for the field v, obtained by counting all possible solutions to the NSE.
where N is defined in Eq. (12), and we defined the corresponding abbreviation. A few remarks are in order:
-Of course, N^{-1} is not to be understood as the (not necessarily existing) inverse of an operator, but as a multi-valued operator counting any solution v for a given realization of the random force f. Observe that we are working with a functional δ-function, meaning that we are searching for histories v(x, t) that solve the NSE for all x and t, rather than a realization v(x, t_1) at a given time t_1, depending on some initial condition.
-The average ⟨·⟩ is to be understood as an average over all realizations of f, replacing the spatial average in (15). It is the basic assumption of this work that for homogeneous and isotropic turbulence, these averages are interchangeable; we are supported by our results for Burgulence published in [10].
Making the average over all realizations of the stochastic force explicit, the argument of the δ-function is multiplied by N, which leads to a functional determinant that we shall discuss in the next paragraph. The δ-function is then written by means of a functional Fourier-transformation, introducing a new formal (bookkeeping) field u. If we formally define the action S_1 by the resulting exponent, where the determinant still has to be evaluated, we can identify the elements for our Feynman-rules. For the time being, we want to focus attention on two parts, which together lead to the famous θ(0)-problem. Notice that the corresponding bare two-point function, also called response function, is proportional to the Green's function of the diffusion equation, applied to transverse fields. We have to choose the retarded Green's function to ensure causality of the theory. The second element is the uvv-vertex: in a graphical expansion, it is required to calculate the loop in which this vertex is closed by a response function. As the retarded Green's function of (not only) the diffusion equation is proportional to θ(t_2 − t_1), this loop is proportional to the seemingly ambiguous quantity θ(0). We will show in a following paragraph how this is solved by a choice of discretization of the functional integral.
To illustrate what the above action actually implies, we integrate out the non-physical fields f and u for a moment.
Observing that first the integration over f, and then the one over u, are Gaussian, we can simplify Z accordingly. We see that field configurations not solving the NSE are admitted, but suppressed by a Gaussian weight that we can control numerically, and that can be related to a physical constant by Eq. (5). This expression is already a generating functional, as we are able to extract all correlation functions using functional derivatives. Before we can start to simulate RG-flows, we still have to bring the determinant into a suitable form.
Functional Determinant
A straightforward way of writing the functional determinant is by using ghost fields, where we dropped a field-independent term and defined I to be the non-linear part of N. We thus end up with the corresponding functional. It is important to keep in mind that the ghost fields, though anticommuting with each other, don't pose any particular problems in the numerical procedure. In principle, it is very much feasible to take them into account, and we performed several RG-runs while keeping track of the ghost terms. The determinant has a simple graphical interpretation, as we see directly that it exactly cancels out the u-v-loop shown in Fig. (2): From eqs. (31) and (36), we see that the ψ*ψ- and the uv-propagator are identical. The reader can easily check that (ψ*, (δN/δv) ψ) leads to two terms similar to uN[v], but with u replaced by ψ* and one v-field replaced by ψ. When this vertex is closed to a loop by means of a ψ*ψ-propagator, this is numerically identical to graph (2). This greatly simplifies matters for us, as our numerical program sorts and calculates contributions to the RG-flow according to their graphical representation. Rather than simulating two additional fields, and calculating all the graphs, we are thus allowed to drop a certain class of graphs. The cancellation of certain averages can be proven even non-perturbatively, using the BRS-invariance of the action.
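For reference, the standard Grassmann-integral representation of such a determinant, written here schematically and with normalization constants omitted, reads:

```latex
\det\!\left(\frac{\delta N[v]}{\delta v}\right)
  \;\propto\; \int \mathcal{D}\psi^{*}\,\mathcal{D}\psi\;
  \exp\!\left\{-\Big(\psi^{*},\,\frac{\delta N[v]}{\delta v}\,\psi\Big)\right\},
\qquad
\big(\psi^{*},A\,\psi\big)
  = \int \mathrm{d}^{D}x\,\mathrm{d}t\;
    \psi^{*}_{\alpha}(x,t)\,A_{\alpha\beta}\,\psi_{\beta}(x,t).
```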
BRS-Invariance
If the functional determinant is expressed in terms of ghost fields, this yields an extra symmetry, also called BRS-invariance. We observe that the action (39) is indeed invariant under a set of infinitesimal simultaneous changes of the fields that, for obvious reasons, have also been called "half a supersymmetry". From the Ward identities of this symmetry, the desired result follows on a non-perturbative level: both sides of this equation can be interpreted as the sum of the corresponding graphs as explained in the previous section. For details we refer the reader to the explicit proof in [27].
The θ(0)-Problem
We have seen in (3.1) the typical appearance of the seemingly arbitrary number θ(0) in the retarded Green's function of the diffusion equation. It is of course linked to the Itō-Stratonovich dilemma; Zinn-Justin [35] shows explicitly how it is connected to the problem of operator ordering. To clear the mist, we want to show in this section how θ(0) is fixed by the choice of discretization of the functional integral, namely the time derivative.
In any case the θ-function is required to obey its defining relations, and we further require the fundamental theorem of calculus to be valid. We now have to define a differentiation. We distinguish different choices by a parameter α with −1 ≤ α ≤ 1. Notice that α = 0 corresponds to the symmetric, and α = −1 to the pure backward derivative. With this choice, the term involving the second θ-function does not contribute, since t ≤ 0 in the integral, and the first θ-function only gives a contribution on part of the integration range. We thus find θ(0) = (1 + α)/2, so θ(0) is fixed by the choice of the time derivative. Notice that for the symmetric (Stratonovich) derivative, α = 0, we find θ(0) = 1/2, while in the pure backward (Itō) case, α = −1, we get θ(0) = 0, as expected.
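One parametrization of the discretized time derivative that reproduces the limits quoted above (symmetric for α = 0, backward for α = −1) is the following; the precise convention used in the original derivation may differ:

```latex
(\partial_t^{(\alpha)} f)(t) \;=\;
  \frac{f\!\left(t+\tfrac{1+\alpha}{2}\,\epsilon\right)
      - f\!\left(t-\tfrac{1-\alpha}{2}\,\epsilon\right)}{\epsilon},
\qquad -1\le\alpha\le 1,
\qquad\Longrightarrow\qquad \theta(0)=\frac{1+\alpha}{2}.
```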
Incompressibility Condition
Before we demonstrate how to derive a generating functional respecting condition (2) for the full Navier-Stokes equations, let us briefly discuss three common positions concerning incompressibility:
-Incompressibility doesn't matter. Though not stated explicitly, this appears to be the point of view taken in those works that neglect the additional condition (2) without any further comment. This is more than questionable, because there is a whole class of equations that differ only by the compressibility condition, and give different statistics. An obvious example is Burgers' equation (22), which models fully compressible fluids and shows bifractal scaling of the structure functions.
-Incompressibility is enforced by a choice of boundary conditions. This argument goes as follows: In the solenoidal form, it can be clearly seen that an incompressible flow stays incompressible even without enforcing it by a particular equation, as the random force term on the one hand, and the former pressure term on the other hand, lead to incompressible contributions to the flow. On the other hand, all compressible parts of a given flow tend to die out due to the dissipation term. This leads to the observation that condition (2) is always ensured if we impose an incompressible flow for t → −∞. This observation is indeed useful for some calculations, e.g. direct numerical simulations, as they have a definite boundary at t = 0. In our case, we aim for statistics over whole histories of the fields, for all times -and in negative t-direction, as the argument itself states, any compressible part of the flow is going to be amplified. Since slightly compressible flows lie arbitrarily close to incompressible ones in functional space, in any numerical application we would lose control of the boundary conditions completely.
-Incompressibility is ensured by some standard integration measure. What is meant here is that incompressibility is considered to be included in an integration measure Dv_inc (we are going to proceed in the same way in the next paragraph), but then the measure is treated as a standard Lebesgue-like measure. This is certainly not legitimate; e.g. an integral of a Gaussian weight over Dv_inc looks like a Gaussian integral, but will in general not be one.
We thus conclude that incompressibility should well be taken care of. It might be surprising that the way to do this is rather straightforward and well known for gauge invariant field theories.
"Strong" attempt using δ-function
We implement condition (2) through a second functional Fourier transformation of a δ-function. The advantage of this is that the generating functional is exact, apart from the definitions of some constants. The major drawback is that we will have to apply the Renormalization Group in a discretized way, which requires an approximation of the generating functional. Now a constraint like (53) can, after approximation, lead to a field theory that has no solutions at all, as it is driven further away from the manifold of solutions by every iteration of the RG-process. From a technical point of view, the additional field makes simulation times longer and most calculations more elaborate, but does not represent a qualitative problem.
Faddeev-Popov Method
A proper way to treat the compressibility condition is by means of the Faddeev-Popov method. Let us first generalize our considerations to the case ∂_α v_α = c. We multiply the functional by 1, where U denotes the gauge transformation as defined in (13). The resulting determinant can be written explicitly, and can be dropped as it is independent of any field. We now multiply the functional by another (field-independent) factor, namely a Gaussian weight in c. Now our original action is gauge-invariant, meaning that the generating functional does not depend on the value of c. We might choose a particular one, or as well integrate over all possible c; the reason to integrate over all c is to evaluate the δ-function, finally arriving at the gauge-fixed functional. The gauge fixing term appears as a Gaussian distribution, and only in the limit κ → 0 is incompressibility enforced strictly in the integrand. This method overcomes the disadvantages of the δ-function method, as deviations from solutions aren't completely forbidden, but only suppressed by a Gaussian weight. The limit can be taken when results have been obtained. We shall from now on work with the functional (62), but formulated in wavenumber space. We discarded convention (26) because in the next section we shall directly manipulate some of the terms that would otherwise be hidden by the notation.
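Schematically, and with normalizations suppressed, the two ingredients just described are the insertion of unity with a field-independent determinant, and the Gaussian average over c that produces the gauge-fixing term; the weight 1/(2κ) is written here as an assumed convention:

```latex
1 \;=\; \det(\partial^{2})\int\!\mathcal{D}\chi\;
      \delta\!\big(\partial_{\alpha}(v_{\alpha}+\partial_{\alpha}\chi)-c\big),
\qquad
\int\!\mathcal{D}c\;e^{-\frac{1}{2\kappa}\int c^{2}}\,
  \delta\!\big(\partial_{\alpha}v_{\alpha}-c\big)
  \;=\; e^{-\frac{1}{2\kappa}\int(\partial_{\alpha}v_{\alpha})^{2}}.
```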
Galilean Invariance
A remark concerning Galilean invariance, as analyzed by Hochberg and Berera [5], is in order here. In any attempt to calculate averages or correlation functions of the field itself, this might be a problem that could be overcome by another application of the Faddeev-Popov method. In our case, however, it is of no practical consequence, as we are aiming for the averages of velocity differences, which are gauge invariant.
Non-local Interactions
The action we derived still contains interactions which are non-local in physical space, but of a very simple form in wavenumber space. Independent of the space in which the theory is formulated, we need to rewrite (67) in a local way: in physical space, non-local interactions are at best cumbersome; in wavenumber space, we are going to sort the terms of the action according to their power of momenta, so we try to avoid 1/p²-interactions. We shall proceed in two steps: we will first re-define non-physical fields, and then introduce new fields to transport the non-locality of interactions. We will end up with a lengthy, but local action that suits our needs for further analysis.
To make the non-local nature of the interactions more obvious, the functions within this paragraph are functions on physical space.
Field Redefinition
If we redefine the non-physical fields in a suitable way and introduce appropriate constants, we can rewrite the action accordingly. The functional determinant of these transformations is field-independent and can thus be omitted.
Introducing new fields
Some non-local terms involving ∇^{-2}, coupling two fields K and L, still survive. We are able to solve this problem by introducing new fields. These can be understood as transporting fields that "carry" the non-local interaction from one place to another, thus replacing it by two local interactions and a propagator. Formally this means inserting a Gaussian integral over a new field M̂. This leads to a new kinetic term ½(M̂, ∇²M̂) in the action, which is independent of all physical fields. We now shift the variables so that the original, non-local interaction (75) is replaced by a new kinetic term for M and new interactions. Two of them are still non-local, but of a much simpler type. We can deal with them by the same method to finally get a local action: we add again a Gaussian integral for a new field, say N̂, and shift accordingly. The constant λ is needed so that the new fields get a definite dimension. This procedure might appear unusual, but is in fact quite common. The reader may think of electrodynamics: when a light shines and is seen from a distance, this might be called a non-local interaction. In theory, a local electromagnetic field is introduced that propagates from the source to your eye. Now, the equations of motion for the electromagnetic field turn out to be fairly simple, so that we are used to thinking of the field as being fundamental. In the present case, think of the fields N and M as merely helpful auxiliary quantities, though it might in other cases, like Burgulence, be advisable to model structures as new fields explicitly, and observe their interactions in an effective field theory.
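The underlying identity is a Hubbard-Stratonovich-type Gaussian integral; written schematically (normalizations and the constant λ are suppressed, and J stands for whatever local combination of fields couples to the auxiliary field):

```latex
\exp\!\left\{-\tfrac{1}{2}\big(J,\,\nabla^{-2}J\big)\right\}
\;\propto\;
\int\!\mathcal{D}M\;
\exp\!\left\{\tfrac{1}{2}\big(M,\,\nabla^{2}M\big)+\big(M,\,J\big)\right\},
```

obtained by completing the square in M; the non-local term on the left is traded for a local kinetic term and a local coupling (M, J).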
Applying the method discussed above to action (63), we arrive at the local action, where φ_1, φ_2, φ_3 denote the new scalar fields. The terms are already sorted by their order of derivatives. This is the result for the action S. Depending on how the determinant (35) is expressed, other non-local interactions may have to be rewritten in the same way.
Discussion
In this section we have shown how to transform non-local into local interactions: either by redefinition of unphysical fields, or by introduction of intermediate propagators.
A major drawback will be that we have to approximate this action to account for RG-transformations, and a common way is the derivative expansion. Now, due to our "localization" of the action, the original terms have been mixed concerning the order of derivatives. Moreover, the number of derivatives has increased for most interactions, which means that we would have to expand the RG-flow to a high order in the derivative expansion. Also, the number of fields involved makes this cumbersome work, even at the lowest orders. Nevertheless, this expansion is feasible to any order in derivatives, as shown in [18] for the first two orders. The results are rather lengthy rate equations, which will not be elaborated on here.
Renormalization Group Considerations
The renormalization group has its origins in the work of Gell-Mann and Low in 1954 [15]. The first works concerning the Wilsonian Exact Renormalization Group (ERG) have been published in the early 70s [32,33], based on Kadanoff's block-spin picture [19]. It is surprising that some very basic questions, e.g. concerning the renormalization step and the anomalous dimension, are still being discussed. We will consider this point in detail in paragraph 4.4, as it seems to be in order (at least to us) to address these matters, especially the anomalous dimension and the graphical representation of the flow.
In this section, we remind the reader of the origins of the ERG, and will try to give a basic understanding of the flow equations. We will not repeat the derivation of the equations, as this can be found in a number of articles, but we will spend some time on the graphical representation of the different terms, as it lays the foundations for our numerical investigations that closely follow the loop expansion.
Form of the Action
We are looking for an RG-flow of a given theory defined by its generating functional Z. To be definite, let us work with the theory of a vector field v_i, and write Z in the following way. The action depends on two momentum scales Λ and Λ_0. By Λ_0 we denote the scale on which we impose the initial renormalization condition -e.g. the value of the four-point function is fixed to a certain value λ_4 if all external momenta equal Λ_0. The term S_0 might look uncommon, but is necessary to pick up terms nonlinear in J that will be generated by the RG-flow. As initial condition, we set S_0[Λ = Λ_0] = 0.
Starting from a renormalized action on scale Λ_0, the flow is going to generate the renormalized action on all lower scales Λ, which is the second momentum scale involved. From the RG-perspective, S[Λ = Λ_0] plays the role of the initial condition of the flow. The flow equations depend on the choice of the kinetic action, so we define the kinetic term through a cutoff-function C, which has the following properties: though it is by no means necessary, one usually assumes that C^{-1} is monotonic, and that it is a smooth approximation of the step function, thus suppressing degrees of freedom on scales bigger than Λ, while not effectively altering those on scales below. The last equation (92) is ambiguous, but we define a value for C^{-1}(1), so that the role of Λ becomes definite. We will say that the degrees of freedom that are suppressed are "integrated out", as this part of the involved integrals can be interpreted as already being performed. Apart from the properties (90-92), we are free to define C in any way we like. It follows that not even (88) is enforced; other definitions of the propagator have been tried. In practice, some propagators will lead to simpler numerical calculations than others. A very special choice of C is the sharp cutoff C_s, which leads to the Wegner-Houghton equation and will be treated separately.
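As a concrete illustration, the snippet below implements one admissible smooth cutoff (an exponential form, chosen as an assumption rather than the choice actually used in this work), the corresponding regularized propagator P = C(p²/Λ²)/p², and the scale derivative Ṗ that enters the flow equations; sign conventions may differ from those used in the text.

```python
import numpy as np

# One possible smooth cutoff function C(x), x = p^2 / Lambda^2: C(0) = 1,
# C -> 0 for x >> 1, so that modes with p >> Lambda are suppressed.
def C(x):
    return np.exp(-x)

def propagator(p, Lam):
    """Regularized propagator P(p) = C(p^2 / Lambda^2) / p^2."""
    return C(p**2 / Lam**2) / p**2

def P_dot(p, Lam, dlog=1e-6):
    """Scale derivative dP/d(ln Lambda), evaluated by a small finite difference."""
    return (propagator(p, Lam * np.exp(dlog)) - propagator(p, Lam)) / dlog

# In a 3-dimensional radial momentum integral, p^2 * P_dot is concentrated
# around p ~ Lambda: only a momentum shell contributes to each flow step.
p = np.logspace(-2, 2, 9)
print(np.round(p**2 * P_dot(p, Lam=1.0), 6))
```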
In a similar manner, we define the corresponding kinetic terms for anti-commuting Grassmann variables, where P^{-1}_Ψµν is an antisymmetric matrix in the indices µ and ν. For completeness, we already mention here that we will expand the interaction part of the action, S_int, in powers of the fields to illustrate some examples, where we will call a term with k attached fields a k-vertex. The derivation of the flow equations does not depend on this expansion; but it is useful in some definite calculations.
Integrating out degrees of freedom / Lowering the Cutoff
A simple way to derive the RGE is to calculate the effect of a change of the cutoff on an action, keeping in mind that both the generating functional and the correlation functions must not change. In our case, we will lower the cutoff by lowering Λ, leaving Λ_0 as a unit of measurement unchanged. Applied to the vector theory, for example, we arrive at an equation for the interaction term of the action, where we dropped a field-independent term. Taking the kinetic term into account, we find a simple equation for Ṡ. The term ½ ∫_p (δS/δv_j) Ṗ_ji (δS/δv_i) will from now on be called the link-term of the flow-equation, while we will call −½ ∫_p (δ/δv_j) Ṗ_ji (δS/δv_i) the loop-term. These names will be justified in the following subsection. In complete analogy, equations for theories involving Grassmann variables ψ* and ψ with propagator (95) can be derived. In the case of anti-commuting fields it is important to keep track of all extra signs that arise. These equations describe the lowering of the cutoff, or integrating out of degrees of freedom. Before we proceed, let us remark on the graphical interpretation of the RG-equations.
Graphical Representation
Let us begin with the interpretation of the link-term. For the time being, we assume that it is applied to a part of the interaction term of the form (97), a vertex with n_1 + 1 attached lines. Then the functional derivative of this gives us a vertex with n_1 lines; the missing line is linked by the part of the propagator that is integrated out, Ṗ, to a second, similar vertex with, say, n_2 + 1 lines. The graphical result is shown in Fig. 3. Observe that the functional derivatives automatically lead to the correct symmetry factor of the graph. This graph gives a contribution to the (n_1 + n_2)-vertex, proportional to the expression given in Eq. (105). The two δ-functions will pose problems in the numerical process, as they will have to be approximated in order to be suitable for a derivative expansion. We can eliminate one δ-function by combining the two into an overall momentum-conservation δ-function and using the other to evaluate the p-integral, which gives us at least the overall conservation of momentum that we need. The loop-term is equally easy to understand: from a vertex with n + 2 attached lines, two are joined by a propagator Ṗ (Fig. 4). Again, the symmetry factor is given correctly. This graph gives a contribution to the (n)-vertex, Eq. (107); the overall δ-function does not pose any problems in this case. So far, we explained the effect of the flow equation as far as only S_int is concerned. Let us now investigate the contributions of the kinetic term. The loop-term generated from the kinetic term (Fig. 5) is trivial, as it is field-independent and can be dropped. The terms arising when one field derivative in the link-term acts on the interaction, and the other one on the kinetic term, are compensated by another term in the RG-equation. The remaining term to be considered is the one involving only the kinetic term; using the definitions above, this is, as defined in (88), the change in the kinetic term. Let us summarize: We have seen that the RG-flow can be expressed graphically. Iteratively, we calculate the contributions from all graphs with one propagator Ṗ, and all other propagators P, these being inner propagators which have been generated in RG-steps before. This is simply an application of the product rule: if we formally define G[f(p)] to be the sum of all possible Feynman-graphs of the theory with inner propagators f(p), we can even write down a formal solution to the Wilson equation, and take the limit Λ_0 → ∞. This reveals the meaning of the changes to the action: the RG-flow sums up, iteratively, the part of the propagator that is cut off. Notice that we seem to subtract all graphs -this is because we are working with e^{−S} rather than e^{S}. Of course the derivation and graphical interpretation of the flow-equation does not apply directly to the case of a sharp cutoff as defined in (93, 94). As our numerical approach favours the Wegner-Houghton equation, we discuss it in the following paragraph.
The Wegner-Houghton Equation
For numerical purposes, it is easiest to work with the sharp cutoff function C_s defined in (93, 94). This changes the form of the flow equation drastically. Again, we will not present a derivation here, as it is found in the original literature [31], but give a short overview. We start by dividing the degrees of freedom into a high-momentum part that is to be integrated out, and a low-momentum part that is kept. Expressing the integrated part of the functional as a change ∆S to the action, we obtain an expression in which the prime denotes integration over the momentum shell between Λ − dΛ and Λ. We now expand the action to second order in the fields and complete the square. Integrating out the field v in the momentum shell, which is assumed to be of relative thickness ∆Λ/Λ ≪ 1, and with the aid of

(det A)^α = exp{α ln det A} = exp{α Tr ln A}, (122)

we can finally write down the Wegner-Houghton equation. In the derivation it is used that it is sufficient to work at one-loop order, as higher-order contributions are also of higher order in dΛ. A proof of this is found in the original work [31].
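For orientation, the Wegner-Houghton equation has, up to sign and normalization conventions that may differ from those adopted here, the schematic structure

```latex
\dot S \;=\; -\,\frac{1}{2}\int'\! d^{D}p\,
\left[
  \frac{\delta S}{\delta v_i(p)}
  \left(\frac{\delta^{2}S}{\delta v\,\delta v}\right)^{\!-1}_{ij}
  \frac{\delta S}{\delta v_j(-p)}
  \;-\;
  \operatorname{Tr}\ln\frac{\delta^{2}S}{\delta v\,\delta v}
\right],
```

with the primed integral restricted to the momentum shell between Λ − dΛ and Λ.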
Notice that (123) seems to differ from the Wilson equation (103) by an overall sign; but this is explained by the fact that Ṗ is negative.
Eq. (123) is especially convenient for numerical applications, as the contributions to the integrals can be calculated explicitly. The logarithm looks problematic at first glance, but we shall demonstrate in the next paragraph that it has a very simple graphical interpretation, and is thus favourable for our graphically based program.
Graphical Representation
Once again we start with the interpretation of the link-term, which for the Wegner-Houghton equation does not involve the bare propagator alone, but a quantity that is more than just the inverse of the kinetic term. Using the geometric series, we can expand it into the sum of all graphs with i vertices, linked into a line by propagators. The first and the last vertex in the line are also attached to propagators, which link them to terms δS/δv. Again, let us assume first that these act on the interaction part of the action, so that the chain described above is linked to other vertices. We therefore find the graphical representation of Fig. 6. In a similar way, we derive the graphical representation of the loop-term. We re-write the logarithm by splitting off a field-independent part, which is dropped; the second logarithm is expanded as a Taylor series. The corresponding graph is shown in Fig. 7. It is similar to the link-term before, but closed to a loop by the trace. The factor 1/n compensates the rotational symmetry of the graph. As before, we still have to sort out the link-terms involving the kinetic action. Let us start with an example.
Fig. 8: Graph of the Wegner-Houghton flow, linking a vertex to two outer propagators in this case.
The graph in Fig. 8 is obviously one of those that arise from the link-term; the reader may focus attention on one of the P P^{-1} legs. Integration is again over the momentum shell, so P P^{-1} = 1. The result looks like the original vertex, but with the difference that we know that the fields q_1 and q_2 depend only on momenta less than Λ − dΛ.
We can thus interpret these terms as integrating out the momenta on the remaining fields. Integrating out more outer fields at the same time would again be of higher order in dΛ, and is thus omitted. The change in the kinetic term itself is again simple, and not even a sign problem arises as in the Wilson case: as P^{-1} P P^{-1} = P^{-1}, the corresponding graph is precisely the change of the kinetic action, as expected from Eq. (88). As in the case of the Wilson equation, we are now able to give a formal solution to the WHE.
Kinetic Action -Renormalization and Rescaling
The field renormalization step is completely artificial. Let us clearly discuss the renormalization step, as it seems to have led to confusion. For definiteness, we will demonstrate the concept using φ⁴-theory in D dimensions; for other theories the procedure works in exactly the same way. Let us emphasize that this step is not unique to the ERG, but is also found in perturbative renormalization.
Field Strength Renormalization
We started our integration step with the kinetic term (133); we thus defined S_kin to be the only term quadratic in fields and momenta in the limit p → 0. After the integration step (lowering the cutoff from Λ_0 to Λ), new terms that, according to our definition, should belong to the kinetic term, can be found within the interaction part of the action. Such a term has to be counted as kinetic term by hand. In practice, the first contribution to the kinetic term arises in the second step of the RG-flow, as it is of two-loop order. The simplest graph contributing to field strength renormalization is the so-called sunset graph, Fig. 10. In our case, a graph analogous to Fig. 10 is to be computed by our numerical approach in an iterative way later on, summing up contributions from every infinitesimal integration. The result depends on the renormalization scheme used; by means of a Taylor expansion, we are always able to identify the contribution to the kinetic action. This term is transferred from the interaction term to the kinetic action by hand, which then becomes S′_kin. As our renormalization condition for the field strength, we will choose that the coefficient of the kinetic action in the limit p → 0 is equal to ½. According to the definition of the cutoff-properties, we introduce the field-strength renormalization factor Z in a way that compensates for the new term in S′_kin. Writing Z explicitly in the original action (133), we conclude how Z transforms under an infinitesimal step. (The sign changes as in the integration step, since we actually lower Λ.) The initial condition has to be Z(Λ_0) = 1, so that (133) is fulfilled. We can integrate (137) easily. Z is now absorbed into the fields, which explains the name of field strength renormalization: we will use this in the next paragraph to determine the scaling of the field. This is in complete accordance with renormalization conditions met in perturbative renormalization, see for example [28,7]. Expressed in renormalized fields, the kinetic term is then exactly of the same form as (133). The point stressed by Golner [16] and Bervillier [6] is that we have to do the rescaling step very carefully, to account for the renormalization step consistently.
Rescaling
The last step in the renormalization group process is the rescaling of the momenta, and of the functions thereof. We define new momenta p̂ as rescaled versions of the old ones, and replace the momenta accordingly. We can directly see that this replacement also changes the cutoff function in the expected way, so we are ready to identify the new with the old cutoff. From the renormalization step, it is now easy to deduce the scaling of a field. Beginning with the original kinetic term (133), we conclude that, as S_kin does not scale at all, the rescaled field φ̂ has to be scaled with the canonical dimension D_{φ,can} = (D − 2)/2. Eq. (140) tells us the anomalous exponent. We will now see that, due to the effect of the rescaling of the cutoff-function, the renormalized kinetic action indeed does not scale. As the cutoff is a function of the ratio p²/Λ², a change in p has the inverse effect of the same change in Λ. By this, we reverse the effect of the integration step. As a part of this, terms are re-distributed back to the interaction of the theory. Now scaling back the kinetic term: the canonical scaling of the fields is compensated by the integration measure, and the anomalous scaling by the renormalization step. We thus trivially find again a term formally identical to (133) as a function of the rescaled quantities, as desired. We will drop the tildes and primes, formally getting back to (133).
The Interaction Terms
As we have explicitly discussed all steps when applied to the kinetic action, we now just have to apply them to the interaction terms. Let us, as an example, have a look at a four-field-interaction
Renormalization
Once we change φ to the renormalized field φ′, without changing the interaction term, we have to renormalize the coupling accordingly. We need the change in λ for an infinitesimal integration step, λ̇, or, for a general interaction S_int with any number of vertices, the corresponding term Ṡ_int,Ren, since the operator φ δ/δφ counts the number of fields in a vertex.
Rescaling
This step is now straightforward. Let us sum up the contributions:
-Integral: For each integration measure, we get a factor D, and there is one integration measure less than there are fields (because of the δ-function), so we get the corresponding contribution (we are still lowering the cutoff).
-Momentum: The vertex could and will depend explicitly on the momentum, so we introduce another operator φ(p) p ∂/∂p′ δ/δφ(p) that counts the powers of momenta in each vertex. The prime at the derivative indicates that it is not acting upon the momentum-conserving δ-function.
-Fields: As derived above, each field brings a contribution proportional to −(D + 2 − η)/2.
-Renormalized Coupling: Not to be forgotten, we renormalized the couplings according to (150), so the coupling scales itself anomalously, exactly compensating the anomalous scaling of the fields.
Summing up all contributions, we finally get the rescaling term (155).
The RG-Equation
For the interaction term, we thus find the flow equation (156). This equation clearly depends on the choice of propagator (89). To give an example: if we had defined the inverse propagator as in (157), as often found in the literature, we would have ended up with the same equation as Bervillier [6] or Golner [16], namely (158), which in turn is completely equivalent to Wilson's equation.
We want to point out that both equations (156) and (158) are in a way correct, even though they seem to differ by a sign. The equivalence is obscured by a different choice of propagator functions, taking advantage of the reparametrization-invariance of the equation. Propagator (157) indeed has some advantages, as in principle other values for k are also possible, and simplify the implementation of K41, as we will see. On the other hand, the derivative of (157) is a bit uglier, and especially the derivative expansion gets mixed up.
We can now aim for the final assembly of the RG-equation, written down for vectorial and Grassmannian fields: This equation is general enough for all our applications, including different ways of considering the functional determinant Eq. (35).
Derivative expansion
Clearly, any action has to be approximated to be of numerical use, as it represents infinitely many degrees of freedom. A common way of approximation is the derivative expansion, see e.g. [17] and [26]. In principle, applied to the scalar theory, it amounts to expanding the action in powers of derivatives, in contrast to an expansion in powers of fields, which can be seen as an expansion around a weak field. As a special case, in the Local Potential Approximation (LPA) the action is reduced to an interaction term depending only locally on the field φ(x) (and not on its derivatives) and a kinetic term whose coefficient Z is held constant throughout the flow. If we apply the loop- and link-terms to the action expanded in powers of fields, we are led to rate equations for the coefficients. Let us, as an example, apply the LPA to Eqs. (105) and (107), expanded in powers of fields. Starting with Eq. (105) for the link-term, we see that the non-trivial part of graph 3 is proportional to

λ̇_{n+m} ∝ ∫ Ṗ(p²/Λ²) λ_{n+1}(p_1, . . . , p_n, p) λ_{m+1}(q_1, . . . , q_m, p) × δ(p_1 + . . . + p_n + p) δ(q_1 + . . . + q_m − p) d^D p.
As explained above, we transform one of the δ-functions into an overall momentum conservation, and use the other to evaluate the integral. In the LPA, we approximate the couplings to be momentum-independent, and develop the cutoff to zeroth order in the momenta. A difficulty is the momentum-dependence of the factor Ṗ, which we need to expand according to the derivative expansion. The result obviously depends on the choice of the cutoff; if we apply it to the LPA, we can summarize the result into a constant, P̃_1. If the cutoff is an approximation of the step function, we would expect the limit to converge and P̃_1 = 0. This clearly is not an option, as it would suppress the non-trivial character of the RG-flow.
On the other hand, the loop-equation (107) leads in the LPA to a similar expression. Again, the integral depends on the choice of the cutoff; for the LPA we will write it as a second constant, P̃_0. Rather than specify a cutoff, in the LPA it is sufficient to define the constants P̃_1 and P̃_0. In higher orders of the derivative expansion, additional information concerning the cutoff will be required.
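The following snippet shows, in heavily simplified form, what iterating such rate equations looks like in practice: a truncated tower of momentum-independent couplings is evolved with schematic link, loop and rescaling contributions. The combinatorial factors, signs and the numerical values of the cutoff constants P1 and P0 are placeholders, not the conventions actually derived above.

```python
import numpy as np

# Schematic LPA rate equations for a truncated tower of momentum-independent
# couplings lam[k] (k-field vertices of a scalar theory). The loop term closes
# two legs of the (k+2)-vertex, the link term joins an (n+1)- and a
# (k-n+1)-vertex; dim_of(k) is the canonical rescaling dimension. All
# prefactors, signs and the values of P1, P0 are illustrative assumptions.
D = 3
eta = 0.0            # anomalous exponent (zero in the LPA)
P1, P0 = 0.1, 0.05   # placeholder cutoff constants
N_MAX = 8            # truncation of the field expansion


def dim_of(k):
    # canonical dimension of a local k-field coupling in D dimensions
    return D - k * (D - 2 + eta) / 2.0


def beta(lam):
    """Schematic right-hand side d lam / d(ln Lambda) for even couplings."""
    dlam = np.zeros_like(lam)
    for k in range(2, N_MAX + 1, 2):
        loop = P0 * lam[k + 2] if k + 2 <= N_MAX else 0.0
        link = P1 * sum(lam[n + 1] * lam[k - n + 1] for n in range(1, k))
        dlam[k] = -loop + 0.5 * link - dim_of(k) * lam[k]
    return dlam


# simple Euler integration of the flow while lowering the cutoff
lam = np.zeros(N_MAX + 1)
lam[2], lam[4] = 0.5, 0.1   # initial condition at Lambda_0
dt = -1e-3                  # d(ln Lambda) < 0: lowering the cutoff
for _ in range(2000):
    lam = lam + dt * beta(lam)

print("couplings after the flow:", lam[2:N_MAX + 1:2])
```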
In the case of a vector theory in three dimensions, the situation is not that simple, as products of the type v_i v_i or any contraction with other three-component fields will be present. We need to keep track of this to calculate the contributions to a renormalization group flow, so we propose to expand the terms of the action in powers of fields and momenta. From now on, we will work with the corresponding coefficients V. At first order, the terms of the derivative expansion are even more complicated, as we also have to keep track of terms like p_i v_i(q).
As the overall number of momenta is fixed for each term, and the action itself is a scalar, we get a restricted set of possible values for the indices of V, and equivalently for Z. For completeness, we also have to keep track of the number of time derivatives Der in a term - so the quantities we will work with are the corresponding coefficients. We worked with a predictor-corrector as well as a Runge-Kutta integration algorithm, both with self-adjusting step width. We used two sets of algorithms - one of them involves explicitly programmed versions of the rate equations, while the other works out the loop- and link-graphs automatically, needing only the parameters of the physical system. Apart from the algorithm of the flow simulation itself, we developed a number of tools for the analysis of the resulting data. As the coupling space in which we are working is very abstract and high-dimensional, it is helpful to make first steps by studying unphysical toy systems, i.e. simple and solvable physical systems, and reduced turbulent systems (Burgulence), to gain confidence that the algorithm is working correctly, and to develop some intuition for the work with renormalized couplings.
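To illustrate the kind of flow integrator used, the sketch below drives a generic system of rate equations with an embedded step-size control (step doubling); the specific right-hand side, tolerances and step-size heuristics are placeholders, not the production code described here.

```python
import numpy as np

def rk4_step(f, y, h):
    """One classical Runge-Kutta step for dy/dt = f(y)."""
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, y0, t_end, h=1e-2, tol=1e-8):
    """Integrate with self-adjusting step width via step doubling."""
    y, t = np.array(y0, dtype=float), 0.0
    while t < t_end:
        h = min(h, t_end - t)
        full = rk4_step(f, y, h)
        half = rk4_step(f, rk4_step(f, y, h / 2), h / 2)
        err = np.max(np.abs(full - half))
        if err < tol:          # accept the step and try a larger one next time
            y, t = half, t + h
            h *= 1.5
        else:                  # reject and retry with a smaller step
            h *= 0.5
    return y

# toy "coupling flow": two couplings driving each other (placeholder RHS)
flow = lambda g: np.array([-2.0 * g[0] + g[1] ** 2, -g[1] + g[0] * g[1]])
print(integrate(flow, [0.3, 0.1], t_end=5.0))
```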
Non-Turbulent Systems Analyzed
We started our investigations by analyzing unphysical (toy-) systems with arbitrary constants and dimensionality of space, to learn more about the detection and features of different sorts of fixed points. A main question was how structures in coupling space can be recognized, if the dimensionality of the coupling space is high, and whether terms of higher order in the field expansion contribute as a correction.
In a second step, we applied the algorithm to standard physical systems with known properties, such as the scalar and the O(3)-symmetric field theory. The idea behind this was, on the one hand, to ensure that the algorithm works correctly and to see how closely we can reproduce analytic values for fixed point scalings; and on the other hand, to approach turbulent hydrodynamics in a stepwise manner, interpreting it as a special case of the general 3-vector model.
Toy Systems
We worked with a number of unphysical systems for trials of the algorithm and analysis tools, thus merely looking for interesting structures. These systems were defined by an action consisting of a propagator, a two-field-and a four-field-interaction, where the field was a 3-vector-field. Parameters were deliberately adjusted to allow the presence of different fixed points. Investigations of the coupling space were mainly done using the shooting method, which is especially useful for finding fixed points. In practice, one initiates a number of RG-flows, starting from initial conditions sufficiently close to each other, and searches the topology of the flow for interesting structures. An example can be seen in Fig. 11, where the location of the fixed point is easy to guess.
To identify the location more precisely, one would repeat the method with initial conditions closer to the guessed fixed point couplings, leading to a picture similar to Fig. 12. In this way, one approaches the fixed point iteratively. Following this procedure, the simulated trajectories approach the ideal trajectories, i.e. the flows directly running into or out of the fixed point.
The shooting method is limited by the numerical accuracy of the computer program, and the stepsize adjustment of the flow integration, as the algorithm slows down drastically when a fixed point is approached.
In a simulation involving more than two couplings (as is usually the case), the projection of the flow onto a two-dimensional subspace will in general not look so evident, but quite similar if the fixed point is approached closely enough.
Fig. 12: Fig. 11 after adjusting the initial conditions. Position and ideal trajectories of the fixed point can clearly be identified.
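A bare-bones version of the shooting procedure described above might look as follows; the two-coupling flow, the grid of initial conditions and the selection criterion are all illustrative assumptions and not the actual hydrodynamic system analyzed here.

```python
import numpy as np

# Shooting method: launch RG flows from a grid of nearby initial conditions
# and keep the starting point whose trajectory comes closest to a fixed point
# (i.e. whose velocity in coupling space becomes smallest along the flow).
# The flow below is a toy two-coupling system, not the hydrodynamic action.
def flow(g):
    g1, g2 = g
    return np.array([g1 * (1.0 - g1) - 0.3 * g2,   # placeholder beta functions
                     -g2 + g1 * g2])

def shoot(g0, steps=4000, dt=1e-3):
    g = np.array(g0, dtype=float)
    slowest = np.inf
    for _ in range(steps):
        v = flow(g)
        slowest = min(slowest, np.linalg.norm(v))  # how close we got to a fixed point
        g = g + dt * v
    return slowest

# scan a small grid of initial conditions around a guessed region
grid = [(a, b) for a in np.linspace(0.5, 1.5, 11) for b in np.linspace(-0.5, 0.5, 11)]
best = min(grid, key=shoot)
print("most promising initial condition:", best)
```

In practice one would repeat the scan with a finer grid around the best candidate, which is the iterative refinement described in the text.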
Other interesting structures include focus fixed points; an example can be seen in Fig. 13. As explained e.g. by Sornette [30], these fixed points can be associated with complex eigenvalues, and can thus be identified with some discrete scale invariance. Observe that the simulated trajectory ends at the fixed point due to numerical errors, rather than encircling it and approaching it asymptotically.
Simple Physical Systems
The renormalization group flows of the scalar theory and the O(3)-symmetric theory in D dimensions have been analyzed by P. Düben [9]. By reproducing known values of these theories, like fixed point locations and scaling (also in the ǫ-expansion), we tried to get closer to the much more diverse general three-vector theory, and also to check the correctness of our algorithms. We found that we are able to accurately reproduce the values known from the literature, to a given order of the ǫ-expansion. Our results will be published in a forthcoming article.
Local Potential Approximation of Hydrodynamics
We now return to the analysis of the action for hydrodynamics derived above, in the LPA. The system is specified by the dimensionality of space and the symmetries of the fields; action (82) merely serves as the initial condition of the flow. The simulations were performed using two distinct algorithms: the first one iterates the rate equations derived in the previous sections and does the book-keeping of the terms involved explicitly; the second one finds the graphs to be computed automatically. The second formulation turned out to be not only more elegant, but a great deal faster than the cumbersome implementation of the book-keeping.
The advantage of our approach is the fast integration of a large number of couplings, thereby circumventing the drawbacks of the expansions. Simulations were done with up to 100 couplings, though it has to be said that the identification of fixed points becomes nearly impossible in these high-dimensional spaces. Working with such a number of terms can only be done iteratively, meaning that one starts with a low number of couplings, identifies the fixed point and then changes to more and more terms, hoping that these act as corrections to the overall behaviour. It is a simple calculation to show that for values η > 1.5 of the anomalous exponent, a non-trivial fixed point exists in the vicinity of the trivial one. We used the shooting method to determine the position of this non-trivial fixed point, depending on the anomalous exponent, as can be seen in Fig. 14 and Fig. 15. The distance to the origin of coupling space can be seen to grow linearly with η, but, alas, no physical property could be ascribed to it. Note that for η < 1.5, this fixed point does not exist. As we have already explained, in flow-simulations it is generally preferable to calculate η, rather than to search for it by means of the shooting method. In the LPA, on the other hand, one has η = 0, as no field renormalization is performed.
Scaling of the Trivial Fixed Point
It is straightforward to analyze the scaling of the trivial fixed point. The correlation functions of even orders are directly computed by the RG-flow; after Fourier transformation to physical space we can read off the scaling, and find:
Table 1. Scaling exponents at the trivial fixed point
Order of the correlation function | Scaling exponent
2 | 0.666 ± 0.017
4 | 1.338 ± 0.035
6 | 1.999 ± 0.052
The correlation functions of odd orders are not explicit terms of the action and so have to be measured indirectly. The correlation function of order n can, for example, be derived from the term uv^n, if the scaling of the field u is known. We suggest to measure the scaling of u from the two-point function ⟨uu⟩, and subtract it from ⟨uv^n⟩. In this way the following exponents can be measured:
Conclusion / Outlook
We have shown how to define a generating functional for hydrodynamic turbulence, including a strict treatment of the incompressibility condition. The non-local interactions have been transformed into local ones, by a re-definition of unphysical fields and constants, and by the introduction of new fields. We have shown how to approximate the resulting action within a derivative expansion. On the side of the renormalization group, we discussed the procedure of renormalization and rescaling in detail in order to clarify some obscure points in the literature. Our work leads to an RG-equation for the turbulent action, and a set of rate equations after the application of the derivative expansion. These rate equations have been simulated numerically, and the resulting data have been analyzed. As will be published in another article, we tested the numerical algorithm by reproducing known values for non-trivial scalings of the scalar theory in 4 − ǫ dimensions, and of the O(3)-symmetric field theory. The results are in agreement with values found in the literature, so the numerical algorithm can be considered reliable. Our numerical work involves the computation of the RG-flow for products of Grassmannian variables, as well as 3-vectors that have to be kept track of according to their combinations.
We were able to identify the trivial fixed-point with the scaling exponents predicted by the K41-theory.
On the other hand, we have to admit that we were not yet able to reproduce intermittent exponents for the structure functions of turbulence that would agree with the experimental values. The reason clearly is the complexity of the general 3-vector model, including all theories that are based on hydrodynamics. Although the basic foundations of these theories are easily understood, all of them (including Navier-Stokes and Burgers turbulence) involve the same dimensionality of space and symmetry of the fields, while leading to different predictions for the intermittent exponents. Moreover, it is questionable whether the analysis of a fixed point will eventually lead to an understanding of intermittency. Data seem to hint at a cutoff-phenomenon; the probability distribution of the velocity increment looks, for small distances, like a Lévy distribution, and on large scales like a normal distribution [12]. We are strongly convinced that this can be explained by a crossover between two fixed points, but cannot yet justify it by our simulations.
Vav1 Controls DAP10-Mediated Natural Cytotoxicity by Regulating Actin and Microtubule Dynamics
The NK cell-activating receptor NKG2D recognizes several MHC class I-related molecules expressed on virally infected and tumor cells. Human NKG2D transduces activation signals exclusively via an associated DAP10 adaptor containing a YxNM motif, whereas murine NKG2D can signal through either DAP10 or the DAP12 adaptor, which contains an ITAM sequence. DAP10 signaling is thought to be mediated, at least in part, by PI3K and is independent of Syk/Zap-70 kinases; however, the exact mechanism by which DAP10 induces natural cytotoxicity is incompletely understood. Herein, we identify Vav1, a Rho GTPase guanine nucleotide exchange factor, as a critical signaling mediator downstream of DAP10 in NK cells. Specifically, using mice deficient in Vav1 and DAP12, we demonstrate an essential role for Vav1 in DAP10-induced NK cell cytoskeletal polarization involving both actin and microtubule networks, maturation of the cytolytic synapse, and target cell lysis. Mechanistically, we show that Vav1 interacts with DAP10 YxNM motifs through the adaptor protein Grb2 and is required for activation of PI3K-dependent Akt signaling. Based on these findings, we propose a novel model of ITAM-independent signaling by Vav downstream of DAP10 in NK cells.
Natural cytotoxicity mediated by NK cells is regulated by multiple activating and inhibitory receptors, which confer innate defenses against tumor cells and virus-infected cells. The multistep process leading to target cell lysis involves the formation of a cytolytic synapse and polarized degranulation of the NK cell (1)(2)(3)(4). Initial contact between the NK cell and the target cell is mediated by integrins and facilitates engagement of NK-activating receptors by their cognate ligands (2,4). Postconjugation events are orchestrated by signals emanating from activating receptors and involve F-actin accumulation at the NK-target contact site and microtubule-organizing center (MTOC) polarization toward the target cell. In turn, MTOC polarization leads to the establishment of a microtubule network guiding cytolytic granules to the synapse where they fuse with the plasma membrane and release perforin and granzymes to lyse target cells (2,4). Cytoskeletal remodeling is critical for NK cytotoxicity because pharmacologic inhibition of F-actin or microtubule dynamics blocks granule polarization and target cell lysis (1). Moreover, NK cells from patients bearing mutations in WASp that disrupt actin dynamics fail to initiate synapse formation and lyse target cells (1,5).
Activating NK cell receptors primarily signal through ITAM-containing adaptor molecules such as DAP12, CD3ζ, and FcRγ, which initiate cellular activation signals by recruiting Syk/Zap-70 family kinases (6-9). Additional NK cell-activating receptors, such as NKG2D, trigger cytotoxicity independently of ITAMs by associating with DAP10, a unique adaptor containing a YxNM motif that recruits PI3K (10) and Grb2 (11). Initiation of NKG2D signals occurs upon recognition of specific ligands, stress-induced MHC class I-like molecules such as MICA, MICB, and UL-16 binding protein in humans, as well as Rae-1, H-60, and MULT1 in mice (8,12). Subsequent to ligand engagement, human NKG2D signals through DAP10, whereas murine NKG2D signals through distinct adaptor molecules. Full-length NKG2D-long (NKG2D-L) signals through DAP10, whereas a shorter splice variant (NKG2D-S) signals through both DAP10 and DAP12 (13)(14)(15). In this regard, the relative proportion of NKG2D-L and NKG2D-S in NK cells varies upon in vitro activation with IL-2. Thus, freshly isolated (ex vivo) NK cells predominantly express NKG2D-L, whereas in vitro activation leads to an increase in NKG2D-S expression (13). Nevertheless, experiments with DAP12-deficient murine NK cells demonstrate that DAP10 is sufficient to mediate NKG2D-dependent cytotoxicity (16). Furthermore, human NKG2D promotes NK cytotoxicity despite its inability to interact with DAP12 (17).
Previous studies have implicated the Vav family of Rho guanine nucleotide exchange factors in the regulation of several distinct pathways controlling natural cytotoxicity (18-22). NK cells lacking all three Vav proteins show severely compromised cytotoxicity triggered by both ITAM- and DAP10-associated activating receptors (18). However, while deficiency in Vav1 alone primarily impaired the NKG2D-DAP10 cytolytic pathway, lack of Vav2 and Vav3 reduced cytotoxicity triggered by receptors that signal through ITAM-containing adaptors (18). These observations indicated an unexpected specialization of Vav proteins in regulating distinct cytotoxic pathways and implicated Vav1 in control of signals emanating from DAP10-coupled receptors. However, the exact mechanism of Vav1 coupling to DAP10 remains elusive. In addition, it remains unclear how DAP10 controls cytoskeletal remodeling events during the cytolytic response.
Herein, we sought to elucidate the mechanism of Vav1 function in DAP10-mediated signaling events that control natural cytotoxicity. Using mice deficient in Vav1 and DAP12, we demonstrate a critical function for Vav1 in DAP10-induced PI3K activation, F-actin polymerization, and MTOC polarization and provide evidence that Vav1 is recruited to DAP10 via Grb2.
Cytotoxicity assays
The NK cells were tested against target cells by standard 51Cr release assay (18).
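The readout of this assay is percent specific lysis, which is conventionally computed from experimental, spontaneous, and maximum 51Cr release. A minimal Python sketch of that standard calculation follows; the counts-per-minute values are hypothetical illustrations, not data from this study.

# Standard percent-specific-lysis calculation for a 51Cr release assay.
# All cpm values below are hypothetical and for illustration only.
def percent_specific_lysis(experimental_cpm, spontaneous_cpm, maximum_cpm):
    # % lysis = 100 * (experimental - spontaneous) / (maximum - spontaneous)
    return 100.0 * (experimental_cpm - spontaneous_cpm) / (maximum_cpm - spontaneous_cpm)

# Example: targets alone release 400 cpm, detergent-lysed targets 4000 cpm,
# and targets incubated with NK cells 2200 cpm -> 50% specific lysis.
print(percent_specific_lysis(2200, 400, 4000))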
Pull-down assays
Biotinylated peptides comprising four amino acids flanking the tyrosine motifs of interest were obtained from BioSource International (biotinylated DAP10 peptide sequences were DGRVYINMPGRG and DGRVpYINMPGRG). GST-Src homology (SH)2 fusion proteins were provided by D. Billadeau (Mayo Clinic College of Medicine, Rochester, MN). Peptides were bound to streptavidin-Sepharose and mixed with fusion proteins for 1 h at 4°C. Bound fusion proteins were eluted and detected by Western blot analysis with rabbit anti-GST (Upstate Biotechnology). Alternatively, peptides were bound to streptavidin-Sepharose and used to pull down proteins from NK92 lysates. Where indicated, Grb2 was first depleted from the lysates by six serial immunoprecipitations with rabbit anti-Grb2 (Santa Cruz Biotechnology). After pull down, bound proteins were detected by Western blot with rabbit anti-Vav1 (Santa Cruz Biotechnology) or mouse anti-Grb2 (Santa Cruz Biotechnology).
Biochemistry
Purified splenic NK cells were cultured in IL-2 (1000 U/ml) for 7 days and then starved in serum-free medium for 6 h. NK cells (1.25 × 10⁶/sample) were resuspended in HBSS and incubated on ice with biotinylated anti-NKG2D (Biolegend) at 1 μg per 1 × 10⁶ cells. After 15 min, streptavidin (Pierce) was added at 2 μg per 1 × 10⁶ cells, and cells were incubated at 37°C for the indicated time points. Cells were then lysed in radioimmunoprecipitation assay buffer and analyzed by Western blotting for phospho-serine 473 Akt (Cell Signaling Technology) or total Akt (Cell Signaling Technology).
Conjugate formation
Target cells were stained with CFSE, and NK cells were stained with hydroethidine. NK cells and targets were pelleted together, gently disrupted, and incubated at 37°C for 15 min. The percentage of cells forming conjugates was determined by FACS.
NK cell staining and imaging
Target cells were stained with CFSE or 7-amino-4-chloromethylcoumarin (Molecular Probes). NK cells and targets were briefly pelleted at a 1:1 ratio and immediately distributed onto poly-L-lysine-coated slides for incubation at 37°C for 30 min. Cells were fixed in paraformaldehyde (2%) and permeabilized in TX-100 (0.1%) before staining with rhodamine-phalloidin (Molecular Probes), rabbit anti-DAP10 (Santa Cruz Biotechnology), or FITC-anti-α-tubulin (Sigma-Aldrich). Cells were visualized on a Zeiss confocal microscope equipped with LSM image analysis software or a Nikon fluorescence microscope. Images were acquired using a ×100 objective lens with a ×10 ocular lens. Conjugates were scored at random and defined as an NK cell conjugated to a single target cell. Two-dimensional images were captured in an optical slice perpendicular to the NK-target synapse and intersecting the center of the synapse. Quantitations were performed with ImageJ software (National Institutes of Health) to measure the length of the NK membrane at the synapse (in arbitrary units) and the pixel intensity of F-actin staining within a defined area. Statistical analyses were performed using Student's t test and the Mann-Whitney U test.
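The per-conjugate measurements exported from ImageJ can be compared with the two tests named above. A minimal Python sketch, assuming hypothetical measurement arrays (the variable names and numbers are illustrative, not data from this study), is shown below.

# Hypothetical per-conjugate synapse-length measurements (arbitrary units) for two genotypes.
from scipy import stats

wt_synapse_length = [12.1, 10.8, 13.4, 11.9, 12.7]
dko_synapse_length = [7.2, 8.0, 6.9, 7.8, 7.5]

# Student's t test on the two groups of measurements.
t_stat, t_p = stats.ttest_ind(wt_synapse_length, dko_synapse_length)

# Mann-Whitney U test as the nonparametric comparison.
u_stat, u_p = stats.mannwhitneyu(wt_synapse_length, dko_synapse_length, alternative="two-sided")

print(f"t test p = {t_p:.3g}; Mann-Whitney U p = {u_p:.3g}")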
Vav1 controls cytotoxicity mediated by NKG2D-DAP10
To specifically examine the requirement of Vav1 in signaling downstream of NKG2D-DAP10, without the confounding effects of NKG2D-DAP12, we generated NK cells lacking both Vav1 and DAP12. Genetic deletion of Vav1 or DAP12 has no discernible effect on NK cell numbers or expression of NK1.1 (Fig. 1A and data not shown). However, both DAP12−/− and Vav1−/−DAP12−/− NK cells express slightly lower levels of NKG2D and dramatically reduced levels of Ly49D, as compared with wild-type (WT) cells (Fig. 1A), presumably because of a chaperone function conferred by DAP12 (14).
FIGURE 1. Vav1 is required for NKG2D-DAP10-mediated cytotoxicity. A, Purified NK cells from WT, Vav1−/−, DAP12−/−, and Vav1−/−DAP12−/− mice were stained with the indicated Abs and analyzed by FACS for expression of common NK cell surface markers. B, Fresh NK cells (left panel) or NK cells cultured in IL-2 for 15 days (right panel) were tested for cytotoxicity against RMA-S targets or RMA-S targets expressing Rae-1γ in standard chromium release assays. C, Purified NK cells were stained with CFSE, and RMA-S or RMA-S Rae-1γ targets were stained with hydroethidine. NK cells and targets were pelleted together and incubated at 37°C for 15 min before fixation and analysis of conjugate formation by FACS.
These results indicate that defects observed in Vav1−/−DAP12−/− NK cytotoxicity are not due to impaired conjugate formation but rather to specific defects in NKG2D-DAP10 signaling and postconjugation events.
Vav1 interacts with DAP10 via the adaptor Grb2
Having established the requirement for Vav1 in cytotoxicity mediated by NKG2D-DAP10, we pursued potential mechanisms of Vav1 recruitment to DAP10. Based on previously published findings (11,25), we hypothesized that Vav1 could interact with DAP10 via the adaptor Grb2. To test this hypothesis, we performed pull-down assays and found that DAP10 peptides containing tyrosine-phosphorylated, but not unphosphorylated, YxNM motifs can directly interact with the Grb2 SH2 domain (Fig. 2A). As a positive control in this assay, the Grb2 SH2 domain interacts with CD22 phospho-peptides containing a YxN motif (Fig. 2A). However, peptides comprising Y174 of Vav1, which lack the YxN motif, fail to interact with Grb2 SH2 fusion proteins, indicating specificity in the interaction of DAP10 tyrosine motifs with Grb2 (Fig. 2A).
To test the possibility that Vav1 can interact with DAP10 by means of the adaptor function of Grb2, we used DAP10 YxNM peptides to pull down Vav1 from lysates of NK92 cells containing endogenous levels of Grb2 or from NK92 lysates depleted of Grb2 by serial immunoprecipitation (Fig. 2B). Strikingly, phosphorylated DAP10 YxNM peptides readily pull down Vav1 from NK92 lysates, while depletion of Grb2 from the lysate before pull-down abrogates the interaction of YxNM peptides with Vav1 (Fig. 2B). Importantly, depletion of Grb2 has no discernible effect on total levels of Vav1 present in the lysates (Fig. 2B). The observation that DAP10 peptides pull down Vav1 in a Grb2-dependent manner is consistent with previous findings that Vav1 constitutively associates with Grb2 via an SH3-SH3 interaction (25). In this context, our results suggest that Grb2 may simultaneously bind Vav1 through its SH3 domain and DAP10 YxNM motifs through its SH2 domain, although we cannot rule out the possibility that Vav1 may also be recruited through additional mechanisms. Nevertheless, these data implicate Grb2-Vav1 as a potential downstream signaling module used by NKG2D-DAP10. To address such a potential role for Vav1 as a downstream signaling effector of DAP10, we examined the induction of PI3K activity in Vav1−/−DAP12−/− NK cells stimulated through NKG2D. To this end, WT, DAP12−/−, or Vav1−/−DAP12−/− NK cells were stimulated by cross-linking NKG2D with specific Abs, and phosphorylation of Akt kinase was measured at various time points as a surrogate assay of PI3K activation. Strikingly, while stimulation of WT and DAP12−/− NK cells with anti-NKG2D leads to a strong induction of Akt phosphorylation by 5 min (Fig. 2C), Vav1−/−DAP12−/− NK cells show essentially no Akt phosphorylation in response to NKG2D cross-linking at any of the time points tested (Fig. 2C). These results indicate that Vav1 is essential for normal regulation of PI3K activity in response to NKG2D-DAP10 signals.
FIGURE 2. Vav1 interacts with DAP10 through Grb2. A, Biotin-conjugated peptides containing tyrosine motifs derived from the indicated proteins were used in pull-down assays to test direct interactions with the Grb2 SH2 domain. Phosphorylated (pY) or unphosphorylated (Y) peptides were bound to streptavidin-Sepharose and incubated with GST-Grb2-SH2. Bound protein complexes were detected by Western blotting with anti-GST. B, Biotinylated DAP10 YxNM peptides bound to streptavidin-Sepharose were used to pull down Vav1 from NK92 lysates that contained endogenous levels of Grb2 or were depleted of Grb2 by serial immunoprecipitation (IP). Bound protein complexes were analyzed by Western blotting with anti-Vav1. Additionally, whole cell lysates (WCL) from NK92 cells or Grb2-depleted lysates were analyzed by Western blotting to detect Grb2 and Vav1. C, Purified splenic NK cells were stimulated with anti-NKG2D and lysed at the indicated time points. Lysates were analyzed for phosphorylated Akt and total Akt by Western blot (WB).
Vav1 is essential for postconjugation events induced by NKG2D-DAP10 in NK cells
Given that Vav1−/−DAP12−/− NK cells efficiently conjugate with RMA-S cells but fail to kill these targets, we examined the role of Vav1 in postconjugation events. Analyses of NK cell-target conjugates using DAP10 Ab staining revealed that WT, DAP12−/−, and Vav1−/−DAP12−/− NK cells all display a spherical morphology when conjugated to RMA-S targets in the absence of NKG2D ligands (Fig. 3, A and B). In contrast, WT and DAP12−/− NK cells conjugated with RMA-S targets expressing Rae-1γ adopt a compressed morphology marked by spreading of the plasma membrane along the contour of the target cell and expansion of the cell body at the target interface (Fig. 3, A and B). Strikingly, Vav1−/−DAP12−/− NK cells completely fail to undergo this cellular compression and maintain a spherical morphology when conjugated to targets expressing Rae-1γ (Fig. 3, A and B).
To quantitate this postconjugation plasticity in NK cells, we used a morphometric assay that measures the relative proportion of the NK cell membrane that is directly apposed to the target cell at the synapse. Accordingly, two-dimensional images of conjugates were acquired in a plane perpendicular to the plane of the synapse and intersecting the center of the synapse. Within this optical slice, the length of the NK cell membrane contacting the target cell was measured using ImageJ software, and this value was divided by the NK cell's circumference as measured in the same plane. NK cells derived from WT mice and DAP12−/− mice allocate a larger proportion of their membrane to the contact site with Rae-1γ targets as compared with Vav1−/−DAP12−/− NK cells (Fig. 3C). Thus, Vav1 controls cellular plasticity in response to NKG2D-DAP10 engagement.
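The contact index described above reduces to a simple ratio. A minimal Python sketch is given below; the function name and the measurement values are hypothetical and only illustrate the calculation.

# NK:target contact index = length of NK membrane apposed to the target
# divided by the NK cell circumference, both measured in the same optical slice.
# The values are hypothetical (arbitrary units).
def contact_index(synapse_membrane_length, nk_cell_circumference):
    return synapse_membrane_length / nk_cell_circumference

# A flattened WT cell devotes a larger fraction of its membrane to the synapse
# than a spherical Vav1-/- DAP12-/- cell.
print(contact_index(14.0, 40.0))  # 0.35
print(contact_index(6.0, 40.0))   # 0.15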
Because NK cells vary significantly in their size, we also devised an alternative measure of NK cell plasticity, which is based on measurement of the angle θ defined by the apex of a triangle bound within the confines of the NK cell membrane (Fig. 3D). The first of the three points defining the triangle is located at the center of the NK cell membrane region in contact with the target. The remaining two points of the triangle are located at the widest points of the NK cell such that the line connecting them is parallel to the NK-target contact site. The angle θ is defined as the angle formed by the triangle's apex located at the NK-target contact site. In this assay, NK cells maintaining a spherical morphology upon conjugation with target cells have a corresponding θ value of ~90 degrees. However, morphological changes associated with compression and widening of the NK cell base at the target cell surface are associated with θ values > 90 degrees. Because the baseline in this assay is 90 degrees rather than 0 degrees, we expressed the data as a "box-and-whisker" plot wherein the boxes represent the 25th and 75th percentiles and the whiskers represent the 10th and 90th percentiles (Fig. 3D). Measurements in WT and DAP12−/− NK cells conjugated with Rae-1γ targets reveal values of θ that are 20-40 degrees greater than a right angle (Fig. 3D). In stark contrast, Vav1−/−DAP12−/− NK cells maintain a spherical morphology and a θ value near 90 degrees when conjugated to RMA-S targets with or without Rae-1γ (Fig. 3D). Thus, we conclude from these experiments that Vav1 is critical for compression and flattening of the NK cell at the interface with target cells expressing NKG2D ligands.
FIGURE 3. Conjugates were fixed and stained with anti-DAP10 (red) to mark the NK cell membrane. B, Percentages of conjugates containing flattened NK cells were scored and tabulated. Conjugates were scored positive for flattening if the NK cell diameter at the synapse was wider than the diameter at the center of the cell. A total of 97 conjugates was scored. C, Quantitation of NK cell morphological changes was performed by measuring the length of the NK cell membrane in contact with the target and dividing this number by the circumference of the NK cell (as described in Results and Materials and Methods). Data represent the NK:target contact index or mean proportion of NK cell membrane contained within the synapse + SD of n > 15 conjugates for each different conjugate pair. D, Quantitation of NK cell morphological changes was performed by constructing a triangle within the confines of the NK cell membrane (as described in Materials and Methods) and measuring the angle θ. The box plot represents n ≥ 15 conjugates for each different conjugate pair. Medians are represented as thick horizontal lines, 25th and 75th percentiles as boxes, and 10th and 90th percentiles as whiskers.
Vav1 is required for actin accumulation at the NK-target contact site in response to NKG2D-DAP10 signals
Given that Vav proteins have the capability to regulate actin remodeling in T cells (26), we hypothesized that the morphological defects observed in Vav1−/−DAP12−/− NK cells subsequent to conjugation are due to impaired cytoskeletal remodeling. Actin polymerization is absolutely required for NK cell function, as disruption of actin dynamics due to WASp mutations or pharmacologic inhibition with cytochalasin-D abolishes cytotoxicity (1, 5). Thus, we examined the requirement of Vav1 for actin polymerization during NK cell activation. We found that WT and DAP12−/− NK cells exhibit robust F-actin accumulation at sites of contact with target cells expressing Rae-1γ but not parental RMA-S cells lacking NKG2D ligands (Fig. 4, A-C). In contrast, Vav1−/−DAP12−/− NK cells fail to accumulate F-actin at the contact sites with RMA-S target cells expressing Rae-1γ (Fig. 4, A-C).
Quantitation of actin polymerization was performed in optical slices perpendicular to the synapse and intersecting the center of the synapse (Fig. 4B). In Fig. 4D, actin polymerization is reflected as the percentage of total F-actin within the NK cell that localizes at the target cell interface, and background actin polymerization is defined as the percentage of actin accumulated at the site of contact between NK cells and RMA-S targets. Using this assay, we find a 4-fold increase over background actin polymerization in both WT and DAP12−/− NK cells conjugated with Rae-1γ targets, as compared with conjugates with RMA-S targets (Fig. 4D). In contrast, no increase over background actin polymerization is observed in Vav1−/−DAP12−/− NK cells conjugated with RMA-S targets expressing Rae-1γ, indicating a strict requirement for Vav1 in actin polymerization downstream of NKG2D-DAP10 (Fig. 4D).
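This quantitation amounts to computing, for each conjugate, the fraction of total cellular F-actin signal that lies at the synapse and expressing it relative to the background fraction measured on ligand-negative targets. A minimal Python sketch with hypothetical pixel-intensity sums follows.

# Fraction of total F-actin signal localized at the synapse, and fold change over the
# background fraction measured on RMA-S targets lacking NKG2D ligands.
# The intensity sums are hypothetical.
def synapse_fraction(synapse_intensity, total_cell_intensity):
    return synapse_intensity / total_cell_intensity

background = synapse_fraction(1200, 12000)   # NK conjugated with RMA-S
stimulated = synapse_fraction(4800, 12000)   # NK conjugated with RMA-S Rae-1gamma
print(stimulated / background)               # 4.0-fold enrichment over background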
Vav1 is critical for MTOC polarization at the NK-target contact site in response to NKG2D-DAP10 signals
Downstream of actin polymerization, microtubule dynamics are required for MTOC polarization and subsequent degranulation of NK cells (1). Given the profound defects in actin dynamics and cytotoxicity in Vav1−/−DAP12−/− NK cells, we speculated that microtubule dynamics may be disrupted as well. To examine this process, we quantitated MTOC polarization by dividing the NK cell into three equal sections with one section facing the target (Fig. 5A). Results from these quantitations show that ~70% of WT and DAP12−/− NK cells polarize MTOCs toward target cells expressing Rae-1γ (Fig. 5B). In contrast, MTOCs are randomly distributed in Vav1−/−DAP12−/− NK cells (~33% of cells polarized), indicating a failure of polarization toward Rae-1γ targets. These findings reveal an absolute requirement for Vav1 in regulating microtubule dynamics in the context of NKG2D-DAP10 signaling.
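Scoring in this assay asks whether the MTOC falls in the third of the cell that faces the target; with a randomly positioned MTOC the expected rate is about 33%, which is what the Vav1-/- DAP12-/- cells show. A minimal Python sketch of such a scoring routine, with hypothetical positions measured along the NK-target axis, is given below.

# Score MTOC polarization by dividing the NK cell into three equal sections along the
# axis toward the target and asking whether the MTOC lies in the section facing the target.
# Positions are hypothetical; the near edge of the cell faces the target.
def mtoc_polarized(mtoc_position, near_edge, far_edge):
    section = (far_edge - near_edge) / 3.0
    return mtoc_position <= near_edge + section

positions = [4.0, 8.0, 22.0, 6.0, 27.0]
scored = [mtoc_polarized(p, 0.0, 30.0) for p in positions]
print(100.0 * sum(scored) / len(positions))  # percentage of conjugates scored as polarized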
Discussion
Previous studies demonstrated the requirement for Vav proteins in NK cytotoxicity mediated by ITAM-containing adaptors such as DAP12 and FcRγ; however, evidence for the requirement for Vav proteins downstream of the DAP10-associated receptor NKG2D has been complicated by the fact that murine NKG2D associates with both DAP10 and DAP12 (13)(14)(15). In this report, we provide genetic evidence for the critical role of Vav1 in natural cytotoxicity induced by NKG2D-DAP10 in the absence of DAP12-generated ITAM signals. Moreover, we provide biochemical evidence that Vav1 can interact with DAP10 through the linker Grb2, although it is possible that Vav1 and Grb2 may also interact indirectly within a DAP10-signalosome.
DAP10 has been shown to recruit PI3K to its YxNM motif (10), and our data suggest a model, not mutually exclusive with this notion, in which DAP10 YxNM motifs recruit Grb2. Vav1 may be recruited directly to Grb2 and/or activated by phosphatidylinositol 3,4,5-trisphosphate generated by PI3K (27). Indeed, a recent report (28) suggested that Vav1 can be activated by PI3K, although these studies did not distinguish between multiple individual NK-activating receptors that could have been engaged. In a separate report (17), Vav was proposed to act upstream of PI3K based on the observation that inhibition of PI3K with wortmannin failed to block phosphorylation of Vav in human NK cells stimulated with anti-NKG2D. Despite these apparently conflicting models, Vav1 may act both upstream and downstream of PI3K in the context of NKG2D-DAP10 signaling. Of note, evidence in T cells suggests that Vav and PI3K participate in a self-reinforcing, positive feedback loop downstream of the TCR (29). In the context of DAP10 signaling, our data provide an explanation for how Grb2-Vav may act cooperatively with PI3K.
In addition to demonstrating the requirement of Vav1 in NKG2D-DAP10 signaling, we identify a previously unknown role for Vav1 in the regulation of postconjugation events at the NK-target interface. Specifically, we show that Vav1 is essential for F-actin polymerization at the NK-target contact site and for polarization of MTOCs toward the target cell. The qualitative nature of the cytoskeletal defects observed in Vav1−/−DAP12−/− NK cells is underscored by the fact that postconjugation events are blocked completely. Based on these findings, we propose a novel model for Vav1 function in DAP10-mediated cytotoxicity, implicating a critical and receptor-proximal role for Vav1 as a regulator of cytoskeletal dynamics.
In vivo, NKG2D-DAP10 signaling through Vav1 occurs in concert with many additional signaling pathways. NK cells possess a diverse repertoire of activating and inhibitory receptors, which interact with specific ligands on potential target cells and transduce opposing signals. NK cells must integrate and interpret these signals to discriminate between infected or transformed cells and healthy cells. The importance of Vav proteins for propagating activation signals downstream of receptors containing ITAMs is well documented (18); however, we report an additional role for Vav1 in NK cell activation initiated by DAP10 YxNM motifs. Given the widespread requirement for Vav proteins in NK cell activation, it follows that Vav would be a target for antagonism by NK inhibitory receptors. Indeed, recruitment of SHP-1 to ITIMs in inhibitory receptors appears to specifically target Vav1 and lead to its dephosphorylation (21). In this context, our findings suggest the possibility that dephosphorylation of Vav (30, 31) may be functionally linked to inhibition of postconjugation events such as actin polymerization and MTOC polarization. Of note, the NK inhibitory synapse lacks robust F-actin accumulation and MTOC polarity, which is consistent with a mechanism involving Vav dephosphorylation as a result of Src homology region 2 domain-containing phosphatase-1 recruitment to ITIM-containing NK inhibitory receptors (32,33). Thus, Vav may represent a critical point of convergence between opposing activation and inhibitory signals in NK cells.
While Vav1 is absolutely required for DAP10-mediated natural cytotoxicity, it is dispensable for several ITAM-mediated signaling events, including DAP12-mediated natural cytotoxicity (18), generation of Ag-specific T cell-APC immune synapses (34), and TCR-induced formation of signaling microclusters and F-actin polymerization (our unpublished observations). In this context, we note that the strict dependence of DAP10 signaling on Vav1 illuminates a highly specialized signal transduction module in NK cells.
Note added in proof. During review of this manuscript, another group reported that DAP10 signals through a GRB2/Vav1 complex in human NK cells (35). | 5,608.4 | 2006-08-15T00:00:00.000 | [
"Biology"
] |